An integrated circuit comprising stacked capacitor memory cells having sub-lithographic, edge-defined word lines and a method for forming such an integrated circuit. The method forms conductors adjacent to sub-lithographic word lines in order to couple a stacked capacitor to the access transistor of the memory cell. The conductors are bounded by the word lines. The bit line and capacitor are formed with a single mask image in such a manner as to self-align the bit line and the capacitor and to maximize the capacitance of the memory device. The method may be used to couple any suitable circuit element to a semiconductor device in an integrated circuit having edge-defined, sub-lithographic word lines.
I claim: 1. An integrated circuit formed using a lithographic process having a minimum lithographic dimension, comprising: a semiconductor device formed in a semiconductor substrate; a first conductor formed outwardly from the semiconductor device, the first conductor having a width less than the minimum lithographic dimension; a second conductor formed outwardly from the semiconductor device, the second conductor adjacent to the first conductor; and a circuit element coupled to the semiconductor device by the second conductor. 2. The integrated circuit according to claim 1 further comprising a bit line formed outwardly from the semiconductor device and wherein the circuit element is a storage capacitor, the semiconductor device is a transistor for coupling the bit line to the storage capacitor through the second conductor, and the first conductor is a word line for activating the transistor. 3. The integrated circuit according to claim 2 wherein the second conductor and the bit line are bounded by the word line. 4. The integrated circuit according to claim 2 wherein the bit line and the storage capacitor are formed using a single mask image. 5. 
A memory device formed by a lithographic process having a minimum lithographic dimension, comprising: a plurality of transistors formed in a semiconductor substrate, the transistors having a shared drain, each of the transistors having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate; a plurality of word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to the gate of a different transistor for activating the transistor; a bit line and a plurality of conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the conductors adjacent to the word lines; and a plurality of storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by a different conductor. 6. The memory device according to claim 5 wherein the conductors and the bit line are bounded by the word lines. 7. The memory device according to claim 5 wherein the bit line and the storage capacitors are formed using a single mask image. 8. 
A pair of memory cells for an integrated memory device formed using a lithographic process having a minimum lithographic dimension, comprising: two transistors formed in a semiconductor substrate, the transistors having a shared drain, each transistor having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate; two word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to a gate of a different transistor, the word lines for activating the transistors; a bit line and two conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the two conductors adjacent to the word lines; and two storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by one of the conductors. 9. The pair of memory cells according to claim 8 wherein the conductors and the bit line are bounded by the word lines. 10. The pair of memory cells according to claim 8 wherein the bit line and the storage capacitors are formed by using a single mask image. 11. 
A semiconductor memory device formed using a lithographic process having a minimum lithographic dimension, comprising: a transistor formed in a semiconductor substrate; a word line formed outwardly from the transistor, the word line having a width less than the minimum lithographic dimension, wherein the word line is used to activate the transistor; a bit line formed outwardly from the transistor which is coupled to the transistor, wherein the bit line is adjacent to the word line; a conductor formed outwardly from the transistor which is coupled to the transistor, wherein the conductor is adjacent to the word line; and a storage capacitor formed outwardly from the bit line and the conductor, wherein the storage capacitor is coupled to the transistor by the conductor. 12. The semiconductor memory device of claim 11, wherein the conductor and the bit line are bounded by the word line. 13. The semiconductor memory device of claim 11, wherein the bit line is self-aligned with the storage capacitor using a single mask image. 14. A method for forming an integrated circuit using a lithographic process having a minimum lithographic dimension, comprising the steps of: forming a semiconductor device in a semiconductor substrate; forming a first conductor outwardly from the semiconductor device, the first conductor having a width less than the minimum lithographic dimension; forming a second conductor outwardly from the semiconductor device, the second conductor adjacent to and bounded by the first conductor; and coupling a storage capacitor to the semiconductor device by the second conductor, wherein the second conductor is self-aligned with the storage capacitor by using a single mask image. 15. The method of claim 14, wherein forming a semiconductor device comprises forming a transistor for accessing the storage capacitor. 16. The method of claim 15, wherein forming a first conductor comprises forming a word line for activating the transistor. 17. 
A method of fabricating two memory cells having a shared bit line using a lithographic process having a minimum lithographic dimension, comprising: forming two transistors in a semiconductor substrate, the transistors having a shared drain, each transistor having a gate and a source, the gate extending outwardly from the semiconductor substrate; forming two word lines outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, one word line connected to the gate of one of the two transistors and the other word line connected to the gate of the other transistor, the word lines for activating the transistors; forming a bit line and two conductors outwardly from the transistors that couple to the transistors, the bit line coupled to the shared drain of the transistors, one conductor coupled to a source of one of the two transistors and one conductor coupled to a source of the other transistor, the bit line and the conductors adjacent to and bounded by the word lines; and forming two storage capacitors outwardly from the bit line and the conductors, one storage capacitor coupled to a source of one of the two transistors by one of the conductors and one storage capacitor coupled to a source of the other transistor by the other conductor, wherein the bit line is self-aligned with the storage capacitors by using a single mask image. 18. 
An integrated circuit formed using a lithographic process having a minimum lithographic dimension, comprising: a transistor formed in a semiconductor substrate; a bit line formed outwardly from the transistor; a word line for activating the transistor formed outwardly from the transistor, the word line having a width less than the minimum lithographic dimension; a first conductor formed outwardly from the transistor, the first conductor adjacent to the word line, wherein the first conductor and the bit line are bounded by the word line; and a storage capacitor coupled to the transistor by the first conductor, wherein the transistor couples the bit line to the storage capacitor through the first conductor and wherein the bit line and the storage capacitor are formed using a single mask image. 19. A memory device formed by a lithographic process having a minimum lithographic dimension, comprising: a plurality of transistors formed in a semiconductor substrate, the transistors having a shared drain, each of the transistors having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate; a plurality of word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to the gate of a different transistor for activating the transistor; a bit line and a plurality of conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the conductors adjacent to the word lines; a plurality of storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by a different conductor; and wherein the conductors and the bit line are bounded by the word lines and the bit line and the storage capacitors are formed using a single mask image. 20. 
A pair of memory cells for an integrated memory device formed using a lithographic process having a minimum lithographic dimension, comprising: two transistors formed in a semiconductor substrate, the transistors having a shared drain, each transistor having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate; two word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to a gate of a different transistor, the word lines for activating the transistors; a bit line and two conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the two conductors adjacent to the word lines and wherein the conductors and the bit line are bounded by the word lines; and two storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by one of the conductors, wherein the bit line and the storage capacitors are formed by using a single mask image.
FIELD OF THE INVENTION This invention pertains to the field of semiconductor devices and, in particular, to a method for coupling to a semiconductor device in an integrated circuit having edge-defined, sub-lithographic conductors. BACKGROUND OF THE INVENTION Manufacturers of semiconductor memory devices continually strive to reduce the size of individual memory cells. By reducing the size of individual memory cells, faster and higher capacity memory devices can be constructed. Conventional memory cells typically comprise a substrate, a transistor formed in the substrate, a storage capacitor coupled to the transistor, and a word line and a bit line for accessing the memory cell. One limiting factor in reducing the size of the memory cell is the size of the access lines of the memory device. In order to resolve this limitation, manufacturers have developed memory devices with word line conductors that extend normal to the substrate. Furthermore, the word lines are narrower than the gate regions of the transistors to which they are coupled. This type of word line is known in the art as an edge-defined word line. The typical fabrication process for a semiconductor memory device comprises a series of lithographic steps in which material is either deposited on the device or removed from the device. The minimum dimension of the material which can be deposited or removed is known in the art as the minimum lithographic dimension. In a further attempt to reduce the size of a memory cell, manufacturers have developed word lines which are narrower than the minimum lithographic dimension. The development of sub-lithographic word lines which extend normal to the surface of the substrate has eliminated the size of a word line as a limiting factor in the reduction of the memory cell size. This development, however, has introduced complexities in other areas of the memory cell. Traditionally, there are two ways of implementing the capacitor region of a memory cell. 
A memory cell may contain a trench capacitor or a stacked capacitor. A trench capacitor is formed by etching a hole in the substrate. The storage electrode of the trench capacitor is inside the hole and the plate electrode is the substrate. Because a trench capacitor is located in the substrate of the semiconductor device, it is formed before the word lines of the memory device are formed. Thus, with a trench capacitor, the semiconductor manufacturer can easily form edge-defined word lines on top of the substrate and the trench capacitor. Trench capacitors have several disadvantages. One disadvantage is the difficulty of fabricating them without introducing silicon crystal defects which result in leakage currents from the storage nodes. In contrast, stacked capacitors are formed on top of the cell transistor and therefore do not significantly affect leakage currents within the silicon. Because of their location, however, spatial interference with the cell wiring may limit the fraction of the cell area available for the capacitor. SUMMARY OF THE INVENTION For the above reasons, it is advantageous to build the storage capacitor of a memory cell on a plane above that of the wiring and to provide a method to connect the stacked capacitor, located above the sub-lithographic word lines, to the transistor, located below the sub-lithographic word lines. The present invention allows the active regions of the substrate to be accessed through and above the complex, sub-lithographic word lines so that the active regions can be connected to a semiconductor device, such as a stacked capacitor, which is formed outwardly from the word lines. One aspect of the invention is a method for forming an integrated circuit using a lithographic process having a minimum lithographic dimension. 
The method comprises the steps of forming a semiconductor device in a semiconductor substrate, forming a first conductor outwardly from the semiconductor device, the first conductor having a width less than the minimum lithographic dimension, forming a second conductor outwardly from the semiconductor device, the second conductor adjacent to the first conductor, and coupling a circuit component to the semiconductor device by the second conductor. According to another feature of the invention, the step of coupling a circuit component to the semiconductor device comprises the step of coupling a storage capacitor to the semiconductor device and the step of forming a semiconductor device comprises the step of forming a transistor for accessing the storage capacitor. According to another feature of the invention, the step of forming a first conductor comprises the step of forming a word line for activating the transistor. According to another feature of the invention, the second conductor is aligned with the circuit component by using a single mask image. According to another feature of the invention, the second conductor is bounded by the first conductor. Another aspect of the invention is a method for forming a semiconductor memory device using a lithographic process having a minimum lithographic dimension. The method comprises the steps of forming a transistor in a semiconductor substrate, forming a word line outwardly from the transistor, the word line having a width less than the minimum lithographic dimension, the word line for activating the transistor, forming a bit line and a conductor outwardly from the transistor that couple to the transistor, said bit line and said conductor adjacent to the word line, and forming a storage capacitor outwardly from the bit line and the conductor, said storage capacitor coupled to the transistor by the conductor. 
Another aspect of the invention is a method for forming two memory cells having a shared bit line using a lithographic process having a minimum lithographic dimension. The method comprises the steps of forming two transistors in a semiconductor substrate, the transistors having a shared drain, each transistor having a gate and a source, the gate extending outwardly from the semiconductor substrate, forming two word lines outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to a gate of a different transistor for activating the transistor, forming a bit line and two conductors outwardly from the transistors that couple to the transistors, the bit line coupled to the shared drain of the transistors, each conductor coupled to a source of a different transistor, the bit line and the conductors adjacent to the word lines, and forming two storage capacitors outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by a different conductor. Another aspect of the invention is an integrated circuit formed using a lithographic process having a minimum lithographic dimension. The integrated circuit comprises a semiconductor device formed in a semiconductor substrate, a first conductor formed outwardly from the semiconductor device, the first conductor having a width less than the minimum lithographic dimension, a second conductor formed outwardly from the semiconductor device, the second conductor adjacent to the first conductor, and a circuit element coupled to the semiconductor device by the second conductor. Another aspect of the present invention is a memory device formed by a lithographic process having a minimum lithographic dimension. 
The memory device comprises a plurality of transistors formed in a semiconductor substrate, the transistors having a shared drain, each of the transistors having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate, a plurality of word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to the gate of a different transistor for activating the transistor, a bit line and a plurality of conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the conductors adjacent to the word lines, and a plurality of storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by a different conductor. Another aspect of the invention is a pair of memory cells for an integrated memory device formed using a lithographic process having a minimum lithographic dimension. 
The pair of memory cells comprises two transistors formed in a semiconductor substrate, the transistors having a shared drain, each transistor having a gate and a source, the gate of each transistor extending outwardly from the semiconductor substrate, two word lines formed outwardly from the transistors, each word line having a width less than the minimum lithographic dimension, each word line connected to a gate of a different transistor, the word lines for activating the transistors, a bit line and two conductors formed outwardly from the transistors, the bit line connected to the shared drain of the transistors, each conductor connected to the source of a different transistor, the bit line and the two conductors adjacent to the word lines, and two storage capacitors formed outwardly from the bit line and the conductors, each storage capacitor coupled to a source of a different transistor by one of the conductors. BRIEF DESCRIPTION OF THE DRAWING FIGS. 1A through 1F are perspective views of a portion of an integrated circuit that illustrate an embodiment of a method for forming the integrated circuit with self-aligned gate segments. FIGS. 2A and 2B are cross-sectional views of an integrated circuit that illustrate an embodiment of a method for forming sub-lithographic word lines. FIGS. 3 through 17 are cross-sectional and top views of an integrated circuit that illustrate one embodiment of a method for coupling to a semiconductor device in the integrated circuit having sub-lithographic, edge-defined word lines. Specifically, FIGS. 3, 5, 6A, and 7A through 17 are cross-sectional views of the integrated circuit throughout the illustrated embodiment. FIGS. 4A, 4B, and 6B are top views of the integrated circuit throughout the illustrated embodiment. FIG. 18 is a schematic diagram of one embodiment of a memory device according to the teachings of the present invention. 
DETAILED DESCRIPTION OF THE INVENTION In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific illustrative embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical and electrical changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense. Formation of Gate Segments FIGS. 1A through 1F are perspective views of a portion of an integrated circuit, indicated generally at 10, that illustrate an embodiment of a method for forming integrated circuit 10 according to the present invention. In the illustrated embodiment, integrated circuit 10 comprises a memory device with an array of storage cells having segmented gates that are self-aligned to shallow trench isolation regions. Specifically, the array of storage cells produced by this method can advantageously be used in a dynamic random access memory (DRAM) device with a shared, or folded, bit line structure. However, the teachings of the present invention are not limited to DRAM applications. The segmented, self-aligned gates can be used in other appropriate applications that call for conductors with a pitch that is less than the minimum lithographic dimension. These conductors are referred to as "sub-lithographic" conductors. Referring to FIG. 1A, a number of active regions 12 are established in layer of semiconductor material 14 by shallow trench isolation region 16. The method produces two cells for memory device 10 in each active region 12. 
Shallow trench isolation region 16 is formed by first etching a trench through nitride layer ("pad") 18, oxide layer 20 and into layer of semiconductor material 14. The trench is over-filled with, for example, an oxide in a chemical vapor deposition (CVD) process. Shallow trench isolation region 16 is completed by polishing a working surface of the oxide back to a surface of nitride layer 18 using, for example, an appropriate planarization technique such as chemical mechanical planarization. Referring to FIG. 1B, nitride layer 18 and oxide layer 20 are removed from layer of semiconductor material 14. This leaves a portion of shallow trench isolation region 16 extending outwardly from layer of semiconductor material 14 and surrounding and isolating active regions 12. This portion of shallow trench isolation region 16 is used to align the gate segments and confine the gate segments to active regions 12. Next, gate oxide layer 22 is formed in active regions 12 by, for example, growing a layer of silicon dioxide outwardly from layer of semiconductor material 14. Conductive layer 24 is formed outwardly from gate oxide layer 22 and covers active regions 12 and shallow trench isolation region 16. Conductive layer 24 typically comprises poly-silicon that is deposited using a chemical vapor deposition technique. A chemical/mechanical polish method is used to planarize the poly-silicon of conductive layer 24 to the level of the shallow trench isolation region 16, leaving poly-silicon in active regions 12 as shown in FIG. 1C. Referring to FIG. 1D, the method next defines the position of the gate segments. Photoresist layer 26 is deposited outwardly from shallow trench isolation region 16 and conductive layer 24. Photoresist layer 26 is exposed to produce, for example, a conventional word line pattern as shown. 
In a conventional application, the word line pattern is used to simultaneously form the gates of the access devices and the interconnections between gates of adjacent devices in the memory array. In this embodiment, a portion of the remaining photoresist layer 26 passes over shallow trench isolation region 16 as indicated at 28, for example. Referring to FIG. 1E, portions of conductive layer 24 are selectively removed to form two gate segments 30 in each active region 12. Photoresist layer 26 and exposed portions of gate oxide layer 22 are removed as shown in FIG. 1F. Thus, the method produces gate segments 30 that are self-aligned by shallow trench isolation region 16. Once gate segments 30 are formed, source/drain regions 32 are formed by, for example, ion implantation in layer of semiconductor material 14. Formation of Sub-Lithographic Word Lines FIGS. 2A and 2B are cross-sectional views of an integrated circuit, indicated generally at 40, that illustrate an embodiment of a method for forming sub-lithographic word lines. These word lines can be used, for example, to interconnect gate segments 30 of FIG. 1F to form an array for a memory device. Such sub-lithographic word lines have widths that are less than the minimum feature size of the process, and thus allow such memory arrays to be constructed with folded-digit-line architecture without the word lines being electrically shorted together. Of course, such reduced-area memory arrays can be constructed with a shared-digit-line architecture using conventional process technology to form conventional word lines. Thus, the techniques shown in FIGS. 2A and 2B are not required to form a shared-digit-line architecture. Referring to FIG. 2A, integrated circuit 40 includes gate segment 42 that is formed, for example, according to the technique described above with respect to FIGS. 1A through 1F. Gate segment 42 is capped with, for example, nitride pad layer 45. 
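As an aside, the "sub-lithographic" width of an edge-defined conductor follows from the sidewall-spacer geometry rather than from lithography: the remaining sidewall conductor is roughly as wide as the conformally deposited film is thick. The sketch below illustrates this relationship with hypothetical dimensions; the numbers and names are not taken from the patent.

```python
# Illustrative geometry for spacer- (edge-) defined conductors.
# All dimensions here are hypothetical, chosen only to show the relationship.

def spacer_line_width(film_thickness_nm: float, etch_bias_nm: float = 0.0) -> float:
    """Approximate width of a sidewall-spacer conductor.

    After conformal deposition of a film of thickness t into a groove and an
    anisotropic etch-back, the conductor left on each sidewall is roughly t
    wide (minus any etch bias), independent of the lithographic minimum F.
    """
    return film_thickness_nm - etch_bias_nm

F = 180.0  # hypothetical minimum lithographic dimension, nm
t = 60.0   # hypothetical deposited poly-silicon film thickness, nm

w = spacer_line_width(t)
assert w < F  # the resulting word line is sub-lithographic

# Two sidewall conductors form per lithographic groove (one on each exposed
# mandrel sidewall, like word lines 60 and 62 above), so the achievable
# word-line pitch can fall below what lithography alone would allow.
lines_per_groove = 2
```

Because the width is set by deposition thickness, it can be tuned well below the minimum feature size without any change to the mask set.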
Insulative sidewalls 44 are formed adjacent to the exposed vertical sidewalls of gate segment 42 using conventional techniques. For example, a layer of silicon dioxide is deposited using a chemical vapor deposition (CVD) process on exposed surfaces. The layer is then anisotropically etched to form insulative sidewalls 44. Conductive material is deposited adjacent to the sidewalls to form contacts 46 for source/drain regions 48 and is conventionally polished back to the exposed surface of trench isolation regions 50 and pad layer 45. Contacts 46 are further etched back so as to become recessed with respect to the surface of pad layer 45. Next, insulator layer 52, such as an oxide, is conventionally grown or deposited and then polished back to the surface of trench isolation region 50 and pad layer 45 to give the structure shown in FIG. 2A. Referring to FIG. 2B, mandrel 54 is conventionally formed on layer 52 and pad layer 45. In one embodiment, mandrel 54 is formed from intrinsic (undoped), poly-silicon. Mandrel 54 is then polished to smooth its upper surface. Next, groove 56 is etched in mandrel 54 to expose sidewalls 58a and 58b in mandrel 54. Sidewall 58a is over trench isolation region 50, and sidewall 58b is over gate segment 42. An anisotropic etch removes the exposed portion of pad layer 45 and thus exposes a region of gate 42. A conductive material such as poly-silicon is formed in groove 56. The conductive material is anisotropically etched to leave conductive sidewalls that become sub-lithographic word lines 60 and 62. In one embodiment, mandrel 54 is then removed. In another embodiment, mandrel 54 and word lines 60 and 62 are polished or etched to make the shape of word lines 60 and 62 more rectangular, and to center word line 60 over gate 42. Formation of Stacked Capacitors FIGS. 
3 through 17 are cross-sectional and plan views of an integrated circuit that show one embodiment of a method for forming stacked capacitors in a reduced-area memory array wherein the memory array has sub-lithographic, edge-defined word lines formed, for example, as described above with respect to FIGS. 2A and 2B. FIG. 3 illustrates one embodiment of an integrated circuit in which memory cells share a bit line. This embodiment is shown by way of example and is not a limitation of the present invention. Alternate embodiments exist, such as a conventional memory cell having its own bit line, and are within the scope of the present invention. Referring to FIG. 3, silicon substrate 134 provides a strong base for the semiconductor layers of integrated circuit 133. The term substrate refers to the base semiconductor layer or layers or structures of an integrated circuit which includes active or operable portions of semiconductor devices. In addition, shallow trench isolation 120 provides support and isolation between the devices in integrated circuit 133. N+ diffusion regions 135, 136, and 137 are formed in substrate 134 by introducing any suitable N-type dopant into substrate 134. The N-type dopant, such as phosphorus, is typically introduced by diffusion or ion implantation. Device gates 138 and 139 typically comprise poly-silicon and are separated from substrate 134 by thin layers of oxide 150 and 151 respectively in order to limit the gate current to a negligible amount. In this configuration, N+ diffusion region 135, device gate 138, substrate 134, and N+ diffusion region 137 define a first transistor. Similarly, N+ diffusion region 136, device gate 139, silicon substrate 134, and N+ diffusion region 137 define a second transistor. The transistors are shown as exemplary only; in an alternate embodiment, any suitable semiconductor device may be formed in substrate 134 without departing from the scope of the present invention. 
The center N+ diffusion region 137 acts as a common source or drain while the N+ diffusion regions 135 and 136 act as independent sources or drains depending upon the voltage applied to the regions. In one embodiment, the transistors are essentially enhanced n-channel MOS transistors. Alternatively, any transistor configuration suitable for memory cell access may readily be used. Integrated circuit 133 comprises contact regions which can be any appropriate conductive material such as poly-silicon. These contact regions are coupled to the N+ diffusion regions. Contact region 140 is coupled to N+ diffusion region 137 while contact regions 141 and 142 are coupled to the N+ diffusion regions 135 and 136 respectively. The contact insulating layers 145 comprise a conventional thin film insulator such as silicon nitride (Si3N4), and insulate contact regions 140, 141, and 142. Integrated circuit 133 comprises conductors 161 and 162 which extend normal to the substrate 134 and are formed outwardly from device gates 138 and 139. Conductors 161 and 162 are sub-lithographic, edge-defined word lines of a suitable conductor such as poly-silicon. In another embodiment, the edge-defined word lines comprise any suitable conductive material such as a conventional metal. Sub-lithographic, edge-defined word lines 161 and 162 are formed outwardly from device gates 138 and 139 using semiconductor fabrication techniques as are known in the art. "Passing" conductors 170 form a second pair of conductors which provide a conductive path to adjacent memory cells in integrated circuit 133. FIG. 4A, which is a top view of integrated circuit 133, illustrates the interconnection of the memory cells of integrated circuit 133. Specifically, FIG. 4A illustrates how conductors 161 and 162 are coupled with device gates 138 and 139 respectively within memory cell 250. FIG. 
4A also illustrates how passing conductors 170 pass through memory cell 250 and are coupled to device gates 251 and 252 of adjacent memory cells 256 and 257. Note that memory cells 256 and 257 are only partially shown. Referring again to FIG. 3, conductors 161 and 162 are capped with insulator 180 and are lined with insulator 190. Insulator 195 insulates device gates 138 and 139. Any suitable semiconductor insulator such as SiO2 may be used for insulators 180, 190, or 195. In order to form stacked capacitors outwardly from substrate 134 of integrated circuit 133, a material with a high degree of etch selectivity is used. The suitable material, such as intrinsic poly-silicon 200, is deposited between the conductors 161 and 162 and passing conductors 170 by a conventional process such as chemical-vapor deposition (CVD). As is well known in the art, CVD is the process by which gases or vapors are chemically reacted, leading to the formation of a solid on a substrate. The high degree of etch selectivity of a material such as intrinsic poly-silicon is advantageous because it allows intricate etching without disturbing the surrounding semiconductor regions. Next, a photoresist and a mask are used to reveal the plurality of semiconductor memory cells of substrate 134. FIG. 4B illustrates the layout of the mask. First, a photoresist is applied to the entire integrated circuit 133. Masked areas 260 illustrate the areas of photoresist 270 which are covered by the mask and therefore are not hardened when exposed to ultraviolet light. After exposing and developing the resist, the intrinsic poly-silicon 200 between conductors 161 and 162 and passing conductors 170 is removed by selectively etching the material. As illustrated in FIG. 5, three stud holes 300 are created in integrated circuit 133. Stud holes 300 extend into integrated circuit 133 toward substrate 134 and ultimately reveal contact insulating layers 145.
The portions of the intrinsic poly-silicon 200 which are covered by the mask are not etched. With the mask still present on the surface of the wafer, the exposed contact insulating layers 145 are etched. This step exposes contact regions 140, 141, and 142. FIG. 6A illustrates how contact insulating layers 145 are etched and how small portions of contact insulating layers 145 remain between insulator 190 and contact regions 141 and 142. At this point, photoresist 270 is removed. As illustrated in FIG. 6B, which is a top view of integrated circuit 133, after exposing contact regions 141 and 142, an insulator such as SiO2 is CVD deposited on the walls of the openings between conductors 161 and 162 and passing conductors 170. This step creates insulating sleeve 400, which lines insulator 190 and intrinsic poly-silicon 200 but also initially covers the recently exposed surfaces of contact regions 140, 141, and 142. The additional layer of insulation is advantageous because it reduces the size of the three stud holes 300 and reduces parasitic capacitances of the conductive connections between the active regions of substrate 134 and the stacked capacitors which will be formed. This deposition is followed by a directional (anisotropic) etch, such as a dry reactive ion etch (RIE), which removes the recently deposited oxide from all horizontal surfaces but leaves it on the vertical surfaces. This removes the insulator from the recently exposed contact regions 140, 141, and 142. It is necessary to correctly time the etch so that it does not inadvertently etch the horizontal oxide layers 195 which insulate the base of conductors 161 and 162 and device gates 138 and 139. As a result of the directional etch, the three stud holes 300 are lined with an insulating sleeve 400. As illustrated in FIG. 7A, the next step in the process is to fill stud holes 300 with a conductive material such as doped poly-silicon 500 by conventional chemical-vapor deposition.
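The size reduction attributed to the insulating sleeve follows from simple geometry: a conformal film of thickness t deposited on the walls of a hole coats both opposing sidewalls, narrowing the opening by 2t. A minimal sketch of this relationship — the dimensions below are illustrative assumptions, not values taken from the specification:

```python
def hole_width_after_liner(width_nm: float, liner_nm: float) -> float:
    """Opening remaining in a stud hole after a conformal insulator
    deposition; the film grows on both opposing walls, so the
    opening shrinks by twice the liner thickness."""
    remaining = width_nm - 2.0 * liner_nm
    if remaining <= 0:
        # A liner thicker than half the opening pinches the hole shut.
        raise ValueError("liner pinches off the hole")
    return remaining

# Illustrative numbers only: a 100 nm opening with a 10 nm SiO2 sleeve.
print(hole_width_after_liner(100.0, 10.0))  # 80.0
```

This also shows why the etch timing matters: the sleeve must survive on the vertical walls while the same film thickness is cleared from the horizontal contact surfaces.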
Doped poly-silicon 500 is planarized by chemical mechanical polishing (CMP) so that it is flush with oxide caps 180. The doped poly-silicon 500 provides conductive paths to contact regions 140, 141, and 142. In this manner, the conductive paths formed by doped poly-silicon 500 are bounded by conductors 161 and 162 and passing conductors 170. Next, as illustrated in FIG. 7B, the remaining portions of intrinsic poly-silicon 200, which were protected by photoresist 270 during the earlier etch, are selectively etched. Insulator 550, which may be any conventional insulator such as SiO2, is deposited on the entire wafer to fill the void regions in the wafer where intrinsic poly-silicon 200 was removed. Insulator 550 is then planarized by a conventional process so that the insulator is planar with oxide caps 180 and doped poly-silicon 500. The resulting formation is shown in FIG. 7B and is virtually identical to FIG. 7A with the exception that intrinsic poly-silicon 200 has been replaced with oxide filler 550. At this point in the fabrication of the stacked capacitors, the process has effectively provided conductive paths, adjacent to the sub-lithographic, edge-defined word lines, to the active regions of the substrate. The remaining steps in the process form the stacked capacitors. As illustrated in FIG. 8, a thick layer of intrinsic poly-silicon 600 is CVD deposited on the entire wafer. This layer should be at least 0.5 microns thick. Next, a thin mask 650 is created by depositing a conventional thin film insulator such as Si3N4 on the thick layer of intrinsic poly-silicon 600. The thin mask 650 should be approximately 500 angstroms thick. Next, a resist is applied to the wafer and is used to define openings over the doped poly-silicon 500. As illustrated in FIG. 9, three holes are etched in thin mask layer 650. The center hole 700 will ultimately be used for contacting the center region of doped poly-silicon 500 and the outer holes 705 will be used to form stacked capacitors.
Therefore, the sizes and shapes of outer holes 705 should be designed to maximize capacitor size and minimize contact size. One advantageous feature of thin mask layer 650 is that it functions as a single mask image during the subsequent forming of the stacked capacitors and the bit line contact. Specifically, thin mask layer 650 allows separate etching steps for the bit line contact and for the stacked capacitors, yet the formations are inherently self-aligned because of thin mask layer 650. This feature allows different etching techniques to be used for the stacked capacitors and the bit line contact yet maintains their alignment. Referring to FIG. 10, after etching the thin mask layer 650, the resist is stripped and a new resist and mask are applied which expose only center hole 700. Once the new mask is applied, a bit line contact hole 810 is created by anisotropically etching the exposed area of the thick layer of intrinsic poly-silicon 600 to reveal the doped poly-silicon 500 between the two conductors 161 and 162. After etching the thick layer of intrinsic poly-silicon 600, an insulator such as SiO2 is deposited and RIE etched to leave a bit line insulating liner 800 on the exposed wall of the intrinsic poly-silicon 600. Next, the resist is stripped to expose outer holes 705 of thin film insulator 650. As illustrated in FIG. 11, intrinsic poly-silicon 600 is etched to create two node areas 900. During this step, the thin film insulator 650 acts as a mask so a new mask and resist need not be applied. It is preferable that the etch have an isotropic component such that the etch is slightly nondirectional. The isotropic component effectively enlarges the size of node areas 900 relative to outer holes 705 in thin film insulator 650. After etching, the thin mask layer 650 is removed. Referring to FIG. 12, a conductive material such as N+ poly-silicon is deposited on integrated circuit 133.
Since bit line contact hole 810 is smaller than node areas 900, partly due to bit line insulating liner 800 and partly due to the isotropic component, the N+ poly-silicon completely fills the first bit line contact hole 810 and forms a liner in the newly created node areas 900. Filling the first bit line contact hole 810 forms a bit line contact stud 1010. The layer of N+ poly-silicon which is deposited in the node areas 900 forms two storage plates 1001 and 1002; therefore, the thickness of the N+ poly-silicon need only be enough to guarantee filling the first bit line contact hole 810. After creating storage plates 1001 and 1002 and bit line contact stud 1010, the N+ poly-silicon is CMP polished in order to guarantee that storage plates 1001 and 1002 are separated from bit line contact stud 1010. As illustrated in FIG. 13, the remaining intrinsic poly-silicon 600 is selectively etched after the N+ poly-silicon is planarized. This step produces openings in the isolation regions of the semiconductor wafer, exposing the oxide filler 550. From this point, the inventive method follows conventional steps to form stacked capacitors outwardly from storage plates 1001 and 1002. Referring to FIG. 14, dielectric material 1200, which is any suitable dielectric material such as tantalum pentoxide, is deposited. In an alternate embodiment, any suitable dielectric material may be used. Next, the final plate conductor 1210 is deposited on the dielectric material 1200. In one embodiment, platinum is used as the final plate conductor 1210. In another embodiment, any suitable metallic conductor may be used. As illustrated in FIG. 15, planarizable insulator 1300, which is any suitable insulator such as SiO2, is deposited after the necessary capacitor materials are formed. The insulator 1300 is planarized such that the surface is sufficiently smooth. FIG.
16 illustrates a second bit line contact hole 1400 which is formed by applying a conventional contact mask and etching through planarizable insulator 1300, final plate conductor 1210, and dielectric material 1200. In this manner, second bit line contact hole 1400 exposes bit line contact stud 1010. FIG. 17 illustrates the final configuration of the memory device. After forming second bit line contact hole 1400, a conformal insulator such as SiO2 is deposited to create a bit line contact insulating liner 1500. This deposition is followed by an anisotropic etch which removes the recently deposited oxide from the exposed surface of bit line contact stud 1010 but leaves the oxide on the other surfaces. Finally, a metal is deposited and patterned to form bit line metal 1510. As depicted in FIG. 17, the memory device comprises stacked capacitor C1 and stacked capacitor C2. The stacked capacitors C1 and C2 are accessed by transistors T1 and T2, respectively. Stacked capacitor C1 is coupled to transistor T1 by conductor 1521, which is adjacent to sub-lithographic, edge-defined word line 161. Conductor 1521 comprises contact region 141 and doped poly-silicon 1531. Similarly, stacked capacitor C2 is coupled to transistor T2 by conductor 1522, which is adjacent to sub-lithographic, edge-defined word line 162. Conductor 1522 comprises contact region 142 and doped poly-silicon 1532. Retrieving data stored in stacked capacitors C1 and C2 is accomplished by bit line 1520, which comprises doped poly-silicon 1530, contact region 140, bit line contact stud 1010, and bit line metal 1510. In an alternate embodiment, T1 and T2 may be any suitable semiconductor device formed in or outwardly from substrate 134. For example, in another embodiment, T1 and T2 may be diodes. Similarly, in another embodiment, stacked capacitors C1 and C2 may be any circuit element formed outwardly from word lines 161 and 162 which is suitable for coupling to the first semiconductor device.
For example, in an alternate embodiment, the circuit element may be a resistor, a diode, or a transistor.

Memory Device

FIG. 18 is a schematic diagram of a memory device, indicated generally at 2110. Memory device 2110 uses dual or folded digit lines to transfer data to and from memory cells via input/output (I/O) port 2112. Memory device 2110 includes word lines 2116, bit lines 2118, and bit complement lines 2120. A memory cell 2122 is coupled to each word line 2116 at the intersection with either a bit line 2118 or a bit complement line 2120. Sense amplifiers 2114 are coupled to a corresponding pair of bit line 2118 and bit complement line 2120. The operation of memory device 2110 is not tied to the folded digit line configuration shown in FIG. 18. Memory device 2110 may, alternatively, use an open digit line or other appropriate configuration for the array of memory cells that can be accessed through sense amplifiers 2114. Memory device 2110 further includes circuitry that selects a memory cell 2122 from memory device 2110 to receive input or provide output to an external device such as a microprocessor (not shown) at I/O port 2112. Address buffers 2124 receive an address at input port 2126 from the external device. Address buffers 2124 are coupled to row decoder 2128 and column decoder 2131. Column decoder 2131 includes input-output circuitry that is coupled to an external device at I/O port 2112. Row decoder 2128 is coupled to word lines 2116. Column decoder 2131 is coupled to bit lines 2118 and bit complement lines 2120. In operation, memory device 2110 receives an address of a selected cell at address buffers 2124. Address buffers 2124 identify a word line 2116 of a selected cell 2122 to row decoder 2128. Row decoder 2128 provides a voltage on word line 2116 to activate access transistors 2130 of each cell 2122 of the selected word line 2116. The charge on the capacitor 2132 is coupled to one of the bit lines 2118 or bit complement lines 2120.
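The access flow just described — address buffers, row decoder driving one word line, column decoder selecting one bit line — can be sketched as a toy model. The array size and the row/column address split below are illustrative assumptions, not details from the specification, and sensing and refresh are deliberately ignored:

```python
class ToyDram:
    """Toy model of the FIG. 18 access path: the row decoder drives
    one word line, coupling every cell in that row to its bit line;
    the column decoder then picks one bit line for I/O."""

    def __init__(self, rows: int = 8, cols: int = 8):
        self.rows, self.cols = rows, cols
        self.cells = [[0] * cols for _ in range(rows)]

    def _decode(self, addr: int):
        # Assumed split: low bits select the column, high bits the row.
        return (addr // self.cols) % self.rows, addr % self.cols

    def write(self, addr: int, bit: int) -> None:
        row, col = self._decode(addr)
        self.cells[row][col] = bit  # cell capacitor charged via bit line

    def read(self, addr: int) -> int:
        row, col = self._decode(addr)
        # A real sense amplifier would restore the level after the
        # destructive read; the toy model just returns the stored bit.
        return self.cells[row][col]

mem = ToyDram()
mem.write(13, 1)
print(mem.read(13))  # 1
```

The model only mirrors the decode flow; the analog behavior that follows (charge sharing and sensing) is described in the text.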
Sense amplifier 2114 senses a slight difference between the voltage on bit line 2118 and the voltage on bit complement line 2120 of the selected cell 2122 and drives bit line 2118 and bit complement line 2120 to the value of the power supply rails.

Conclusion

The present invention was described in terms of an integrated circuit having a plurality of memory cells comprising two storage capacitors and a shared bit line; however, the method and apparatus are applicable to memory cells which have one or more storage capacitors and edge-defined word lines. Furthermore, one embodiment has two enhancement-mode n-channel MOS transistors. One skilled in the art will recognize that other types of access transistors can readily be used without departing from the present invention. The illustrated embodiment coupled a stacked capacitor to an access transistor in an integrated circuit having sub-lithographic, edge-defined word lines. In alternate embodiments, any suitable circuit element may be coupled to any suitable semiconductor device without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims and equivalents thereof.
A nonplanar semiconductor device having a semiconductor body formed on an insulating layer of a substrate. The semiconductor body has a top surface opposite a bottom surface formed on the insulating layer and a pair of laterally opposite sidewalls wherein the distance between the laterally opposite sidewalls at the top surface is greater than at the bottom surface. A gate dielectric layer is formed on the top surface of the semiconductor body and on the sidewalls of the semiconductor body. A gate electrode is formed on the gate dielectric layer on the top surface and sidewalls of the semiconductor body. A pair of source/drain regions are formed in the semiconductor body on opposite sides of the gate electrode. |
IN THE CLAIMS We claim: 1. A semiconductor device comprising: a semiconductor body formed on an insulating layer of a substrate, said semiconductor body having a top surface opposite a bottom surface formed on said insulating layer and a pair of laterally opposite sidewalls wherein the distance between said laterally opposite sidewalls at said top surface is greater than at said bottom surface; a gate dielectric layer formed on said top surface of said semiconductor body and on said sidewalls of said semiconductor body; a gate electrode formed on said gate dielectric layer on said top surface and sidewalls of said semiconductor body; and a pair of source/drain regions formed in said semiconductor body on opposite sides of said gate electrode. 2. The semiconductor device of claim 1 wherein said distance between said sidewalls at the bottom surface of said semiconductor body is approximately 1/2 to 2/3 of the distance between said sidewalls at the top surface of said semiconductor body. 3. The semiconductor device of claim 1 wherein the distance between said sidewalls of said semiconductor body becomes smaller than at the top surface at approximately the mid portion of said semiconductor body. 4. The semiconductor device of claim 1 wherein the distance between said sidewalls is uniform at the top portion of said semiconductor body and becomes increasingly smaller towards the bottom portion of said semiconductor body. 5. The semiconductor device of claim 1 wherein the distance between said sidewalls at the bottom portion of said semiconductor body is made sufficiently small so as to improve the short channel effects of said semiconductor device. 6. The semiconductor device of claim 1 wherein the distance between said laterally opposite sidewalls at said top surface of said semiconductor body is approximately 20-30nm. 7.
The semiconductor device of claim 1 wherein the distance between said laterally opposite sidewalls near said bottom portion of said semiconductor body is approximately 10-15nm. 8. A semiconductor device comprising: a semiconductor body formed on an insulating layer of a substrate, said semiconductor body having a top surface opposite a bottom surface formed on said insulating layer, and a pair of laterally opposite sidewalls wherein said laterally opposite sidewalls have a facet such that the bottom portion of said semiconductor body is thinner than the top portion of said semiconductor body; a gate dielectric layer formed on said top surface of said semiconductor body and on said sidewalls of said semiconductor body; a gate electrode formed on said gate dielectric layer on said sidewalls of said semiconductor body and on said top surface of said semiconductor body; and a pair of source/drain regions formed in said semiconductor body on opposite sides of said gate electrode. 9. The semiconductor device of claim 8 wherein said semiconductor body comprises silicon. 10. The semiconductor device of claim 8 wherein the distance between said sidewalls near the bottom surface of said semiconductor body is approximately 50-66% of the distance between said sidewalls at the top of said semiconductor body. 11.
A method of forming a device comprising: forming a semiconductor body on an insulating layer of a substrate, said semiconductor body having a top surface opposite a bottom surface formed on said insulating layer and a pair of laterally opposite sidewalls wherein the distance between said laterally opposite sidewalls is less at the bottom surface of said semiconductor body than at the top surface of said semiconductor body; forming a gate dielectric layer on said top surface of said semiconductor body and on said sidewalls of said semiconductor body; forming a gate electrode on said gate dielectric layer on said top surface of said semiconductor body and adjacent to said gate dielectric layer on said sidewalls of said semiconductor body; and forming a pair of source/drain regions in said semiconductor body on opposite sides of said gate electrode. 12. The method of claim 11 wherein the width at the bottom of said semiconductor body is approximately 1/2 to 2/3 of the width at the top of said semiconductor body. 13. The method of claim 11 wherein said distance between said sidewalls is uniform at the top portion of said semiconductor body and becomes increasingly smaller near the bottom portion of said semiconductor body. 14. The method of claim 11 wherein the distance between said sidewalls of said semiconductor body at the top surface is between 20-30nm and wherein the distance between said laterally opposite sidewalls near the bottom is between 10-15nm. 15.
A method of forming a transistor comprising: providing a substrate having an oxide insulating layer formed thereon and a semiconductor thin film formed on the oxide insulating layer; etching said semiconductor film to form a semiconductor body having a top surface opposite a bottom surface on said oxide insulating film and a pair of laterally opposite sidewalls; etching said semiconductor body to reduce the distance between laterally opposite sidewalls near the bottom of said semiconductor body relative to the top of said semiconductor body; forming a gate dielectric layer on the top surface and sidewalls of said semiconductor body; forming a gate electrode on said gate dielectric layer on the top of said semiconductor body and adjacent to the gate dielectric layer on the sidewalls of said semiconductor body; and forming a pair of source/drain regions in said semiconductor body on opposite sides of said gate electrode. 16. The method of claim 15 wherein said etching of said semiconductor film stops on said oxide insulating layer. 17. The method of claim 15 wherein said semiconductor body comprises silicon and wherein said etching of said semiconductor film is a dry etching process which utilizes a chemistry comprising HBr/O2. 18. The method of claim 15 wherein the etching of said semiconductor body reduces the distance between the laterally opposite sidewalls near the bottom portion of said semiconductor body without significantly etching the top portion of said semiconductor body. 19. The method of claim 18 wherein said semiconductor body is silicon and is etched by a dry etching process utilizing a chemistry comprising HBr/O2. 20. The method of claim 18 wherein said etching of said semiconductor body to reduce the thickness of the bottom portion utilizes an RF bias between 50-70 watts. 21.
The method of claim 18 wherein the etching process utilized to reduce the distance between the sidewalls on the bottom portion of said semiconductor body utilizes a total HBr/O2 gas flow between 150-180 mL/min. 22. The method of claim 15 further comprising, after etching said semiconductor body to reduce the distance between laterally opposite sidewalls of said semiconductor body near the bottom portion, exposing said semiconductor body to a wet chemistry comprising NH4OH. 23. The method of claim 15 wherein said etching of said semiconductor film to form said body utilizes a first process gas chemistry and a first RF bias and said etching of said semiconductor body to reduce the thickness of said bottom portion utilizes a second process gas and a second RF bias wherein said second RF bias is less than said first RF bias. 24. The method of claim 23 wherein said first process gas is the same as said second process gas. 25. The method of claim 24 wherein said first and second process gas comprises HBr/Ar/O2.
NONPLANAR DEVICE WITH THINNED LOWER BODY PORTION AND METHOD OF FABRICATION

BACKGROUND OF THE INVENTION

1. FIELD OF THE INVENTION

The present invention relates to the field of semiconductor devices and more particularly to a nonplanar tri-gate transistor having a thinned lower body portion and method of fabrication.

2. DISCUSSION OF RELATED ART

In order to increase the performance of modern integrated circuits, such as microprocessors, silicon on insulator (SOI) transistors have been proposed. Silicon on insulator (SOI) transistors have an advantage in that they can be operated in a fully depleted manner. Fully depleted transistors have an advantage of ideal subthreshold gradients for optimized ON current/OFF current ratios. An example of a proposed SOI transistor which can be operated in a fully depleted manner is a tri-gate transistor 100, such as illustrated in Figure 1. Tri-gate transistor 100 includes a silicon body 104 formed on an insulating substrate 102 having a buried oxide layer 103 formed on a monocrystalline silicon substrate 105. A gate dielectric layer 106 is formed on the top and sidewalls of the silicon body 104 as shown in Figure 1. A gate electrode 108 is formed on the gate dielectric layer and surrounds the body 104 on three sides, essentially providing a transistor 100 having three gate electrodes (G1, G2, G3), one on each of the sidewalls of the silicon body 104 and one on the top surface of the silicon body 104. A source region 110 and a drain region 112 are formed in the silicon body 104 on opposite sides of the gate electrode 108 as shown in Figure 1. An advantage of the tri-gate transistor 100 is that it exhibits good short channel effects (SCE). One reason tri-gate transistor 100 achieves good short channel effects is that the nonplanarity of the device places the gate electrode 108 in such a way as to surround the active channel region. That is, in the tri-gate device, the gate electrode 108 is in contact with three sides of the channel region.
Unfortunately, the fourth side, the bottom part of the channel, is isolated from the gate electrode by the buried oxide layer 103 and thus is not under close gate control.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is an illustration of a nonplanar or tri-gate device.

Figures 2A and 2B illustrate a tri-gate or nonplanar device with a thinned lower body portion in accordance with the present invention.

Figure 3A illustrates a nonplanar device having multiple thinned lower body portions.

Figure 3B is an illustration of a nonplanar device having a thinned lower body portion and including sidewall spacers, source/drain extensions and silicided source/drain regions.

Figures 4A-4H illustrate a method of forming a nonplanar device with a thinned lower body portion in accordance with an embodiment of the present invention.

Figures 5A-5D illustrate embodiments of the present invention in which the profile etch thins the lower body portion.

DETAILED DESCRIPTION OF THE PRESENT INVENTION

The present invention is a novel nonplanar device with a thinned lower body portion and a method of fabrication. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. In other instances, well known semiconductor processes and manufacturing techniques have not been described in particular detail in order to not unnecessarily obscure the present invention. Embodiments of the present invention include a nonplanar or tri-gate transistor having a semiconductor body which is wrapped around on three sides by a gate dielectric layer and a gate electrode. In embodiments of the present invention, the bottom portion of the semiconductor body is made thinner than the top portion of the semiconductor body. Making the bottom portion of the semiconductor body thinner than the top portion increases the gate control over the bottom portion of the body, resulting in better short channel effects.
In an embodiment of the present invention, a semiconductor film is etched into a semiconductor body utilizing a dry etching process which utilizes a first process gas chemistry and a first RF bias. After forming the semiconductor body, the lower portion of the body is thinned utilizing the same etch chemistry and equipment but utilizing a lower RF bias in order to inwardly taper or facet the lower body portion. Figures 2A and 2B illustrate a nonplanar or tri-gate device 200 having a semiconductor body with a thinned lower body portion. Figure 2A is an overhead/side view of transistor 200 while Figure 2B is an illustration of a cross-sectional view taken through the gate electrode. Transistor 200 is formed on a substrate 202 and includes a semiconductor body or fin 204. A gate dielectric layer 206 is formed on the top surface 234 and sidewalls 230 and 232 of a semiconductor body 204. A gate electrode 208 is formed on the gate dielectric layer 206 and surrounds the semiconductor body or fin on three sides. A source region 210 and a drain region 212 are formed in the semiconductor body on opposite sides of the gate electrode 208 as shown in Figure 2A. As is readily apparent from Figures 2A and 2B, the semiconductor body 204 has a bottom portion 222 which is thinner than the top portion 224. That is, the distance between the sidewalls 230 and 232 is greater at the top surface 234 than at the bottom surface 236. In an embodiment of the present invention, sidewalls 230 and 232 of the top portion 224 are substantially vertical and are spaced a uniform distance apart while the sidewalls 230 and 232 of the bottom portion 222 are faceted or inwardly tapered to reduce the distance between the sidewalls 230 and 232 in the bottom portion. In an embodiment of the present invention, the distance between the sidewalls 230 and 232 near the bottom surface is between 1/2 to 2/3 the distance between the sidewalls 230 and 232 near the top surface 234.
In an embodiment of the present invention, the sidewalls 230 and 232 begin to taper inwardly at approximately the midpoint of the height 238 of the semiconductor body 204 (i.e., sidewalls start tapering inwardly at the midpoint between the top surface 234 and bottom surface 236). In an embodiment of the present invention, the distance between the sidewalls 230 and 232 at the top surface 234 is between 20-30 nanometers while the distance between the sidewalls 230 and 232 near the bottom surface 236 is between 10-15 nanometers. In an embodiment of the present invention, the bottom portion 222 of the semiconductor body 204 is made sufficiently thin so that the gate control of the bottom portion is made similar to the gate control of the top portion. In an embodiment of the present invention, the bottom portion 222 of the semiconductor body 204 is made sufficiently thin relative to the top portion to improve the short channel effects of transistor 200. Additionally, as illustrated in Figures 5A-5D, other semiconductor body profiles or shapes may be utilized to improve the short channel effects (SCE) of the tri-gate or nonplanar transistor 200. For example, as illustrated in Figure 5A, the semiconductor body 204 can have a pair of sidewalls 230 and 232 which continually taper inward from the top surface 234 to the bottom surface 236. Additionally, in an embodiment of the present invention, as illustrated in Figure 5B, the semiconductor body 204 can have sidewalls 230 and 232 which continually taper inward from the top surface to the bottom surface and reach the bottom surface 236 at a point or substantially at point 502.
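The quoted numbers are mutually consistent: applying the 1/2 taper ratio to the 20-30 nm top width maps exactly onto the quoted 10-15 nm bottom range, while the 2/3 ratio describes a wider-bottomed variant. A quick arithmetic check (the ratios and widths come from the text; the helper function itself is hypothetical):

```python
def bottom_width(top_nm: float, ratio: float) -> float:
    """Bottom-of-body width implied by a taper ratio from the text."""
    return top_nm * ratio

# The 1/2 ratio reproduces the quoted 10-15 nm bottom range exactly.
assert bottom_width(20.0, 0.5) == 10.0
assert bottom_width(30.0, 0.5) == 15.0
# The 2/3 upper ratio gives a wider bottom at the same top widths.
print(round(bottom_width(30.0, 2 / 3), 3))  # 20.0
```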
In yet another embodiment of the present invention, as illustrated in Figure 5C, the semiconductor body 204 can have a pair of sidewalls 230 and 232 which include an upper vertical portion 510 separated by a uniform distance, a middle inwardly tapered portion 512 and a lower portion 514 of vertical sidewalls separated by a second distance which is less than the distance separating the top portion sidewalls 510. In yet another embodiment of the present invention, the semiconductor body can have an upper portion 224 where the sidewalls 230 and 232 are faceted or tapered inwardly and a bottom portion 222 where the sidewalls 230 and 232 are vertical or substantially vertical. In each of the examples illustrated in Figures 5A-5D, the distance between the sidewalls 230 and 232 of semiconductor body 204 at the top surface is greater than the distance between the sidewalls at the bottom surface. In this way, the gate electrode 208 can have better control of the semiconductor body at the bottom surface and thereby improve the short channel effects of the device. In an embodiment of the present invention, the tri-gate transistor 200 is formed on an insulating substrate 202 which includes a lower monocrystalline silicon substrate 250 upon which is formed an insulating layer 252, such as a silicon dioxide film. In an embodiment of the present invention, insulating layer 252 is a buried oxide layer of an SOI substrate. The tri-gate transistor 200, however, can be formed on any well known insulating substrate, such as substrates formed from silicon dioxide, nitrides, oxides, and sapphire. Semiconductor body 204 is formed on insulating layer 252 of insulating substrate 202. Semiconductor body 204 can be formed of any well known material, such as but not limited to silicon (Si), germanium (Ge), silicon germanium (SixGey), gallium arsenide (GaAs), InSb, GaP and GaSb.
Semiconductor body 204 can be formed of any well known material which can be reversibly altered from an insulating state to a conductive state by applying external electrical controls. Semiconductor body 204 is ideally a single crystalline film when the best electrical performance of transistor 200 is desired. For example, semiconductor body 204 is a single crystalline film when transistor 200 is used in higher performance applications, such as high density circuits, such as a microprocessor. Semiconductor body 204, however, can be a polycrystalline film when transistor 200 is used in applications requiring less stringent performance, such as liquid crystal displays. Insulating layer 252 isolates semiconductor body 204 from the monocrystalline silicon substrate 250. In an embodiment of the present invention, semiconductor body 204 is a single crystalline silicon film. Gate dielectric layer 206 is formed on and around three sides of semiconductor body 204 as shown in Figures 2A and 2B. Gate dielectric layer 206 is formed on or adjacent to sidewall 230, on the top surface 234 of body 204 and on or adjacent to sidewall 232 of body 204 as shown in Figures 2A and 2B. Gate dielectric layer 206 can be any well known gate dielectric layer. In an embodiment of the present invention, the gate dielectric layer is a silicon dioxide (SiO2), silicon oxynitride (SiOxNy) or a silicon nitride (Si3N4) dielectric layer. In an embodiment of the present invention, the gate dielectric layer 206 is a silicon oxynitride film formed to a thickness between 5-20A. In an embodiment of the present invention, gate dielectric layer 206 is a high k gate dielectric layer, such as a metal oxide dielectric, such as but not limited to tantalum pentaoxide (Ta2O5), titanium oxide (TiO2) and hafnium oxide (HfO2).
Gate dielectric layer 206 can be other types of high k dielectric layers, such as but not limited to PZT and BST. Gate electrode 208 is formed on and around gate dielectric layer 206 as shown in Figures 2A and 2B. Gate electrode 208 is formed on or adjacent to gate dielectric layer 206 formed on sidewall 230 of semiconductor body 204, is formed on gate dielectric layer 206 formed on the top surface 234 of semiconductor body 204, and is formed adjacent to or on gate dielectric layer 206 formed on sidewall 232 of semiconductor body 204. Gate electrode 208 has a pair of laterally opposite sidewalls 260 and 262 separated by a distance which defines the gate length (Lg) 264 of transistor 200. In an embodiment of the present invention, laterally opposite sidewalls 260 and 262 of gate electrode 208 run in a direction perpendicular to sidewalls 230 and 232 of semiconductor body 204. Gate electrode 208 can be formed of any suitable gate electrode material. In an embodiment of the present invention, gate electrode 208 comprises a polycrystalline silicon film doped to a concentration density between 1x10<19> atoms/cm<3> to 1x10<20> atoms/cm<3>. In an embodiment of the present invention, the gate electrode can be a metal gate electrode, such as but not limited to tungsten, tantalum, titanium and their nitrides. In an embodiment of the present invention, the gate electrode is formed from a material having a midgap work function between 4.5 to 4.8 eV. It is to be appreciated that gate electrode 208 need not necessarily be a single material and can be a composite stack of thin films, such as but not limited to a polycrystalline silicon/metal electrode or a metal/polycrystalline silicon electrode. Transistor 200 has a source region 210 and a drain region 212. Source region 210 and drain region 212 are formed in semiconductor body 204 on opposite sides of gate electrode 208 as shown in Figure 2A.
Source region 210 and drain region 212 are formed to an n type conductivity when forming an NMOS transistor and are formed to a p type conductivity when forming a PMOS device. In an embodiment of the present invention, source region 210 and drain region 212 have a doping concentration between 1x10<19> atoms/cm<3> to 1x10<21> atoms/cm<3>. Source region 210 and drain region 212 can be formed of a uniform concentration or can include subregions of different concentrations or dopant profiles, such as tip regions (e.g., source/drain extensions) and contact regions. In an embodiment of the present invention, when transistor 200 is a symmetrical transistor, source region 210 and drain region 212 have the same doping concentration and profile. In an embodiment of the present invention, when transistor 200 is formed as an asymmetrical transistor, the doping concentration and profile of the source region 210 and drain region 212 may vary in order to obtain particular electrical characteristics as is well known in the art. Source region 210 and drain region 212 can be collectively referred to as a pair of source/drain regions. The portion of semiconductor body 204 located between source region 210 and drain region 212 defines the channel region 270 of transistor 200. The channel region 270 can also be defined as the area of the semiconductor body 204 surrounded by the gate electrode 208. At times, however, the source/drain regions may extend slightly beneath the gate electrode through, for example, diffusion to define a channel region slightly smaller than the gate electrode length (Lg). In an embodiment of the present invention, channel region 270 is intrinsic or undoped monocrystalline silicon. In an embodiment of the present invention, channel region 270 is doped monocrystalline silicon. When channel region 270 is doped, it is typically doped to a conductivity level of between 1x10<16> to 1x10<19> atoms/cm<3>.
In an embodiment of the present invention, when the channel region is doped, it is typically doped to the opposite conductivity type of the source region 210 and the drain region 212. For example, when the source and drain regions are n type conductivity the channel region would be doped to p type conductivity. Similarly, when the source and drain regions are p type conductivity the channel region would be n type conductivity. In this manner a tri-gate transistor 200 can be formed into either an NMOS transistor or a PMOS transistor, respectively. Channel region 270 can be uniformly doped or can be doped non-uniformly or with differing concentrations to provide particular electrical and performance characteristics. For example, channel region 270 can include well-known "halo" regions, if desired. By providing a gate dielectric and a gate electrode which surround the semiconductor body on three sides, the tri-gate transistor is characterized by having three channels and three gates: one gate and channel (G1) which extends between the source and drain regions on side 230 of silicon body 204, a second gate and channel (G2) which extends between the source and drain regions on the top surface of silicon body 204, and a third gate and channel (G3) which extends between the source and drain regions on the sidewall of silicon body 204. The gate "width" (Gw) of transistor 200 is the sum of the widths of the three channel regions. That is, the gate width of transistor 200 is equal to the length of sidewall 230 of silicon body 204, plus the length of the top surface 234 of silicon body 204, plus the length of sidewall 232 of silicon body 204.
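The gate-width relation just stated lends itself to a quick numeric check. The sketch below is illustrative only (the function names and sample dimensions are assumptions, not taken from this disclosure): it sums the two sidewall lengths and the top-surface length for one body, and multiplies by the number of bodies when several fins share a single gate electrode, as the disclosure describes for Figure 3A.

```python
def trigate_gate_width_nm(top_width_nm, sidewall_height_nm):
    # Gw = sidewall 230 + top surface 234 + sidewall 232
    return 2 * sidewall_height_nm + top_width_nm

def multi_body_gate_width_nm(num_bodies, top_width_nm, sidewall_height_nm):
    # With multiple semiconductor bodies under one gate electrode,
    # the per-body gate widths simply add.
    return num_bodies * trigate_gate_width_nm(top_width_nm, sidewall_height_nm)
```

For example, a body 25 nm wide at the top with 30 nm sidewalls contributes 2*30 + 25 = 85 nm of gate width, and four such bodies under one gate electrode give 340 nm.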
Larger "width" transistors can be obtained by using multiple devices coupled together (e.g., multiple silicon bodies 204 surrounded by a single gate electrode 208) as illustrated in Figure 3A.Because the channel region 270 is surrounded on three sides by gate electrode 208 and gate dielectric 206, transistor 200 can be operated in a fully depleted manner wherein when transistor 200 is turned "on" the channel region 270 fully depletes thereby providing the advantageous electrical characteristics and performance of a fully depleted transistor. That is, when transistor 200 is turned "ON" a depletion region is formed in channel region 270 along with an inversion layer at the surfaces of region 270 (i.e., an inversion layer is formed on the side surfaces and top surface of the semiconductor body). The inversion layer has the same conductivity" type as the source and drain regions and forms a conductive channel between the source and drain regions to allow current to flow therebetween. The depletion region depletes free carriers from beneath the inversion layer. The depletion region extends to the bottom of channel region 270, thus the transistor can be said to be a "fully depleted" transistor. In embodiments of the present invention, the lower portion 222 of the semiconductor body 204 has been thinned relative to the upper portion so that the gate electrode can better control the lower portion of the semiconductor body. By thinning the lower portion, the two sidewall gates G1 and G3 can more easily deplete free carriers from beneath the inversion layer formed on the sidewalls of the lower portion of the semiconductor body 204. By thinning the lower portion 222 of semiconductor body 204, the two gates G1 and G3 from the sidewall can control the channel region in a manner similar to the way the three gates G1, G2 and G3 control the channel in the upper portion 224 of the semiconductor body 204. 
Thinning the bottom part of the body or fin not only decreases the thickness of the semiconductor between the two gates, but also decreases the width of that part of the body which is in contact with the buried oxide. These effects combined decrease the short channel effects in the tri-gate device having a thinned lower body portion. Transistor 200 of the present invention can be said to be a nonplanar transistor because the inversion layer of channel 270 is formed in both the horizontal and vertical directions in semiconductor body 204. The semiconductor device of the present invention can also be considered to be a nonplanar device because the electric field from gate electrode 208 is applied from both the horizontal (G2) and vertical sides (G1 and G3). As stated above, the gate width of transistor 200 is equal to the sum of the three gate widths created from semiconductor body 204 of transistor 200. In order to fabricate transistors with larger gate widths, transistor 200 can include additional or multiple semiconductor bodies or fins 204 as illustrated in Figure 3A. Each semiconductor body or fin 204 has a gate dielectric layer 206 formed on its top surface and sidewalls as shown in Figure 3A. Gate electrode 208 is formed on and adjacent to each gate dielectric layer 206 on each semiconductor body 204. Each semiconductor body 204 includes a source region 210 and drain region 212 formed in the semiconductor body 204 on opposite sides of gate electrode 208 as shown in Figure 3A. In an embodiment of the present invention, each semiconductor body 204 is formed with the same width and height (thickness) as the other semiconductor bodies 204. In an embodiment of the present invention, the source region 210 and drain region 212 of each of the semiconductor bodies 204 are electrically coupled together by the semiconductor material used to form semiconductor body 204 to form a source landing pad 310 and a drain landing pad 312 as shown in Figure 3A.
Alternatively, the source regions 210 and drain regions 212 can be coupled together by higher levels of metallization (e.g., metal 1, metal 2, metal 3) used to electrically interconnect various transistors 200 together into functional circuits. The gate width of transistor 200 as shown in Figure 3A would be equal to the sum of the gate widths created by each of the semiconductor bodies 204. In this way, a nonplanar or tri-gate transistor 200 can be formed with any gate width desired. In an embodiment of the present invention, each of the semiconductor bodies 204 includes a bottom portion 222 which is thinner than the top portion 224 as described above. In an embodiment of the present invention, the source 210 and drain 212 can include a silicon or other semiconductor film 350 formed on and around semiconductor body 204 as shown in Figure 3B. For example, semiconductor film 350 can be a silicon film or a silicon alloy, such as silicon germanium (SixGey). In an embodiment of the present invention, the semiconductor film 350 is a single crystalline silicon film formed of the same conductivity type as the source region 210 and drain region 212. In an embodiment of the present invention, the semiconductor film can be a silicon alloy, such as silicon germanium wherein silicon comprises approximately 1-99 atomic percent of the alloy. The semiconductor film 350 need not necessarily be a single crystalline semiconductor film and in an embodiment can be a polycrystalline film. In an embodiment of the present invention, semiconductor film 350 is formed on the source region 210 and the drain region 212 of semiconductor body 204 to form "raised" source and drain regions. Semiconductor film 350 can be electrically isolated from gate electrode 208 by a pair of dielectric sidewall spacers 360, such as silicon nitride or silicon oxide or composites thereof.
Sidewall spacers 360 run along the laterally opposite sidewalls 260 and 262 of gate electrode 208 as shown in Figure 3B, thereby isolating the semiconductor film 350 from the gate electrode 208. In an embodiment of the present invention, sidewall spacers 360 have a thickness of between 20-200A. By adding a silicon or semiconductor film to the source and drain regions 210 and 212 of the semiconductor body and forming "raised" source and drain regions, the thickness of the source and drain regions is increased, thereby reducing the source/drain contact resistance of transistor 200 and improving its electrical characteristics and performance. In an embodiment of the present invention, a silicide film 370, such as but not limited to titanium silicide, nickel silicide or cobalt silicide, is formed on the source region 210 and drain region 212. In an embodiment of the present invention, silicide 370 is formed on silicon film 350 on semiconductor body 204 as shown in Figure 3B. Silicide film 370, however, can be formed directly on silicon body 204, if desired. Dielectric spacers 360 enable silicide 370 to be formed on semiconductor body 204 or silicon film 350 in a self-aligned process (i.e., a salicide process). In an embodiment of the present invention, if desired, the silicon film 350 and/or the silicide film 370 can also be formed on the top of gate electrode 208 when gate electrode 208 is a silicon or silicon germanium film. The formation of silicon film 350 and silicide film 370 on the gate electrode 208 reduces the contact resistance of the gate electrode, thereby improving the electrical performance of transistor 200. Figures 4A-4H illustrate a method of forming a nonplanar transistor having a thinned lower body portion. The fabrication of the transistor begins with substrate 402. A silicon or semiconductor film 408 is formed on substrate 402 as shown in Figure 4A.
In an embodiment of the present invention, the substrate 402 is an insulating substrate, such as shown in Figure 4A. In an embodiment of the present invention, insulating substrate 402 includes a lower monocrystalline silicon substrate 404 and a top insulating layer 406, such as a silicon dioxide film or silicon nitride film. Insulating layer 406 isolates semiconductor film 408 from substrate 404, and in an embodiment is formed to a thickness between 200-2000A. Insulating layer 406 is sometimes referred to as a "buried oxide" layer. When a silicon or semiconductor film 408 is formed on an insulating substrate 402, a silicon or semiconductor on insulator (SOI) substrate is created. Although semiconductor film 408 is ideally a silicon film, in other embodiments it can be other types of semiconductor films, such as but not limited to germanium (Ge), a silicon germanium alloy (SixGey), gallium arsenide (GaAs), InSb, GaP and GaSb. In an embodiment of the present invention, semiconductor film 408 is an intrinsic (i.e., undoped) silicon film. In other embodiments, semiconductor film 408 is doped to a p type or n type conductivity with a concentration level between 1x10<16>-1x10<19> atoms/cm<3>. Semiconductor film 408 can be in situ doped (i.e., doped while it is deposited) or doped after it is formed on substrate 402 by, for example, ion implantation. Doping after formation enables both PMOS and NMOS tri-gate devices to be fabricated easily on the same insulating substrate. The doping level of the semiconductor body at this point can be used to set the doping level of the channel region of the device. Semiconductor film 408 is formed to a thickness which is approximately equal to the height desired for the subsequently formed semiconductor body or bodies of the fabricated tri-gate transistor. In an embodiment of the present invention, semiconductor film 408 has a thickness or height 409 of less than 30 nanometers and ideally less than 20 nanometers.
In an embodiment of the present invention, semiconductor film 408 is formed to a thickness approximately equal to the gate "length" desired of the fabricated tri-gate transistor. In an embodiment of the present invention, semiconductor film 408 is formed thicker than the desired gate length of the device. In an embodiment of the present invention, semiconductor film 408 is formed to a thickness which will enable the fabricated tri-gate transistor to be operated in a fully depleted manner for its designed gate length (Lg). Semiconductor film 408 can be formed on insulating substrate 402 in any well-known method. In one method of forming a silicon on insulator substrate, known as the SIMOX technique, oxygen atoms are implanted at a high dose into a single crystalline silicon substrate and then annealed to form the buried oxide 406 within the substrate. The portion of the single crystalline silicon substrate above the buried oxide becomes the silicon film 408. Another technique currently used to form SOI substrates is an epitaxial silicon film transfer technique which is generally referred to as bonded SOI. In this technique a first silicon wafer has a thin oxide grown on its surface that will later serve as the buried oxide 406 in the SOI structure. Next, a high dose hydrogen implant is made into the first silicon wafer to form a high stress region below the silicon surface of the first wafer. This first wafer is then flipped over and bonded to the surface of a second silicon wafer. The first wafer is then cleaved along the high stress plane created by the hydrogen implant. This results in an SOI structure with a thin silicon layer on top, the buried oxide underneath, all on top of the single crystalline silicon substrate.
Well-known smoothing techniques, such as HCl smoothing or chemical mechanical polishing (CMP), can be used to smooth the top surface of semiconductor film 408 to its desired thickness. At this time, if desired, isolation regions (not shown) can be formed into the SOI substrate in order to isolate the various transistors to be formed therein from one another. Isolation regions can be formed by etching away portions of the semiconductor film 408 surrounding a tri-gate transistor, by for example well-known photolithographic and etching techniques, and then back filling the etched regions with an insulating film, such as SiO2. In an embodiment of the present invention, a hard mask material 410 is formed on semiconductor film 408 as shown in Figure 4A. Hard mask material 410 is a material which can provide a hard mask for the etching of the semiconductor film 408. A hard mask material is a material which can retain its profile during the etching of the semiconductor film 408. A hard mask material 410 is a material which will not etch or only slightly etch during the etching of semiconductor film 408. In an embodiment of the present invention, the hard mask material is formed of a material such that the etchant used to etch the semiconductor film 408 will etch film 408 at least five times faster than the hard mask material and ideally at least ten times faster. In an embodiment of the present invention, when semiconductor film 408 is a silicon film, the hard mask material 410 can be a silicon nitride or silicon oxynitride film. Hard mask material 410 is formed to a thickness sufficient to retain its profile during the entire etch of semiconductor film 408 but not so thick as to cause difficulty in its patterning.
In an embodiment of the present invention, the hard mask material 410 is formed to a thickness between 3 nanometers to 20 nanometers and ideally to a thickness less than 10 nanometers. Next, as also shown in Figure 4A, a photoresist mask 412 is formed on hard mask layer 410. Photoresist mask 412 contains a feature pattern to be transferred into the semiconductor film 408. The photoresist mask 412 can be formed by any well known technique, such as by blanket depositing a photoresist material and then masking, exposing and developing the photoresist film into a photoresist mask 412 having the desired pattern for the semiconductor film 408 to be patterned. Photoresist mask 412 is typically formed of an organic compound. Photoresist mask 412 is formed to a thickness sufficient to retain its profile while patterning the hard mask film 410 but is not formed so thick as to prevent lithographic patterning to the smallest dimensions (i.e., critical dimensions) possible with the photolithography system and process used. Next, as shown in Figure 4B, the hard mask material 410 is etched in alignment with photoresist mask 412 to form a hard mask 414 as shown in Figure 4B. Photoresist mask 412 prevents the underlying portion of hard mask material 410 from becoming etched. In an embodiment of the present invention, the hard mask is etched with an etchant which can etch the hard mask material but does not etch the underlying semiconductor film 408. The hard mask material is etched with an etchant that has almost perfect selectivity to the underlying semiconductor film 408. That is, in an embodiment of the present invention, the hard mask etchant etches the hard mask material at least one hundred times faster than the underlying semiconductor film 408 (i.e., the etchant has a hard mask to semiconductor film selectivity of at least 100:1).
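The selectivity ratios quoted above translate directly into how much of a protected film is consumed for a given etch depth. The minimal sketch below is illustrative only (the function name and sample values are assumptions, not taken from this disclosure):

```python
def protected_film_loss_nm(etch_depth_nm, selectivity):
    # A target:protected etch-rate selectivity of S means that removing
    # etch_depth_nm of the target film consumes etch_depth_nm / S of the
    # protected film (hard mask or etch-stop layer).
    return etch_depth_nm / selectivity
```

For example, at a 100:1 hard-mask-to-semiconductor selectivity, etching through 10 nm of hard mask would consume only about 0.1 nm of the underlying semiconductor film, while at the 5:1 minimum quoted for the semiconductor etch, removing 30 nm of silicon could consume up to 6 nm of hard mask.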
When the hard mask material 410 is a silicon nitride or silicon oxynitride film, hard mask material 410 can be etched into a hard mask 414 utilizing a dry etch process, such as reactive ion etching or ECR plasma etching. In an embodiment of the present invention, a silicon nitride or silicon oxynitride hard mask is reactive ion etched utilizing a chemistry comprising CHF3, O2 and Ar, or a chemistry comprising CH2F2, C4F8, Ar and O2. Next, as shown in Figure 4C, after hard mask film 410 has been patterned into a hard mask 414, photoresist mask 412 can be removed by well known techniques. For example, photoresist mask 412 can be removed utilizing a "piranha" clean solution which includes sulfuric acid and hydrogen peroxide. Additionally, residue from photoresist mask 412 can be removed with an O2 ashing. Although not required, it is desirable to remove photoresist mask 412 prior to etching semiconductor film 408 so that a polymer film from the photoresist does not form on the sidewalls of the patterned semiconductor film 408. It is desirable to first remove the photoresist mask 412 prior to etching of the semiconductor film 408 because dry etching processes can erode the photoresist mask and cause a polymer film to develop on the sidewalls of the semiconductor body which can be hard to remove and which can detrimentally affect device performance. By first removing the photoresist film 412 prior to patterning the semiconductor thin film 408, the semiconductor thin film 408 can be patterned and pristine sidewalls maintained. Next, as shown in Figure 4D, semiconductor film 408 is etched in alignment with hard mask 414 to form a semiconductor body 416 having a pair of laterally opposite sidewalls 418 and 420. Hard mask 414 prevents the underlying portion of semiconductor film 408 from becoming etched during the etching process. The etch is continued until the underlying insulating substrate is reached. In an embodiment of the present invention, the etch "end points" on the buried oxide layer 406.
Semiconductor film 408 is etched with an etchant which etches semiconductor film 408 without significantly etching hard mask 414. In an embodiment of the present invention, semiconductor film 408 is anisotropically etched so that semiconductor body 416 has nearly vertical sidewalls 418 and 420 formed in alignment with the sidewalls of hard mask 414, thereby providing almost perfect fidelity with hard mask 414. When hard mask 414 is a silicon nitride or silicon oxynitride hard mask and semiconductor film 408 is a silicon film, silicon film 408 can be etched utilizing a dry etch process comprising HBr/Ar/O2. In an embodiment of the present invention, semiconductor film 408 is etched utilizing an electron cyclotron resonance (ECR) plasma etcher. In an embodiment of the present invention, an ECR plasma etcher using a chemistry comprising HBr/O2 with a pressure between 0.2 to 0.8 pascal and an RF power of approximately 120 watts is used to etch a silicon thin film 408 into a silicon body 416. Such an etch process produces a substantially anisotropic etch to provide substantially vertical sidewalls 418 and 420 as shown in Figure 4D. Additionally, such an etch has a high selectivity (approximately 20:1) to the buried oxide layer 406 so that the buried oxide layer etches very little and can be used as an etch stop and for end point detection. The ability to end point detect is important to ensure that all of the semiconductor film clears from the buried oxide layer because the thickness 409 of the thin film across the wafer may vary and the etch rate of different width semiconductor bodies may also vary. In an embodiment of the present invention, an RF bias of between 100-120 watts is used.
The RF bias controls the electron energy in the etch, which in turn controls the anisotropic profile of the etch. Next, as shown in Figure 4E, the semiconductor body 416 is etched so as to reduce the distance between the sidewalls 418 and 420 in the lower portion of the semiconductor body 416. The etching of a semiconductor body to thin the lower portion of the semiconductor body can be referred to as the "profile" etch. In an embodiment of the present invention, the profile etch is utilized to inwardly taper or form facets 422 and 424 on the sidewalls 418 and 420 as illustrated in Figure 4E. It is to be appreciated that in other embodiments of the present invention, the profile etch can thin the lower body portion as illustrated in Figures 5A-5D. In an embodiment of the present invention, a plasma etch process which produces an isotropic etch is utilized to reduce the distance between the sidewalls in the lower portion of the semiconductor body as compared to the upper portion of the semiconductor body. In an embodiment of the present invention, the same plasma etch equipment and etch chemistry is used during the profile etch as is used during the patterning of the semiconductor film 408, except that the RF bias is decreased so that the vertical directionality of the ions is reduced. In an embodiment of the present invention, when semiconductor body 416 is a silicon body, the profile etch can be accomplished utilizing an ECR plasma etcher with a chemistry comprising HBr/O2 and a pressure between 0.2 to 0.8 pascal with an RF bias between 50-70 watts. Next, as shown in Figure 4F, the hard mask 414 is removed from semiconductor body 416 having a thinned lower body portion. In an embodiment of the present invention, when hard mask 414 is a silicon nitride or silicon oxynitride film, a wet chemistry comprising phosphoric acid and DI water can be used to remove the hard mask.
In an embodiment of the present invention, the hard mask etch comprises between 80-90% phosphoric acid (by volume) in DI water heated to a temperature between 150-170°C and ideally to 160°C. Such an etchant will have an almost perfect selectivity between the silicon nitride hard mask 414 and the buried oxide layer 406. Next, if desired, after removing hard mask 414 as illustrated in Figure 4F, semiconductor body 416 can be exposed to a wet etchant to clean the body 416. In an embodiment of the present invention, a silicon body 416 is exposed to a wet etchant comprising ammonium hydroxide (NH4OH) to remove any line edge roughness or pitting which may have developed during the patterning of the silicon body 416. In an embodiment of the present invention, a silicon body 416 is exposed for a period of time of between 30 seconds to 2 minutes to an etchant comprising between 0.1-1% of ammonium hydroxide by volume at a temperature between 20-30 degrees Celsius in order to provide a semiconductor body 416 with pristine sidewalls 418 and 420. Next, as illustrated in Figure 4G, a gate dielectric layer 430 is formed on sidewalls 418 and 420 and the top surface of semiconductor body 416. The gate dielectric layer can be a deposited dielectric or a grown dielectric. In an embodiment of the present invention, the gate dielectric layer 430 is a silicon oxynitride dielectric film grown by a dry/wet oxidation process. In an embodiment of the present invention, the silicon oxynitride film is grown to a thickness between 5-15A. In an embodiment of the present invention, the gate dielectric layer 430 is a deposited dielectric, such as but not limited to a high dielectric constant film, such as a metal oxide dielectric, such as tantalum pentaoxide (Ta2O5), titanium oxide (TiO2), hafnium oxide, zirconium oxide, and aluminum oxide.
Additionally, in an embodiment of the present invention, gate dielectric layer 430 can be other high k dielectric films, such as but not limited to PZT and BST. Any well known technique can be utilized to deposit a high k dielectric, such as but not limited to chemical vapor deposition, atomic layer deposition and sputtering. Next, gate electrode 432 is formed on the gate dielectric layer 430 formed on the top surface of semiconductor body 416 and is formed on or adjacent to the gate dielectric layer 430 formed on or adjacent to sidewalls 418 and 420 as shown in Figure 4G. The gate electrode 432 has a top surface opposite a bottom surface formed on insulating layer 406 and has a pair of laterally opposite sidewalls 434 and 436 which define the gate length of the device. Gate electrode 432 can be formed by blanket depositing a suitable gate electrode material over the substrate and then patterning the gate electrode material with well known photolithography and etching techniques to form a gate electrode 432 from the gate electrode material. In an embodiment of the present invention, the gate electrode material comprises polycrystalline silicon. In another embodiment of the present invention, the gate electrode material comprises a polycrystalline silicon germanium alloy. In yet other embodiments of the present invention, the gate electrode material can comprise a metal film, such as but not limited to tungsten, tantalum and their nitrides. In an embodiment of the present invention, the photolithography process used to define the gate electrode 432 utilizes the minimum or smallest dimension lithography process used to fabricate the nonplanar transistor (that is, in an embodiment of the present invention, the gate length (Lg) of the gate electrode 432 is the minimum feature dimension of the transistor defined by photolithography). In an embodiment of the present invention, the gate length is less than or equal to 30 nanometers and ideally less than 20 nanometers.
It is to be appreciated that although the gate dielectric layer and gate electrode, as illustrated in Figures 4G and 4H, are formed with a "subtractive" process whereby undesired portions are etched away, the gate electrode can be formed with a replacement gate process whereby a sacrificial gate electrode is first formed, an interlayer dielectric is formed adjacent thereto, and the sacrificial gate electrode is then removed to form an opening in which the gate electrode is then formed, as is well known in the art. Next, as shown in Figure 4H, a source region 440 and a drain region 442 are formed in the semiconductor body 416 on opposite sides of gate electrode 432. For a PMOS transistor, the semiconductor body 416 is doped to a p type conductivity with a concentration between 1x10<20> to 1x10<21> atoms/cm<3>. For an NMOS nonplanar transistor, the semiconductor body 416 is doped with n type conductivity to a concentration between 1x10<20> to 1x10<21> atoms/cm<3> to form the source/drain regions. In an embodiment of the present invention, the source/drain regions can be formed by ion implantation. In an embodiment of the present invention, the ion implantation occurs in a vertical direction (i.e., a direction perpendicular to the substrate) as shown in Figure 4H. When the gate electrode 432 is a polysilicon gate electrode, it can be doped during the ion implantation process. The gate electrode 432 acts as a mask to prevent the ion implantation step from doping the channel region of the nonplanar transistor. Again, the channel region is the portion of the semiconductor body 416 located beneath or surrounded by the gate electrode 432. If the gate electrode 432 is a metal electrode, a dielectric hard mask can be used to block the doping during the ion implantation process. In other embodiments, other methods, such as solid source diffusion, may be used to dope the semiconductor body to form the source and drain regions.
In embodiments of the present invention, the source/drain regions may also include subregions, such as source/drain extensions and source/drain contact regions. In such a case, the semiconductor body 416 would be doped on either side of the gate electrode 432 to form the source/drain extensions, then a pair of sidewall spacers such as illustrated in Figure 3B would be formed along the sidewalls of the gate electrode, and a second doping step utilized to form heavily doped source/drain contact regions, as is well known in the art. Additionally, if desired, at this time additional silicon and/or silicide can be formed on the semiconductor bodies 416 to form raised source/drain regions and reduce the contact resistance of the device. This completes the fabrication of a nonplanar device having a semiconductor body with a thinned lower portion to improve device performance. |
Operational attestation of a tool chain is described. One example of a storage medium includes instructions to: receive source code that processes a security workload of a tenant; select at least a first computing node to provide computation for the workload; process the source code through an attestable tool chain to generate machine code for the first computing node, including, in a first stage, performing one or more transformations of the source code through one or more translators to generate transformed code and to generate an attestation associated with each code transformation, and, in a second stage, receiving the machine code for the first computing node and generating an attestation associated with the first computing node; and provide each attestation of the first stage and the second stage for verification. |
1. An apparatus comprising: means for receiving source code for processing a security workload of a tenant; means for selecting at least a first computing node of a plurality of computing nodes to provide computation for the workload; means for processing the source code through an attestable toolchain to generate machine code for the first computing node, comprising: in a first stage, means for performing one or more transformations of the source code by one or more translators to generate transformed code and to generate an attestation associated with each code transformation, and, in a second stage, means for receiving machine code for the first computing node and generating an attestation associated with the first computing node; and means for providing each of the attestations of the first stage and the second stage for verification.

2. The apparatus of claim 1, wherein the first stage is a build stage and the second stage is a runtime stage for processing of the source code.

3. The apparatus of claim 1, wherein the attestations from the first stage and the second stage represent an attestation chain between the received source code and the machine code generated for the first computing node.

4. The apparatus of claim 1, wherein each attestation of the first stage includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the translator that transformed the received code into the transformed code.

5. The apparatus of claim 1, wherein at least a first attestation of the first stage further comprises one or more security assertions.

6. The apparatus of claim 4, wherein the first stage further comprises performing one or more checks of the source code and generating an attestation associated with each code check.

7. The apparatus of claim 1, wherein the attestation of the second stage includes at least an attestation of the received machine code and an attestation of the first computing node.

8. The apparatus of claim 1, further comprising: means for receiving security data in response to verification of the attestations of the first stage and the second stage; and means for performing computation of the security workload using the received security data.

9. A system comprising: one or more processors to process data; a memory to store data; and a plurality of computing nodes to provide computation for supported workloads, wherein the system is to: receive source code for processing a security workload of a tenant; select at least a first computing node of the plurality of computing nodes to provide computation for the workload; process the source code through an attestable toolchain associated with the tenant to generate machine code for the first computing node, including: in a first stage, performing one or more transformations of the source code by one or more translators to generate transformed code and generating an attestation associated with each code transformation, and, in a second stage, receiving machine code for the first computing node and generating an attestation associated with the first computing node; and provide each of the attestations of the first stage and the second stage for verification.

10. The system of claim 9, wherein the plurality of computing nodes comprises one or more processing devices, one or more hardware accelerators, or both.

11. The system of claim 9, wherein the first stage is a build stage and the second stage is a runtime stage for processing of the source code.

12. The system of claim 9, wherein the attestations from the first stage and the second stage represent an attestation chain between the received source code and the machine code generated for the first computing node.

13. The system of claim 9, wherein each attestation of the first stage includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the translator that transformed the received code into the transformed code.

14. The system of claim 13, wherein at least a first attestation of the first stage further comprises one or more security assertions.

15. The system of claim 13, wherein the first stage further comprises performing one or more checks of the source code and generating an attestation associated with each code check.

16. The system of claim 9, wherein the attestation of the second stage includes at least an attestation of the received machine code and an attestation of the first computing node.

17. An apparatus comprising: one or more processors to process code; and a memory to store computer instructions, including instructions that, when executed, cause the apparatus to: receive business logic for processing a security workload of a tenant, the apparatus including an attestable toolchain associated with the tenant; select one or more computing nodes of a plurality of computing nodes to provide computation for the workload; process the business logic through the toolchain to generate machine code for the one or more computing nodes, including: in a build phase, performing one or more transformations of the business logic by one or more translators to generate transformed code and generating an attestation associated with each code transformation, and, in a runtime phase, receiving machine code for the one or more computing nodes and generating an attestation associated with each of the one or more computing nodes; provide each of the attestations of the build phase and the runtime phase for verification; and perform computation of the security workload using the one or more computing nodes.

18. The apparatus of claim 17, wherein the attestations from the build phase and the runtime phase represent an attestation chain between the received business logic and the machine code generated for the one or more computing nodes.

19. The apparatus of claim 17, wherein each attestation of the build phase includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the translator that transformed the received code into the transformed code.

20. The apparatus of claim 17, wherein each attestation of the runtime phase includes at least an attestation of the received machine code and an attestation of one of the one or more computing nodes. |
Attestation of Operations Through a Toolchain

Technical Field

Embodiments described herein relate generally to the field of electronics and, more particularly, to attestation of operations through toolchains.

Background

Cloud service providers (CSPs) are increasingly extending their services to new customers, including serving security-conscious customers who were hesitant to take advantage of cloud services in the past. This drives the trend of CSPs providing trusted computing services based on trusted computing technology.

At the same time, cloud service providers are expanding their computing devices from CPUs (central processing units) alone to various other types of computing nodes, including various processing units and hardware accelerators. To take advantage of these heterogeneous computing environments, CSPs further support the establishment of toolchains that allow customers to write their business logic in a way that the toolchain can compile for any of these accelerators.

However, these trends do not inherently work well together: trusted computing enables a computing device to attest to the logic it is running and expects customers to recognize that logic, while code deployment via a toolchain results in the code running on a computing node being a derivative of the customer-supplied code, not the customer's code itself.

Brief Description of the Drawings

The embodiments described herein are shown by way of example, and not limitation, in the figures of the accompanying drawings, wherein like reference numerals refer to like elements.

FIG. 1 is an illustration of attestation of operations associated with a toolchain, in accordance with some embodiments;

FIG. 2 is an illustration of applying a toolchain to generate a chain of attestation assertions for code to be run on an end computing node, in accordance with some embodiments;

FIG. 3 is an illustration of attestation generation by an attestable toolchain, in accordance with some embodiments;

FIG. 4 is an illustration of measurements and attestations associated with the operation of an attestable toolchain, in accordance with some embodiments;

FIG. 5 is an illustration of a transitive attestation summarizing attestations by an attestable toolchain, in accordance with some embodiments;

FIG. 6 is a flow diagram illustrating a process for attesting to transformation or evaluation through an attestable toolchain, in accordance with some embodiments;

FIG. 7 illustrates one embodiment of an exemplary computing architecture for attestation of operations through a toolchain, in accordance with some embodiments;

FIGS. 8A and 8B illustrate a one-time hash-based signature scheme and a multi-time hash-based signature scheme, respectively; and

FIGS. 9A and 9B illustrate a one-time signature scheme and a multi-time signature scheme, respectively.

Detailed Description

The embodiments described herein relate to attestation of the operations of a toolchain.

In order to extend services to security-conscious customers, cloud service providers (CSPs) may offer trusted computing services based on trusted computing technologies, such as where a computing device proves to the customer that, for example, the computing node is up to date in terms of patching, that the node is applying stronger isolation techniques to protect the workload it is processing, and that the logic being run is the logic the customer intended to deploy and has not been tampered with. Customers can then verify this proof before releasing sensitive data to the computing node.

However, CSP support is expanding from the operation of CPUs (central processing units) to the operation of various computing nodes, including various types of processing devices (such as GPUs (graphics processing units)) and hardware accelerators (such as FPGAs (field programmable gate arrays) and ASICs (application-specific integrated circuits)), which can improve the performance of certain workloads.
To take full advantage of these heterogeneous computing environments, toolchains (such as the oneAPI toolkit) are being established that allow customers to write their business logic (original source code) once and have an attestable toolchain compile or translate it (which may often be referred to as transformation of the code logic) for any such processing unit or hardware accelerator. As used herein, an "attestable toolchain" generally refers to a set of programming tools or utilities used to perform operations and capable of attesting to those operations. Specifically, a toolchain can be used to convert source code into machine code for many different types of computing nodes. A toolchain can also combine source code with other code (such as a runtime or library). An attestable toolchain may be able to create attestations that include information about the operations performed by the toolchain.

This toolchain support can be a point of differentiation for CSPs: the CSP allows customers to write device-independent source code, determines the best hardware the CSP can use to execute that code, and applies the applicable toolchain to translate the customer-written logic into machine code that can run on a specific compute node for deployment, whether that compute node is a CPU, GPU, FPGA, or other specialized accelerator.

However, such toolchain-based code deployment results in the specified compute node running code that is a derivative of the customer-provided code, rather than the customer's code itself. Therefore, trusted computing in heterogeneous cloud computing platforms should provide attestation processes that support device-independent toolchain operations. Previous solutions to this problem required tenants to predetermine the type of accelerator that would run their logic, build the code for it, and then provide that code to the CSP.
This conventional model may work well enough for some simple workloads, where the choice of compute unit can be predetermined, but workloads can be made up of multiple independent elements working together, requiring more complex operations. Additionally, cloud providers may want to differentiate themselves from others by providing an easy-to-use heterogeneous environment, and thus strongly want to take on the burden of determining which compute nodes are best suited to run workloads.

In some embodiments, an apparatus, system, or process provides attestations of compilation, translation, or evaluation through an attestable toolchain. In some embodiments, an apparatus, system, or process utilizes one or more attestable environments to generate an attestation for each stage of processing through the attestable toolchain, and thus to generate a chain of attestation assertions from the customer-identifiable workload to the machine logic for the end computing node. In such an operation, the customer can then apply the attestation chain to trace the logic running on the end device back through the toolchain to the original logic provided or identified by the customer.

FIG. 1 is an illustration of attestation of operations associated with a toolchain, according to some embodiments. In some embodiments, the process may be performed by one or more computing systems, such as the computing system shown in FIG. 7. In some embodiments, the attestation operation 100 may be decomposed into two phases: a build phase 110 (first phase), in which device code logic is generated, and a runtime phase 120 (second phase), in which trusted computing attestation occurs.

During the build phase 110, the attestable toolchain is run from one or more attestable environments 125. As used herein, an "attestable environment" refers to an environment capable of attesting to certain operations, including the generation of code.
An attestable environment may include, but is not limited to, trusted execution environment (TEE) technologies such as Software Guard Extensions (SGX), Trust Domain Extensions (TDX), or AMD SEV. A toolchain can include multiple stages that are used to generate code for end devices. In some embodiments, when processing a workload, each toolchain stage creates an attestation that subsequent stages can utilize, where the attestation may include:

(a) a measurement 112 of the particular toolchain stage, which may be produced by any known code measurement technique; and

(b) a measurement or identification 114 of the input logic to be transformed (compiled or translated), checked, or otherwise processed by the attestable toolchain.

In some embodiments, the attestation may optionally include other factors, such as one or more of the following:

(c) any property 116 guaranteed by the toolchain stage (e.g., by inserting code or linking to a specific library) or discovered (e.g., by checking the logic); or

(d) a measurement or identification 118 of the logic created as a result of the stage (unless, for example, the toolchain stage is strictly a check stage, so no new code is created).

In some embodiments, at the runtime stage 120, attestation is generated for the client. In some embodiments, the runtime phase 120 includes a conventional trusted computing deployment. In general, trusted computing protects data in use by performing computations in a hardware-based trusted execution environment. In operation of the toolchain, the generated code is deployed to a specific computing node 122. The generated code may require access to specific resources held by a resource controller, such as tenant 150, where tenant 150 has access to sensitive data required by the compute node to perform the device logic on the intended data.
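The per-stage attestation contents enumerated above (items 112-118) can be sketched as a simple record. This is an illustrative data shape only, under the assumption that measurements are byte strings; the embodiments do not define a concrete format:

```python
# Hypothetical sketch of a per-stage attestation; field names are
# illustrative and map to items 112-118 described above.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class StageAttestation:
    # (a) measurement of the toolchain stage itself (112)
    stage_measurement: bytes
    # (b) measurement or identification of the input logic (114)
    input_measurement: bytes
    # (c) optional properties guaranteed or discovered by the stage (116)
    properties: List[str] = field(default_factory=list)
    # (d) measurement of the logic created by the stage (118);
    #     None for a stage that is strictly a check stage
    output_measurement: Optional[bytes] = None
```

A pure check stage would leave `output_measurement` unset, matching item (d) above.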
In some embodiments, this operation includes generating a device attestation by the compute node 124 and communicating the attestation to the resource controller 126.

In some embodiments, the complete attestation of the operation of the toolchain may be assembled into a transitive chain of events, which may be referred to as an attestation chain, verified by a particular verification agent (referred to as a verifier), where the verifier may include, for example, the tenant 150 or a designated verification agent acting on behalf of the tenant. Actions provided by the verifier include:

(a) verifying each attestation in the attestation chain, which can be performed using any known attestation process; and

(b) ensuring that the chain of events leads from some identified logic (the identified logic being the original workload code) to the logic reflected in the device attestation.

In some embodiments, the resulting attestation chain may be treated as proof that the device is running logic derived from the original business logic, optionally with specified security attributes, and constructed from the indicated toolchain elements. If these elements all comply with the verifier's policy, the operation can proceed to deploy confidential data for workload computations.

FIG. 2 is an illustration of applying a toolchain to generate a chain of attestation assertions for code to be run on an end computing node, according to some embodiments. As shown in FIG. 2, the cloud service provider 200 includes a plurality of computing nodes 210 to support the operation of a customer's workload, such as Tenant-A 250 with an associated workload 255. The plurality of compute nodes 210 may include, for example, a heterogeneous set of devices, including multiple types of processing devices, hardware accelerators, or both, shown as one or more CPUs 212, one or more GPUs 214, and one or more hardware accelerators 216.
Cloud service provider 200 (or another third party) may use any combination of the available computing nodes 210 to support workload 255. A CSP may serve a large number of customers, such as the other customers shown as Tenant-B 260 through Tenant-n 270, where each such customer may provide a different workload for processing, and each such workload may potentially be associated with a toolchain for converting source code into machine code suitable for one or more compute units.

As shown, Tenant-A 250 may provide or select an attestable toolchain, shown as Toolchain-A 222, to convert workload code (source code), shown as Program Logic-A 220, into one or more sets of machine code for one or more of the various types of compute nodes. In this way, workload 255 can potentially be processed using a variety of different computing nodes without the customer needing to identify such devices and use this knowledge in the generation of program logic 220; i.e., the program logic can be device agnostic. Program Logic-A 220 needs to be translated into machine code for each of the one or more compute nodes selected for operation on the workload 255 associated with Tenant-A 250.

In normal operation, in order for a customer (such as Tenant-A 250) to identify and verify the machine logic running on each end device, the customer would need to run the toolchain and generate the resulting machine code in order to verify such code. Some toolchains may support this operating model when the customer compiles its own code. However, for hardware accelerators, FPGA "compilers" or similar elements require knowledge of the CSP's base bitstream, while GPUs require knowledge of the CSP's underlying custom kernels. Therefore, such operations are generally not practical in a CSP environment.

In some embodiments, an apparatus, system, or process runs a toolchain for a client from one or more attestable environments.
When processing a workload, a toolchain stage creates an attestation of that stage, asserting that the toolchain has transformed input code into output code. In some embodiments, the attestation can optionally also include any security assertions that the toolchain can make regarding the resulting code. This creates a chain of attestation assertions starting from the client-identifiable workload and continuing to the logic that will run on the end device. During workload deployment, when a customer receives an attestation from an end computing node, the customer is able to apply the attestation chain to trace the logic running on the end device back through the toolchain to the original logic provided or identified by the customer.

Thus, in some embodiments, the toolchain can be executed anywhere — by the customer, by the CSP, by a governing body, or by a mutually agreed-upon third party — without requiring the build to be a deterministic and repeatable process. Instead, the attestation chain provides the data needed to prove the transformation or evaluation performed by the toolchain from the initial logic to the logic executed by the end device.

FIG. 3 is an illustration of attestation generation by an attestable toolchain, according to some embodiments. In some embodiments, the initial business logic (source code), shown as Code 1, may be created by developer 300 (which may be the tenant or may be a third-party developer). In this example, Code 1 is the initial code for processing the security workload, where Code 1 needs to be converted to machine code for a specific end computing node, and where the conversion (i.e., compilation or translation) is to be performed using an attestable toolchain, such as Toolchain-A 222 shown in FIG. 2.
Developer 300 can provide public documentation 302 that provides measurements related to Code 1, which can be used to verify the operation of the toolchain.

In some embodiments, a toolchain may include one or more stages, where each stage (which may be a transformation stage that transforms the code and attests to it, or an inspection stage that checks the code without providing translation) can generate an attestation of the code. For example, as shown in FIG. 3, at build phase 304, Code 1 is received by Translation Logic 1 (312) (where the translation logic may also be referred to as a first transformation or first transformer) to perform a transformation of Code 1 to generate Code 2, and to generate Attestation 1 (316). Translation Logic 1 may further receive code such as common runtime code 314. In this example, the generated Attestation 1 may include the Translation Logic 1 (TL1) measurement (i.e., the measurement associated with the translation stage itself), the Code 1 measurement (the measurement associated with the received Code 1), the common runtime code measurement (the measurement associated with the received common runtime code), and the Code 2 measurement (the measurement associated with the generated Code 2).

The operations shown may continue with Code 2 being received by Translation Logic 2 (322) (the second transformer) to perform a transformation of Code 2 to generate Code 3 (in this particular example, Code 3 is the machine code for the end computing node), and to generate Attestation 2 (326). Attestation 2 may include the Translation Logic 2 (TL2) measurement, the Code 2 measurement, and the Code 3 measurement. Additionally, Code 3 may be further inspected by Inspection Logic 332 (a first inspection element) to generate Attestation 3 (336), which may include the Inspection Logic (IL) measurement and the Code 3 measurement, and may also include certain security assertions.

In some embodiments, at runtime stage 306, Code 3 may be received by an end computing node, shown as device 340.
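The "measurements" referenced throughout (e.g., the Code 1 and Code 2 measurements carried in Attestation 1) are typically cryptographic digests of the code artifacts. A minimal sketch, assuming SHA-384 as the digest and with stand-in byte strings for the code artifacts (the embodiments do not mandate a particular measurement technique):

```python
# Sketch: a "measurement" of a code artifact as a cryptographic digest.
# SHA-384 is one plausible choice, used here as an assumption.
import hashlib


def measure(code: bytes) -> bytes:
    """Return a measurement (digest) of a code artifact."""
    return hashlib.sha384(code).digest()


# Attestation 1 would then carry measure(code1) for the received
# business logic and measure(code2) for the transformed output.
code1 = b"def workload(x): return x * 2"   # stand-in for Code 1
code2 = b"TRANSFORMED(code1)"              # stand-in for Code 2
attestation_1 = {
    "tl1_measurement": measure(b"translation-logic-1 binary"),
    "code1_measurement": measure(code1),
    "code2_measurement": measure(code2),
}
```

Because the digest is deterministic, a verifier holding the published Code 1 measurement can recompute it independently and compare.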
In some embodiments, device 340 will generate Attestation 4 (the device attestation), which may include information about device 340 and the Code 3 measurement.

In some embodiments, the generated attestations — namely Attestation 1 generated by Translation Logic 1, Attestation 2 generated by Translation Logic 2, Attestation 3 generated by Inspection Logic 332, and Attestation 4 generated by the end computing device 340 — may each be provided to a verifier, where the verifier may be the tenant or a verification agent acting on behalf of the tenant. The attestations can together form an attestation chain to prove that the attesting device (device 340 in this example) is running logic (Code 3) derived from the original business logic (Code 1) and built by the indicated toolchain elements (Translation Logic 1 and Translation Logic 2). The verifier can then determine whether the attestations fully satisfy the verifier's policy, and if so, the tenant (or other party) can proceed to deploy the data to the workload for processing by device 340.

FIG. 4 is an illustration of measurements and attestations associated with the operation of an attestable toolchain, according to some embodiments. In some embodiments, the measurements and attestations 400 originate from the stages of the attestable toolchain and the end computing node, as shown in FIG. 3.

In some embodiments, Attestation 1 316 generated by Translation Logic 1 in FIG. 3 includes the TL1 (Translation Logic 1) measurement, the received Code 1 measurement, and the Code 2 measurement. For verification, the Code 2 measurement can be compared with the input logic measurement of Attestation 2 326. Additionally, the author or developer of the toolchain can provide public documentation, including the toolchain's measurements, which tenants or verifiers can use for comparison with the generated results.
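Using these per-stage measurements, the verifier's chain walk can be sketched as follows. The function and field names are hypothetical; the linking rule shown — each stage's output-code measurement must equal the next stage's input-code measurement, terminating at the code measurement in the device attestation — is the property the attestation chain is meant to establish:

```python
# Hypothetical sketch of a verifier walking an attestation chain.
# Attestations are modeled as plain dicts; a check-only stage (such as
# Inspection Logic) has output_measurement = None and does not advance
# the expected code measurement.
def verify_chain(original_measurement, stage_attestations,
                 device_attestation, trusted_stage_measurements):
    expected = original_measurement
    for att in stage_attestations:
        # each stage must itself be a known, trusted toolchain element
        if att["stage_measurement"] not in trusted_stage_measurements:
            return False
        # each stage must have consumed the code produced so far
        if att["input_measurement"] != expected:
            return False
        # a transformation stage advances the chain; a check stage does not
        expected = att.get("output_measurement") or expected
    # the end device must be running the final generated code
    return device_attestation["code_measurement"] == expected
```

In the FIG. 3 example, the chain would run Attestation 1 → Attestation 2 → Attestation 3 → device attestation, with the Code 3 measurement as the final link.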
The public documents may include, but are not limited to, the TL1 measurement document 404, the TL2 (Translation Logic 2) measurement document 406, and the IL (Inspection Logic) measurement document 402 as shown.

Attestation 2 326 generated by Translation Logic 2 in FIG. 3 includes the TL2 (Translation Logic 2) measurement, the received Code 2 measurement, and the Code 3 measurement. Attestation 2 326 may also include one or more security assertions. For verification, the Code 3 measurement can be compared with the input logic measurement of the device attestation 346.

Attestation 3 336 generated by Inspection Logic 332 in FIG. 3 includes the IL measurement and the generated Code 3 measurement. For verification, the Code 3 measurement can be compared with the input logic measurement of the device attestation 346.

The device attestation 346 (Attestation 4 shown in FIG. 3) then includes the device measurement (for the end processing device) and the Code 3 measurement.

In some embodiments, as described with respect to FIG. 3, the generated attestations — Attestation 1 generated by Translation Logic 1, Attestation 2 generated by Translation Logic 2, Attestation 3 generated by the inspection logic, and the device attestation generated by the end computing node — can be individually provided to the verifier. The attestations can together form an attestation chain to prove that the attesting device is running logic derived from the original business logic (Code 1) and built by the indicated toolchain elements (Translation Logic 1 and Translation Logic 2).

FIG. 5 is an illustration of a transitive attestation summarizing attestations through an attestable toolchain, according to some embodiments. When considered together, the transitive attestation provides a logical representation of the information shown in FIG. 4. As shown in FIG. 5, the transitive attestation 500 may include the device measurement (the device being an end processing device, such as device 340 shown in FIG. 3); the Code 3 measurement (the machine code compiled or translated from the original business logic); and, if required, one or more security assertions.

In some embodiments, the transitive attestation includes or is associated with measurements from each stage of the cloud toolchain 520, where the measurements in this example are the TL1 (Translation Logic 1) measurement, the TL2 (Translation Logic 2) measurement, and the IL (Inspection Logic) measurement.

In this way, the measurements and security assertions provide data that can be used to verify that the code received by the operating device originated from the original business logic and that the code was generated by the various stages of the toolchain.

FIG. 6 is a flow diagram illustrating a process for attesting to transformation or evaluation through an attestable toolchain, according to some embodiments. In some embodiments, process 600 includes receiving business logic, such as at a cloud service provider or other third party, for processing by selected compute nodes of a security workload associated with a particular tenant 605.

In some embodiments, an attestable toolchain is identified for transformation of the business logic 610, and one of a plurality of available compute nodes is selected to provide computation for the workload 615. The process may include selecting a plurality of computing nodes, wherein the toolchain may provide an individual transformation of the business logic (or a portion of the business logic) for each of the plurality of computing nodes.

In some embodiments, during the build phase 620 of the attestation process, each of one or more transformation stages receives code, generates code based on the received code, and generates a measurement or identity of the received code, a measurement or identity of the generated code, and an attestation of the transformation stage 625.
In some embodiments, each of one or more inspection stages may further operate to receive and inspect code and to generate a measurement or identity of the received code and an attestation of the inspection stage 630.

In some embodiments, during the runtime phase 635 of the attestation process, the end compute unit (i.e., the device running the received machine code) receives the machine code and generates an attestation of the compute unit and the received code 640.

In some embodiments, each attestation is provided to a verifier for the tenant 645, where the verifier may be the tenant or a verification agent acting on behalf of the tenant. In this way, the verifier receives an attestation chain that collectively proves that each attesting device is running logic derived from the original business logic and that the resulting machine code was built by the indicated toolchain elements (the transformation stage elements). In this manner, verification that the resulting machine code for the end computing node is derived from the original machine logic can be produced without the tenant needing to participate in the operation of the toolchain.

In some embodiments, if the attestations are sufficient for verification, the process may then continue with receiving security data from the tenant 650 and performing computations for the workload using the received security data 655.

FIG. 7 illustrates one embodiment of an exemplary computing architecture for attestation of operations through an attestable toolchain, in accordance with some embodiments. In various embodiments as described above, the computing architecture 700 may comprise or be implemented as part of an electronic device. In some embodiments, the computing architecture 700 may be representative of, for example, a computer system that implements one or more components of the operating environment described above.
Computing architecture 700 may be used to provide attestation of operations through a toolchain, such as described in FIGS. 1-6. As used in this application, the terms "system," "component," and "module" are intended to refer to a computer-related entity, which may be hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 700. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive or solid state drive (SSD), multiple storage drives (of optical and/or magnetic storage media), an object, an executable file, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server itself can be components. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Additionally, components may be communicatively coupled to each other through various types of communication media to coordinate operations. Coordination may involve a one-way or two-way exchange of information. For instance, components may communicate information in the form of signals communicated over a communication medium. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces.

Computing architecture 700 includes various common computing elements, such as one or more processors, multi-core processors, coprocessors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth.
However, embodiments are not limited to implementation by computing architecture 700. As shown in FIG. 7, computing architecture 700 includes one or more processors 702 and one or more graphics processors 708, and may be a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 702 or processor cores 707. In one embodiment, system 700 is a processing platform incorporated within a system-on-chip (SoC or SOC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of system 700 may include, or be incorporated within, a server-based gaming platform, a game console (including game and media consoles), a mobile gaming console, a handheld game console, or an online game console. In some embodiments, system 700 is a mobile phone, smartphone, tablet computing device, or mobile Internet device. Data processing system 700 may also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 700 is a television or set-top box device having one or more processors 702 and a graphical interface generated by one or more graphics processors 708.

In some embodiments, the one or more processors 702 each include one or more processor cores 707 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 707 is configured to process a specific instruction set 709. In some embodiments, the instruction set 709 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 707 may each process a different instruction set 709, which may include instructions to facilitate the emulation of other instruction sets.
The processor cores 707 may also include other processing devices, such as a digital signal processor (DSP). In some embodiments, processor 702 includes cache memory 704. Depending on the architecture, the processor 702 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory 704 is shared among various components of the processor 702. In some embodiments, the processor 702 also uses an external cache (e.g., a Level 3 (L3) cache or last level cache (LLC)) (not shown), which may be shared among the processor cores 707 using known cache coherency techniques. A register file 706 is additionally included in processor 702 and may include different types of registers (e.g., integer registers, floating point registers, status registers, and instruction pointer registers) for storing different types of data. Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 702.

In some embodiments, one or more processors 702 are coupled with one or more interface buses 710 to transmit communication signals, such as address, data, or control signals, between the processors 702 and other components in the system. The interface bus 710, in one embodiment, may be a version of a processor bus, such as a Direct Media Interface (DMI) bus. However, processor buses are not limited to the DMI bus and may also include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory buses, or other types of interface buses. In one embodiment, the one or more processors 702 include an integrated memory controller 716 and a platform controller hub 730.
Memory controller 716 facilitates communication between a memory device and other components of system 700, while platform controller hub (PCH) 730 provides connections to I/O devices via a local I/O bus. The memory device 720 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, the memory device 720 can operate as system memory for the system 700 to store data 722 and instructions 721 for use when the one or more processors 702 execute an application or process. The memory controller 716 also couples with an optional external graphics processor 712, which may communicate with the one or more graphics processors 708 of the processors 702 to perform graphics and media operations. In some embodiments, a display device 711 can connect to the one or more processors 702. The display device 711 may be one or more of an internal display device, as in a mobile electronic device or laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment, the display device 711 may be a head-mounted display (HMD), such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments, the platform controller hub 730 enables peripherals to connect to the memory device 720 and the processor 702 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 746, a network controller 734, a firmware interface 728, a wireless transceiver 726, touch sensors 725, and a data storage device 724 (e.g., hard drive, flash memory, etc.). The data storage device 724 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express).
The touch sensors 725 may include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 726 may be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, Long Term Evolution (LTE), or 5G transceiver. The firmware interface 728 enables communication with system firmware and may be, for example, a Unified Extensible Firmware Interface (UEFI). The network controller 734 may enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 710. The audio controller 746, in one embodiment, is a multi-channel high-definition audio controller. In one embodiment, the system 700 includes an optional legacy I/O controller 740 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 730 may also connect to one or more Universal Serial Bus (USB) controllers 742 to connect input devices, such as a keyboard and mouse 743 combination, a camera 744, or other USB input devices.

FIGS. 8A and 8B illustrate a one-time hash-based signature scheme and a multi-time hash-based signature scheme, respectively. The operations shown in FIGS. 8A and 8B may be utilized as needed to provide security in support of attestation of operations through a toolchain. Hash-based cryptography is based on cryptographic systems such as Lamport signatures, Merkle signatures, the extended Merkle signature scheme (XMSS), the SPHINCS scheme, the SPHINCS+ scheme, and so on.
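Of the hash-based systems named above, the Lamport signature is the simplest to illustrate: the private key is a table of random preimages, the public key is their hashes, and signing a message reveals one preimage per bit of the message digest. The following is a hedged, purely illustrative sketch of that idea (it is not part of the disclosed toolchain, and the 32-byte parameters are assumptions):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random 32-byte preimages; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    # The 256 bits of the message digest, most significant bit first.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per digest bit; the key must never be reused.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))
```

Because each signature discloses half of the preimages, the private key signs only a single message, which is exactly the one-time property attributed to the OTS scheme 800 below.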
With the advent of quantum computing, and in anticipation of its growth, there have been concerns about the various challenges that quantum computing could pose and what can be done to counter such challenges using the field of cryptography.

One area that is being explored to counter quantum computing challenges is hash-based signatures (HBS), since these schemes have been around for a long time and possess the necessary basic ingredients, such as reliance on symmetric cryptography building blocks (e.g., hash functions), to counter quantum computing and post-quantum computing challenges. HBS schemes are regarded as fast signature algorithms that work with fast platform secure boot, which is considered to be the most resistant to quantum attacks.

For example, as illustrated with respect to FIG. 8A, a scheme of HBS is shown that uses a Merkle tree along with a one-time signature (OTS) scheme 800, where a message is signed with a private key and the OTS message is verified with a corresponding public key, and where the private key signs only a single message.

Similarly, as illustrated with respect to FIG. 8B, another HBS scheme is shown, where this scheme involves a multi-time signature (MTS) scheme 850, in which a private key can sign multiple messages.

FIGS. 9A and 9B illustrate a one-time signature scheme and a multi-time signature scheme, respectively. Continuing with the HBS-based OTS scheme 800 of FIG. 8A and the MTS scheme 850 of FIG. 8B, FIG. 9A illustrates the Winternitz OTS (WOTS) scheme 900, offered by Robert Winternitz of the Stanford Mathematics Department, while FIG. 9B illustrates the XMSS MTS scheme 950.

For example, the WOTS scheme 900 of FIG. 9A provides for hashing and parsing a message into M, with 67 integers between [0, 1, 2, ..., 15], such that the private key sk 905, the signature s 910, and the public key pk 915 each have 67 components of 32 bytes.

Now, for example, FIG. 9B illustrates an XMSS MTS scheme 950 that allows for a combination of the WOTS scheme 900 of FIG. 9A and an XMSS scheme 955 having an XMSS Merkle tree 970. As discussed previously with respect to FIG. 9A, the WOTS scheme 900 is based on a one-time public key pk 915, having 67 components of 32 bytes each, which is then put through an L-tree compression algorithm 960 to offer a WOTS compressed pk 967 to take a place in the XMSS Merkle tree 970 of the XMSS scheme 955. It is contemplated that XMSS signature verification may include computing a WOTS verification and checking to determine whether a reconstructed root node matches the XMSS public key, such as root node = XMSS public key.

The machine-readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, and the like. The machine-readable instructions described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine-executable instructions. For example, the machine-readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine-readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine.
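Returning briefly to the WOTS parsing described above with respect to FIG. 9A: the figure of 67 integers in [0, 15] can be reproduced by splitting a 256-bit digest into 64 base-16 (4-bit) digits and appending a Winternitz checksum, which fits in 3 further base-16 digits. The sketch below illustrates only this parsing step (not the full WOTS hash-chain computation), with the hash function and parameter names as assumptions:

```python
import hashlib

def wots_parse(message: bytes, w: int = 16):
    # Hash the message and split the 256-bit digest into 64 base-16 digits.
    digest = hashlib.sha256(message).digest()
    digits = []
    for byte in digest:
        digits.extend([byte >> 4, byte & 0x0F])  # two 4-bit digits per byte
    # Winternitz checksum: sum of (w-1 - digit). The maximum is
    # 64 * 15 = 960 < 16**3, so it encodes in 3 more base-16 digits.
    checksum = sum((w - 1) - d for d in digits)
    digits.extend([(checksum >> 8) & 0x0F, (checksum >> 4) & 0x0F, checksum & 0x0F])
    return digits

M = wots_parse(b"example message")
```

Each of the resulting 67 digits then indexes how far to advance the corresponding 32-byte hash chain, which is why sk 905, s 910, and pk 915 each have 67 components of 32 bytes.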
For example, the machine-readable instructions may be stored in multiple parts that are individually compressed, encrypted, and stored on separate computing devices, wherein the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement a program such as those described herein.

In another example, the machine-readable instructions may be stored in a state in which they may be read by a computer, but require the addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the instructions on a particular computing device or other device. In another example, the machine-readable instructions may need to be configured (e.g., settings stored, data entered, network addresses recorded, etc.) before the machine-readable instructions and/or the corresponding program(s) can be executed in whole or in part. Accordingly, the disclosed machine-readable instructions and/or corresponding program(s) are intended to encompass such machine-readable instructions and/or program(s) regardless of the particular format or state of the machine-readable instructions and/or program(s) when stored or otherwise at rest or in transit.

The machine-readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine-readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.

As mentioned above, the example processes of FIGS.
9A and 9B and other figures may be implemented using executable instructions (e.g., computer and/or machine-readable instructions) stored on a non-transitory computer- and/or machine-readable medium, such as a hard disk drive, a flash memory, a read-only memory, an optical disk, a digital versatile disk, a cache, a random-access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term "non-transitory computer-readable medium" is expressly defined to include any type of computer-readable storage device and/or storage disk, to exclude propagating signals, and to exclude transmission media.

"Including" and "comprising" (and all forms and tenses thereof) are used herein as open-ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as a transition term in, for example, the preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open-ended.

The term "and/or," when used, for example, in a form such as A, B, and/or C, refers to any combination or subset of A, B, C, such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
Similarly, as used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

As used herein, singular references (e.g., "a," "an," "first," "second," etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more," and "at least one" are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or method actions may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.

The descriptors "first," "second," "third," etc.
are used herein when identifying multiple elements or components that may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order, or arrangement in a list or ordering in time, but are merely used as labels for referring separately to multiple elements or components for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor, such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.

The following examples pertain to further embodiments.

In Example 1, a storage medium includes instructions for: receiving source code for processing a secure workload of a tenant; selecting at least a first compute node of a plurality of compute nodes to provide computation for the workload; processing the source code via a provable toolchain to generate machine code for the first compute node, including, in a first stage, performing one or more transformations of the source code by one or more transformers to generate transformed code and generating an attestation associated with each code transformation, and, in a second stage, receiving the machine code at the first compute node and generating an attestation associated with the first compute node; and providing each attestation of the first and second stages for verification.

In Example 2, the first stage is a build stage and the second stage is a runtime stage for processing of the source code.

In Example 3, the attestations from the first and second stages represent a chain of attestations between the received source code and the machine code generated for the first compute node.

In Example 4, each attestation of the first stage includes at least a measurement or identity of the
received code, a measurement or identity of the transformed code, and an attestation of the transformer that transformed the received code into the transformed code.

In Example 5, at least a first attestation of the first stage further includes one or more security assertions.

In Example 6, the first stage further includes performing one or more inspections of the source code and generating an attestation associated with each code inspection.

In Example 7, an attestation of the second stage includes at least an attestation of the received machine code and an attestation of the first compute node.

In Example 8, the storage medium further includes instructions for: receiving secure data in response to verification of the attestations of the first and second stages; and performing computation of the secure workload using the received secure data.

In Example 9, a system includes one or more processors for processing of data; a memory for storage of data; and a plurality of compute nodes for providing computation for supported workloads, wherein the system is to: receive source code for processing of a secure workload of a tenant at a CSP; select at least a first compute node of the plurality of compute nodes to provide computation for the workload; process the source code via a provable toolchain to generate machine code for the first compute node, including, in a first stage, performing one or more transformations of the source code by one or more transformers to generate transformed code and generating an attestation associated with each code transformation, and, in a second stage, receiving the machine code at the first compute node and generating an attestation associated with the first compute node; and provide each attestation of the first and second stages for verification.

In Example 10, the plurality of compute nodes includes one or more processing devices, one or more hardware accelerators, or both.

In Example 11, the first stage is a build stage and the second stage is a runtime stage for processing of the source
code.

In Example 12, the attestations from the first and second stages represent a chain of attestations between the received source code and the machine code generated for the first compute node.

In Example 13, each attestation of the first stage includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the transformer that transformed the received code into the transformed code.

In Example 14, at least a first attestation of the first stage further includes one or more security assertions.

In Example 15, the first stage further includes performing one or more inspections of the source code and generating an attestation associated with each code inspection.

In Example 16, an attestation of the second stage includes at least an attestation of the received machine code and an attestation of the first compute node.

In Example 17, an apparatus includes one or more processors to process code; and a memory for storage of computer instructions including instructions for: receiving business logic for processing of a secure workload of a tenant, the apparatus including a provable toolchain associated with the tenant; selecting one or more of a plurality of compute nodes to provide computation for the workload; processing the business logic through the toolchain to generate machine code for the one or more compute nodes, including, during a build phase, performing one or more transformations of the business logic by one or more transformers to generate transformed code and generating an attestation associated with each code transformation, and, during a runtime phase, receiving the machine code at each of the one or more compute nodes and generating an attestation associated with each of the one or more compute nodes; providing each attestation of the build phase and the runtime phase for verification; and performing computation of the secure workload utilizing the one or more compute nodes.

In Example 18, the attestations from the build phase and the runtime phase
represent a chain of attestations between the received business logic and the machine code generated for the one or more compute nodes.

In Example 19, each attestation of the build phase includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the transformer that transformed the received code into the transformed code.

In Example 20, each attestation of the runtime phase includes at least an attestation of the received machine code and an attestation of one of the one or more compute nodes.

In Example 21, an apparatus includes: means for receiving source code for processing of a secure workload of a tenant; means for selecting at least a first compute node of a plurality of compute nodes to provide computation for the workload; means for processing the source code via a provable toolchain to generate machine code for the first compute node, including, in a first stage, performing one or more transformations of the source code by one or more transformers to generate transformed code and generating an attestation associated with each code transformation, and, in a second stage, receiving the machine code at the first compute node and generating an attestation associated with the first compute node; and means for providing each attestation of the first and second stages for verification.

In Example 22, the first stage is a build stage and the second stage is a runtime stage for processing of the source code.

In Example 23, the attestations from the first and second stages represent a chain of attestations between the received source code and the machine code generated for the first compute node.

In Example 24, each attestation of the first stage includes at least a measurement or identity of the received code, a measurement or identity of the transformed code, and an attestation of the transformer that transformed the received code into the transformed code.

In Example 25, at least a first attestation of the first stage further includes one or more security assertions.

In Example 26, the first stage further includes
performing one or more inspections of the source code and generating an attestation associated with each code inspection.

In Example 27, an attestation of the second stage includes at least an attestation of the received machine code and an attestation of the first compute node.

In Example 28, the apparatus further includes means for: receiving secure data in response to verification of the attestations of the first and second stages; and performing computation of the secure workload using the received secure data.

Details of the examples may be used anywhere in one or more embodiments.

The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the features set forth in the appended claims.
A computer addressing mode and memory access method rely on a memory segment identifier and a memory segment mask for indicating memory locations. In this addressing mode, a processor receives an instruction comprising the memory segment identifier and memory segment mask. The processor employs a two-level address decoding scheme to access individual memory locations. Under this decoding scheme, the processor decodes the memory segment identifier to select a particular memory segment. Each memory segment includes a predefined number of memory locations. The processor selects memory locations within the memory segment based on mask bits set in the memory segment mask. The disclosed addressing mode is advantageous because it allows non-consecutive memory locations to be efficiently accessed. |
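The two-level decoding summarized above can be sketched in software. This is a hedged model of the decoding behavior only, not the hardware implementation; the segment size of 8 locations and the function names are assumptions made for illustration:

```python
SEGMENT_SIZE = 8  # predefined number of memory locations per segment (assumed)

def decode(segment_id: int, segment_mask: int):
    """Two-level decode: the segment identifier selects a memory segment;
    the set bits of the segment mask select locations within it."""
    base = segment_id * SEGMENT_SIZE              # first level: select the segment
    return [base + bit for bit in range(SEGMENT_SIZE)
            if segment_mask & (1 << bit)]         # second level: mask-selected locations

# Mask 0b10100001 has bits 0, 5, and 7 set, so segment 2 yields
# the non-consecutive locations at offsets 0, 5, and 7 from its base.
addrs = decode(2, 0b10100001)
```

The example shows the stated advantage: a single (identifier, mask) pair names several non-consecutive locations, which a base-plus-offset addressing mode would need multiple accesses to reach.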
CLAIMS

1. A processor, comprising: an interface for receiving an instruction comprising a memory segment identifier and a memory segment mask; a first digital circuit configured to select a memory segment based on the memory segment identifier; and a second digital circuit configured to select one or more memory locations in the memory segment based on the memory segment mask.

2. The processor of claim 1, wherein the memory segment mask identifies only non-consecutive memory locations in the memory segment.

3. The processor of claim 1, wherein the instruction further comprises a plurality of data units and the processor further comprises a third digital circuit configured to store the data units in the selected memory locations.

4. The processor of claim 1, further comprising a memory including a plurality of the memory segments, each of the memory segments including a predetermined plurality of memory locations.

5. The processor of claim 1, wherein the processor is a graphics processing unit.

6. The processor of claim 1, further comprising: a command engine including the interface; a functional unit including the memory segment; and a bus coupling the command engine and the functional unit; wherein the first digital circuit is included in the command engine and the second digital circuit is included in the functional unit.

7. The processor of claim 1, wherein the memory locations correspond to registers included in the processor.

8.
A processor, comprising: a plurality of register segments, each of the register segments including a predetermined plurality of registers; an interface for receiving an instruction comprising a register segment identifier, a register segment mask having a plurality of mask bits corresponding to the registers of a register segment, and a data block comprising one or more data units; means for decoding the register segment identifier to select one of the register segments; a priority decoder configured to determine which of the mask bits are set and to select the registers in the selected one of the register segments corresponding to the set mask bits; and a FIFO memory configured to transfer the data units to the selected registers.

9. The processor of claim 8, wherein the set mask bits identify only non-consecutive registers in the register segment.

10. The processor of claim 8, further comprising a command engine including the priority decoder and the FIFO memory.

11. The processor of claim 8, further comprising: a command engine; a functional unit including at least one of the register segments; and a bus coupling the command engine and the functional unit.

12. The processor of claim 8, further comprising: a plurality of functional units, each comprising one of the register segments.

13. The processor of claim 12, wherein the registers are configuration registers for the functional units.

14. The processor of claim 8, wherein the processor is a graphics processing unit.

15. A method of accessing a memory having a plurality of memory segments, each of the memory segments including a predetermined plurality of memory locations, the method comprising: receiving an instruction comprising a memory segment identifier and a memory segment mask; decoding the memory segment identifier to select a memory segment; selecting one or more memory locations in the memory segment based on the memory segment mask; and accessing the selected memory locations.

16.
The method of claim 15, wherein the memory segment mask indicates only non-consecutive memory locations in the memory segment.

17. The method of claim 15, further comprising: receiving a data block comprising a plurality of data units; and storing the data units in the selected memory locations.

18. The method of claim 15, wherein the memory locations correspond to configuration registers included in a graphics processing unit.

19. The method of claim 15, wherein the memory segment mask includes a plurality of mask bits, and the step of selecting includes: determining which of the mask bits are set; and selecting the memory locations corresponding to the set mask bits.

20. The method of claim 15, wherein the step of decoding is performed by a command engine included in a graphics processing unit and the step of selecting is performed by a functional unit included in the graphics processing unit.

21. A processor, comprising: first means for receiving an instruction comprising a memory segment identifier and a memory segment mask; second means for selecting a memory segment based on the memory segment identifier; and third means for selecting one or more memory locations in the memory segment based on the memory segment mask.

22. The processor of claim 21, wherein the memory segment mask identifies only non-consecutive memory locations in the memory segment.

23. The processor of claim 21, wherein the instruction further comprises a plurality of data units and the processor further comprises fourth means for storing the data units in the selected memory locations.

24. The processor of claim 21, further comprising a memory including a plurality of the memory segments, each of the memory segments including a predetermined plurality of memory locations.

25. The processor of claim 21, wherein the processor is a graphics processing unit.

26.
The processor of claim 21, further comprising: a command engine including the first means; a functional unit including the memory segment; and a bus coupling the command engine and the functional unit; wherein the second means is included in the command engine and the third means is included in the functional unit. 27. The processor of claim 21, wherein the memory locations correspond to registers included in the processor. 28. A computer program stored on a computer-readable medium, comprising: a first program code segment for receiving an instruction comprising a memory segment identifier and a memory segment mask; a second program code segment for selecting a memory segment based on the memory segment identifier; and a third program code segment for selecting one or more memory locations in the memory segment based on the memory segment mask. 29. The computer program of claim 28, wherein the memory segment mask identifies only non-consecutive memory locations in the memory segment. 30. The computer program of claim 28, wherein the instruction further comprises a plurality of data units and the computer program further comprises a fourth program code segment for storing the data units in the selected memory locations. 31. The computer program of claim 28, wherein the memory locations correspond to registers included in the processor.
COMPUTER MEMORY ADDRESSING MODE EMPLOYING MEMORY SEGMENTING AND MASKING TECHNICAL FIELD [0001] The invention relates generally to computers, and more specifically, to memory addressing schemes used in computers. BACKGROUND [0002] In some computing environments, system size and power consumption are key design constraints. For example, in mobile systems such as laptops, personal digital assistants (PDAs), cellular phones and other wireless mobile devices, the physical space and power available for computing resources are relatively limited. In these systems, power is generally limited to available battery capacity and size is generally limited by consumer tastes. [0003] Despite environmental constraints, the market demand for increased functionality has consistently challenged the limits of mobile computing technology. Users seemingly have an insatiable desire for new and enhanced features on their mobile devices. Examples of enhanced mobile features include cameras, both video and still, video players, music players, email, texting, web browsing, games and the like. All of these features can be integrated into a single mobile device with wireless phone and data services. Some of these features, particularly advanced 3-D gaming and other graphics applications, are computationally and memory intensive. To support such demanding applications on resource-limited platforms, it is desirable to have a relatively small computing unit that is capable of providing the necessary performance at reduced levels of power consumption. SUMMARY [0004] It is an advantage of the present invention to provide a computer system that reduces power consumption and increases bus efficiency by reducing bus traffic in certain operational circumstances. In modern computers, power consumption is related to the number of information bits being transferred over internal buses.
To reduce bus traffic, the computer system disclosed herein includes a novel memory addressing mode that significantly reduces the number of address bits used in making certain bus transfers. [0005] In accordance with an exemplary embodiment of the invention, a computer processor addressing mode relies on a memory segment identifier and a memory segment mask for indicating memory locations. In this addressing mode, the processor receives an instruction comprising the memory segment identifier and memory segment mask. The processor decodes the memory segment identifier to select a particular memory segment. Each memory segment includes a predefined number of memory locations. The processor selects memory locations within the memory segment based on mask bits set in the memory segment mask. The disclosed addressing mode is advantageous because it allows both consecutive and non-consecutive memory locations to be efficiently accessed. [0006] Other aspects, features, embodiments, methods and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional features, embodiments, processes and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims. BRIEF DESCRIPTION OF THE DRAWINGS [0007] It is to be understood that the drawings are solely for purposes of illustration and do not define the limits of the invention. Furthermore, the components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. In the figures, like reference numerals designate corresponding parts throughout the different views. [0008] FIG. 1 is a block diagram of a computer system in accordance with an exemplary embodiment of the present invention. [0009] FIG.
2 illustrates an exemplary format of a segment/mask instruction used by the computer system of FIG. 1. [0010] FIG. 3 is a block diagram illustrating details of the register addressing logic and FIFO of the co-processor shown in FIG. 1. [0011] FIG. 4 is a flowchart illustrating exemplary co-processor operation in response to a segment/mask write instruction. [0012] FIG. 5 is a block diagram of a computer system in accordance with an alternative exemplary embodiment of the present invention. [0013] FIG. 6 is a conceptual dataflow diagram illustrating execution of a segment/mask write instruction within the co-processors of FIGS. 1 and 5. DETAILED DESCRIPTION [0014] The following detailed description, which references and incorporates the drawings, describes and illustrates one or more specific embodiments of the invention. These embodiments, offered not to limit but only to exemplify and teach the invention, are shown and described in sufficient detail to enable those skilled in the art to practice the invention. Thus, where appropriate to avoid obscuring the invention, the description may omit certain information known to those of skill in the art. [0015] Turning now to the drawings, and in particular to FIG. 1, there is illustrated a block diagram of a computer system 10 in accordance with an exemplary embodiment of the present invention. The computer system 10 is not limited to any particular operational environment or application; however, it is preferably incorporated in a wireless mobile device, such as a cellular phone, PDA, laptop computer or the like. Furthermore, the computer system 10 can be implemented using any suitable computing technology. For example, the system 10 may be built using any suitable combination of hardware and software components, and may be implemented as a system-on-a-chip, or alternatively, it may be implemented using multiple chips and components.
[0016] The system 10 includes a co-processor 12 in communication with a central processing unit (CPU) 14 by way of a system bus 18. A system memory 16 is also connected to the system bus 18 and is accessible to the CPU 14 and co-processor 12. [0017] The co-processor 12 employs a two-level address decoding scheme that decodes a register segment identifier (ID) and a register segment mask to access individual registers. In the first level of decoding, the register segment ID is decoded by digital circuitry to indicate a register segment. In the second level of decoding, the register segment mask is decoded by digital circuitry to select individual registers in the register segment. The two-level address decoding scheme generally uses fewer addressing bits when compared to conventional global address schemes. In addition, the scheme simplifies the address decoding logic contained in the co-processor 12. [0018] The co-processor 12 supplements the functions of the CPU 14 by offloading certain processor-intensive tasks from the CPU 14 to accelerate the overall system performance. Preferably, the co-processor 12 is a graphics processing unit (GPU) that accelerates the processing of 3-D graphics. [0019] In most GPUs, including co-processor 12, there are internal configuration registers (see FIG. 1, Reg 0 - Reg N) that need to be loaded with data or control values before the GPU begins its rendering operations. To accomplish this rendering configuration, a driver 22 sends a set of register data units to the co-processor 12 through the system bus 18, which is preferably 32 bits and/or 8-bit aligned. These internal configuration registers can be periodically updated by the driver 22 based on the operational state of the rendering. [0020] Conventional GPUs use global addressing to locate their internal configuration registers. Thus, if a conventional GPU has n internal registers, addressing the registers will require m bits, where 2^m > n.
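As a quick illustration of the global-addressing cost just described, the minimum address width m for n registers can be computed directly. This is a hedged sketch for illustration only; the register counts used below are hypothetical and not taken from the specification:

```python
import math

def global_address_bits(n_registers: int) -> int:
    """Minimum address width m such that 2**m >= n_registers."""
    return max(1, math.ceil(math.log2(n_registers)))

# A hypothetical GPU with 300 internal registers needs a 9-bit
# global address (2**9 = 512 covers 300; 2**8 = 256 does not).
print(global_address_bits(300))   # 9
print(global_address_bits(1024))  # 10
```

Every one of those m bits must travel over the bus with each conventionally addressed transfer, which is the overhead the segment/mask mode is designed to reduce.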
With conventional global addressing, there are typically three ways for a driver to send registers to a GPU co-processor: pack mode, pair mode and consecutive mode. Using pack mode, the register address is included with register data in a single bus-wide word, so only one system bus transaction is needed to load each internal register. With pair mode, each register data unit is transferred with its destination register address, both of which are bus-width words. Thus, using this mode, two system bus transactions are needed to load a single internal register. Using consecutive mode, internal registers consecutive in address are updated sequentially by a single instruction. The instruction includes a header packet which contains a base address of the first register and a count of the total number of registers to be updated, followed by the register data units. [0021] Because the system bus width is typically 32 bits or 8-bit aligned, the following problems may occur with the above conventional addressing modes. For pair mode, the register address itself consumes one entire system bus cycle. Thus, the overhead for loading co-processor registers using this mode is relatively large. For pack mode, the register data is limited to the bits remaining after the bits for the register address. The number of data bits allowed in pack mode is insufficient for some applications. Also, sending a group of register data units using pack mode still requires a multi-bit register address for each bus transfer. Where a co-processor has numerous internal registers, the bus bandwidth consumed by address bits can become unsuitably large. The consecutive mode is efficient for loading internal registers consecutive in address. However, there are situations where registers to be loaded are not consecutively located (non-consecutive in address).
In these situations, therefore, multiple short consecutive register batches have to be used, in which case the header packet overhead will outweigh the savings normally achieved by consecutive mode addressing. [0022] The segment/mask addressing mode disclosed herein overcomes the shortcomings of the pair, pack and consecutive addressing modes described above. More particularly, the segment/mask addressing mode effectively reduces the address bits of registers, which means that the bus traffic is reduced and that the data payload becomes greater relative to the number of bits used for addressing. This significantly reduces system power consumption in certain operational scenarios, such as 3-D graphics processing. Furthermore, as bus interfaces become wider, e.g., 64 or 128 bits, the efficiency of the segment/mask address mode increases. [0023] The segment/mask addressing mode entails two aspects that differentiate it from conventional global addressing: register segmenting and register segment masking. The co-processors 12, 412 described herein employ two-level address decoding to access registers addressed by the segment/mask addressing mode. [0024] Register segmenting divides registers into plural, predefined groups (segments). The segments are addressed by the register segment identifier (ID). Preferably, each segment of registers is dedicated to a corresponding functional unit in the co-processor 12 because individual functional units are usually programmed for the same rendering function. Each functional unit can include a different number of registers in its register segment. It is also contemplated that multiple register segments can be included in a single functional unit. [0025] Register segment masking provides that each register in a register segment has a local index number (preferably starting from 0) so that it can be indexed by a register segment mask. The mask indicates the local offset within the register segment 40 of the functional unit, if needed.
There is one mask bit per register in the register segment mask, and a mask bit value of '1' means the corresponding register will be accessed, i.e., either written to or read from. The mask index can start from either the least significant bit (LSB) or the most significant bit (MSB), although starting from the LSB is preferred. [0026] Unlike global addressing, the segment/mask address mode uses the register segment ID to differentiate among register segments. There are fewer register segments than registers, so addressing bits are conserved. Furthermore, instead of using a conventional binary address to address each individual register, the register segment mask uses one bit to index each register in a register segment. This indexing supports updating multiple registers at non-consecutive addresses, in addition to updating registers at consecutive locations. [0027] By supporting registers at both consecutive and non-consecutive addresses, the segment/mask mode provides an efficient and easy way to program and pass data to the registers. It also simplifies design because segment/mask mode addressing does not always require an address comparator, and it makes driver encoding easier by focusing on registers organized into groups (segments). [0028] Returning now to FIG. 1, the co-processor 12 utilizes the segment/mask addressing mode to efficiently access configuration registers Reg 0 - Reg N in each of the functional units 30 - 34. The co-processor 12 includes a command engine 24, a plurality of pipelined functional units 30 - 34, and an internal bus 28 connecting the command engine 24 and the functional units 30 - 34. Graphics data being processed internally by the co-processor pipeline can be transferred between the functional units by a data pipeline (not shown).
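The masking scheme described above can be modeled in a few lines of Python. This is an illustrative behavioral sketch only; the function name and types are not from the specification:

```python
def mask_to_register_indices(segment_mask: int) -> list[int]:
    """Return the local register indices whose mask bits are set,
    scanning from the LSB (the preferred order) to the MSB."""
    indices = []
    bit = 0
    while segment_mask:
        if segment_mask & 1:
            indices.append(bit)
        segment_mask >>= 1
        bit += 1
    return indices

# Mask 0b1010011010 selects local registers 1, 3, 4, 7 and 9,
# a mix of consecutive and non-consecutive locations.
print(mask_to_register_indices(0b1010011010))  # [1, 3, 4, 7, 9]
```

Note that one pass over the mask recovers both consecutive runs (registers 3 and 4) and isolated registers (1, 7, 9), which is precisely the flexibility the mode claims over consecutive-mode addressing.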
[0029] The command engine 24 is the primary functional unit in the co-processor 12 and includes a bus interface 11 for receiving instructions and configuration register data from the driver 22 through the system bus 18. The command engine 24 decodes the instructions and passes the configuration data to the different functional units 30 - 34 for storage in the internal register segments 40 over the internal bus 28. The internal bus 28 is preferably 64 data bits in width. The configuration data is directed to specific registers in the segments 40 based on the register segment ID and register segment mask, using the segment/mask address mode. [0030] The command engine 24 includes register addressing logic (RAL) 38 and a first-in-first-out (FIFO) memory 37 for processing instructions having the segment/mask address mode. Details of an exemplary segment/mask instruction format are described below in connection with FIG. 2, and details of the FIFO 37 and RAL 38 are described below in connection with FIG. 3. [0031] Each of the functional units 30 - 34 performs one or more specific operations on data that it receives. Preferably, the operations are those involved in rendering 3-D graphics output. The operation of the functional units 30 - 34 is controlled by the command engine 24. To control the functional units 30 - 34, the command engine 24 issues control signals and configuration data to the functional units 30 - 34, respectively, over the internal bus 28. Each of the functional units 30 - 34 includes a register decoder 26 and a register segment 40. The register decoder 26 decodes individual register addresses that it receives over the internal bus 28. Each register segment 40 includes a predefined number of registers, Reg 0 - Reg N. The registers can be any suitable size, and are preferably 64 bits. [0032] The CPU 14 is the primary processor in the system 10 and executes instructions stored in the system memory 16.
Although the CPU 14 may be any suitable processor, it is preferably a commercially-available microprocessor core, such as an ARM9 processor core from ARM, Inc., or a digital signal processor (DSP). [0033] The system memory 16 stores data and executable instructions used by the CPU 14 and co-processor 12. The system memory 16 may be implemented using any suitable storage technology, and it is preferably a solid-state memory device, such as RAM or flash memory. The memory 16 may also use other memory technologies, such as optical or magnetic memory disk drives. [0034] A software application 19 requiring services provided by the co-processor 12 may be stored in the memory 16. The application 19 can be a software program such as a 3-D game. The memory 16 also stores an operating system (OS) software program 20 and the driver 22 for permitting the OS 20 to call the services of the co-processor 12. A commercially-available operating system, such as BREW, Symbian or Windows Mobile, can be used by the system 10. The application 19 can also use industry standard application programming interfaces (APIs), such as those specified by OpenGL ES 1.x or 2.x for graphics applications, or DirectX 9.0. [0035] The system bus 18 is the interface through which the co-processor 12 receives instructions. It is preferably implemented using an industry-standard bus protocol, such as the Advanced Microcontroller Bus Architecture (AMBA) AXI Bus. [0036] FIG. 2 illustrates an exemplary format of a segment/mask instruction 100 used by the computer system 10 of FIG. 1. During operation of the system 10, the instruction 100 is sent by the driver 22 as a data packet over the system bus 18 to an address indicating the command engine 24 of the co-processor 12. [0037] The instruction 100 includes an instruction word 102 and a data block 104 comprising a plurality of data units 114 - 116.
The instruction word 102 has bit fields defining, respectively, an instruction type 106, a register count 108, a register segment ID 110 and a register segment mask 112. Each of the bit fields includes a predefined number of bits sufficient to support the numbers of instructions, register segments, and registers per segment for the computer system 10. The instruction 100 has a predefined bit-width of M bits, where M is an integer. The bit-width M is preferably the same as the width of the internal bus 28, which is 64 bits. [0038] The instruction type 106 is essentially an opcode that tells the command engine 24 what the instruction is. In this case, the instruction type is a predefined bit code indicating that the instruction is a segment/mask write instruction. The instruction type can alternatively indicate that the instruction is a segment/mask read instruction or some other instruction using the segment/mask addressing mode. [0039] Only registers needing to be updated are loaded by the segment/mask write instruction. The number of registers to be updated is indicated by the register count 108. The register count 108 also indicates the number of data units included in the instruction 100. [0040] The register segment mask 112 indicates which registers are to be updated by the instruction. The register segment mask 112 includes a plurality of mask bits, each corresponding to an individual register in a segment. The mask LSB points to the first register of the register segment specified by the register segment ID 110. Alternatively, the mask MSB can point to the first register of the register segment. [0041] The data block 104 includes one or more data units 114 - 116. The data units can be data, configuration settings, instructions, constants or any other information that is usable by the co-processor 12. The data units 114 - 116 can be any suitable size, and are preferably the same size as the registers, which is preferably 64 bits.
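A behavioral sketch of packing and unpacking the instruction word 102 follows. The specification fixes only the overall 64-bit width, so the individual field widths used here (8-bit type, 8-bit count, 8-bit segment ID, 40-bit mask) are assumptions chosen purely for illustration:

```python
# Assumed field widths (the specification leaves exact widths open):
TYPE_BITS, COUNT_BITS, SEG_BITS, MASK_BITS = 8, 8, 8, 40

def pack_instruction_word(itype: int, count: int, seg_id: int, seg_mask: int) -> int:
    """Pack the four fields into one 64-bit instruction word, with the
    mask in the low bits so its LSB indexes local register 0."""
    word = seg_mask & ((1 << MASK_BITS) - 1)
    word |= (seg_id & 0xFF) << MASK_BITS
    word |= (count & 0xFF) << (MASK_BITS + SEG_BITS)
    word |= (itype & 0xFF) << (MASK_BITS + SEG_BITS + COUNT_BITS)
    return word

def unpack_instruction_word(word: int) -> tuple:
    """Recover (type, count, segment ID, segment mask) from the word."""
    seg_mask = word & ((1 << MASK_BITS) - 1)
    seg_id = (word >> MASK_BITS) & 0xFF
    count = (word >> (MASK_BITS + SEG_BITS)) & 0xFF
    itype = (word >> (MASK_BITS + SEG_BITS + COUNT_BITS)) & 0xFF
    return itype, count, seg_id, seg_mask

w = pack_instruction_word(0x01, 5, 1, 0b1010011010)
print(unpack_instruction_word(w))  # (1, 5, 1, 666)
```

The key design point mirrored here is that a single bus-wide word carries the type, count, segment ID and mask together, so no separate bus cycles are spent on per-register addresses.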
[0042] The number of registers in a functional unit, and thus the number of register segment mask bits, can vary depending upon the requirements of each functional unit. The register segment mask 112 includes enough bits to cover the register segment with the greatest number of registers. [0043] FIG. 3 is a block diagram illustrating details of the register addressing logic (RAL) 38 and FIFO 37 of the co-processor 12. The FIFO 37 stores the data block 104 with the data units 114 - 116 in order. The RAL 38 includes priority decoder logic 150, a null selector 152, a mask register 154, and-gates 156 and a counter 158. In response to receiving the register segment ID (reg seg ID), register segment mask 112, and register count (reg cnt) 108, the RAL 38 generates one or more bus addresses that are output on the address portion of the internal bus 28. Each bus address includes a segment address, which is the register segment ID, and a register address that is output from the priority decoder logic 150. [0044] The segment address indicates a specific register segment 40 in one of the functional units 30 - 34. Within a recipient functional unit, logic hardware receives the segment address and enables the register decoder 26 when there is a matching segment address. [0045] The register address indicates a specific register in the addressed register segment. When the register decoder 26 within the recipient functional unit is indicated by the register segment ID 110, it decodes the register address on the internal bus 28, causing the data unit currently present on the internal bus 28, which is output from the FIFO 37, to be latched into the specific register being addressed within the selected register segment. [0046] The RAL 38 and FIFO 37 operate together as follows. Initially, the register segment mask 112 is loaded into the mask register 154 and latched onto the inputs of the and-gates 156.
The counter 158 is loaded with the register count 108 and the data block 104 is loaded into the FIFO 37. After the RAL 38 and FIFO 37 are initialized, the RAL 38 sequentially detects each set bit in the stored register segment mask, and together, the RAL 38 and FIFO 37 sequentially output bus addresses and corresponding data units onto the internal bus 28, one pair during each clock period, until all of the data units are loaded into the destination registers. [0047] The priority decoder logic 150 and the null selector 152 cooperate together to read each of the set bits in the register segment mask stored in the mask register 154. Preferably, the stored register segment mask is read by the priority decoder logic 150 and null selector 152 from the LSB to the MSB; however, these devices can be alternatively configured to detect set mask bits from the MSB to LSB of the register segment mask. [0048] The priority decoder logic 150 is combinational logic responsive to the output of the mask register 154. The priority decoder logic 150 detects a leading one bit in the register segment mask and generates a register address corresponding to the position of the leading one in the register segment mask. Preferably, the leading one bit is the least significant bit in the stored register segment mask that is set to one. [0049] The null selector 152 is combinational logic that nulls previously read set mask bits by setting them to zero after they have been input to the priority decoder logic 150. The null selector 152 does this by decoding an output from the priority decoder logic 150 to output logical zeros to and-gate 156 inputs corresponding to register segment mask bits that have already been processed by the RAL 38. For set mask bits that have not been processed, the null selector 152 outputs logical ones to the corresponding and-gates 156 so that the corresponding latched mask bits persist in the mask register 154. 
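The combined behavior of the priority decoder logic 150 and null selector 152 — find the least significant set bit, then clear it so the next bit can be found — corresponds to two well-known bit manipulations, sketched here in Python (function names are illustrative, not from the specification):

```python
def lowest_set_bit_index(mask: int) -> int:
    """Priority-decoder behavior: position of the least significant set bit."""
    return (mask & -mask).bit_length() - 1

def clear_lowest_set_bit(mask: int) -> int:
    """Null-selector behavior: zero out the bit just consumed."""
    return mask & (mask - 1)

# Walk a mask the way the RAL does, one set bit per clock cycle.
mask, order = 0b1010011010, []
while mask:
    order.append(lowest_set_bit_index(mask))
    mask = clear_lowest_set_bit(mask)
print(order)  # [1, 3, 4, 7, 9]
```

In hardware this sequencing is done combinationally per clock period rather than in a software loop, but the register addresses produced cycle by cycle follow the same LSB-to-MSB order.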
[0050] A clock signal (CLK) is applied to the FIFO 37, mask register 154 and counter 158. One set mask bit and corresponding data unit stored in the FIFO 37 are consumed per clock cycle and output to the register segments 40. [0051] The counter 158 decrements the stored register count by one each clock cycle. The RAL 38 and FIFO 37 conclude processing of the register segment mask 112 and the data block 104 when the stored register count reaches zero. [0052] Although FIG. 3 illustrates a write operation being performed by the RAL 38 and FIFO 37, one of ordinary skill in the art will recognize that the RAL 38 and FIFO 37 can be readily configured to read data from the register segments 40, or perform address decoding for other instructions incorporating the segment/mask addressing mode. [0053] FIG. 4 is a flowchart 200 illustrating the operation of the co-processor 12 in response to a segment/mask write instruction. In step 202, the command engine 24 of the co-processor 12 receives the segment/mask instruction 100 issued by the driver 22. The instruction 100 includes the instruction word 102 and data block 104. The command engine 24 identifies the instruction as being a segment/mask address mode instruction from the instruction type 106, and accordingly loads the register segment mask 112 and register count 108 into the RAL 38, and also begins loading the FIFO 37 with the data block 104 as the data units 114 - 116 arrive over the system bus 18. [0054] In step 204, the functional units 30 - 34 decode the register segment ID, which is sent over the internal bus 28 as the segment address, to select the destination register segment. This step selects the register decoder 26 within the recipient functional unit. [0055] In step 206, the RAL 38 decodes the register segment mask 112, as discussed above in connection with FIG. 3, to generate the bus address, which selects individual registers within the identified register segment.
The bus address is broadcast over the internal bus 28 and decoded by the register decoder 26 in the recipient functional unit so that the data unit output by the FIFO 37 can be stored in the register indicated by the register address (step 208). [0056] The consecutively-ordered data units 114 - 116 of the data block 104 are transferred into registers indexed by the register segment mask 112 from the least significant mask bit to the most significant mask bit. That is, the first data unit, Data Unit0, is stored in the first register, indicated by the lowest set bit in the register segment mask; the second data unit, Data Unit1, is stored in the register indicated by the next lowest set bit in the mask, and so forth. Alternatively, the data block transfer can occur from the most significant bit to the least significant bit. [0057] In step 210, the RAL 38 determines whether the register count has been decremented to zero. If so, the processing of the segment/mask write instruction terminates. If not, the method returns to step 206 and the next register segment mask bit and data unit are processed. [0058] FIG. 5 is a block diagram of a computer system 400 in accordance with an alternative exemplary embodiment of the present invention. The computer system 400 performs many of the same functions as the first computer system 10, and it can be implemented using the same computing technologies as described above for the first computer system 10. However, the computer system 400 includes a co-processor 412 that has an alternative architecture providing another two-level address decoding scheme that is distributed between a command engine 424 and functional units 430 - 434. The distributed two-level decoding scheme reduces the complexity of register address decoding and reduces the processing load of the command engine 424. [0059] In this embodiment, the command engine 424 includes a segment decoder 426, and each functional unit includes register masking logic 406.
Instead of a common internal bus 28, dedicated buses 401 - 404 connect the functional units 430 - 434 and the command engine 424. The command engine 424 decodes incoming instructions, such as instruction 100 shown in FIG. 2, and passes configuration data to the different functional units 430 - 434 for storage in the internal register segments 40 over the dedicated buses 401 - 404. [0060] The command engine 424 performs the first-level decoding and the functional units 430 - 434 perform the second-level decoding. In the first level of decoding, the command engine 424 decodes the register segment ID 110 using the segment decoder 426. The register segment ID 110 indicates which one of the functional units 430 - 434 is to receive the data units 114 - 116 contained in the data block 104 associated with the instruction 100. Upon decoding the register segment ID 110, the command engine 424 routes the data block 104, register count 108 and register segment mask 112 to the recipient functional unit containing the selected register segment. The output of the segment decoder 426 is used to enable the dedicated bus corresponding to the identified functional unit. [0061] In the second level of decoding, the recipient functional unit interprets the register segment mask 112 to determine which of its registers are to receive the individual data units 114 - 116 contained in the data block 104. This interpretation is performed by the register masking logic 406. Essentially, the register masking logic 406 includes the RAL 38 and FIFO 37 as shown in FIG. 3. However, the RAL used in the register masking logic 406 is configured differently. Unlike RAL 38, the RAL in the register masking logic 406 does not receive the register segment ID or output the segment address. In addition, the priority decoder in the register masking logic 406 does not output a register address. Instead, it outputs individual register enable signals corresponding to each register in the register segment 40.
In other respects, the register masking logic 406 functions similarly to the RAL 38 and FIFO 37 as described above in connection with FIG. 3. [0062] In the co-processor 412, the register segment mask 112 greatly simplifies the second-level decoding because the register masking logic 406 can use the mask to directly select the addressed registers, instead of using address comparison, which is typically used in global addressing schemes. The register segment mask also permits linear addressing time for the local registers and simplifies the address decoding logic. [0063] Register segmenting reduces the burden of first-level decoding on the command engine 424 because the register segment ID 110 generally uses fewer addressing bits when compared to conventional global address schemes. In addition, the command engine 424 is only concerned with the register segment ID 110 and does not need to consider either the data block 104 or the register segment mask 112. This simplifies the decoding logic of the command engine 424. [0064] In an alternative architecture, the co-processor 412 includes a common internal bus, such as internal bus 28, between the command engine 424 and the functional units 430 - 434, instead of the dedicated buses 401 - 404. The command engine 424 is configured to broadcast the instruction word 102 over the common internal bus to the functional units 430 - 434, followed by the data units 114 - 116. The common internal bus includes a signal bit that is set only when the instruction word 102 is broadcast on the bus by the command engine 424. In bus cycles when the signal bit is set, the functional units 430 - 434 decode the register segment ID 110 currently on the common internal bus to determine which functional unit is to receive the data units contained in the instruction 100. Each functional unit 430 - 434 includes a segment address decoder 426 for this purpose.
If the signal bit is not set, the functional units do not attempt to decode incoming data units 114 - 116 presently on the common internal bus. [0065] If the register segment 40 in a functional unit 430 - 434 is to receive the data units 114 - 116, as indicated by the register segment ID 110, the recipient functional unit latches the register segment mask 112 internally. The recipient functional unit then uses the register masking logic 406 to apply the register segment mask 112 to select the individual registers in the register segment 40 that are to receive the data units 114 - 116 subsequently received from the command engine 424 over the common internal bus. [0066] FIG. 6 is a conceptual dataflow diagram 300 illustrating the execution of an exemplary segment/mask write instruction within the co-processor 12 of FIG. 1 and the co-processor 412 of FIG. 5. In this example, registers 1, 3, 4, 7 and 9 of Register Segment 1 are to be updated. The register segment ID 110 in the instruction word 102 is set to '0...01' so that Register Segment 1 (Reg Seg 1) is identified to be updated with five data units (Data 0 - Data 4) contained in data block 104. Also, the mask bits corresponding to registers 1, 3, 4, 7 and 9 are set to '1' in the register segment mask 112 of the instruction word 102. This yields a value of '0...01010011010' for the register segment mask 112 in the instruction word 102, which indicates to the logic hardware 302 that registers 1, 3, 4, 7 and 9 in Reg Seg 1 are to be updated. [0067] The data block 104 contains the five data units, Data 0 - Data 4, in order. The logic hardware 302 takes as input the register segment mask 112, value '0...01010011010', and the register segment ID 110, value '0...01'.
In response to these inputs, the logic hardware 302 loads Data 0 into register 1 of Reg Seg 1, Data 1 into register 3 of Reg Seg 1, Data 2 into register 4 of Reg Seg 1, Data 3 into register 7 of Reg Seg 1 and Data 4 into register 9 of Reg Seg 1. [0068] The logic hardware 302 is digital circuitry that includes any suitable combination and number of logic gates and/or logic devices required to perform the functionality described herein for the disclosed embodiments. The logic hardware 302 can include the register decoder 26, FIFO 37 and RAL 38, or alternatively, the segment decoder 426 and register masking logic 406, as well as other logic hardware or any suitable combination of the foregoing. [0069] Although the foregoing detailed description illustrates the segment/mask mode addressing scheme in the context of co-processors 12 and 412, it will readily occur to one of ordinary skill in the art that segment/mask mode addressing can be employed in any suitable computing architecture, including stand-alone CPUs, networked computers, multi-processor systems or the like. In addition, the segment/mask mode addressing scheme can also be implemented in software code. A computer program stored on a computer-readable medium may include a first code segment for receiving an instruction comprising a memory segment identifier and a memory segment mask; a second code segment for selecting a memory segment based on the memory segment identifier; and a third code segment for selecting one or more memory locations in the memory segment based on the memory segment mask. The computer program may include additional code segments for performing the other functions described herein. The program code may be written in any suitable programming language or code, including firmware or microcode, and the computer-readable medium may be any suitable computer memory for storing the program code.
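The two-level segment/mask selection described above can be sketched in software. The following Python sketch is a hypothetical illustration only (the function name and data shapes are assumptions, not taken from the specification): it selects a register segment by its ID, expands the segment mask into register indices in ascending order, and writes the data units into those registers in order, mirroring the FIG. 6 example.

```python
def apply_segment_mask_write(segments, segment_id, segment_mask, data_units):
    """Write data_units into the registers of segments[segment_id] whose mask
    bit is set. Bit i of segment_mask selects register i; data units are
    consumed in order, lowest selected register first."""
    registers = segments[segment_id]  # first-level decode: pick the segment
    targets = [i for i in range(len(registers)) if (segment_mask >> i) & 1]
    if len(targets) != len(data_units):
        raise ValueError("mask selects %d registers but %d data units given"
                         % (len(targets), len(data_units)))
    for reg_index, value in zip(targets, data_units):  # second-level decode
        registers[reg_index] = value
    return registers

# Example of FIG. 6: registers 1, 3, 4, 7 and 9 of Register Segment 1
segments = {1: [0] * 10}
mask = 0b1010011010  # bits 1, 3, 4, 7 and 9 set
apply_segment_mask_write(segments, 1, mask,
                         ["Data0", "Data1", "Data2", "Data3", "Data4"])
```

As in the hardware, the caller never compares global register addresses; the mask alone drives the per-register selection.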
[0070] Other embodiments and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. The above summary and description are illustrative and not restrictive. The invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings. The scope of the invention should, therefore, not be limited to the above summary and description, but should instead be determined by the appended claims along with their full scope of equivalents. [0071] What is claimed is:
<P>PROBLEM TO BE SOLVED: To provide a method, and a machine-readable medium, for estimating current density parameters on a signal lead of an integrated circuit using computer-aided design (CAD) while reducing computer resources (memory usage and computation time). <P>SOLUTION: The signal lead is modeled as an impedance network including resistors and capacitors, and the drive cell is modeled as a triangular current signal; the parameters of the triangular signal (peak values and periodicity) are determined from corresponding characterization data of the drive cell. The current density parameters on the signal lead are estimated by measuring the signal transmitted over the impedance network. <P>COPYRIGHT: (C)2005,JPO&NCIPI
A method of estimating a plurality of current density parameters that characterize an electrical signal transmitted over a signal lead of an integrated circuit, the signal lead connecting a drive cell to a load cell, the method being implemented in a computer-aided design (CAD) tool used to design the integrated circuit and comprising: modeling the drive cell in the form of a trapezoidal signal and the signal lead in the form of an impedance network, wherein one parallel side of the trapezoidal signal is modeled to be considerably shorter than the other parallel side of the trapezoidal signal; simulating the operation of the integrated circuit by providing the trapezoidal signal as the input of the impedance network; and measuring the electrical signal on the impedance network to estimate the plurality of current density parameters on the signal lead. A machine-readable medium comprising one or more instruction sequences that cause a system to estimate a plurality of current density parameters that characterize an electrical signal transmitted over a signal lead of an integrated circuit, the signal lead connecting a drive cell to a load cell, wherein, when the one or more instruction sequences are executed by one or more processors included in the system, the one or more processors perform operations comprising: modeling the drive cell in the form of a trapezoidal signal and the signal lead in the form of an impedance network, wherein one parallel side of the trapezoidal signal is modeled to be considerably shorter than the other parallel side of the trapezoidal signal; simulating the operation of the integrated circuit by supplying the trapezoidal signal as the input of the impedance network; and measuring the electrical signal on the impedance network to estimate the plurality of current density parameters on the signal lead.
Estimating current density parameters on integrated circuit signal leads. The present invention relates to computer-aided design (CAD) used in integrated circuit design, and more particularly to a method and apparatus for measuring current density parameters on the signal leads of an integrated circuit. Integrated circuits typically include several components, such as flip-flops, logic gates, multiplexers, and comparators. The components in an integrated circuit are interconnected using, for example, metal wiring, generally called signal leads. Thus, a signal lead is typically designed to carry/transmit signals from one component (hereinafter the "drive cell") to another component (hereinafter the "load cell"). The transmitted signal is characterized by several parameters ("current density parameters"), such as the average and RMS (root mean square) current densities. During integrated circuit design, it is often desirable to estimate the current density parameters on the signal leads. For example, the estimated values can be used to ensure that, when the circuit is used after manufacture, the current density parameters do not exceed predetermined thresholds. Such checks generally lead to reliable operation during the planned product lifetime of the integrated circuit, as is well known in the relevant arts. The current density is typically estimated at the design stage by running a simulation (e.g., using a SPICE program well known in the relevant arts) based on a digital representation of the integrated circuit. According to one conventional method, the components to be simulated include all the low-level components of the drive cell (such as transistors) and the load cells, along with the connected signal leads. One advantage of such an approach is that the estimated values (of the current density parameters) are close to the parameters encountered when the integrated circuit is operating.
However, one problem with this method is that significant computational resources (e.g., processor time, memory) are required to determine the parameters of a single signal on a signal lead. Furthermore, such methods require a significant amount of data entry, which is not always possible. Thus, in at least some circumstances, the corresponding solution may not be acceptable. The present invention is described with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar components. 1. Overview One feature of the present invention exploits the observation that, during normal operation after fabrication, a drive cell generates what is effectively a triangular signal as the input to the signal lead connecting it to the load cell. Thus, in one embodiment, the simulation is performed by modeling the signal leads in the form of an impedance network (e.g., resistors and capacitors) and providing the input signal to the signal leads in the form of a triangular signal. The signal levels on the impedance network are then measured in order to estimate the current densities that the signal leads will encounter once the corresponding integrated circuit is deployed. The aggregate resources and/or time required to measure the current density parameters on the signal leads of an integrated circuit are consequently reduced, as described below. Several features of the present invention are described below with reference to an example environment for illustration purposes. It should be understood that many specific details, relationships, and methods are provided to assist in a complete understanding of the present invention. However, one skilled in the relevant arts will readily recognize that the present invention may be practiced without one or more of the specific details, or with other methods and the like.
In other instances, well-known structures or operations are not shown in detail in order to avoid obscuring the present invention. 2. Computer System FIG. 1 is a block diagram of a computer system 100 illustrating an example system for implementing the present invention. The computer system 100 includes one or more processors, such as a central processing unit (CPU) 110, a random access memory (RAM) 120, a secondary memory 130, a graphics controller 160, a display device 170, a network interface 180, and an input interface 190. All components except the display device 170 communicate with each other via a communication path 150, which includes several buses as is well known in the relevant arts. The components of FIG. 1 are described in further detail below. CPU 110 executes instructions stored in RAM 120 to provide several features of the present invention. The CPU 110 may include multiple processing units, each potentially designed for a specific task. Alternatively, the CPU 110 can include only a single processing unit. The RAM 120 receives instructions from the secondary memory 130 over the communication path 150. Data representing the various cells (in a library) and the determined current density parameters corresponding to each signal lead are stored in, and retrieved from, secondary memory 130 (and/or RAM 120) during the execution of the instructions. The graphics controller 160 generates display signals (for example, in RGB format) for the display device 170 based on data/commands received from the CPU 110. Display device 170 includes a display screen for displaying the images defined by the display signals. Input interface 190 corresponds to a keyboard and/or mouse and generally allows a user to provide inputs. The network interface 180 allows some of the inputs (and outputs) to be provided over a network.
In general, the display device 170, the input interface 190, and the network interface 180 enable a user to design an integrated circuit, in a known way, using the cell libraries stored in the secondary memory 130 (or received over the network interface 180). Secondary memory 130 includes a hard drive 131, a flash memory 136 and a removable storage element drive 137. The secondary memory 130 stores the software instructions and data (e.g., the cell libraries and the determined current density parameters corresponding to each signal lead) that enable the computer system 100 to provide several features in accordance with the present invention. Some or all of the data and instructions may be provided on the removable storage element 140, from which they are read by the removable storage element drive 137 and provided to the CPU 110. Floppy drives, magnetic tape drives, CD-ROM drives, flash memories, and removable memory chips (PCMCIA cards, EPROMs) are examples of such a removable storage element drive 137. The removable storage element 140 is implemented using a medium and storage format compatible with the removable storage element drive 137 so that the removable storage element drive 137 can read the data and instructions. Accordingly, removable storage element 140 includes a computer-readable storage medium having the computer software and/or data stored therein. An embodiment of the present invention is implemented using software running (i.e., executing) within computer system 100. In this document, the term "computer program product" is used generally to refer to the removable storage element 140 or a hard disk installed in the hard drive 131. These computer program products are means for providing software to the computer system 100. As described above, CPU 110 retrieves the software instructions and executes those instructions to provide the various features of the present invention.
The features of the present invention are described in detail below. For purposes of illustration, the features are described with reference to an example integrated circuit. 3. Example Components FIG. 2 is a block diagram of an example integrated circuit 200 in which the current density parameters on the leads can be measured in accordance with a feature of the invention. Integrated circuit 200 is shown containing a drive cell 210, a lead network 240, and load cells 260-A through 260-Z. For conciseness, the figure shows only a few representative components connected in a simple topology. However, typical environments using the various features of the present invention contain many more components connected using complex topologies. Each block is described in further detail below. The drive cell 210 represents a functional block (e.g., a flip-flop) that generates signals to be transmitted to other cells. The signal may be transmitted to several cells using the corresponding leads shown in FIG. 2. The leads are shown contained in lead network 240. Load cells 260-A through 260-Z represent functional blocks (e.g., comparators) that perform corresponding tasks based on the signals received from drive cell 210. Each load cell is shown receiving a corresponding signal on a corresponding signal lead of lead network 240. The manner in which the current density parameters on the signal leads of the lead network 240 can be determined is described below. As will be apparent to those skilled in the relevant arts, these techniques can be applied in other environments as well. Such applications are also intended to be within the scope and spirit of various aspects of the present invention. 4. Principle FIG. 3 is a waveform illustrating an input signal that can be used to determine the current density parameters on each lead in accordance with a feature of the present invention.
The waveform is drawn with time on the x-axis and current on the y-axis. The input signal of FIG. 3 is designed based on the observation that the current flowing through a load cell (modeled as a capacitor) increases sharply as the load cell accumulates charge, and decreases in a similar manner during discharge. The resulting current waveform through the load cell during charging and discharging is well approximated by a triangular signal. Therefore, a simulation that enables accurate estimation of the current density parameters on a signal lead can be performed using an input signal modeled as a triangular signal. However, various parameters are required to generate the input signal of FIG. 3. As described in the next section, the positive peak (Jpkp) 310 and the negative peak (Jpkn) 320 are given by the characterization of the drive cell. The periodicity (T) of the overall signal is determined by the expected frequency of operation of the integrated circuit. The widths of the positive and negative pulses, Ta and Tb, are also required and are derived, as explained below, from the RMS current (Jrms) and the average current (Javg), which can also be obtained from a characterization simulation of the drive cell. As is well known, the waveform of FIG. 3 satisfies the triangle-pulse relations: Javg = (Jpkp*Ta - r*Jpkn*Tb) / (2*T) and Jrms^2 = (Jpkp^2*Ta + Jpkn^2*Tb) / (3*T). Here r represents the recovery factor, whose value generally depends on the specific manufacturing technology. The recovery factor can be determined experimentally; for further detail, see Ting L. M., May J. S., Hunter W. R. and McPherson J. W., "AC Electromigration Characterization and Modeling of Multilevel Interconnects," International Reliability Physics Symposium, 1993. In one embodiment, the recovery factor equals 0.5. Solving the two equations for Ta and Tb yields: Ta = (2*T*Javg*Jpkn + 3*r*T*Jrms^2) / (Jpkp*(Jpkn + r*Jpkp)) and Tb = (3*T*Jrms^2 - 2*T*Javg*Jpkp) / (Jpkn*(Jpkn + r*Jpkp)). Therefore, given T, Jpkp and Jpkn, the triangular waveform of FIG. 3 can be constructed using the above equations.
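The algebra relating (Javg, Jrms) to the pulse widths (Ta, Tb) can be checked numerically. The Python sketch below is a hypothetical illustration, assuming the standard triangle-pulse relations stated in the comments (with Jpkn taken as a positive magnitude); the function names are illustrative, not from the specification.

```python
def pulse_widths(T, Jpkp, Jpkn, Javg, Jrms, r=0.5):
    """Solve the linear system
         Jpkp*Ta - r*Jpkn*Tb    = 2*T*Javg      (average, recovery factor r)
         Jpkp^2*Ta + Jpkn^2*Tb  = 3*T*Jrms^2    (mean square)
       for the positive/negative pulse widths Ta and Tb."""
    den = Jpkn + r * Jpkp
    Ta = (2 * T * Javg * Jpkn + 3 * r * T * Jrms**2) / (Jpkp * den)
    Tb = (3 * T * Jrms**2 - 2 * T * Javg * Jpkp) / (Jpkn * den)
    return Ta, Tb

def waveform_stats(T, Jpkp, Jpkn, Ta, Tb, r=0.5):
    """Recover (Javg, Jrms) from the pulse widths, for a round-trip check."""
    Javg = (Jpkp * Ta - r * Jpkn * Tb) / (2 * T)
    Jrms = ((Jpkp**2 * Ta + Jpkn**2 * Tb) / (3 * T)) ** 0.5
    return Javg, Jrms
```

Because the two relations are linear in Ta and Tb, `pulse_widths` exactly inverts `waveform_stats`: picking any (Ta, Tb), computing the statistics, and solving again returns the original widths.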
The above principle, and the manner in which the input signal can be used to estimate the current density parameters on a signal lead, are described in further detail below. 5. Method FIG. 4 is a flow diagram illustrating a method for quickly estimating the current density parameters of a signal lead in accordance with a feature of the present invention. The method begins at step 401, from which control immediately passes to step 420. The method is described with reference to FIG. 1 for illustration purposes only; it can be implemented in other types of systems as well. At step 420, the computer system 100 receives the data necessary to characterize the triangular input signal that is to be applied as the input to the signal lead. As will be appreciated, the specific shape of the triangular signal can be determined from parameters such as the positive peak, negative peak, average and root mean square (RMS) values (of the current signal). These values are in turn determined by various characteristics of the drive cell. In one embodiment, a set of such parameters is generated for each combination of load and slew (the speed at which the clock/data signal transitions) using a CAD tool such as the widely available SPICE. At step 430, computer system 100 generates the data representing the triangular input signal (e.g., the peaks and the periodicity of the positive and negative portions) based on the data received at step 420. The periodicity of the signal is determined by the frequency at which the integrated circuit needs to operate. At step 440, the computer system 100 models the lead network 240 as an impedance network containing impedances connected according to a topology. In one embodiment, each signal lead is modeled as an RC network (containing resistors and capacitors).
Such modeling can be performed in a well-known manner using several commercially available CAD tools. At step 450, the computer system 100 performs the simulation by applying the input signal generated at step 430 to the impedance network modeled at step 440 (along with the load cells). At step 460, the signal levels on the various impedances are examined and the current density parameters on each signal lead are determined. The method ends at step 499. In general, the steps of FIG. 4 are incorporated into various CAD tools. The determined parameters can be compared against the corresponding thresholds, and a report can be generated indicating any violations (i.e., values outside the permitted threshold limits). The description is continued with reference to the manner in which an integrated circuit is modeled in an embodiment of the present invention. 6. Modeled Integrated Circuit FIG. 5 is a circuit diagram illustrating the manner in which the integrated circuit of FIG. 2 is modeled in one embodiment. The current source 510 generates the triangular signal described above and represents the signal generated by the drive cell 210. The current source is shown connected to the signal lead 520 of the lead network 240. Four signal leads 520, 530, 540 and 550 (corresponding to the four leads shown in FIG. 2) are contained in lead network 240. Signal lead 520 is connected to current source 510 at one end, and the other end is connected to all three signal leads 530, 540 and 550. Each signal lead is modeled as a resistor-capacitor combination. The resistor and capacitor values are generally based on the physical dimensions and the conductive properties of the metal used to implement the signal leads. The signal lead 520 is modeled as a combination of two resistors (R1 521 and R2 522) and two capacitors (C1 525 and C2 526). Signal lead 530 is modeled as a combination of four resistors (R3 531, R4 532, R5 533 and R6 534) and four capacitors (C3 535, C4 536, C5 537 and C6 538).
The signal lead 540 is modeled as a combination of two resistors (R7 541 and R8 542) and one capacitor (C7 545). Signal lead 550 is modeled as a combination of two resistors (R9 551 and R10 552) and two capacitors (C8 555 and C9 556). In general, load cells are modeled as capacitors, and thus the three load cells of FIG. 2 are modeled as the capacitors CL1 560-A through CL3 560-Z, respectively. The triangular signal generated by current source 510 is carried by signal lead 520 and is distributed to each of the signal leads 530, 540 and 550 based on the respective impedances of the capacitors 560-A through 560-Z (representing the load cells). The current density parameters on each signal lead 530, 540 and 550 can be measured by examining the signal levels on, and flowing through, the corresponding internal components. The current density parameters of the signal leads can thereby be estimated. It will be appreciated that the resources required to estimate the current density parameters can be reduced (compared to the approach described in the background section above) by modeling the drive cell as a single component. The required resources can be reduced further because the number of data points required to represent a triangular signal is small. As noted above, alternative embodiments can be implemented using a trapezoidal signal instead of a triangular signal; however, to increase the accuracy of the estimation, one of the parallel sides needs to be considerably shorter than the other parallel side. When the short parallel side shrinks to a single point, the trapezoid equals a triangle. In addition, because the triangular signal accurately represents the signal generated by the drive cell, the estimated values are very close to the actual parameters encountered during normal operation of the integrated circuit.
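The measurement step can be illustrated with a minimal transient simulation. The following Python sketch is a hypothetical, greatly simplified version of FIG. 5 — a single lead resistor R between a lead capacitance C1 and a load capacitance CL, driven by the triangular current source; all names and component values are assumptions, not from the specification. It integrates the network with forward Euler over one period and reports the average and RMS current through the lead resistor.

```python
def triangular_source(t, T, Jpkp, Jpkn, Ta, Tb):
    """Piecewise-linear current: a positive triangle of peak Jpkp and width Ta,
    then a negative triangle of peak -Jpkn and width Tb, then zero until T."""
    t = t % T
    if t < Ta:                       # positive pulse
        half = Ta / 2.0
        return Jpkp * (t / half if t < half else (Ta - t) / half)
    t -= Ta
    if t < Tb:                       # negative pulse
        half = Tb / 2.0
        return -Jpkn * (t / half if t < half else (Tb - t) / half)
    return 0.0

def simulate_lead(R, C1, CL, T, Jpkp, Jpkn, Ta, Tb, steps=50000):
    """Forward-Euler transient over one period.
    Returns (i_avg, i_rms, v1, v2) for the current through R."""
    dt = T / steps
    v1 = v2 = 0.0                    # node voltages across C1 and CL
    acc = acc2 = 0.0
    for n in range(steps):
        i_src = triangular_source(n * dt, T, Jpkp, Jpkn, Ta, Tb)
        i_r = (v1 - v2) / R          # current through the lead resistor
        v1 += (i_src - i_r) * dt / C1
        v2 += i_r * dt / CL
        acc += i_r * dt
        acc2 += i_r * i_r * dt
    return acc / T, (acc2 / T) ** 0.5, v1, v2
```

A useful sanity check on the integrator is charge conservation: the net charge delivered by the source over one period must end up stored on C1 and CL, i.e. C1*v1 + CL*v2 equals the integral of the source current.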
Accordingly, various features of the present invention enable the current density parameters on signal leads to be estimated accurately and quickly while consuming fewer resources. 7. Conclusion While various embodiments of the present invention have been described above, it should be understood that they are presented by way of example only, and not limitation. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described embodiments, but should be defined only in accordance with the following claims and their equivalents. The following items are further disclosed with respect to the above description. (1) A method of estimating a plurality of current density parameters that characterize an electrical signal transmitted over a signal lead of an integrated circuit, the signal lead connecting a drive cell to a load cell, the method being implemented in a computer-aided design (CAD) tool used to design the integrated circuit and comprising: modeling the drive cell in the form of a trapezoidal signal and the signal lead in the form of an impedance network, wherein one parallel side of the trapezoidal signal is modeled to be considerably shorter than the other parallel side of the trapezoidal signal; simulating the operation of the integrated circuit by providing the trapezoidal signal as the input of the impedance network; and measuring the electrical signal on the impedance network to estimate the plurality of current density parameters on the signal lead. (2) The method according to item 1, wherein the one parallel side comprises a single point such that the trapezoidal signal becomes a triangular signal. (3) The method according to item 2, wherein the modeling comprises: receiving a first plurality of parameters characterizing the operation of the drive cell; and calculating, based on the first plurality of parameters, a second plurality of parameters characterizing the triangular signal. (4) The method according to item 3, wherein the triangular signal contains a positive peak and a negative peak, and the first plurality of parameters include an average current value (Javg), a root mean square current value (Jrms), a positive peak level (Jpkp) and a negative peak level (Jpkn), and wherein the widths of the positive portion (Ta) and the negative portion (Tb) of the triangle are calculated based on the equations: Javg = (Jpkp*Ta - r*Jpkn*Tb) / (2*T) and Jrms^2 = (Jpkp^2*Ta + Jpkn^2*Tb) / (3*T), where r is a recovery factor and T is the period of the signal. (5) A machine-readable medium comprising one or more instruction sequences that cause a system to estimate a plurality of current density parameters that characterize an electrical signal transmitted over a signal lead of an integrated circuit, the signal lead connecting a drive cell to a load cell, wherein, when the one or more instruction sequences are executed by one or more processors included in the system, the one or more processors perform operations comprising: modeling the drive cell in the form of a trapezoidal signal and the signal lead in the form of an impedance network, wherein one parallel side of the trapezoidal signal is modeled to be considerably shorter than the other parallel side of the trapezoidal signal; simulating the operation of the integrated circuit by supplying the trapezoidal signal as the input of the impedance network; and measuring the electrical signal on the impedance network to estimate the plurality of current density parameters on the signal lead. (6) The machine-readable medium according to item 5, wherein the one parallel side comprises a single point such that the trapezoidal signal becomes a triangular signal. (7) The machine-readable medium according to item 6, wherein the modeling comprises: receiving a first plurality of parameters characterizing the operation of the drive cell; and calculating, based on the first plurality of parameters, a second plurality of parameters characterizing the triangular signal. (8) The machine-readable medium according to item 7, wherein the triangular signal contains a positive peak and a negative peak, and the first plurality of parameters include an average current value (Javg), a root mean square current value (Jrms), a positive peak level (Jpkp) and a negative peak level (Jpkn), and wherein the widths of the positive portion (Ta) and the negative portion (Tb) of the triangle are calculated based on the equations: Javg = (Jpkp*Ta - r*Jpkn*Tb) / (2*T) and Jrms^2 = (Jpkp^2*Ta + Jpkn^2*Tb) / (3*T), where r is a recovery factor and T is the period of the signal. (9) Current density parameters on the signal leads of an integrated circuit are estimated using a computer-aided design (CAD) tool. The signal leads are modeled as an impedance network (e.g., containing resistors and capacitors) and the drive cell is modeled as a triangular (current) signal. The parameters of the triangular signal (for example, peak values and periodicity) are determined based on the characterization data of the corresponding drive cell. The current density parameters on the signal leads are estimated by measuring the signal transmitted over the impedance network. FIG. 1 is a block diagram of a computer system implemented according to the features of the present invention. FIG. 2 is a block diagram of an example integrated circuit in which the current density parameters on a signal lead can be measured according to one aspect of the present invention. FIG.
3 is a waveform illustrating an input signal that can be used to determine the current density parameters on each lead. FIG. 4 is a flow diagram illustrating a method for quickly estimating the current density parameters of a signal lead in accordance with one aspect of the present invention. FIG. 5 is a circuit diagram illustrating the manner in which an integrated circuit can be modeled in one embodiment. Explanation of symbols: 100 computer system; 110 central processing unit (CPU); 120 random access memory (RAM); 130 secondary memory; 131 hard drive; 136 flash memory; 137 removable storage element drive; 140 removable storage element; 150 communication path; 160 graphics controller; 170 display device; 180 network interface; 190 input interface; 200 integrated circuit; 210 drive cell; 240 lead network; 260-A, 260-Z load cells; 510 current source; 520, 530, 540, 550 signal leads
Embodiments include a method comprising identifying, by an instruction scheduler of a processor core, a first high power instruction in an instruction stream to be executed by an execution unit of the processor core. A pre-charge signal is asserted indicating that the first high power instruction is scheduled for execution. Subsequent to the pre-charge signal being asserted, a voltage boost signal is asserted to cause a supply voltage for the execution unit to be increased. A busy signal indicating that the first high power instruction is executing is received from the execution unit. Based at least in part on the busy signal being asserted, the voltage boost signal is de-asserted. More specific embodiments include decreasing the supply voltage for the execution unit subsequent to the de-asserting of the voltage boost signal. Further embodiments include delaying asserting the voltage boost signal based on a start delay time.
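The precharge/boost/busy handshake summarized in the abstract can be sketched as a small cycle-by-cycle state machine. The Python sketch below is a hypothetical illustration of the sequencing only — the class name, signal names, and counter-based delays are assumptions drawn from the claims that follow, not a definitive implementation of the circuit.

```python
class RateController:
    """Asserts a boost signal some cycles after precharge, and de-asserts it
    some cycles after the execution unit reports busy."""
    def __init__(self, start_delay, hold_time):
        self.start_delay = start_delay  # cycles from precharge to boost
        self.hold_time = hold_time      # cycles from busy to boost release
        self.start_ctr = None
        self.stop_ctr = None
        self.boost = False

    def tick(self, precharge, busy):
        """Advance one clock cycle; returns the boost signal for this cycle."""
        if precharge and self.start_ctr is None and not self.boost:
            self.start_ctr = self.start_delay      # arm the start-delay counter
        if self.start_ctr is not None:
            if self.start_ctr == 0:
                self.boost = True                  # supply voltage may rise now
                self.start_ctr = None
            else:
                self.start_ctr -= 1
        if self.boost and busy and self.stop_ctr is None:
            self.stop_ctr = self.hold_time         # arm the stop-delay counter
        if self.stop_ctr is not None:
            if self.stop_ctr == 0:
                self.boost = False                 # de-assert after hold time
                self.stop_ctr = None
            else:
                self.stop_ctr -= 1
        return self.boost
```

Driving the model with a precharge pulse followed by a busy pulse shows the boost window opening after the start delay and closing after the hold time, matching the sequence in the abstract.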
1. A device including: an execution unit; a rate controller circuit; and an instruction scheduler coupled to the rate controller circuit and the execution unit, wherein the instruction scheduler is used to: identify a first high-power instruction in an instruction stream to be executed; and assert a precharge signal to the rate controller circuit; and wherein the rate controller circuit is used to: after the precharge signal is asserted, assert a boost signal to increase the power supply voltage for the execution unit; and de-assert the boost signal based at least in part on the first high-power instruction being executed. 2. The device according to claim 1, wherein the first high-power instruction is an instruction for multiplying a matrix. 3. The device according to claim 1, wherein the instruction scheduler is configured to assert the precharge signal in response to scheduling the first high-power instruction for execution. 4. The device according to claim 1, wherein the rate controller circuit is further used to: delay asserting the boost signal based on a start-up delay time. 5. The device according to claim 4, wherein the start-up delay time expires before the execution unit initiates the execution of the first high-power instruction. 6. The device according to any one of claims 1-5, wherein the rate controller circuit is used to: receive a busy signal from the execution unit, the busy signal indicating that the execution unit initiated the execution of the first high-power instruction; and delay de-asserting the boost signal based on a hold time. 7. The device according to any one of claims 1-3, wherein the rate controller circuit is further used to: delay asserting the boost signal based on the start-up delay time; and delay de-asserting the boost signal based on the hold time, wherein the rate controller circuit includes: a start-up delay counter programmed with a start-up delay count value, wherein the start-up delay count value corresponds
to the start-up delay time; andA stop delay counter programmed with a stop delay count value, wherein the stop delay count value corresponds to the hold time.8.The device according to any one of claims 1-5, wherein the instruction scheduler is further used for:Before the boost signal is de-asserted, identifying a second high power command; andAvoid asserting the second boost signal for the second high power command.9.A system including:Execution unitAn instruction scheduler, coupled to the execution unit, and the instruction scheduler is used to:Identify the first high-power instruction in the instruction stream to be executed; andAssert the first precharge signal;A rate controller circuit, coupled to the execution unit and the instruction scheduler, and the rate controller circuit is configured to:After the first precharge signal is asserted, assert the boost signal; andA voltage regulator, coupled to the rate controller circuit, and the voltage regulator is used to:In response to receiving the boost signal, the power supply voltage for the execution unit is increased to execute the first high power command.10.The system according to claim 9, wherein the execution unit is further configured to:After the boost signal is asserted, initiate execution of the first high power command; andAssert a busy signal indicating that the first high-power instruction is being executed.11.The system according to claim 10, wherein the rate controller circuit is further used for:The boost signal is de-asserted based at least in part on the busy signal being asserted.12.The system according to claim 11, wherein the voltage regulator is also used for:After the boost signal is de-asserted, the power supply voltage for the execution unit is reduced.13.The system according to any one of claims 9 to 12, wherein the instruction scheduler is further used for:Identifying a second high-power command in the command stream; andAssert the second precharge signal,Wherein, the rate controller 
circuit is further configured to assert a second boost signal after the second precharge signal is asserted, andWherein, the voltage regulator is further configured to increase the power supply voltage for the execution unit to execute the second high power command in response to receiving the second boost signal.14.The system according to any one of claims 9 to 12, wherein the instruction scheduler is configured to assert the first precharge signal in response to scheduling the first high power instruction for execution.15.The system according to claim 9, wherein the rate controller circuit is further used for:Based on the start-up delay time, the boost signal is delayed to be asserted.16.The system according to claim 15, wherein the activation delay time expires before the execution unit initiates the execution of the first high-power instruction.17.The system according to claim 9, wherein the increase of the power supply voltage by the voltage regulator is consistent with the execution of the first high power command initiated by the execution unit.18.One method includes:The instruction scheduler of the processor core identifies the first high-power instruction in the instruction stream to be executed by the execution unit of the processor core;Assert a pre-charge signal, the pre-charge signal indicating that the first high-power instruction is scheduled for execution;After the precharge signal is asserted, the boost signal is asserted to increase the power supply voltage for the execution unit;Receiving a busy signal from the execution unit indicating that the first high-power instruction is being executed; andBased at least in part on the busy signal being asserted, the boost signal is de-asserted.19.The method according to claim 18, wherein the first high power instruction is an instruction for multiplying a matrix.20.The method of claim 18, wherein the precharge signal is asserted in response to scheduling the first high power instruction for 
execution.21.The method of claim 18, further comprising:Based on the start-up delay time, the boost signal is delayed to be asserted.22.22. The method of claim 21, wherein the activation delay time expires before the first high power command is executed.23.The method according to any one of claims 18-22, further comprising:Based on the hold time, the de-assertion of the boost signal is delayed.24.The method according to any one of claims 18-22, further comprising:Before de-asserting the boost signal, identifying the second high power command; andAvoid asserting the second boost signal for the second high power command.25.At least one machine-readable storage medium comprising instructions, wherein the instructions, when executed, implement the device, system or method according to any one of claims 1-5, 9-12 or 15-22. |
Active Di/Dt Voltage Drop Suppression

Technical Field

The present disclosure relates generally to the field of computing and, more specifically, to active Di/Dt voltage drop suppression.

Background

The demand for high-performance computing is growing exponentially. Parallel execution units, such as matrix processing units (MPUs), are often used in high-performance computing because they allow processing or operations to be performed simultaneously. In one form of parallel execution, an instruction stream can be divided into independent stages or portions. Assuming no dependency prevents simultaneous execution, each execution unit can execute one stage of the instruction stream at the same time that a different execution unit executes another stage. Furthermore, two or more execution units can execute stages in parallel. This can increase the execution speed of the task being performed. Parallelization has been used as an alternative to frequency scaling, which may be limited by physical constraints.
However, parallel execution units are limited by the inability of voltage regulators to suppress the large voltage drops (Vmin) caused by sudden changes (Di/Dt) in workload power demand.

Summary

According to an aspect of the present application, there is provided an apparatus including: an execution unit; a rate controller circuit; and an instruction scheduler coupled to the rate controller circuit and the execution unit, the instruction scheduler to: identify a first high-power instruction in an instruction stream to be executed; and assert a precharge signal to the rate controller circuit, wherein the rate controller circuit is to: assert a boost signal after the precharge signal is asserted to increase a power supply voltage for the execution unit; and de-assert the boost signal based at least in part on the first high-power instruction being executed.

According to another aspect of the present application, there is provided a system including: an execution unit; an instruction scheduler coupled to the execution unit, the instruction scheduler to: identify a first high-power instruction in an instruction stream to be executed; and assert a first precharge signal; a rate controller circuit coupled to the execution unit and the instruction scheduler, the rate controller circuit to: assert a boost signal after the first precharge signal is asserted; and a voltage regulator coupled to the rate controller circuit, the voltage regulator to: in response to receiving the boost signal, increase the power supply voltage for the execution unit to execute the first high-power instruction.

According to another aspect of the present application, there is provided a method including: identifying, by an instruction scheduler of a processor core, a first high-power instruction in an instruction stream to be executed by an execution unit of the processor core; asserting a precharge signal, the precharge signal indicating that the first high-power instruction is scheduled for execution; asserting a boost signal after the precharge signal is asserted to increase a power supply voltage for the execution unit; receiving, from the execution unit, a busy signal indicating that the first high-power instruction is being executed; and de-asserting the boost signal based at least in part on the busy signal being asserted.

Brief Description of the Drawings

To provide a more complete understanding of the present disclosure and its features and advantages, reference is made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:

FIG. 1 is a simplified block diagram showing high-level components of a processor with active Di/Dt voltage drop suppression capability according to at least one embodiment of the present disclosure;

FIG. 2 is a simplified block diagram showing possible implementation details of the active Di/Dt voltage drop suppression capability in a core of the processor according to at least one embodiment;

FIG. 3 is a timing diagram showing example active Di/Dt voltage drop suppression of a workload power demand change on a processor according to at least one embodiment;

FIG. 4 is a diagram showing an example surge current in a matrix processing unit when a matrix multiplication instruction is executed;

FIG. 5 is a diagram showing example power delivery to a high-performance computing platform;

FIG. 6 is a graph showing example operating voltage and related operating frequency characteristics of a high-performance processing element;

FIG. 7 is a diagram showing how the active Di/Dt voltage drop suppression capability can be used in combination with other advanced power management techniques;

FIG. 8 is a simplified flowchart showing possible operations of a computing system implementing active Di/Dt voltage drop suppression capability according to at least one embodiment;

FIG. 9 is a simplified flowchart showing further possible operations of a computing system implementing software-assisted power management capabilities according to at least one embodiment;

FIG. 10 is a simplified flowchart showing further possible operations of a computing system implementing software-assisted power management capabilities according to at least one embodiment;

FIG. 11 is a block diagram of a register architecture according to one embodiment;

FIG. 12A is a block diagram showing both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the present disclosure;

FIG. 12B is a block diagram showing both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the present disclosure;

FIGS. 13A-13B show block diagrams of a more specific exemplary in-order core architecture, which core is one of several logic blocks (including other cores of the same type and/or different types) in a chip;

FIG. 14 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the present disclosure;

FIGS. 15-18 are block diagrams of exemplary computer architectures; and

FIG. 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set according to embodiments of the present disclosure.

Detailed Description

The following disclosure provides various possible embodiments, or examples, for implementing the features disclosed in this specification. These features relate to an active Di/Dt voltage drop suppression capability for a processing element, such as a central processing unit (CPU).
The active Di/Dt voltage drop suppression capability can be realized in a processor that includes one or more cores with multiple execution units. In a system with active Di/Dt voltage drop suppression capability, the incoming instruction queue is analyzed to detect high-power instructions. For example, machine learning processes involving multiple computational layers can use high-power instructions. When a high-power instruction is detected, a temporary increase in the power supply voltage is requested as the dispatched instruction stream switches from low-power instructions to high-power instructions. The temporary increase in the power supply voltage can be stopped when the high-power instruction completes or when the voltage regulator catches up to compensate for the higher power demand.

To illustrate the several embodiments of processors with active Di/Dt voltage drop suppression capability, it is important to first understand the operations and activities associated with parallel processing and the transitions between high-power and low-power instructions. Accordingly, the following basic information may be viewed as a basis from which the present disclosure may be properly explained.

In the past, frequency scaling drove increases in computer performance. Increasing the clock frequency while holding other factors constant generally reduces the running time of an application. However, increasing the frequency also increases the amount of power used by a processing element. This is shown by the formula for calculating the power consumption P:

P = C × V² × F

where:
C = capacitance switched per clock cycle
V = supply voltage
F = clock cycles per second

More recently, parallel execution has been used to alleviate the problems of power consumption and critical device temperatures. Parallel execution units are configured to execute the operations of a task simultaneously to accelerate completion of the task.
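To make the scaling in the formula above concrete, the sketch below evaluates P = C × V² × F for illustrative values; the capacitance, voltage, and frequency numbers are hypothetical and are not taken from this disclosure:

```python
def dynamic_power(c_farads: float, v_volts: float, f_hertz: float) -> float:
    """Dynamic power P = C * V^2 * F, in watts."""
    return c_farads * v_volts ** 2 * f_hertz

# Hypothetical operating point: 1 nF switched per cycle, 0.8 V supply, 2 GHz clock.
print(dynamic_power(1e-9, 0.8, 2e9))  # ~1.28 W

# Frequency scales power linearly; voltage scales it quadratically.
print(dynamic_power(1e-9, 0.8, 3e9))  # ~1.92 W (1.5x frequency -> 1.5x power)
print(dynamic_power(1e-9, 1.0, 2e9))  # ~2.0 W (1.25x voltage -> ~1.56x power)
```

The quadratic voltage term is why the high-voltage mitigation discussed below wastes power, and why the boost described in this disclosure is kept temporary.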
However, the voltage regulators that supply power for parallel execution generally cannot suppress the large voltage drops (Vmin) caused by sudden changes in workload power demand (Di/Dt), where Di/Dt is the instantaneous rate of change of current in amperes per second. Voltage drop refers to the loss of output voltage as a device attempts to drive a load. In one example, a voltage drop may be caused by failing to provide enough current to the processor to drive a heavy load (e.g., high-power instructions running in parallel).

When computing instructions change from single-threaded scalar operations to multi-threaded parallel operations, the power demand of the central processing unit (CPU) may change immediately from a low power consumption state to an extremely high power consumption state. Because the voltage regulator cannot quickly compensate for sudden power losses across the resistive transmission network, these sudden changes in power demand can cause a sharp voltage drop on the CPU power rails. As the voltage drops lower, the associated CPU operating frequency also drops. When the voltage drops below the minimum supply voltage (Vmin) required to maintain the maximum operating frequency (Fmax), the system fails because the voltage can no longer sustain the operating frequency.

Generally, three approaches have been used to avoid these types of system failures due to sudden voltage drops. First, the system can be designed to operate at a frequency below the maximum frequency, so that the minimum voltage threshold is lower. Second, the system can be designed to operate at a higher voltage. This creates greater headroom for the voltage to drop without falling below Vmin. Third, the power supply of the system can be enhanced to minimize the power delivery network (PDN) impedance. Current solutions can use any combination of these techniques. In fact, some systems may use a combination of all three techniques.
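A rough sense of the droop described above can be had from the textbook first-order model of a power delivery network, in which the drop is the resistive IR loss plus the inductive L·di/dt loss. This model, and every component value below, is an illustrative assumption rather than a formula given in this disclosure:

```python
def supply_droop(i_step_amps: float, dt_seconds: float,
                 r_ohms: float, l_henries: float) -> float:
    """First-order PDN droop estimate: resistive IR drop plus inductive L*di/dt drop.
    (Standard textbook model, used here only for illustration.)"""
    di_dt = i_step_amps / dt_seconds
    return i_step_amps * r_ohms + l_henries * di_dt

# Hypothetical load step: 50 A in 1 us through 1 mOhm and 10 pH of delivery network.
print(supply_droop(50.0, 1e-6, 1e-3, 10e-12))  # ~0.0505 V of droop
```

Note that the inductive term grows as the step gets faster, which is why an abrupt switch from scalar to parallel execution is the problematic case.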
The specific technique implemented on a given computing system may depend on the characteristics of that computing system. For example, mobile devices tend to operate at lower frequencies, performance devices tend to operate at higher voltages, and advanced devices tend to operate with enhanced power supplies. A combination of the three techniques attempts to address all of the issues presented by the various characteristics of computing devices.

These current techniques are insufficient to suppress Di/Dt voltage drops and make compromises in cost, performance, and/or reliability. For example, low-frequency solutions sacrifice performance, high-voltage solutions sacrifice reliability and waste power, and enhanced-power solutions sacrifice cost. Solutions that rely on a combination of these three techniques sacrifice some degree of cost, performance, and reliability. In short, even a combination of the available techniques for suppressing voltage drops is a compromise.

The active Di/Dt voltage drop suppression technique described herein resolves many of the above issues (and more). Specifically, the voltage drop suppression technique maximizes computing performance without sacrificing reliability or enhancing the power supply. In at least one embodiment, a core is configured to detect an expected power surge based on analyzing an instruction queue for high-power instructions. When a high-power parallel execution unit is scheduled to run, the core can notify the voltage regulator. Accordingly, a request to temporarily increase the power supply voltage can be issued to accommodate the transition to a high-power instruction. When the parallel engine begins to execute the high-power instruction, the voltage regulator provides a temporary boost to compensate for the expected voltage drop.
Subsequently, the temporary boost can be stopped when the high-power instruction completes or when the voltage regulator catches up to compensate for the higher power demand.

In the active Di/Dt voltage drop suppression technique, the boost request can be timed to take effect just before the high-power instruction executes, so that the increased power consumption coincides with the increased power supply. In this case, the net impact of the changes in demand and supply cancels out, which results in a net-zero change in the operating voltage of the CPU. Consequently, active Di/Dt voltage drop suppression can provide higher performance, higher efficiency, and higher hardware utilization. First, because a constant CPU voltage level can be maintained by actively scheduling the voltage regulator and the parallel execution units to balance power supply voltage and demand, the CPU can run at its maximum frequency, so higher performance can be achieved. Second, one or more embodiments enable the CPU to operate at maximum efficiency because, unless a high-power instruction is scheduled for execution, the power supply voltage can be kept close to Vmin, which provides the maximum operating frequency. Finally, one or more embodiments allow the CPU to operate more reliably because the voltage applied to the CPU remains close to Vmin, which ensures that minimal stress is applied to the silicon of the device. Accordingly, higher-performance products can be produced, while users of the products benefit from a more reliable system with a lower-cost voltage regulator.

Turning to FIG. 1, a brief description of a possible processor 100 with active Di/Dt voltage drop suppression capability is now provided. The processor 100 includes at least one core 120(1)-120(M) and a memory 115. The cores 120(1)-120(M) include one or more corresponding execution units, such as execution units 160A and 160B in core 120(1) and execution units 160C and 160D in core 120(M).
It should be noted that, for illustrative purposes, two execution units are shown in each core, but the cores 120(1)-120(M) may include any number of execution units. In one or more embodiments, the cores 120(1)-120(M) also include corresponding instruction decoders and schedulers 130(1)-130(M) and rate controllers 140(1)-140(M). In coordination with signals from the instruction decoder and scheduler and from the execution units, the rate controllers 140(1)-140(M) can be configured to actively suppress the voltage drops caused by sudden changes in the workload power demand on the execution units of the core.

The memory 115 may include system memory, which may be separate from the cores 120(1)-120(M). In at least some embodiments, the memory 115 may be implemented as a high-bandwidth memory (HBM). The processor 100 may be configured as a single die, or may include additional cores, execution units, and memory in a dual-die configuration. In one example, the cores 120(1)-120(M) may be implemented as tensor processing cores (TPCs), and the execution units 160A, 160B, 160C, and 160D may be implemented as matrix processing units (MPUs). The cores may form a tensor processing cluster.

In one or more embodiments, an application may be compiled into code including instructions 105. The compiled code with the instructions 105 can be fetched from the memory by the processor 100 and stored in a cache. In one example, the execution units 160A-160D may be configured to run (e.g., execute) instructions from the code in parallel. For example, a matrix multiplication (MM) instruction involves the multiplication of two matrices, which includes many operations to multiply the elements of each row in the first matrix with the elements of each column in the second matrix and add the products.
Therefore, many operations can be executed in parallel by two or more execution units.

In a tensor processing core (TPC), the TPC execution units (e.g., matrix processing units (MPUs)) can be used to perform multiple levels of work for an application, such as a deep neural network (DNN) machine learning application. Each MPU can be provided with instructions for the DNN application, and data can be distributed to each MPU to compute its own result. The results of the MPUs can be combined to generate the result for a particular level of work. Data can be returned to memory, new data can be distributed to the MPUs, and each MPU can compute a new result based on the result of the previous level and the new data. This processing can be repeated by the MPUs, using parallel processing, until the final result is reached.

The instructions 105 can be decoded and analyzed by an instruction decoder and scheduler (e.g., 130(1)-130(M)) to identify whether a particular instruction is a high-power instruction or a low-power instruction. The rate controller (e.g., 140(1)-140(M)) may request a temporary increase in the power supply voltage based on the scheduled instruction stream transitioning from low-power instructions to a high-power instruction. When the high-power instruction completes or the voltage regulator is able to compensate for the higher power demand, the rate controller may further request that the temporary increase in the power supply voltage be stopped.

FIG. 2 is a block diagram showing possible details of a core 220 configured with active Di/Dt voltage drop suppression capability. The input to the core 220 may include instructions 205 of an instruction stream, and the output includes a boost (Vboost) signal 252 that is sent to a voltage regulator 270, which provides voltage to components in the core 220 (and in other cores of the processor). The core 220 shows possible implementation details of the cores 120(1)-120(M) of the processor 100.
The core 220 may include an instruction decoder and scheduler 230, an instruction queue 235, a rate controller 240, and an execution unit 260. In at least one embodiment, the rate controller 240 may include a circuit configured with start and stop delay counters 242 and 246, start and stop AND gates 243 and 247, and a set/reset circuit 250.

In one or more examples, the instruction decoder and scheduler 230 may include an instruction decoder circuit and a scheduler circuit. The instruction decoder circuit may be configured to decode the instructions 205 in the instruction stream. The scheduler circuit may be configured to perform the scheduling stage of the execution pipeline to schedule the instructions 205 for execution. The instruction queue 235 may hold the scheduled instructions 205 and may be accessed by the execution unit 260. In one or more embodiments, the scheduler circuit in the instruction decoder and scheduler 230 may be enhanced to detect high-power instructions. When a high-power instruction is detected, the instruction decoder and scheduler 230 may assert a precharge signal 232 to the start delay counter 242.

Detection of high-power instructions can be realized by distinguishing high-power instructions from low-power instructions. For example, deep learning (e.g., deep neural network (DNN)) applications often contain instructions involving thousands of different multipliers operating at the same time. The matrix multiplication (MM) instruction used in many deep learning applications is one example involving the multiplication of large matrices. Other examples include, but are not limited to, instructions that use vector-based processing to perform large numbers of calculations. These types of instructions generally require more power than other instructions (such as instructions that read data from memory).
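A minimal sketch of the scheduler-side detection described above might classify decoded opcodes against a set of known high-power operations and assert precharge only on the transition from low-power to high-power work (the opcode mnemonics and set membership here are hypothetical and not part of this disclosure):

```python
# Hypothetical mnemonics; only matrix-style instructions are treated as high power.
# A single 32x32 matrix multiply already performs 32**3 = 32768 multiplies,
# far more work than, say, a load from memory.
HIGH_POWER_OPCODES = {"MATMUL", "CONV2D"}

def is_high_power(opcode: str) -> bool:
    """Return True when an instruction should trigger the precharge signal."""
    return opcode in HIGH_POWER_OPCODES

def schedule(instruction_stream):
    """Yield (opcode, precharge) pairs; precharge asserts on a low->high transition."""
    prev_high = False
    for op in instruction_stream:
        high = is_high_power(op)
        yield op, (high and not prev_high)  # assert only on the transition
        prev_high = high

print(list(schedule(["LOAD", "MATMUL", "MATMUL", "STORE"])))
# [('LOAD', False), ('MATMUL', True), ('MATMUL', False), ('STORE', False)]
```

Asserting only on the transition mirrors the behavior of claims 8 and 24, where a second boost signal is withheld while the first boost is still in flight.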
Therefore, high-power instructions can be detected by distinguishing matrix multiplication instructions (or similar instructions that perform large numbers of calculations) from other instructions in the instruction stream. Detecting high-power instructions in the instruction stream allows power surges in the instruction stream to be identified in advance.

The execution unit 260 may execute instructions from the instruction queue 235 in parallel with other execution units in the core 220 and/or in other cores (e.g., 120(1)-120(M)). The instructions may include computation instructions of deep neural network (DNN) machine learning applications that require large amounts of computing resources. For example, an MM instruction for multiplying matrices may be executed by the execution unit 260. The execution unit 260 may assert a busy signal 262 when the execution unit initiates execution of a high-power instruction such as a matrix multiplication instruction.

The start delay counter 242 and the stop delay counter 246 may be programmable timers, which are used to filter sustained high-power instructions from the commonly used low-power instructions and to adapt to different voltage regulator capabilities. In at least one embodiment, the start delay counter 242 can be tuned with a start delay time. The start delay time may take the form of a start delay count value, which sets the number of clock cycles counted by the start delay counter 242 after the precharge signal 232 is received. After the precharge signal 232 is asserted, the Vboost signal 252 may be asserted once the start delay time expires. The start delay counter 242 determines when the start delay time expires by counting clock cycles until the start delay count value is reached.
Therefore, the start delay time can be calculated as follows:

start delay time (ns) = start delay count value (clock cycles) × clock period (ns/clock cycle)

In one or more embodiments, the start delay time is the setup time 356 of the Vboost signal 252, during which the high-power instruction has not yet begun executing. The start delay count value can be selected to minimize the Vboost setup time 356 without causing execution to begin before the Vboost signal is asserted. It should be noted that the start delay count value may be the same or different for each instruction.

In one example, the start AND gate 243 outputs a start signal 244 based on inputs from the precharge signal 232 and the start delay counter 242. Therefore, when the precharge signal has been received and the start delay counter 242 reaches the programmed start delay count value, the high (e.g., binary 1) start signal 244 can set the set/reset circuit 250 to assert the Vboost signal 252.

In at least one embodiment, the stop delay counter 246 can be tuned with a stop delay time. The stop delay time may take the form of a stop delay count value, which sets the number of clock cycles for which the Vboost signal 252 should remain asserted. In at least one embodiment, the stop delay count value sets the number of clock cycles counted by the stop delay counter 246 after execution of the high-power instruction begins (for example, after the busy signal 262 is asserted). After the busy signal 262 is asserted, the Vboost signal 252 may be de-asserted once the stop delay time expires. The stop delay counter 246 determines when the stop delay time expires by counting clock cycles until the stop delay count value is reached.
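The start and stop delay formulas above are the same count-times-period product, which the sketch below evaluates; the count value and clock period chosen are illustrative assumptions:

```python
def delay_ns(count_value: int, clock_period_ns: float) -> float:
    """Delay time = programmed count value * clock period, per the formulas above.
    Applies identically to the start delay and the stop (hold) delay."""
    return count_value * clock_period_ns

# Hypothetical tuning: a 2 GHz clock has a 0.5 ns period.
print(delay_ns(40, 0.5))  # 20.0 ns before Vboost asserts after precharge
print(delay_ns(60, 0.5))  # 30.0 ns of hold time after busy asserts
```

Tuning amounts to picking count values whose resulting times bracket the voltage regulator's response, as the surrounding text describes.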
Therefore, the stop delay time can be calculated as follows:

stop delay time (ns) = stop delay count value (clock cycles) × clock period (ns/clock cycle)

In one or more embodiments, the stop delay time is the hold time 358 of the Vboost signal 252. The stop delay count value can be selected to minimize the Vboost hold time 358, during which the power supply voltage temporarily rises above the minimum voltage. It should be noted that the stop delay count value may be the same or different for each instruction.

In one example, the stop AND gate 247 outputs a stop signal 248 based on inputs from the busy signal 262 and the stop delay counter 246. Therefore, when the busy signal has been received and the stop delay counter 246 reaches the programmed stop delay count value, the high (e.g., binary 1) stop signal 248 can reset the set/reset circuit 250 to de-assert the Vboost signal 252.

In one or more embodiments, the set/reset circuit 250 may be configured as a set/reset flip-flop circuit with a set (S) input based on the precharge signal 232 and the start delay time, and a reset (R) input based on the busy signal and the stop delay time (or hold time). The output (Q) is triggered to a high state by the set (S) input and holds that value until it is reset to a low state by the reset (R) input. When the output (Q) is triggered to the high state, the Vboost signal 252 is generated and can be asserted to the voltage regulator 270.

Any suitable architecture can be used to implement the voltage regulator 270. In one example, a digital voltage regulator architecture (dFIVR) can be used to implement the voltage regulator 270. The voltage regulator 270 may include a voltage regulator (VR) compensation circuit (also referred to herein as a "Vboost circuit") that performs a VR compensation function in response to receiving the Vboost signal 252 as an input.
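The set/reset behavior described above can be sketched as a small cycle-level model: precharge plus an expired start delay sets Vboost, and busy plus an expired stop delay resets it. The counter values and the cycle at which busy asserts are illustrative assumptions, not timings from this disclosure:

```python
class RateController:
    """Toy model of the rate controller: an SR latch gated by two delay counters."""
    def __init__(self, start_delay: int, stop_delay: int):
        self.start_delay = start_delay   # cycles from precharge to Vboost assert
        self.stop_delay = stop_delay     # cycles from busy to Vboost de-assert
        self.start_count = None
        self.stop_count = None
        self.vboost = False

    def tick(self, precharge: bool, busy: bool) -> bool:
        if precharge and self.start_count is None:
            self.start_count = 0         # precharge starts the start delay counter
        if busy and self.stop_count is None:
            self.stop_count = 0          # busy starts the stop delay counter
        if self.start_count is not None:
            if self.start_count == self.start_delay:
                self.vboost = True       # set input: start AND gate fires
            self.start_count += 1
        if self.stop_count is not None:
            if self.stop_count == self.stop_delay:
                self.vboost = False      # reset input: stop AND gate fires
            self.stop_count += 1
        return self.vboost

rc = RateController(start_delay=2, stop_delay=3)
trace = []
for cycle in range(8):
    precharge = cycle == 0   # scheduler asserts precharge at cycle 0
    busy = cycle >= 3        # high-power execution begins at cycle 3
    trace.append(rc.tick(precharge, busy))
print(trace)
# [False, False, True, True, True, True, False, False]
```

The trace shows the setup time (Vboost asserted before busy) followed by the hold time (Vboost held for the programmed cycles after busy), matching the sequence the timing diagram of FIG. 3 describes.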
When triggered by the Vboost signal 252, the VR compensation function may include increasing, or "boosting," the power supply voltage to the maximum voltage (Vmax) allowed by the system.

In one or more implementations, the embodiments herein can be implemented on a deep neural network (DNN) accelerator ASIC. The execution unit 260 may be a matrix processing unit (MPU) in a tensor processing core (TPC) of a tensor processing cluster, and may execute matrix multiplication (MM) instructions. The instruction decoder and scheduler 230 may be implemented as a microcode controller (MCC) on a TPC of the DNN accelerator. The active Di/Dt voltage drop suppression logic can use the MCC on the TPC in the DNN accelerator ASIC to distinguish high-power instructions from low-power instructions. Although the active Di/Dt voltage suppression logic can be implemented in a TPC, it will be apparent that the concepts disclosed herein can be applied to many other architectures using various hardware configurations. Therefore, the references to and descriptions of TPCs are not intended to be limiting, but rather to further explain and illustrate possible embodiments for illustrative purposes.

FIG. 3 is a timing diagram 300 illustrating an example scenario of active Di/Dt voltage drop suppression for a change in workload power demand according to at least one embodiment. The operation of a processor with active Di/Dt voltage drop suppression capability will now be described primarily with reference to FIGS. 2 and 3.

In the timing diagram 300, a clock signal 290 is generated for the execution unit 260 to execute the instructions 205. The precharge signal 232, the busy signal 262, and the Vboost signal 252 are initially low (i.e., 0). An integrated voltage regulator (IVR) clock signal 237 is generated for the output voltage of the voltage regulator 270.
In the timing diagram 300, the IVR clock signal 237 has seven clock cycles 311-317.

In one or more embodiments, the voltage regulator 270 may be implemented as a fully integrated digital voltage regulator (for example, a digital fully integrated voltage regulator (dFIVR)) that uses a pulse width modulation (PWM) control mechanism to generate the compensation voltage. Because the voltage regulator uses digital PWM, a simple digital select signal (such as the Vboost signal 252) can quickly switch the PWM preset value from the nominal voltage to a higher-percentage compensation voltage. The timing diagram 300 also shows the PWM signal 274 and the voltage regulator (VR) voltage signal 272. The VR voltage signal 272 starts at Vmin, the minimum voltage required to maintain the maximum operating frequency (Fmax).

During operation, an instruction stream containing instructions 205 can be loaded into the cache by the core 220 and accessed by the instruction decoder and scheduler 230. In at least one embodiment, the scheduler of the instruction decoder and scheduler 230 monitors the instruction stream and detects high-power signals. For example, the scheduler can detect a transition from an IDLE signal to an MM signal (or some other known high-power signal).

When a high-power instruction is detected (for example, by an enhanced scheduler), the precharge signal 232 is asserted to inform the voltage regulator 270 that the high-power instruction is scheduled for execution, for example, by the execution unit 260. The values of the programmable start delay counter 242 and the stop delay counter 246 can be tuned to align the execution of the high-power instruction with the boost from the associated voltage regulator (such as 270), minimizing the time the voltage regulator must hold the new, higher voltage that compensates for the expected voltage drop before the high power demand arrives.
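The scheduler's detection of a low-power to high-power transition described above can be illustrated with a minimal sketch (the opcode names and function are illustrative assumptions, not from the patent):

```python
HIGH_POWER_OPS = {"MM"}  # assumed set of known high-power opcodes

def precharge_needed(prev_op: str, next_op: str) -> bool:
    """Assert the precharge signal on a low-power -> high-power transition,
    e.g., IDLE -> MM as in the scenario described for timing diagram 300."""
    return next_op in HIGH_POWER_OPS and prev_op not in HIGH_POWER_OPS
```

When such a transition is detected, the precharge signal is asserted and the programmable delay counters take over the timing of the boost.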
Therefore, the start delay counter 242 may delay the assertion of the Vboost signal 252 by a certain amount of time (e.g., the programmed number of clock cycles). In the timing diagram 300, the assertion of the Vboost signal 252 is delayed from the precharge assertion at 332 to the Vboost assertion at 352.

In response to the Vboost signal 252 being asserted at 352, the voltage regulator 270 switches to a higher voltage at 382, which compensates for the voltage drop expected from the scheduled high-power instruction. As shown in the timing diagram 300, just before the execution unit 260 starts to execute the high-power instruction, the voltage signal 272 reaches Vmax (maximum voltage), as indicated by the busy signal 262 at the assertion point 362. When execution of a high-power instruction (e.g., an MM instruction that activates 1024 instances of product-sum-accumulate engines) begins, the increased power consumption produces a significant voltage drop 384. However, because the Vboost signal 252 directed the voltage regulator 270 in advance to increase the voltage to compensate for the expected drop, the effective voltage seen by the execution unit 260 does not fall below Vmin (the minimum voltage needed to maintain the maximum clock frequency). Therefore, even during the execution of high-power instructions, the cores (e.g., 120(1)-120(M), 220) can maintain the full clock frequency (e.g., 290). Otherwise, with the supply voltage held at the nominal voltage, a system without the active Di/Dt voltage suppression logic could malfunction due to the sharp drop in voltage.

When the execution unit 260 starts to execute the high-power instruction, the execution unit asserts the busy signal 262, as indicated at 362. The assertion of the busy signal triggers the de-assertion processing of the Vboost signal 252. In one or more embodiments, the stop delay counter 246 delays the de-assertion based on the hold time (e.g., the programmed number of clock cycles).
Once the hold time is reached, the Vboost signal 252 can be de-asserted at 354. The hold time is configured to ensure that the voltage regulator 270 catches up and is able to respond to the newly increased power level without needing to hold the boosted voltage.

If no other high-power instructions are scheduled, at 354, the voltage regulator 270 may reduce the voltage level to an appropriate level. In some scenarios, the voltage level can be reduced to Vmin (minimum voltage), where it can be maintained until another high-power instruction is scheduled.

In another scenario, as shown in the timing diagram 300, another high-power instruction may be scheduled after the first high-power instruction is scheduled but before the Vboost signal is de-asserted. In the timing diagram 300, the precharge signal 232 is asserted again at 334. Subsequently, the execution unit 260 starts to execute the second high-power instruction and asserts the busy signal 262 again at 364. Because the Vboost signal 252 is still asserted at 364, another Vboost signal is not asserted. The voltage signal 272 is high enough to handle voltage drops such as those of the second high-power instruction 376A-376B. Accordingly, the Vboost signal 252 is de-asserted when the stop delay counter 246, started in response to the first assertion of the busy signal 262 at 362, reaches the stop delay count value. In other implementations, the subsequent assertion of the busy signal 262 at 364 triggers the de-assertion process of the Vboost signal 252 again. In such implementations, the stop delay counter 246 restarts the stop delay count to delay the de-assertion of the Vboost signal 252. Therefore, the Vboost signal assertion can be maintained for a longer period because the stop delay count is restarted in response to the subsequent assertion of the busy signal at 364. However, in some scenarios, it may be more desirable to minimize the amount of time the Vboost signal is asserted.
In such scenarios, the stop delay counter is not restarted, and the Vboost signal is de-asserted based on the stop delay count initiated in response to the previous assertion of the busy signal (e.g., at 362).

Once the Vboost signal 252 is de-asserted, the voltage regulator 270 can reduce the voltage level to an appropriate level. In this scenario, since the second high-power instruction is still executing when the Vboost signal is de-asserted, the voltage signal 272 can be reduced to the nominal voltage level (for example, between Vmin and Vmax). The boosted voltage is no longer provided, but the execution unit has enough voltage to prevent the voltage from falling below Vmin.

Turning to FIG. 4, FIG. 4 is a graph 400 depicting the expected Di/Dt power consumption when an execution unit of the processor starts to execute a high-power instruction. More specifically, when the matrix processing unit (MPU) starts to execute a matrix multiplication (MM) instruction, the power simulation waveform 402 shows a possible current surge of 16.5 A in a time frame of 4 nanoseconds (ns). As the number of cores integrated in a processor increases, the resulting power surge increases proportionally. For example, when execution units in all 40 cores initiate MM instructions, a processor with 40 cores may have a current surge of up to 660 A.

At least some data for the MM instruction can be obtained through an input feature map (IFM) operation 410. The IFM operation 410 can read memory, extract the [X] and [Y] operands, and store the operands in a multi-dimensional array for use by the MPU. This operation uses 2 million nodes with a power requirement of approximately 2%.

An input arithmetic engine (IAE) operation 404 can preprocess the [X] and [Y] operands to prepare the data for the multiplier. The IAE operation 404 causes the initial spike.
The parallel matrix multiplication (MM) and sum instruction 406 causes a second spike of 100% power demand across 80 million nodes. Thus, the MPU current rises by 16.5 A from its low end to its high end in 4 nanoseconds. A power supply that typically responds on a millisecond timescale receives this enormous power demand within only 4 nanoseconds; it therefore cannot adapt to such a high power demand in such a short time frame.

After the calculation is complete, an output arithmetic engine (OAE) operation 408 may manipulate the output by, for example, shifting the result to scale it down. The manipulation can be performed at high power levels.

FIG. 5 is a graph 500 depicting a typical example of the impedance of a power delivery network for a high-performance computing platform. The example computing platform includes 40 tensor processing cores (TPCs) with a maximum current of 16.4 A (9.7 nF Cdyn at 0.85 V, 2 GHz). Curve 510 shows the impedance for a previous plan of record (POR) with 10 μF DSC, and curve 520 shows the impedance for the final POR with 1 μF DSC.

For frequencies below 10 kHz, the alternating current (AC) characteristic impedance shown in graph 500 is about 200 μOhm, and at frequencies of about 50 kHz and 5 MHz, the typical worst-case impedance at the operating supply voltage is about 600 μOhm. Therefore, in this common scenario, when all cores (for example, 40 cores) start to execute MM instructions, the worst-case voltage drop may be as high as -46% (about a 394 mV drop). Without significantly increasing the supply voltage or reducing the PDN impedance, the system is likely to malfunction.

Embodiments with the characteristics shown in graph 500 may be configured with active Di/Dt voltage suppression capabilities to prevent system failure. If the system normally runs at 800 mV (millivolts), a drop of 394 mV is almost half of the system's operating voltage. To compensate for this, the voltage would need to be increased by at least 200 millivolts to reach the midpoint.
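The worst-case figure above follows directly from Ohm's law applied to the numbers in the text (a back-of-the-envelope check, not a simulation):

```python
# 40 cores, each surging ~16.5 A, through ~600 uOhm of worst-case PDN impedance
cores = 40
surge_per_core_a = 16.5
pdn_impedance_ohm = 600e-6

droop_v = cores * surge_per_core_a * pdn_impedance_ohm  # V = I * Z ~= 0.396 V
supply_v = 0.85
droop_fraction = droop_v / supply_v                     # ~46-47% of the supply
```

This reproduces the cited worst case: roughly 394-396 mV, or about -46% of a 0.85 V supply.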
In this scenario, the system might need to run at 1.2 volts instead of 800 mV. The active Di/Dt voltage suppression logic instead provides the ability to keep the voltage at 0.85 V and to switch to the maximum voltage only when compensation is needed (for example, for matrix multiplication instructions).

FIG. 6 is a graph 600 depicting typical operating voltage and related operating frequency characteristics of a high-performance central processing unit (CPU). The line graph 600 shows a line 602 that plots voltage versus frequency for an example CPU. The operating points on line 602 include sleep (SLEEP), pacing (PACE), running (RUN), and burst (BURST). Sleep means that little or no work is being performed, so minimal power is required. Pacing is the voltage and frequency used when the system is deliberately slowed to save power. Run is the normal frequency and voltage operating point. Finally, burst means that a large amount of work is to be done (for example, MM instructions), so an increased voltage is required.

The line graph 600 shows that a reduced voltage translates into a reduced frequency. More specifically, when the CPU voltage drops due to increased current consumption, the system must be compensated to operate at a higher voltage or a lower frequency to prevent system failure.

Traditional systems usually operate at higher voltages and lower frequencies to avoid system failures due to sudden voltage drops and to accommodate high-power instructions. In an example, the CPU may have a compute-bound workload that runs above the minimum voltage (Vmin). By achieving a better frequency-voltage curve and running at a slightly higher voltage, these workloads can run with higher performance at the same power. For example, consider the RUN frequency and voltage of a CPU running at 1.8 GHz and 0.75 V. If the system voltage drops to the pacing voltage (e.g., 0.65 V), the frequency must be dropped to about 1.3 GHz to avoid system failure.
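The trade-off in this example can be captured with a simple linear voltage-frequency model fit to the two points just given (0.65 V → 1.3 GHz, 0.75 V → 1.8 GHz); the linearity is an illustrative assumption, not the actual silicon curve:

```python
def max_freq_ghz(voltage_v: float) -> float:
    """Linear fit through (0.65 V, 1.3 GHz) and (0.75 V, 1.8 GHz),
    i.e., a slope of 5 GHz per volt. Illustrative model only."""
    slope = (1.8 - 1.3) / (0.75 - 0.65)
    return 1.3 + slope * (voltage_v - 0.65)
```

Under this model, a drop from 0.75 V to 0.65 V forces the sustainable frequency from 1.8 GHz down to 1.3 GHz, matching the example in the text.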
As shown by line 604, if the voltage is instead increased to 0.85 V while the system operates at 1.8 GHz, the 1.8 GHz frequency can be maintained through the voltage drop at 606. Therefore, operating the system at a higher voltage for a given frequency can minimize system failures. However, in such a configuration, the excess power indicated at 604 is wasted.

One or more embodiments described herein perform compensation voltage generation dynamically by requesting the voltage regulator to compensate for the inrush current expected from the execution of a detected, scheduled instruction. Therefore, one or more embodiments not only prevent system failures, but also minimize wasted power by actively and dynamically suppressing the voltage drops caused by high-power instructions.

FIG. 7 shows supplementary graphs 700A, 700B, and 700C, which describe how the active Di/Dt voltage drop suppression technique can be used in conjunction with other advanced power management using dynamic voltage and frequency scaling techniques. Specifically, the active Di/Dt voltage drop suppression technique can be applied to scenarios where sparsity occurs during the execution of high-power instructions. Sparsity occurs when a multiplication is performed and many of the factors to be multiplied are zero. For example, in matrix multiplication, if the matrix contains zeros, many calculations would multiply some number by zero. Since the result of multiplying any number by zero is always zero, there is no need to perform those calculations.
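The sparsity optimization described above, skipping multiplications whose factors are zero, can be sketched as follows (illustrative, not the MPU's actual datapath):

```python
def sparse_dot(x, y):
    """Multiply-accumulate that skips any pair containing a zero factor;
    since 0 * anything == 0, the skipped pairs contribute nothing to the sum."""
    return sum(a * b for a, b in zip(x, y) if a != 0 and b != 0)

sparse_dot([1, 0, 3, 0], [4, 5, 0, 7])  # only the first pair survives -> 4
```

For a sparse input, most of the multiply-accumulate work (and thus power) is skipped, which is why the power demand falls after the initial spike.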
Therefore, when the high-power instruction starts to execute, the active Di/Dt voltage drop suppression technique can be applied to provide the initial voltage boost, and the Vboost signal can then be canceled when the power demand decreases due to the sparsity of the matrix.

In one or more embodiments, when a high-power instruction is scheduled for execution, the instruction decoder and scheduler (e.g., 130(1)-130(M), 230) can be used to request an appropriate amount of voltage boost to compensate for the expected high-power execution. By tuning the programmable delay counters to match the characteristics of the VR latency and the execution pipeline of the execution unit, the voltage boost and the execution of the high-power instruction can be scheduled to coincide, offsetting the effective voltage drop and maintaining a constant (or near-constant) CPU voltage, even if the sparsity of the matrix reduces the power requirements during execution. Such an embodiment can achieve maximum performance, reliability, and lower cost.

FIG. 7 shows a scenario where voltage drop suppression can be configured to accommodate sparsity in the input matrix of a high-power instruction. Graph 700A shows the signals for suppressing voltage drop during the execution of a high-power instruction when the input is sparse, graph 700B shows the power demand related to the execution of the high-power instruction, and graph 700C shows the corresponding current during instruction execution. In graph 700C, the TPC_ICC curve 708 shows an example of the current during the execution of a high-power instruction (e.g., an MM instruction), including a current reduction 710 due to the sparsity of the input. In graph 700B, the VDD_TPC curve 706 shows the power requirements of the execution unit during the execution of the high-power instruction. Graph 700A shows the MM busy signal 704, which is asserted when the high-power instruction begins to execute.
Graph 700A also shows the Vboost signal 702, which is asserted before execution starts and de-asserted some time after execution starts.

As shown by the VDD_TPC curve 706, the first voltage spike 720 occurs just after execution of the high-power instruction begins, as indicated by the assertion of the MM busy signal 704 at 705 (for example, when the MM busy signal goes high). After the initial voltage spike at 720, another voltage spike occurs at 722, and then the voltage drops due to the sparsity of the input. Embodiments that adapt to input sparsity can be configured to predict the occurrence of sparsity so that the voltage can be raised to compensate for the increased voltage demand (for example, when a high-power instruction begins to execute) and then reduced when sparsity causes the power demand to decrease. The voltage can then be raised again to compensate for any further voltage jumps.

Turning to FIGS. 8-10, example flowcharts show possible flows 800-1000 of operations that can be associated with the embodiments described herein. In at least one embodiment, one or more sets of operations correspond to the activities of FIGS. 8-10. In at least one embodiment, a processor (e.g., 100) having a core (e.g., 120(1)-120(M), 220), or a portion thereof, may utilize the one or more sets of operations. Specifically, the instruction decoders and schedulers (e.g., 130(1)-130(M), 230), rate controllers (e.g., 140(1)-140(M), 240), and execution units (e.g., 160A-160D) can utilize and/or perform the one or more sets of operations. In at least one embodiment, the flows 800-1000 show an example of implementing the active Di/Dt voltage suppression capability in a processor with multiple cores.

Generally, the flow 800 of FIG. 8 shows active Di/Dt voltage suppression when a high-power instruction is scheduled and executed by an execution unit (for example, 260) of a core (for example, 220). At 802, a high-power instruction is identified in the instruction stream.
For example, a matrix multiplication (MM) instruction is an example of a high-power instruction that can be recognized. In at least one embodiment, high-power instructions are recognized by the scheduler in the core. In one implementation, the scheduler may be part of the instruction decoder and scheduler (e.g., 230). In other embodiments, the scheduler may be separate from the decoder.

At 804, the high-power instruction may be scheduled for execution by the scheduler. In one example, high-power instructions may be added to the instruction queue (e.g., 235) when they are scheduled for execution.

At 806, the precharge signal is asserted in response to scheduling the high-power instruction for execution. In at least one embodiment, the precharge signal is asserted to notify the voltage regulator (e.g., 270) that a high-power instruction is scheduled for execution, so that the voltage regulator can dynamically, actively, and temporarily increase the supply voltage to compensate for the initial voltage drop that will occur when the high-power instruction is executed.

At 808, in response to the assertion of the precharge signal, the assertion of the Vboost signal that notifies the voltage regulator of the high-power instruction is delayed. In at least one embodiment, the precharge signal is sent to the start delay counter (e.g., 242) and the start AND gate (e.g., 243). The start delay counter can be programmed with a start delay count value, which indicates the number of clock cycles to count before the Vboost signal is asserted. Therefore, the assertion of the Vboost signal is delayed until the start delay time expires, which is determined based on the start delay count value.
Once the number of clock cycles counted by the start delay counter equals the start delay count value, the start delay time expires, and the start delay counter can send a signal to the start AND gate, which can generate a start signal (for example, 244) to trigger the assertion of the Vboost signal from the set/reset circuit.

At 810, the Vboost signal is asserted to the voltage regulator based on the assertion of the precharge signal and the expiration of the start delay period.

At 812, the voltage regulator increases the voltage in response to the assertion of the Vboost signal. In at least one embodiment, the voltage can be increased to the maximum voltage (Vmax) so that there is more headroom to accommodate a large voltage drop. In one or more embodiments, the start delay time is selected to ensure that the voltage is boosted before, or coincident with, the execution of the high-power instruction.

At 814, after the voltage is increased, execution of the high-power instruction is initiated, and in response to the execution of the high-power instruction, the busy signal is asserted.

At 816, in response to the assertion of the busy signal, the Vboost signal is maintained based on the stop delay time, also referred to herein as the "hold time." In one or more embodiments, a stop delay counter (e.g., 246) can be programmed with a stop delay count value that indicates the number of clock cycles to count before the Vboost signal is de-asserted. Therefore, the de-assertion of the Vboost signal is delayed until the stop delay time (or hold time) expires, which is determined based on the stop delay count value. Once the number of clock cycles counted by the stop delay counter equals the stop delay count value, the stop delay time expires, and at 818, the Vboost signal is de-asserted to allow the supply voltage to drop to the nominal or minimum level, depending on the specific scenario.
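The flow 800 steps above (precharge starts the start delay count at 808, its expiry asserts Vboost at 810, the busy assertion starts the stop delay count at 816, and its expiry de-asserts Vboost at 818) can be summarized as a small behavioral simulation (all names illustrative; single-instruction case only):

```python
def simulate_vboost(precharge_cycle, busy_cycle, start_count, stop_count, cycles):
    """Return the Vboost level per clock cycle for one high-power instruction."""
    vboost, level = [], False
    start_timer = stop_timer = None
    for t in range(cycles):
        if t == precharge_cycle:
            start_timer = 0        # precharge asserted: begin start delay count
        if t == busy_cycle:
            stop_timer = 0         # busy asserted: begin stop delay (hold) count
        if start_timer is not None:
            start_timer += 1
            if start_timer == start_count:
                level, start_timer = True, None    # set: assert Vboost
        if stop_timer is not None:
            stop_timer += 1
            if stop_timer == stop_count:
                level, stop_timer = False, None    # reset: de-assert Vboost
        vboost.append(level)
    return vboost
```

For example, with precharge at cycle 0, busy at cycle 5, a start delay of 3 cycles, and a hold time of 4 cycles, Vboost is asserted during cycles 2 through 7.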
For example, if another high-power instruction starts execution before the first high-power instruction finishes executing, the voltage signal may be reduced to the nominal voltage level during the remaining execution of the second high-power instruction. However, if no other high-power instructions are scheduled when the first high-power instruction completes execution, the voltage signal can be reduced to the minimum voltage level.

The flow 900 of FIG. 9 shows possible details of activities related to delaying the assertion of the Vboost signal after the precharge signal is asserted. In at least one embodiment, the start delay counter (e.g., 242), the start AND gate (e.g., 243), and the set/reset circuit (e.g., 250) are coordinated to delay the assertion of the Vboost signal, where the delay is based on the start delay time. One or more activities in the flow 900 may occur at 808 in the flow 800.

At 902, the start delay counter receives the precharge signal. At 904, in response to receiving the precharge signal, the start delay counter is started.

At 906, the start delay counter can be incremented after a clock cycle completes. In at least one embodiment, the start delay counter can be programmed with a value indicating the start delay time as a number of clock cycles by which to delay the assertion of the Vboost signal.

At 908, it is determined whether the start delay time has expired (e.g., the programmed number of clock cycles has been counted). For example, if the start delay counter equals the programmed start delay count value, the start delay time has expired.

If the start delay time has not expired, the flow returns to 906 to increment the start delay counter again based on the clock cycle. However, if the start delay time has expired, at 910, a start signal is generated and provided to the set/reset circuit 250 to trigger the assertion of the Vboost signal to the voltage regulator.

The flow 1000 of FIG.
10 shows possible details of activities related to delaying the de-assertion of the Vboost signal after the busy signal is asserted. In at least one embodiment, the stop delay counter (e.g., 246), the stop AND gate (e.g., 247), and the set/reset circuit (e.g., 250) are coordinated to delay the de-assertion of the Vboost signal, where the delay is based on the stop delay time (or hold time). One or more activities in the flow 1000 may occur at 816 in the flow 800.

At 1002, the stop delay counter receives the busy signal. At 1004, in response to receiving the busy signal, the stop delay counter is started.

At 1006, the stop delay counter can be incremented after a clock cycle completes. In at least one embodiment, the stop delay counter can be programmed with a value indicating the stop delay time (or hold time) as a number of clock cycles by which to delay the de-assertion of the Vboost signal.

At 1008, it is determined whether the stop delay time has expired (e.g., the programmed number of clock cycles has been counted). For example, if the stop delay counter equals the programmed stop delay count value, the stop delay time has expired.

If the stop delay time has not expired, the flow returns to 1006 to increment the stop delay counter again based on the clock cycle. However, if the stop delay time has expired, at 1010, a stop signal is generated and provided to the set/reset circuit 250 to trigger the de-assertion of the Vboost signal to the voltage regulator.

FIGS. 11-19 illustrate in detail exemplary architectures and systems used to implement the above-described embodiments (e.g., processor 100, cores 120(1)-120(M), 220, instruction decoders and schedulers 130(1)-130(M), 230, rate controllers 140(1)-140(M), 240, and execution units 160A, 160B, 160C, 160D). In some embodiments, one or more of the hardware components and/or instructions described above are emulated as described below or implemented as software modules.
Other computer architecture designs known in the art for processors, mobile devices, computing systems, and their components may also (or alternatively) be used. Generally, suitable computer architectures for the embodiments disclosed herein may include, but are not limited to, the configurations shown in FIGS. 11-19.

The embodiments of the instruction(s) detailed above can be implemented in a "generic vector friendly instruction format." In other embodiments, that format is not used and another instruction format is used; however, the descriptions below of the write mask registers, the various data transformations (swizzle, broadcast, etc.), addressing, and so on generally apply to the description of the embodiments of the instruction(s) above. In addition, exemplary systems, architectures, and pipelines are described in detail below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those described in detail.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify the operation to be performed (e.g., opcode) and the operand(s) on which that operation will be performed and/or other data fields (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands.
For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select the operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2), using the Vector Extensions (VEX) coding scheme, has been released and/or published (for example, see the 64 and IA-32 Architectures Software Developer's Manual, May 2019; and the Advanced Vector Extensions Programming Reference, October 2014).

FIG. 11 is a block diagram of a register architecture 1100 according to at least one embodiment of the present disclosure. In the embodiment illustrated, there are 32 vector registers 1110 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower-order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower-order 128 bits of the lower 16 zmm registers (the lower-order 128 bits of the ymm registers) are overlaid on registers xmm0-15.

That is, the vector length field selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; instruction templates without a vector length field operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of a specific vector friendly instruction format operate on packed or scalar single/double-precision floating point data and packed or scalar integer data.
Scalar operations are operations performed on the lowest-order data element position in a zmm/ymm/xmm register; depending on the embodiment, the higher-order data element positions are either left the same as they were prior to the instruction or zeroed.

Write mask registers 1115 - In the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 1115 are 16 bits in size. As previously described, in one embodiment, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1125 - In the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1145, on which is aliased the MMX packed integer flat register file 1150 - In the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the present disclosure may use wider or narrower registers. Additionally, alternative embodiments of the present disclosure may use more, fewer, or different register files and registers.

Processor cores may be implemented in different ways, for different purposes, and in different processors.
For example, implementations of such cores may include: 1) a general-purpose in-order core intended for general-purpose computing; 2) a high-performance general-purpose out-of-order core intended for general-purpose computing; and 3) a special-purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special-purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special-purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special-purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

FIG. 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to one or more embodiments of the present disclosure. FIG. 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to one or more embodiments of the present disclosure. The solid-lined boxes in FIGS.
12A-12B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 12A, a processor pipeline 1200 includes a fetch stage 1202, a length decode stage 1204, a decode stage 1206, an allocation stage 1208, a renaming stage 1210, a scheduling (also known as a dispatch or issue) stage 1212, a register read/memory read stage 1214, an execute stage 1216, a write back/memory write stage 1218, an exception handling stage 1222, and a commit stage 1224.

FIG. 12B shows a processor core 1290 including a front end unit 1230 coupled to an execution engine unit 1250, with both the execution engine unit 1250 and the front end unit 1230 coupled to a memory unit 1270. The core 1290 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1290 may be a special-purpose core, such as, for example, a tensor processing core (TPC), a network or communication core, a compression engine, a coprocessor core, a general-purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front end unit 1230 includes a branch prediction unit 1232 coupled to an instruction cache unit 1234, which is coupled to an instruction translation lookaside buffer (TLB) 1236, which is coupled to an instruction fetch unit 1238, which is coupled to a decode unit 1240.
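The pipeline stages enumerated for FIG. 12A can be summarized in a short sketch. This is an illustrative model only, not part of the disclosed hardware; it assumes an idealized one-stage-per-cycle flow with no stalls.

```python
# Illustrative model (not the disclosed hardware) of the in-order
# pipeline stages of FIG. 12A, assuming one stage per cycle and no
# stalls.

PIPELINE_STAGES = [
    "fetch", "length decode", "decode", "allocation", "rename",
    "schedule", "register read/memory read", "execute",
    "write back/memory write", "exception handling", "commit",
]

def stage_after(cycles):
    """Return the stage an instruction occupies `cycles` cycles after fetch."""
    if cycles >= len(PIPELINE_STAGES):
        return "retired"
    return PIPELINE_STAGES[cycles]
```

Under these assumptions, an instruction reaches the execute stage seven cycles after fetch and has retired after eleven.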
The decode unit 1240 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1240 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 1290 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1240 or otherwise within the front end unit 1230). The decode unit 1240 is coupled to a rename/allocator unit 1252 in the execution engine unit 1250.

The execution engine unit 1250 includes the rename/allocator unit 1252 coupled to a retirement unit 1254 and a set of one or more scheduler units 1256. The scheduler unit(s) 1256 represents any number of different schedulers, including reservation stations, central instruction windows, etc. In one or more embodiments utilizing the core 1290, the scheduler unit(s) 1256 may include at least some functionality of the instruction decoders and schedulers 130(1)-130(M), 230 (or of the instruction decoders and schedulers 130(1)-130(M) and the scheduler 230). Accordingly, the scheduler unit(s) 1256 may be configured to identify high-power instructions in an instruction stream and to assert a pre-charge signal in response to scheduling a high-power instruction for execution. It should be noted that this functionality may or may not be combined with any other suitable components or circuitry of the decode unit 1240 or the core 1290.
Additionally, rate controllers 140(1)-140(M) may be implemented in the execution engine unit 1250 and coupled to the scheduler unit(s) 1256 and the execution unit(s) 1262.

The scheduler unit(s) 1256 is coupled to the physical register file unit(s) 1258. Each of the physical register file units 1258 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit(s) 1258 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file unit(s) 1258 is overlapped by the retirement unit 1254 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1254 and the physical register file unit(s) 1258 are coupled to the execution cluster(s) 1260.

The execution cluster(s) 1260 includes a set of one or more execution units 1262 and a set of one or more memory access units 1264. The execution units 1262 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. In one or more embodiments utilizing the core 1290, the execution unit(s) 1262 may include at least some functionality of the execution units 160A, 160B, 160C, 160D, and 260. Accordingly, the execution unit(s) 1262 may be configured to assert a busy signal in response to initiating execution of a high-power instruction. In one or more examples, the execution unit(s) 1262 may be matrix processing units (MPUs) of a tensor processing core (TPC).

The scheduler unit(s) 1256, the physical register file unit(s) 1258, and the execution cluster(s) 1260 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1264). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access unit(s) 1264 is coupled to the memory unit 1270, which includes a data TLB unit 1272 coupled to a data cache unit 1274, with the data cache unit 1274 coupled to a level 2 (L2) cache unit 1276. In one exemplary embodiment, the memory access unit(s) 1264 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1272 in the memory unit 1270.
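The pre-charge and busy signaling attributed above to the scheduler unit(s) 1256 and the execution unit(s) 1262 can be modeled with a minimal behavioral sketch. The signal names, the classification of particular instructions as high-power, and the class structure are assumptions made here for illustration, not the claimed circuitry.

```python
# Minimal behavioral sketch, not the claimed circuit: a pre-charge
# signal is asserted when a high-power instruction is scheduled, and a
# busy signal is asserted when its execution begins. The instruction
# classification below is hypothetical.

HIGH_POWER_OPS = {"matmul", "vfmadd512"}  # assumed classification

class CoreSignalModel:
    def __init__(self):
        self.precharge = False
        self.busy = False

    def schedule(self, op):
        # Scheduler: assert pre-charge in response to scheduling a
        # high-power instruction for execution.
        self.precharge = op in HIGH_POWER_OPS

    def begin_execution(self, op):
        # Execution unit: assert busy in response to initiating
        # execution of a high-power instruction.
        self.busy = op in HIGH_POWER_OPS
```

In this model, pre-charge leads busy by however many cycles separate scheduling from execution, which is the window the disclosure uses to prepare the power delivery network.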
The instruction cache unit 1234 is further coupled to the level 2 (L2) cache unit 1276 in the memory unit 1270. The L2 cache unit 1276 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1200 as follows: 1) the instruction fetch unit 1238 performs the fetch and length decode stages 1202 and 1204; 2) the decode unit 1240 performs the decode stage 1206; 3) the rename/allocator unit 1252 performs the allocation stage 1208 and the renaming stage 1210; 4) the scheduler unit(s) 1256 performs the scheduling stage 1212; 5) the physical register file unit(s) 1258 and the memory unit 1270 perform the register read/memory read stage 1214, and the execution cluster(s) 1260 performs the execute stage 1216; 6) the memory unit 1270 and the physical register file unit(s) 1258 perform the write back/memory write stage 1218; 7) various units may be involved in the exception handling stage 1222; and 8) the retirement unit 1254 and the physical register file unit(s) 1258 perform the commit stage 1224.

The core 1290 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California, USA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California, USA), including the instruction(s) described herein.
In one embodiment, the core 1290 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding with simultaneous multithreading thereafter, such as in Hyper-Threading Technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1234/1274 and a shared L2 cache unit 1276, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Figures 13A-13B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (potentially including other cores of the same type and/or different types) in a chip.
Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

Figure 13A is a block diagram of a single processor core, along with its connection to an on-die interconnect network 1302 and its local subset of the level 2 (L2) cache 1304, according to one or more embodiments of the present disclosure. In one embodiment, an instruction decoder 1300 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1306 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1308 and a vector unit 1310 use separate register sets (respectively, scalar registers 1312 and vector registers 1314), and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1306, alternative embodiments of the present disclosure may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1304 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1304. Data read by a processor core is stored in its L2 cache subset 1304 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1304 and is flushed from other subsets, if necessary. The ring network 1302 ensures coherency for shared data.
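The local-subset behavior just described (reads filling a core's own subset, writes flushing the line from the other cores' subsets) can be modeled in a few lines. This is a simplified illustration under assumed semantics, not the actual cache design.

```python
# Simplified model (assumed semantics, not the actual design) of
# per-core L2 cache subsets: a read fills the reading core's own
# subset; a write stores the line in the writer's subset and flushes
# it from the other cores' subsets.

class SubsetL2:
    def __init__(self, num_cores):
        # One private dictionary of cached lines per core.
        self.subsets = [dict() for _ in range(num_cores)]

    def read(self, core, addr, memory):
        # Fill this core's own local subset from backing memory.
        self.subsets[core][addr] = memory.get(addr, 0)
        return self.subsets[core][addr]

    def write(self, core, addr, value):
        for c, subset in enumerate(self.subsets):
            if c != core:
                subset.pop(addr, None)   # flush from other subsets
        self.subsets[core][addr] = value
```

After a write by core 0, any copy of the same line previously cached by core 1 is gone, which is the coherence property the text attributes to the ring network.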
The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

FIG. 13B is an expanded view of part of the processor core in FIG. 13A according to one or more embodiments of the present disclosure. FIG. 13B includes the L1 data cache 1306A (part of the L1 cache 1306) and the L2 cache 1304, as well as more detail regarding the vector unit 1310 and the vector registers 1314. Specifically, the vector unit 1310 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1328), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with a swizzle unit 1320, numeric conversion with numeric convert units 1322A-B, and replication of memory inputs with a replication unit 1324. Write mask registers 1326 allow predicating the resulting vector writes.

FIG. 14 is a block diagram of a processor 1400 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to one or more embodiments of the present disclosure. The solid-lined box in FIG.
14 illustrates a processor 1400 with a single core 1402A, a system agent 1410, and a set of one or more bus controller units 1416, while the optional addition of the dashed-lined boxes illustrates an alternative processor 1400 with multiple cores 1402A-N, a set of one or more integrated memory controller units 1414 in the system agent unit 1410, and special-purpose logic 1408.

Thus, different implementations of the processor 1400 may include: 1) a CPU with the special-purpose logic 1408 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1402A-N being one or more general-purpose cores (e.g., general-purpose in-order cores, general-purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1402A-N being a large number of special-purpose cores intended primarily for graphics and/or science (throughput); and 3) a coprocessor with the cores 1402A-N being a large number of general-purpose in-order cores. Thus, the processor 1400 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general-purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1400 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores (e.g., cache units 1404A-N), a set of one or more shared cache units 1406, and external memory (not shown) coupled to the set of integrated memory controller units 1414.
The set of shared cache units 1406 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1412 interconnects the special-purpose logic 1408 (e.g., integrated graphics logic), the set of shared cache units 1406, and the system agent unit 1410/integrated memory controller unit(s) 1414, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between the one or more cache units 1404A-N and the cores 1402A-N.

In some embodiments, one or more of the cores 1402A-N are capable of multithreading. The system agent 1410 includes those components coordinating and operating the cores 1402A-N. The system agent unit 1410 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 1402A-N and the integrated graphics logic 1408. The display unit is for driving one or more externally connected displays.

The cores 1402A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1402A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Figures 15-18 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptop computers, desktop computers, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 15, shown is a block diagram of a system 1500 in accordance with at least one embodiment of the present disclosure. The system 1500 may include one or more processors 1510, 1515, which are coupled to a controller hub 1520. In one embodiment, the controller hub 1520 includes a graphics memory controller hub (GMCH) 1590 and an input/output hub (IOH) 1550 (which may be on separate chips); the GMCH 1590 includes memory and graphics controllers to which are coupled a memory 1540 and a coprocessor 1545; and the IOH 1550 couples input/output (I/O) devices 1560 to the GMCH 1590. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1540 and the coprocessor 1545 are coupled directly to the processor 1510, and the controller hub 1520 is in a single chip with the IOH 1550.

The optional nature of the additional processor 1515 is denoted in FIG. 15 with broken lines. Each processor 1510, 1515 may include one or more of the processing cores described herein, and may be some version of the processor 1400.

The memory 1540 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 1520 communicates with the processor(s) 1510, 1515 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1595.

In one embodiment, the coprocessor 1545 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1520 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1510, 1515 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like.

In one embodiment, the processor 1510 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1510 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1545. Accordingly, the processor 1510 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1545. The coprocessor(s) 1545 accepts and executes the received coprocessor instructions.

Referring now to FIG. 16, shown is a block diagram of a first more specific exemplary system 1600 in accordance with one or more embodiments of the present disclosure. As shown in FIG. 16, the multiprocessor system 1600 is a point-to-point interconnect system, and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. The processors 1670 and 1680 may be any type of processor, such as those shown or discussed in connection with the other figures.
For example, each of the processors 1670 and 1680 may be some version of the processor 1400. In another example, the processors 1670 and 1680 are respectively the processors 1510 and 1515, while the coprocessor 1638 is the coprocessor 1545. In yet another example, the processors 1670 and 1680 are respectively the processor 1510 and the coprocessor 1545.

The processors 1670 and 1680 may be implemented as single-core processors 1674a and 1684a or multi-core processors 1674a-1674b and 1684a-1684b. Each of the cores 1674a-1674b and 1684a-1684b may be some version of the core 1290. The processors 1670 and 1680 may each include caches 1671 and 1681 used by their respective core or cores. A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low-power mode.

The processors 1670 and 1680 are shown including integrated memory controller (IMC) units 1672 and 1682, respectively, to communicate with memory elements 1632 and 1634. In some embodiments, the memory elements 1632 and 1634 may be portions of main memory locally attached to the respective processors, or may be high-bandwidth memory (HBM). In some embodiments, the memory controller logic 1672 and 1682 may be discrete logic separate from the processors 1670 and 1680. The memory elements 1632 and/or 1634 may store various data to be used by the processors 1670 and 1680 in achieving the operations and functionality outlined herein.

The processor 1670 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 1676 and 1678; similarly, the second processor 1680 includes P-P interfaces 1686 and 1688.
The processors 1670, 1680 may exchange information via a point-to-point (P-P) interface 1650 using P-P interface circuits 1678, 1688.

The processors 1670, 1680 may each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point-to-point interface circuits 1676, 1694, 1686, 1698. The chipset 1690 may optionally exchange information with the coprocessor 1638 via a high-performance interface 1692. In one embodiment, the coprocessor 1638 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression and/or decompression engine, a graphics processor, a GPGPU, an embedded processor, or the like. Optionally, the chipset 1690 may also communicate with a display 1633 for displaying data that is viewable by a human user.

A shared cache (e.g., 1671 and/or 1681) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low-power mode.

The chipset 1690 may be coupled to a first bus 1610 via an interface 1696. In one embodiment, the first bus 1610 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 16, various I/O devices 1614 may be coupled to the first bus 1610, along with a bus bridge 1618 that couples the first bus 1610 to a second bus 1620. In one embodiment, one or more additional processors 1615, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (e.g., graphics accelerators or digital signal processing (DSP) units), field-programmable gate arrays, or any other processors, are coupled to the first bus 1610. In one embodiment, the second bus 1620 may be a low pin count (LPC) bus.
In one embodiment, various devices may be coupled to the second bus 1620, including, for example, a keyboard and/or mouse 1622 or other input devices (e.g., a touch screen, trackball, joystick, etc.), communication devices 1626 (e.g., modems, network interface devices, or other types of communication devices that may communicate through a network 1660), audio I/O devices 1614, and/or a storage unit 1628 (e.g., a disk drive or other mass storage device, which may include instructions/code and data 1630). Further, an audio I/O 1624 may be coupled to the second bus 1620. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 16, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 17, shown is a block diagram of a second more specific exemplary system 1700 in accordance with at least one embodiment of the present disclosure. Like elements in FIGS. 16 and 17 bear like reference numerals, and certain aspects of FIG. 16 have been omitted from FIG. 17 in order to avoid obscuring other aspects of FIG. 17.

FIG. 17 illustrates that the processors 1670, 1680 may include integrated memory and I/O control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include I/O control logic. FIG. 17 illustrates that not only are the memories 1632, 1634 coupled to the CL 1672, 1682, but also that the I/O devices 1714 are coupled to the control logic 1672, 1682. Legacy I/O devices 1715 are coupled to the chipset 1690.

Referring now to FIG. 18, shown is a block diagram of an SoC 1800 in accordance with at least one embodiment of the present disclosure. Like elements in FIG. 14 bear like reference numerals. Also, the dashed-lined boxes are optional features on more advanced SoCs. In FIG.
18, the interconnect unit(s) 1802 is coupled to: an application processor 1810 that includes a set of one or more cores 1402A-N and the shared cache unit(s) 1406; the system agent unit 1410; the bus controller unit(s) 1416; the integrated memory controller unit(s) 1414; a set of one or more coprocessors 1820 that may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1830; a direct memory access (DMA) unit 1832; and a display unit 1840 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1820 includes a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the present disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1830 illustrated in FIG. 18, may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor, among other examples.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired.
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represent various logic within the processor, which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable (e.g., computer-readable) medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the present disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein.
Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the present disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 19 shows that a program in a high-level language 1902 may be compiled using an x86 compiler 1904 to generate x86 binary code 1906 that may be natively executed by a processor 1916 with at least one x86 instruction set core.
The processor with at least one x86 instruction set core 1916 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1904 represents a compiler that is operable to generate x86 binary code 1906 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1916. Similarly, FIG. 19 shows that the program in the high-level language 1902 may be compiled using an alternative instruction set compiler 1908 to generate alternative instruction set binary code 1910 that may be natively executed by a processor without at least one x86 instruction set core 1914 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, California, USA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, California, USA). The instruction converter 1912 is used to convert the x86 binary code 1906 into code that may be natively executed by the processor without an x86 instruction set core 1914. This converted code is not likely to be the same as the alternative instruction set binary code 1910, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set.
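The kind of mapping an instruction converter such as converter 1912 performs can be sketched as a table-driven translation from source-set to target-set instructions. The mnemonics and the mapping below are invented for illustration; a real binary converter operates on encoded binary instructions, not mnemonic lists.

```python
# Hypothetical sketch of table-driven instruction conversion. The
# mnemonics and mapping are invented; a real converter works on
# encoded binary instructions.

TRANSLATION = {
    "mov":  ["ldr"],
    "add":  ["add"],
    "push": ["sub sp", "str"],   # one source op may become several
}

def convert(source_program):
    """Translate a list of source-set instructions to target-set ones."""
    target = []
    for insn in source_program:
        # Fall back to an emulation stub for unmapped instructions.
        target.extend(TRANSLATION.get(insn, ["emulate:" + insn]))
    return target
```

As the text notes, the converted output need not match what the alternative instruction set compiler would emit; it only has to accomplish the same operation using target-set instructions.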
Thus, the instruction converter 1912 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1906.

Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more network elements, hosts, devices, computing systems, modules, and/or other components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated or divided in any suitable manner (e.g., segmented, partitioned, separated, etc.). Along similar design alternatives, any of the illustrated controllers, limiters, decoders, modules, nodes, elements, hosts, devices, systems, and other components of the figures may be combined in various possible configurations, all of which are clearly within the broad scope of this specification. It should be appreciated that the concepts of active Di/Dt voltage drop suppression shown and described with reference to the figures (and their teachings) are readily scalable and can accommodate a large number of components, as well as more complicated or sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the system, as these concepts are potentially applicable to a myriad of other architectures.

It is also important to note that the operations described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by, or within, the system (e.g., by the processor 100). Some of these operations may be deleted or removed where appropriate, or they may be modified or changed considerably without departing from the scope of the discussed concepts.
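Purely as an illustrative aside (not part of the disclosed embodiments), the role of an instruction converter such as the converter 1912 can be sketched as a toy translator that maps each source-ISA opcode onto one or more target-ISA opcodes. Every opcode name and mapping below is invented for illustration only:

```python
# Hypothetical sketch of a software instruction converter: it rewrites each
# source-ISA instruction as one or more target-ISA instructions. The opcode
# names and the mapping table are invented for illustration only.
SOURCE_TO_TARGET = {
    "ADD_RM": ["LOAD_TMP", "ADD_REG"],   # one source op may become several target ops
    "MOV_RR": ["MOV_REG"],
    "MUL_RR": ["MUL_LO"],
}

def convert(source_code):
    """Translate a list of source-ISA opcodes into target-ISA opcodes."""
    target_code = []
    for op in source_code:
        try:
            target_code.extend(SOURCE_TO_TARGET[op])
        except KeyError:
            # A real converter would fall back to emulating unsupported
            # instructions; here we merely flag them.
            target_code.append(f"EMULATE({op})")
    return target_code

print(convert(["MOV_RR", "ADD_RM", "SYSCALL"]))
```

A real converter would operate on encoded binary rather than symbolic opcodes and would also handle control flow, register mapping, and emulation fallbacks.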
In addition, the timing of these operations may be altered considerably and still achieve the results taught in this disclosure. As one example, the processes depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve the desirable results. In certain implementations, multitasking and parallel processing may be advantageous. The preceding operational flows have been offered for purposes of example and discussion. The system provides substantial flexibility in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts.

As used herein, unless expressly stated to the contrary, use of the phrase "at least one of" refers to any combination of the named items, elements, conditions, or activities. For example, "at least one of X, Y, and Z" is intended to mean any of the following: 1) at least one X, but not Y and not Z; 2) at least one Y, but not X and not Z; 3) at least one Z, but not X and not Y; 4) at least one X and at least one Y, but not Z; 5) at least one X and at least one Z, but not Y; 6) at least one Y and at least one Z, but not X; or 7) at least one X, at least one Y, and at least one Z. Additionally, unless expressly stated to the contrary, the ordinal adjectives "first", "second", "third", etc., are intended to distinguish the particular terms (e.g., element, condition, module, activity, operation, claim element, etc.) they precede, but are not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified term. For example, "first X" and "second X" are intended to designate two separate X elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Furthermore, references to "one embodiment," "an embodiment," "some embodiments," etc.
in the specification indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Additionally, the words "optimize", "optimization", "optimal", and related terms are terms of art that refer to improvements in the speed and/or efficiency of a specified outcome; they do not purport to indicate that a process for achieving the specified outcome has achieved, or is capable of achieving, a perfectly speedy or perfectly efficient state.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any embodiments or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the concepts of active Di/Dt voltage drop suppression disclosed herein. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.

Other Notes and Examples

The following examples pertain to embodiments in accordance with this specification. The system, apparatus, method, and machine-readable storage medium embodiments can include one or a combination of the following examples.

Example A1 provides an apparatus comprising: an execution unit; a rate controller circuit; and an instruction scheduler coupled to the rate controller circuit and the execution unit, where the instruction scheduler is to: identify a first high-power instruction in an instruction stream to be executed; and assert a pre-charge signal to the rate controller circuit, and where the rate controller circuit is to: subsequent to the pre-charge signal being asserted, assert a boost signal to increase a supply voltage for the execution unit; and de-assert the boost signal based, at least in part, on execution of the first high-power instruction. In Example A2, the subject matter of Example A1 can optionally include that the first high-power instruction is an instruction to multiply matrices. In Example A3, the subject matter of any one of Examples A1-A2 can optionally include that the instruction scheduler is to assert the pre-charge signal in response to the first high-power instruction being scheduled for execution. In Example A4, the subject matter of any one of Examples A1-A3 can optionally include that the rate controller circuit is further to delay asserting the boost signal based on a start delay time. In Example A5, the subject matter of Example A4 can optionally include that the start delay time is to expire prior to the execution unit initiating execution of the first high-power instruction. In Example A6, the subject matter of
any one of Examples A1-A5 can optionally include that the rate controller circuit is to: receive a busy signal from the execution unit, the busy signal indicating that the execution unit has initiated execution of the first high-power instruction; and delay de-asserting the boost signal based on a hold time. In Example A7, the subject matter of any one of Examples A1-A3 can optionally include that the rate controller circuit is further to: delay asserting the boost signal based on a start delay time; and delay de-asserting the boost signal based on a hold time, where the rate controller circuit includes a start delay counter programmed with a start delay count value corresponding to the start delay time, and a stop delay counter programmed with a stop delay count value corresponding to the hold time. In Example A8, the subject matter of any one of Examples A1-A7 can optionally include that the instruction scheduler is further to: identify a second high-power instruction prior to the boost signal being de-asserted; and refrain from asserting a second boost signal for the second high-power instruction.

Example S1 provides a system comprising: an execution unit; an instruction scheduler coupled to the execution unit; a rate controller circuit coupled to the execution unit and the instruction scheduler; and a voltage regulator coupled to the rate controller circuit, where the instruction scheduler is to identify a first high-power instruction in an instruction stream to be executed and to assert a pre-charge signal, and the rate controller circuit is to assert a boost signal subsequent to the pre-charge signal being asserted.
The voltage regulator is to increase a supply voltage for the execution unit to execute the first high-power instruction in response to receiving the boost signal. In Example S2, the subject matter of Example S1 can optionally include that the execution unit is further to: initiate execution of the first high-power instruction subsequent to the boost signal being asserted; and assert a busy signal indicating that the first high-power instruction is being executed. In Example S3, the subject matter of Example S2 can optionally include that the rate controller circuit is further to de-assert the boost signal based, at least in part, on the busy signal being asserted. In Example S4, the subject matter of Example S3 can optionally include that the voltage regulator is further to decrease the supply voltage for the execution unit subsequent to the boost signal being de-asserted. In Example S5, the subject matter of any one of Examples S1-S4 can optionally include that the instruction scheduler is further to identify a second high-power instruction in the instruction stream and to assert a second pre-charge signal, and that the rate controller circuit is further to assert a second boost signal subsequent to the second pre-charge signal being asserted.
The voltage regulator is further to increase the supply voltage for the execution unit to execute the second high-power instruction in response to receiving the second boost signal. In Example S6, the subject matter of any one of Examples S1-S5 can optionally include that the instruction scheduler is to assert the pre-charge signal in response to the first high-power instruction being scheduled for execution. In Example S7, the subject matter of any one of Examples S1-S6 can optionally include that the rate controller circuit is further to delay asserting the boost signal based on a start delay time. In Example S8, the subject matter of Example S7 can optionally include that the start delay time is to expire prior to the execution unit initiating execution of the first high-power instruction. In Example S9, the subject matter of any one of Examples S1-S8 can optionally include that the increase of the supply voltage by the voltage regulator coincides with the initiation of execution of the first high-power instruction by the execution unit.

Example M1 provides a method comprising: identifying, by an instruction scheduler of a processor core, a first high-power instruction in an instruction stream to be executed by an execution unit of the processor core; asserting a pre-charge signal indicating that the first high-power instruction is scheduled for execution; subsequent to the pre-charge signal being asserted, asserting a boost signal to increase a supply voltage for the execution unit; receiving, from the execution unit, a busy signal indicating that the first high-power instruction is being executed; and de-asserting the boost signal based, at least in part, on the busy signal being asserted. In Example M2, the subject matter of Example M1 can optionally include that the first high-power instruction is an instruction to multiply matrices. In Example M3, the subject matter of any one of Examples M1-M2 can optionally include asserting the pre-charge
signal in response to the first high-power instruction being scheduled for execution. In Example M4, the subject matter of any one of Examples M1-M3 can optionally include delaying the assertion of the boost signal based on a start delay time. In Example M5, the subject matter of Example M4 can optionally include that the start delay time is to expire prior to the first high-power instruction being executed. In Example M6, the subject matter of any one of Examples M1-M5 can optionally include delaying the de-assertion of the boost signal based on a hold time. In Example M7, the subject matter of any one of Examples M1-M3 can optionally include delaying the assertion of the boost signal based on a start delay time, and delaying the de-assertion of the boost signal based on a hold time, where a start delay counter is programmed with a start delay count value corresponding to the start delay time, and a stop delay counter is programmed with a stop delay count value corresponding to the hold time. In Example M8, the subject matter of any one of Examples M1-M7 can optionally include identifying a second high-power instruction prior to the boost signal being de-asserted, and refraining from asserting a second boost signal for the second high-power instruction. In Example M9, the subject matter of any one of Examples M1-M8 can optionally include decreasing the supply voltage for the execution unit subsequent to de-asserting the boost signal.

Example Y1 provides an apparatus comprising means for performing the method of any one of Examples M1-M9. In Example Y2, the subject matter of Example Y1 can optionally include that the means for performing the method comprise at least one processor and at least one memory element. In Example Y3, the subject matter of Example Y2 can optionally include that the at least one memory element comprises machine-readable instructions that, when executed, cause the apparatus to perform the method of any one of Examples M1-M9. In Example Y4, the subject matter of any one of Examples Y1-Y3 can optionally include that the apparatus is one of a computing system or a system-on-a-chip.

Example X1 provides one or more computer-readable media comprising instructions that, when executed, implement a device or a system, or perform a method, as in any one of the preceding Examples A1-A8, S1-S9, M1-M9, and Y1-Y4.
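As a non-authoritative aid to reading Examples A1-M9, the pre-charge/start-delay/boost/hold-time sequencing they recite can be modeled at cycle granularity in software. The `RateController` class, its method names, and the chosen delay values are assumptions made for illustration, not the claimed circuit:

```python
# Hedged, cycle-level sketch of the rate controller behavior of Examples
# A4-A7: a pre-charge signal starts a start-delay countdown, after which the
# boost signal is asserted; once the execution unit's busy signal drops, a
# hold-time countdown runs before the boost signal is de-asserted. All
# timings and signal names are illustrative assumptions.

class RateController:
    def __init__(self, start_delay, hold_time):
        self.start_delay = start_delay  # cycles before boost asserts
        self.hold_time = hold_time      # cycles boost is held after busy drops
        self.start_ctr = None           # start delay counter (Example A7)
        self.stop_ctr = None            # stop delay counter (Example A7)
        self.boost = False

    def precharge(self):
        # The instruction scheduler asserted the pre-charge signal.
        if not self.boost and self.start_ctr is None:
            self.start_ctr = self.start_delay

    def tick(self, busy):
        # Advance one cycle given the execution unit's busy signal.
        if self.start_ctr is not None:
            if self.start_ctr == 0:
                self.boost = True       # start delay expired: assert boost
                self.start_ctr = None
            else:
                self.start_ctr -= 1
        if self.boost:
            if busy:
                self.stop_ctr = self.hold_time  # re-arm hold while executing
            elif self.stop_ctr is not None:
                if self.stop_ctr == 0:
                    self.boost = False  # hold time expired: de-assert boost
                    self.stop_ctr = None
                else:
                    self.stop_ctr -= 1
        return self.boost
```

With `RateController(start_delay=2, hold_time=1)`, calling `precharge()` and then stepping `tick(busy)` asserts the boost signal once the start delay expires and de-asserts it only after the hold time has elapsed following the busy signal falling.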
Disclosed are methods and devices, among which is a device that includes a pattern-recognition processor. The pattern-recognition processor (14) may include a matching-data reporting module, which may have a buffer and a match event table. The buffer may be coupled to a data stream and configured to store at least part of the data stream, and the match event table may be configured to store data indicative of a buffer location corresponding with a start of a search criterion being satisfied. |
CLAIMS 1. A device, comprising: a pattern-recognition processor comprising: a plurality of feature cells each comprising: a plurality of memory cells, where each of the memory cells is coupled to an output conductor and to one of a plurality of input conductors; and a detection cell comprising an activation memory cell, wherein the detection cell is configured to output a signal based on the state of the activation memory cell and a signal received from the output conductor; a decoder configured to receive a data stream and select one of the plurality of input conductors based on data received from the data stream; and a matching-data reporting module comprising: a buffer coupled to the data stream and configured to store at least part of the data stream; and a match event table configured to store data indicative of a buffer location corresponding with a start of a search criterion being satisfied. 2. The device of claim 1, wherein the buffer comprises a circular buffer. 3. The device of claim 1, wherein the matching-data reporting module comprises a counter. 4. The device of claim 3, wherein the counter is configured to increment or decrement a count each time a term or a portion of a term is received from the data stream. 5. The device of claim 3, wherein the matching-data reporting module is configured to store data from the data stream in locations in the circular buffer that are selected based on a value of the counter. 6. The device of claim 5, wherein the matching-data reporting module is configured to write the value of the counter to the match event table in response to a first search term of a search criterion being matched. 7. The device of claim 6, wherein the matching-data reporting module is configured to write the value of the counter to the match event table in response to a last search term of the search criterion being matched. 8. 
The device of claim 1, wherein the matching-data reporting module is configured to output a portion of a data stream that satisfied a search criterion. 9. The device of claim 1, comprising a central processing unit (CPU) coupled to the pattern-recognition processor, wherein the matching-data reporting module is configured to output the portion of the data stream that satisfied the search criterion to the CPU in response to a request for the portion of the data stream from the CPU. 10. The device of claim 1, wherein the matching-data reporting module comprises a timer. 11. The device of claim 1, wherein the matching-data reporting module comprises a pattern generator that outputs a repeatable sequence of values. 12. The device of claim 1, wherein the match event table comprises groups of memory cells, and wherein each group is associated with a group of feature cells. 13. The device of claim 1, comprising a server, a personal computer, a work station, a router, a network switch, chip test equipment, a laptop, a cell phone, a media player, a game console, or a mainframe computer that comprises the pattern-recognition processor. 14. A method, comprising: writing a data stream to a buffer; matching a first search term of a search criterion to a first matching term from the data stream; and in response to a match, storing a first value indicative of a first address of the buffer at which the first matching term is stored. 15. The method of claim 14, wherein storing the first value comprises storing the first address of the buffer at which the first matching term is stored. 16. The method of claim 14, comprising: matching a last search term of the search criterion to a second matching term from the data stream; and in response to matching the last search term, storing a second value indicative of a second address of the buffer at which the second matching term is stored. 17.
The method of claim 16, comprising: reporting satisfaction of the search criterion to a central processing unit; receiving a request for the data that satisfied the search criterion from the CPU; and in response to the request, transmitting data stored by the buffer. 18. The method of claim 17, wherein transmitting data stored by the buffer comprises transmitting data stored between the first address and the second address. 19. The method of claim 18, wherein transmitting data stored between the first address and the second address does not include transmitting data stored before the first address other than data stored before the second address. 20. The method of claim 14, wherein writing the data stream to the buffer comprises overwriting data previously written to the buffer. 21. The method of claim 14, comprising selecting one input conductor among a plurality of input conductors based on a term presented by the data stream. 22. The method of claim 14, comprising storing criterion-identifying data that identifies the criterion. 23. The method of claim 14, comprising storing match-identifying data that distinguishes the match from a previous match. 24. A device, comprising: a matching-data reporting module comprising: a buffer configured to store data from a data stream; a match event table having memory for storing buffer addresses; and a buffer-address generator. 25. The device of claim 24, wherein the buffer-address generator comprises a counter. 26. The device of claim 24, wherein the buffer comprises a circular buffer. 27. The device of claim 24, comprising: a recognition module coupled to the data stream; and an aggregation module coupled to an output of the recognition module. 28. The device of claim 24, comprising a controller configured to write data from the data stream at addresses of the buffer, wherein the addresses are output by the buffer-address generator. 29.
The device of claim 28, wherein the controller is configured to write a value from the buffer-address generator to the match event table in response to data from the data stream matching a portion of a search criterion. 30. The device of claim 28, wherein the controller is configured to output data from the buffer based on addresses stored in the match event table. 31. The device of claim 30, wherein the controller is configured to output data from the buffer in response to a request from a central processing unit. 32. The device of claim 30, wherein the controller is configured to output data from the buffer that is between a match starting address and a match ending address that are stored by the match event table. 33. A method, comprising: searching a data stream according to search criteria; storing a portion of the data stream in a circular buffer; and if a matching portion of the data stream satisfies a portion of a search criterion among the search criteria, storing an address of the circular buffer that is indicative of where the matching portion of the data stream is stored. 34. The method of claim 33, wherein searching the data stream according to search criteria comprises searching according to the search criteria at generally the same time. 35. The method of claim 34, wherein the search criteria comprise more than 1000 search criteria. 36. The method of claim 33, wherein storing the portion of the data stream in the circular buffer comprises overwriting previously stored portions of the data stream. 37. The method of claim 33, wherein storing the portion of the data stream in the circular buffer comprises: incrementing or decrementing a counter; and writing data from the data stream to addresses of the circular buffer based on a count of the counter. 38. The method of claim 33, comprising outputting data from the circular buffer after satisfying the search criterion. |
PATTERN-RECOGNITION PROCESSOR WITH MATCHING-DATA REPORTING MODULE

Field of Invention

[0001] Embodiments of the invention relate generally to electronic devices and, more specifically, in certain embodiments, to electronic devices with pattern-recognition processors.

Description of Related Art

[0002] In the field of computing, pattern recognition tasks are increasingly challenging. Ever larger volumes of data are transmitted between computers, and the number of patterns that users wish to identify is increasing. For example, spam and malware are often detected by searching for patterns in a data stream, e.g., particular phrases or pieces of code. The number of patterns increases with the variety of spam and malware, as new patterns may be implemented to search for new variants. Searching a data stream for each of these patterns can form a computing bottleneck. Often, as the data stream is received, it is searched for each pattern, one at a time. The delay before the system is ready to search the next portion of the data stream increases with the number of patterns. Thus, pattern recognition may slow the receipt of data. [0003] When a pattern is detected, it is often useful to examine the data that matched the pattern. Reproducing the matching data, however, may be difficult. Searches may specify wildcard characters or other operators that allow arbitrarily long portions of the data stream to produce a match. Further, portions of different patterns may be matched by the same portions of the data stream, while the different patterns may start and stop at different times. Creating a new copy of the data stream each time the data stream begins to match one of the patterns is expensive, as forming multiple, arbitrarily long copies of the data stream consumes a large amount of memory.

BRIEF DESCRIPTION OF DRAWINGS

[0004] FIG. 1 depicts an example of a system that searches a data stream; [0005] FIG. 2 depicts an example of a pattern-recognition processor in the system of FIG.
1; [0006] FIG. 3 depicts an example of a search-term cell in the pattern-recognition processor of FIG. 2; [0007] FIGS. 4 and 5 depict the search-term cell of FIG. 3 searching the data stream for a single character; [0008] FIGS. 6-8 depict a recognition module including several search-term cells searching the data stream for a word; [0009] FIG. 9 depicts the recognition module configured to search the data stream for two words in parallel; [0010] FIGS. 10-12 depict the recognition module searching according to a search criterion that specifies multiple words with the same prefix; [0011] FIG. 13 illustrates an embodiment of a matching-data reporting module in accordance with an embodiment of the present technique; and [0012] FIGS. 14-19 illustrate the matching-data reporting module of FIG. 13 operating according to an embodiment of the present technique.

DETAILED DESCRIPTION

[0013] FIG. 1 depicts an example of a system 10 that searches a data stream 12. The system 10 may include a pattern-recognition processor 14 that searches the data stream 12 according to search criteria 16. [0014] Each search criterion may specify one or more target expressions, i.e., patterns. The phrase "target expression" refers to a sequence of data for which the pattern-recognition processor 14 is searching. Examples of target expressions include a sequence of characters that spell a certain word, a sequence of genetic base pairs that specify a gene, a sequence of bits in a picture or video file that form a portion of an image, a sequence of bits in an executable file that form a part of a program, or a sequence of bits in an audio file that form a part of a song or a spoken phrase. [0015] A search criterion may specify more than one target expression. For example, a search criterion may specify all five-letter words beginning with the sequence of letters "cl", any word beginning with the sequence of letters "cl", a paragraph that includes the word "cloud" more than three times, etc.
The number of possible sets of target expressions is arbitrarily large, e.g., there may be as many target expressions as there are permutations of data that the data stream could present. The search criteria 16 may be expressed in a variety of formats, including as regular expressions, a programming language that concisely specifies sets of target expressions without necessarily listing each target expression. [0016] Each search criterion may be constructed from one or more search terms. Thus, each target expression of a search criterion may include one or more search terms and some target expressions may use common search terms. As used herein, the phrase "search term" refers to a sequence of data that is searched for, during a single search cycle. The sequence of data may include multiple bits of data in a binary format or other formats, e.g., base ten, ASCII, etc. The sequence may encode the data with a single digit or multiple digits, e.g., several binary digits. For example, the pattern-recognition processor 14 may search a text data stream 12 one character at a time, and the search terms may specify a set of single characters, e.g., the letter "a", either the letters "a" or "e", or a wildcard search term that specifies a set of all single characters. [0017] Search terms may be smaller or larger than the number of bits that specify a character (or other grapheme — i.e., fundamental unit — of the information expressed by the data stream, e.g., a musical note, a genetic base pair, a base-10 digit, or a sub-pixel). For instance, a search term may be 8 bits and a single character may be 16 bits, in which case two consecutive search terms may specify a single character. [0018] The search criteria 16 may be formatted for the pattern-recognition processor 14 by a compiler 18. Formatting may include deconstructing search terms from the search criteria.
For example, if the graphemes expressed by the data stream 12 are larger than the search terms, the compiler may deconstruct the search criterion into multiple search terms to search for a single grapheme. Similarly, if the graphemes expressed by the data stream 12 are smaller than the search terms, the compiler 18 may provide a single search term, with unused bits, for each separate grapheme. The compiler 18 may also format the search criteria 16 to support various regular expression operators that are not natively supported by the pattern-recognition processor 14. [0019] The pattern-recognition processor 14 may search the data stream 12 by evaluating each new term from the data stream 12. The word "term" here refers to the amount of data that could match a search term. During a search cycle, the pattern-recognition processor 14 may determine whether the currently presented term matches the current search term in the search criterion. If the term matches the search term, the evaluation is "advanced", i.e., the next term is compared to the next search term in the search criterion. If the term does not match, the next term is compared to the first term in the search criterion, thereby resetting the search. [0020] Each search criterion may be compiled into a different finite state machine in the pattern-recognition processor 14. The finite state machines may run in parallel, searching the data stream 12 according to the search criteria 16. The finite state machines may step through each successive search term in a search criterion as the preceding search term is matched by the data stream 12, or if the search term is unmatched, the finite state machines may begin searching for the first search term of the search criterion. [0021] The pattern-recognition processor 14 may evaluate each new term according to several search criteria, and their respective search terms, at about the same time, e.g., during a single device cycle.
The parallel finite state machines may each receive the term from the data stream 12 at about the same time, and each of the parallel finite state machines may determine whether the term advances the parallel finite state machine to the next search term in its search criterion. The parallel finite state machines may evaluate terms according to a relatively large number of search criteria, e.g., more than 100, more than 1000, or more than 10,000. Because they operate in parallel, they may apply the search criteria to a data stream 12 having a relatively high bandwidth, e.g., a data stream 12 of greater than or generally equal to 64 MB per second or 128 MB per second, without slowing the data stream. In some embodiments, the search-cycle duration does not scale with the number of search criteria, so the number of search criteria may have little to no effect on the performance of the pattern-recognition processor 14. [0022] When a search criterion is satisfied (i.e., after advancing to the last search term and matching it), the pattern-recognition processor 14 may report the satisfaction of the criterion to a processing unit, such as a central processing unit (CPU) 20. The central processing unit 20 may control the pattern-recognition processor 14 and other portions of the system 10. [0023] The system 10 may be any of a variety of systems or devices that search a stream of data. For example, the system 10 may be a desktop, laptop, handheld or other type of computer that monitors the data stream 12. The system 10 may also be a network node, such as a router, a server, or a client (e.g., one of the previously-described types of computers). The system 10 may be some other sort of electronic device, such as a copier, a scanner, a printer, a game console, a television, a set-top video distribution or recording system, a cable box, a personal digital media player, a factory automation system, an automotive computer system, or a medical device.
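The advance/reset behavior of paragraphs [0019]-[0021] can be sketched in software as one small state machine per search criterion, each stepped on every incoming term. The class and function names here are invented for illustration, and real hardware would step all machines in a single search cycle rather than in a loop:

```python
# Illustrative software model of the per-criterion finite state machines of
# [0019]-[0021]: each machine holds a position in its list of search terms,
# advances on a match, resets on a mismatch, and reports when the last
# search term is matched. Names and structure are assumptions.

class CriterionFSM:
    def __init__(self, name, search_terms):
        self.name = name
        self.terms = search_terms
        self.pos = 0                      # index of the current search term

    def step(self, term):
        """Consume one term; return True if the criterion is satisfied."""
        if term == self.terms[self.pos]:
            self.pos += 1                 # advance to the next search term
            if self.pos == len(self.terms):
                self.pos = 0              # criterion satisfied; restart
                return True
        else:
            self.pos = 0                  # mismatch resets the search
            if term == self.terms[0]:     # the term may begin a new attempt
                self.pos = 1
        return False

def search(stream, fsms):
    matches = []
    for i, term in enumerate(stream):
        for fsm in fsms:                  # hardware would do this in parallel
            if fsm.step(term):
                matches.append((fsm.name, i))
    return matches

fsms = [CriterionFSM("cat", "cat"), CriterionFSM("at", "at")]
print(search("concatenate", fsms))
```

Running `search("concatenate", fsms)` reports the stream positions at which "cat" and "at" complete, illustrating that several criteria can be satisfied by the same portion of the data stream, as noted in [0003].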
(The terms used to describe these various examples of systems, like many of the other terms used herein, may share some referents and, as such, should not be construed narrowly in virtue of the other items listed.)

[0024] The data stream 12 may be one or more of a variety of types of data streams that a user or other entity might wish to search. For example, the data stream 12 may be a stream of data received over a network, such as packets received over the Internet or voice or data received over a cellular network. The data stream 12 may be data received from a sensor in communication with the system 10, such as an imaging sensor, a temperature sensor, an accelerometer, or the like, or combinations thereof. The data stream 12 may be received by the system 10 as a serial data stream, in which the data is received in an order that has meaning, such as in a temporally, lexically, or semantically significant order. Alternatively, the data stream 12 may be received in parallel or out of order and, then, converted into a serial data stream, e.g., by reordering packets received over the Internet. In some embodiments, the data stream 12 may present terms serially, but the bits expressing each of the terms may be received in parallel. The data stream 12 may be received from a source external to the system 10, or may be formed by interrogating a memory device and forming the data stream 12 from stored data.

[0025] Depending on the type of data in the data stream 12, different types of search criteria may be chosen by a designer. For instance, the search criteria 16 may be a virus definition file. Viruses or other malware may be characterized, and aspects of the malware may be used to form search criteria that indicate whether the data stream 12 is likely delivering malware. The resulting search criteria may be stored on a server, and an operator of a client system may subscribe to a service that downloads the search criteria 16 to the system 10.
The search criteria 16 may be periodically updated from the server as different types of malware emerge. The search criteria 16 may also be used to specify undesirable content that might be received over a network, for instance unwanted emails (commonly known as spam) or other content that a user finds objectionable.

[0026] The data stream 12 may be searched by a third party with an interest in the data being received by the system 10. For example, the data stream 12 may be monitored for text, a sequence of audio, or a sequence of video that occurs in a copyrighted work. The data stream 12 may be monitored for utterances that are relevant to a criminal investigation or civil proceeding or are of interest to an employer.

[0027] The search criteria 16 may also include patterns in the data stream 12 for which a translation is available, e.g., in memory addressable by the CPU 20 or the pattern-recognition processor 14. For instance, the search criteria 16 may each specify an English word for which a corresponding Spanish word is stored in memory. In another example, the search criteria 16 may specify encoded versions of the data stream 12, e.g., MP3, MPEG 4, FLAC, Ogg Vorbis, etc., for which a decoded version of the data stream 12 is available, or vice versa.

[0028] The pattern-recognition processor 14 may be a hardware device that is integrated with the CPU 20 into a single component (such as a single device) or may be formed as a separate component. For instance, the pattern-recognition processor 14 may be a separate integrated circuit. The pattern-recognition processor 14 may be referred to as a "co-processor" or a "pattern-recognition co-processor".

[0029] FIG. 2 depicts an example of the pattern-recognition processor 14. The pattern-recognition processor 14 may include a recognition module 22, an aggregation module 24, and a matching-data reporting module 25.
The recognition module 22 may be configured to compare received terms to search terms, and both the recognition module 22 and the aggregation module 24 may cooperate to determine whether matching a term with a search term satisfies a search criterion. The matching-data reporting module 25 may store the data stream 12 in a buffer and report matching data to the CPU 20 (FIG. 1).

[0030] The recognition module 22 may include a row decoder 28 and a plurality of feature cells 30. Each feature cell 30 may specify a search term, and groups of feature cells 30 may form a parallel finite state machine that forms a search criterion. Components of the feature cells 30 may form a search-term array 32, a detection array 34, and an activation-routing matrix 36. The search-term array 32 may include a plurality of input conductors 37, each of which may place each of the feature cells 30 in communication with the row decoder 28.

[0031] The row decoder 28 may select particular conductors among the plurality of input conductors 37 based on the content of the data stream 12. For example, the row decoder 28 may be a one-byte-to-256-row decoder that activates one of 256 rows based on the value of a received byte, which may represent one term. A one-byte term of 0000 0000 may correspond to the top row among the plurality of input conductors 37, and a one-byte term of 1111 1111 may correspond to the bottom row among the plurality of input conductors 37. Thus, different input conductors 37 may be selected, depending on which terms are received from the data stream 12. As different terms are received, the row decoder 28 may deactivate the row corresponding to the previous term and activate the row corresponding to the new term.

[0032] The detection array 34 may couple to a detection bus 38 that outputs signals indicative of complete or partial satisfaction of search criteria to the aggregation module 24.
The activation-routing matrix 36 may selectively activate and deactivate feature cells 30 based on the number of search terms in a search criterion that have been matched.

[0033] The aggregation module 24 may include a latch matrix 40, an aggregation-routing matrix 42, a threshold-logic matrix 44, a logical-product matrix 46, a logical-sum matrix 48, and an initialization-routing matrix 50.

[0034] The latch matrix 40 may implement portions of certain search criteria. Some search criteria, e.g., some regular expressions, count only the first occurrence of a match or group of matches. The latch matrix 40 may include latches that record whether a match has occurred. The latches may be cleared during initialization, and periodically reinitialized during operation, as search criteria are determined to be satisfied or not further satisfiable, i.e., when an earlier search term would need to be matched again before the search criterion could be satisfied.

[0035] The aggregation-routing matrix 42 may function similarly to the activation-routing matrix 36. The aggregation-routing matrix 42 may receive signals indicative of matches on the detection bus 38 and may route the signals to different group-logic lines 53 connecting to the threshold-logic matrix 44. The aggregation-routing matrix 42 may also route outputs of the initialization-routing matrix 50 to the detection array 34 to reset portions of the detection array 34 when a search criterion is determined to be satisfied or not further satisfiable.

[0036] The threshold-logic matrix 44 may include a plurality of counters, e.g., 32-bit counters configured to count up or down. The threshold-logic matrix 44 may be loaded with an initial count, and it may count up or down from the count based on matches signaled by the recognition module.
For instance, the threshold-logic matrix 44 may count the number of occurrences of a word in some length of text.

[0037] The outputs of the threshold-logic matrix 44 may be inputs to the logical-product matrix 46. The logical-product matrix 46 may selectively generate "product" results (e.g., the "AND" function in Boolean logic). The logical-product matrix 46 may be implemented as a square matrix, in which the number of output products is equal to the number of input lines from the threshold-logic matrix 44, or the logical-product matrix 46 may have a different number of inputs than outputs. The resulting product values may be output to the logical-sum matrix 48.

[0038] The logical-sum matrix 48 may selectively generate sums (e.g., "OR" functions in Boolean logic). The logical-sum matrix 48 may also be a square matrix, or the logical-sum matrix 48 may have a different number of inputs than outputs. Since the inputs are logical products, the outputs of the logical-sum matrix 48 may be logical sums-of-products (e.g., Boolean logic sum-of-products (SOP) form). The output of the logical-sum matrix 48 may be received by the initialization-routing matrix 50.

[0039] The initialization-routing matrix 50 may reset portions of the detection array 34 and the aggregation module 24 via the aggregation-routing matrix 42. The initialization-routing matrix 50 may also be implemented as a square matrix, or the initialization-routing matrix 50 may have a different number of inputs than outputs. The initialization-routing matrix 50 may respond to signals from the logical-sum matrix 48 and re-initialize other portions of the pattern-recognition processor 14, such as when a search criterion is satisfied or determined to be not further satisfiable.

[0040] The aggregation module 24 may include an output buffer 51 that receives the outputs of the threshold-logic matrix 44, the aggregation-routing matrix 42, and the logical-sum matrix 48.
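The aggregation path of paragraphs [0036]-[0038] can be sketched in software: threshold counters feed a logical-product (AND) stage whose outputs feed a logical-sum (OR) stage, yielding sum-of-products results. The routing tables below are invented for illustration; in hardware, the matrices are programmable:

```python
# Hedged sketch of the threshold/product/sum pipeline. Each routing "row"
# stands in for the programmable connections of the corresponding matrix.
def threshold_outputs(counts, thresholds):
    # one boolean per counter: has its threshold been reached?
    return [c >= t for c, t in zip(counts, thresholds)]

def logical_products(inputs, product_rows):
    # each row selects which threshold outputs are ANDed together
    return [all(inputs[i] for i in row) for row in product_rows]

def logical_sums(products, sum_rows):
    # each row selects which products are ORed together
    return [any(products[i] for i in row) for row in sum_rows]

t = threshold_outputs([3, 1, 5], [2, 2, 4])  # counters 0 and 2 reach threshold
p = logical_products(t, [[0, 2], [0, 1]])    # product 0 = t0 AND t2, etc.
s = logical_sums(p, [[0, 1]])                # sum 0 = p0 OR p1 (SOP form)
```

The final outputs are in sum-of-products form, mirroring the observation in [0038] that the logical-sum matrix 48 receives logical products as its inputs.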
The output of the aggregation module 24 may be transmitted from the output buffer 51 to the CPU 20 (FIG. 1) on the output bus 26. In some embodiments, an output multiplexer may multiplex signals from these components 42, 44, and 48 and output signals indicative of satisfaction of criteria or matches of search terms to the CPU 20 (FIG. 1). In other embodiments, results from the pattern-recognition processor 14 may be reported without transmitting the signals through the output multiplexer, which is not to suggest that any other feature described herein could not also be omitted. For example, signals from the threshold-logic matrix 44, the logical-product matrix 46, the logical-sum matrix 48, or the initialization-routing matrix 50 may be transmitted to the CPU in parallel on the output bus 26.

[0041] FIG. 3 illustrates a portion of a single feature cell 30 in the search-term array 32 (FIG. 2), a component referred to herein as a search-term cell 54. The search-term cells 54 may include an output conductor 56 and a plurality of memory cells 58. Each of the memory cells 58 may be coupled to both the output conductor 56 and one of the conductors among the plurality of input conductors 37. In response to its input conductor 37 being selected, each of the memory cells 58 may output a value indicative of its stored value, outputting the data through the output conductor 56. In some embodiments, the plurality of input conductors 37 may be referred to as "word lines", and the output conductor 56 may be referred to as a "data line".

[0042] The memory cells 58 may include any of a variety of types of memory cells. For example, the memory cells 58 may be volatile memory, such as dynamic random access memory (DRAM) cells having a transistor and a capacitor.
The source and the drain of the transistor may be connected to a plate of the capacitor and the output conductor 56, respectively, and the gate of the transistor may be connected to one of the input conductors 37. In another example of volatile memory, each of the memory cells 58 may include a static random access memory (SRAM) cell. The SRAM cell may have an output that is selectively coupled to the output conductor 56 by an access transistor controlled by one of the input conductors 37. The memory cells 58 may also include nonvolatile memory, such as phase-change memory (e.g., an ovonic device), flash memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magneto-resistive memory, or other types of nonvolatile memory. The memory cells 58 may also include memory cells made out of logic gates, e.g., flip-flops.

[0043] FIGS. 4 and 5 depict an example of the search-term cell 54 in operation. FIG. 4 illustrates the search-term cell 54 receiving a term that does not match the cell's search term, and FIG. 5 illustrates a match.

[0044] As illustrated by FIG. 4, the search-term cell 54 may be configured to search for one or more terms by storing data in the memory cells 58. The memory cells 58 may each represent a term that the data stream 12 might present, e.g., in FIG. 3, each memory cell 58 represents a single letter or number, starting with the letter "a" and ending with the number "9". Memory cells 58 representing terms that satisfy the search term may be programmed to store a first value, and memory cells 58 that do not represent terms that satisfy the search term may be programmed to store a different value. In the illustrated example, the search-term cell 54 is configured to search for the letter "b". The memory cells 58 that represent "b" may store a 1, or logic high, and the memory cells 58 that do not represent "b" may be programmed to store a 0, or logic low.
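The programming scheme of FIGS. 3-5 can be modeled in software. In this hedged sketch, a search-term cell is an array of 256 bits indexed by the received byte, and row decoding is simply the byte-to-index step; the class and method names are illustrative assumptions, not from the specification:

```python
# Software model (not the hardware) of a search-term cell: 256 "memory
# cells", one per possible byte, with a 1 stored at each byte the search
# term accepts. The row decoder turns a received byte into a row address
# that selects which stored bit drives the output conductor.
class SearchTermCell:
    def __init__(self, matching_terms):
        self.cells = [0] * 256           # one memory cell per input conductor
        for t in matching_terms:
            self.cells[ord(t)] = 1       # program a 1 for accepted terms

    def output(self, term):
        row = ord(term)                  # row decoder: byte value = row address
        return self.cells[row]           # selected memory cell drives the output

b_cell = SearchTermCell("b")                      # the FIGS. 4-5 example
either_case = SearchTermCell("aA")                # matches "a" or "A"
wildcard = SearchTermCell(map(chr, range(256)))   # all cells store a 1
```

Presenting "e" to `b_cell` returns 0 (no match, as in FIG. 4), while presenting "b" returns 1 (the match of FIG. 5); `wildcard` returns 1 for any term, as in the wildcard example of [0048].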
[0045] To compare a term from the data stream 12 with the search term, the row decoder 28 may select the input conductor 37 coupled to memory cells 58 representing the received term. In FIG. 4, the data stream 12 presents a lowercase "e". This term may be presented by the data stream 12 in the form of an eight-bit ASCII code, and the row decoder 28 may interpret this byte as a row address, outputting a signal on the conductor 60 by energizing it.

[0046] In response, the memory cell 58 controlled by the conductor 60 may output a signal indicative of the data that the memory cell 58 stores, and the signal may be conveyed by the output conductor 56. In this case, because the letter "e" is not one of the terms specified by the search-term cell 54, it does not match the search term, and the search-term cell 54 outputs a 0 value, indicating no match was found.

[0047] In FIG. 5, the data stream 12 presents a character "b". Again, the row decoder 28 may interpret this term as an address, and the row decoder 28 may select the conductor 62. In response, the memory cell 58 representing the letter "b" outputs its stored value, which in this case is a 1, indicating a match.

[0048] The search-term cells 54 may be configured to search for more than one term at a time. Multiple memory cells 58 may be programmed to store a 1, specifying a search term that matches with more than one term. For instance, the memory cells 58 representing the letters lowercase "a" and uppercase "A" may be programmed to store a 1, and the search-term cell 54 may search for either term. In another example, the search-term cell 54 may be configured to output a match if any character is received. All of the memory cells 58 may be programmed to store a 1, such that the search-term cell 54 may function as a wildcard term in a search criterion.

[0049] FIGS. 6-8 depict the recognition module 22 searching according to a multi-term search criterion, e.g., for a word. Specifically, FIG.
6 illustrates the recognition module 22 detecting the first letter of a word, FIG. 7 illustrates detection of the second letter, and FIG. 8 illustrates detection of the last letter.

[0050] As illustrated by FIG. 6, the recognition module 22 may be configured to search for the word "big". Three adjacent feature cells 63, 64, and 66 are illustrated. The feature cell 63 is configured to detect the letter "b". The feature cell 64 is configured to detect the letter "i". And the feature cell 66 is configured to both detect the letter "g" and indicate that the search criterion is satisfied.

[0051] FIG. 6 also depicts additional details of the detection array 34. The detection array 34 may include a detection cell 68 in each of the feature cells 63, 64, and 66. Each of the detection cells 68 may include a memory cell 70, such as one of the types of memory cells described above (e.g., a flip-flop), that indicates whether the feature cell 63, 64, or 66 is active or inactive. The detection cells 68 may be configured to output a signal to the activation-routing matrix 36 indicating whether the detection cell both is active and has received a signal from its associated search-term cell 54 indicating a match. Inactive feature cells 63, 64, and 66 may disregard matches. Each of the detection cells 68 may include an AND gate with inputs from the memory cell 70 and the output conductor 56. The output of the AND gate may be routed to both the detection bus 38 and the activation-routing matrix 36, or one or the other.

[0052] The activation-routing matrix 36, in turn, may selectively activate the feature cells 63, 64, and 66 by writing to the memory cells 70 in the detection array 34. The activation-routing matrix 36 may activate feature cells 63, 64, or 66 according to the search criterion and which search term is being searched for next in the data stream 12.

[0053] In FIG. 6, the data stream 12 presents the letter "b".
In response, each of the feature cells 63, 64, and 66 may output a signal on their output conductor 56, indicating the value stored in the memory cell 58 connected to the conductor 62, which represents the letter "b". The detection cells 68 may then each determine whether they have received a signal indicating a match and whether they are active. Because the feature cell 63 is configured to detect the letter "b" and is active, as indicated by its memory cell 70, the detection cell 68 in the feature cell 63 may output a signal to the activation-routing matrix 36 indicating that the first search term of the search criterion has been matched.

[0054] As illustrated by FIG. 7, after the first search term is matched, the activation-routing matrix 36 may activate the next feature cell 64 by writing a 1 to the memory cell 70 in its detection cell 68. The activation-routing matrix 36 may also maintain the active state of the feature cell 63, in case the next term satisfies the first search term, e.g., if the sequence of terms "bbig" is received. The first search term of search criteria may be maintained in an active state during a portion or substantially all of the time during which the data stream 12 is searched.

[0055] In FIG. 7, the data stream 12 presents the letter "i" to the recognition module 22. In response, each of the feature cells 63, 64, and 66 may output a signal on their output conductor 56, indicating the value stored in the memory cell 58 connected to the conductor 72, which represents the letter "i". The detection cells 68 may then each determine whether they have received a signal indicating a match and whether they are active. Because the feature cell 64 is configured to detect the letter "i" and is active, as indicated by its memory cell 70, the detection cell 68 in the feature cell 64 may output a signal to the activation-routing matrix 36 indicating that the next search term of its search criterion has been matched.
[0056] Next, the activation-routing matrix 36 may activate the feature cell 66, as illustrated by FIG. 8. Before evaluating the next term, the feature cell 64 may be deactivated, e.g., by its detection cell 68 resetting its memory cell 70 between detection cycles, or by the activation-routing matrix 36 deactivating the feature cell 64. The feature cell 63 may remain active in case the data stream 12 presents the first term of the search criterion again.

[0057] In FIG. 8, the data stream 12 presents the term "g" to the row decoder 28, which selects the conductor 74 representing the term "g". In response, each of the feature cells 63, 64, and 66 may output a signal on their output conductor 56, indicating the value stored in the memory cell 58 connected to the conductor 74, which represents the letter "g". The detection cells 68 may then each determine whether they have received a signal indicating a match and whether they are active. Because the feature cell 66 is configured to detect the letter "g" and is active, as indicated by its memory cell 70, the detection cell 68 in the feature cell 66 may output a signal to the activation-routing matrix 36 indicating that the last search term of its search criterion has been matched.

[0058] The end of a search criterion or a portion of a search criterion may be identified by the activation-routing matrix 36 or the detection cell 68. These components 36 or 68 may include memory indicating whether their feature cell 63, 64, or 66 specifies the last search term of a search criterion or a component of a search criterion. For example, a search criterion may specify all sentences in which the word "cattle" occurs twice, and the recognition module may output a signal indicating each occurrence of "cattle" within a sentence to the aggregation module, which may count the occurrences to determine whether the search criterion is satisfied.
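The walkthrough of FIGS. 6-8 can be summarized with a behavioral sketch. All names are illustrative assumptions; in hardware, every cell evaluates the term in parallel, a cell fires only when it is both active and matching, a firing cell activates its successor, and the first cell is maintained active:

```python
# Behavioral sketch of FIGS. 6-8: feature cells chained into one criterion.
# fired[j] models the detection cell's AND of "active" and "match"; the
# next active vector models the activation-routing matrix writing the
# memory cells 70. Names are not from the specification.
def run_criterion(word, stream):
    """Return stream positions at which the last feature cell fires."""
    active = [True] + [False] * (len(word) - 1)  # first cell always active
    satisfied = []
    for i, term in enumerate(stream):
        fired = [active[j] and term == word[j] for j in range(len(word))]
        # activation routing: a firing cell activates its successor,
        # and the first cell's active state is maintained throughout
        active = [True] + fired[:-1]
        if fired[-1]:
            satisfied.append(i)
    return satisfied
```

Running the "big" criterion over the stream "bbig" still reports a satisfaction at position 3, illustrating why [0054] keeps the feature cell 63 active after its first match.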
[0059] Feature cells 63, 64, or 66 may be activated under several conditions. A feature cell 63, 64, or 66 may be "always active", meaning that it remains active during all or substantially all of a search. An example of an always active feature cell 63, 64, or 66 is the first feature cell of the search criterion, e.g., feature cell 63.

[0060] A feature cell 63, 64, or 66 may be "active when requested", meaning that the feature cell 63, 64, or 66 is active when some condition precedent is matched, e.g., when the preceding search terms in a search criterion are matched. Examples are the feature cell 64, which is active when requested by the feature cell 63 in FIGS. 6-8, and the feature cell 66, which is active when requested by the feature cell 64.

[0061] A feature cell 63, 64, or 66 may be "self activated", meaning that once it is activated, it activates itself as long as its search term is matched. For example, a self activated feature cell having a search term that is matched by any numerical digit may remain active through the sequence "123456xy" until the letter "x" is reached. Each time the search term of the self activated feature cell is matched, it may activate the next feature cell in the search criterion. Thus, an always active feature cell may be formed from a self activating feature cell and an active when requested feature cell: the self activating feature cell may be programmed with all of its memory cells 58 storing a 1, and it may repeatedly activate the active when requested feature cell after each term. In some embodiments, each feature cell 63, 64, and 66 may include a memory cell in its detection cell 68 or in the activation-routing matrix 36 that specifies whether the feature cell is always active, thereby forming an always active feature cell from a single feature cell.

[0062] FIG. 9 depicts an example of a recognition module 22 configured to search according to a first search criterion 75 and a second search criterion 76 in parallel.
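The "self activated" behavior of paragraph [0061] can be sketched as follows. This is a software illustration with assumed names; the search term here is any numerical digit, matching the "123456xy" example above:

```python
# Sketch of a self activated feature cell: once activated, it stays
# active as long as each incoming term matches its search term, and it
# requests activation of the next cell after every match.
def self_activated_run(stream):
    active = True           # assume the cell was just activated
    next_cell_requests = []
    for i, term in enumerate(stream):
        fired = active and term.isdigit()   # search term: any digit
        if fired:
            next_cell_requests.append(i)    # activate next cell in the criterion
        active = fired                      # re-arm only while matching
    return next_cell_requests
```

Over the sequence "123456xy", the cell fires on each of the six digits and goes inactive when the letter "x" is reached, as described in [0061].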
In this example, the first search criterion 75 specifies the word "big", and the second search criterion 76 specifies the word "cab". A signal indicative of the current term from the data stream 12 may be communicated to feature cells in each search criterion 75 and 76 at generally the same time. Each of the input conductors 37 spans both of the search criteria 75 and 76. As a result, in some embodiments, both of the search criteria 75 and 76 may evaluate the current term generally simultaneously. This is believed to speed the evaluation of search criteria. Other embodiments may include more feature cells configured to evaluate more search criteria in parallel. For example, some embodiments may include more than 100, 500, 1000, 5000, or 10,000 feature cells operating in parallel. These feature cells may evaluate hundreds or thousands of search criteria generally simultaneously.

[0063] Search criteria with different numbers of search terms may be formed by allocating more or fewer feature cells to the search criteria. Simple search criteria may consume fewer resources in the form of feature cells than complex search criteria. This is believed to reduce the cost of the pattern-recognition processor 14 (FIG. 2) relative to processors with a large number of generally identical cores, all configured to evaluate complex search criteria.

[0064] FIGS. 10-12 depict both an example of a more complex search criterion and features of the activation-routing matrix 36. The activation-routing matrix 36 may include a plurality of activation-routing cells 78, groups of which may be associated with each of the feature cells 63, 64, 66, 80, 82, 84, and 86. For instance, each of the feature cells may include 5, 10, 20, 50, or more activation-routing cells 78. The activation-routing cells 78 may be configured to transmit activation signals to the next search term in a search criterion when a preceding search term is matched.
The activation-routing cells 78 may be configured to route activation signals to adjacent feature cells or other activation-routing cells 78 within the same feature cell. The activation-routing cells 78 may include memory that indicates which feature cells correspond to the next search term in a search criterion.

[0065] As illustrated by FIGS. 10-12, the recognition module 22 may be configured to search according to more complex search criteria than criteria that specify single words. For instance, the recognition module 22 may be configured to search for words beginning with a prefix 88 and ending with one of two suffixes 90 or 92. The illustrated search criterion specifies words beginning with the letters "c" and "l" in sequence and ending with either the sequence of letters "ap" or the sequence of letters "oud". This is an example of a search criterion specifying multiple target expressions, e.g., the word "clap" or the word "cloud".

[0066] In FIG. 10, the data stream 12 presents the letter "c" to the recognition module 22, and feature cell 63 is both active and detects a match. In response, the activation-routing matrix 36 may activate the next feature cell 64. The activation-routing matrix 36 may also maintain the active state of the feature cell 63, as the feature cell 63 is the first search term in the search criterion.

[0067] In FIG. 11, the data stream 12 presents a letter "l", and the feature cell 64 recognizes a match and is active. In response, the activation-routing matrix 36 may transmit an activation signal both to the first feature cell 66 of the first suffix 90 and to the first feature cell 82 of the second suffix 92. In other examples, more suffixes may be activated, or multiple prefixes may activate one or more suffixes.

[0068] Next, as illustrated by FIG. 12, the data stream 12 presents the letter "o" to the recognition module 22, and the feature cell 82 of the second suffix 92 detects a match and is active.
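The prefix/suffix fan-out of FIGS. 10-12 can be sketched in software. This simplified model (all names are assumptions, and overlapping prefixes are not tracked) activates the first cell of every suffix when the prefix completes, and lets non-matching suffix branches die out independently:

```python
# Behavioral sketch of the FIGS. 10-12 routing: one prefix feeding
# several suffixes. Each element of "active" is a (suffix index,
# position) branch; a branch that fails to match simply dies out.
def search_branching(prefix, suffixes, stream):
    active = set()        # suffix branches currently active
    ppos = 0              # progress through the prefix cells
    hits = []
    for i, term in enumerate(stream):
        nxt = set()
        for s, pos in active:
            if term == suffixes[s][pos]:          # this suffix branch advances
                if pos + 1 == len(suffixes[s]):
                    hits.append((i, prefix + suffixes[s]))
                else:
                    nxt.add((s, pos + 1))
            # a non-matching branch dies out, as in FIG. 12
        if ppos < len(prefix) and term == prefix[ppos]:
            ppos += 1
            if ppos == len(prefix):               # prefix matched: fan out to
                nxt.update((s, 0) for s in range(len(suffixes)))
                ppos = 0                          # every suffix's first cell
        else:
            ppos = 1 if term == prefix[0] else 0  # first prefix cell stays active
        active = nxt
    return hits
```

Searching the stream "xclouds" with prefix "cl" and suffixes "ap" and "oud" activates both suffixes after "cl", lets the "ap" branch die on the "o", and completes "cloud" at position 5.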
In response, the activation-routing matrix 36 may activate the next feature cell 84 of the second suffix 92. The search for the first suffix 90 may die out, as the feature cell 66 is allowed to go inactive. The steps illustrated by FIGS. 10-12 may continue through the letters "u" and "d", or the search may die out until the next time the prefix 88 is matched.

[0069] FIG. 13 illustrates an embodiment of the matching-data reporting module 25. The matching-data reporting module 25 may include a counter 94, a circular buffer 96, a match event table 98, and a controller 100. The controller 100 may connect to the counter 94, the circular buffer 96, and the match event table 98. The controller 100 may also connect to the data stream 12, the detection bus 38, and the output bus 26.

[0070] The counter 94 may increment or decrement a count each time the data stream 12 presents a term, e.g., once per search cycle, or once per clock cycle, which is not to suggest that a search cycle cannot have the same duration as a clock cycle. The counter 94 may be a synchronous counter having bits that change state at about the same time, or it may be an asynchronous counter. The counter 94 may be a free-running counter that is typically not stopped during normal operation, or it may be a counter that can be stopped and reset, e.g., with a command from the controller 100. The counter 94 may be a modulo-n counter that repeats after every n terms from the data stream 12. In other embodiments, the counter 94 may be a pattern generator that outputs a repeatable sequence of values, e.g., a linear feedback shift register. The counter 94 may also be a clock that outputs a timestamp periodically or upon request. The counter 94 may be configured to count up to a certain number before repeating, e.g., by resetting. The number of increments before the counter 94 resets may be about equal to the number of terms stored by the circular buffer 96 or the number of bits stored by the circular buffer 96.
For example, the counter 94 may be a binary counter configured to count with less than 21 digits, more than 21 digits, e.g., more than about 22 digits, more than about 23 digits, or more than about 25 digits.

[0071] The circular buffer 96 may include a plurality of terms cells each configured to store one term from the data stream 12. As the data stream 12 is received, the controller 100 may write terms from the data stream 12 to the circular buffer 96 based on the value of the counter 94. The circular buffer 96 may include about the same number of terms cells as the number of increments that the counter 94 undergoes before resetting. The value of the counter 94 may be used as an address in the circular buffer 96 when storing a term presented by the data stream 12. As a result, in some embodiments, the data stream 12 may repeatedly fill the circular buffer 96, overwriting terms stored in the terms cells in the circular buffer 96 after the counter 94 resets. In other embodiments, other types of first-in, first-out buffers may be used, such as a linked list. The size of the circular buffer 96 may be selected based on the largest expected or desired amount of matching data that will be reported to the CPU 20 (FIG. 1). The circular buffer 96 may be configured to store less than about 2 MB, more than about 2 MB, more than about 4 MB, more than about 8 MB, more than about 16 MB, or more than about 32 MB.

[0072] The match event table 98 may store the value of the counter 94 each time the detection bus 38 signals the start of a match or the completion of a match. The match event table 98 may be formed in the recognition module 22 (FIG. 2). For example, each of the feature cells 30 may include memory for storing the count at the start of a match and memory for storing the count at the end of the match.
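The counter-addressed buffering of paragraph [0071] can be sketched with a few lines of code. The class and method names are illustrative assumptions; the point is that the counter value is the write address and wraps modulo n, so the oldest terms are overwritten:

```python
# Sketch of the counter-addressed circular buffer: each incoming term is
# written at the cell the counter points to, and the counter wraps
# modulo n, mirroring the overwrite behavior described in [0071].
class CircularBuffer:
    def __init__(self, n):
        self.cells = [None] * n          # the "terms cells"
        self.count = 0                   # the modulo-n counter

    def store(self, term):
        self.cells[self.count] = term    # counter value used as the address
        self.count = (self.count + 1) % len(self.cells)
```

With a four-cell buffer, storing the six terms "abcdef" leaves the cells holding "e", "f", "c", "d" with the counter pointing at cell 2, since the fifth and sixth terms overwrite the first two.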
In other embodiments, the feature cells 30 may be grouped together, e.g., in groups of four or eight, and memory for the group of feature cells may store the count when one of the feature cells 30 is matched at the start of a search criterion or when one of the feature cells 30 in the group is matched at the end of the search criterion.

[0073] The match event table 98 also may be separate from the recognition module 22. Each of the feature cells 30 or groups of feature cells 30 may report a start of a criterion being satisfied or the satisfaction of the criterion to the controller 100 (FIG. 13), and the match event table 98 may store data that indicates the value of the counter 94, whether the satisfaction of the criterion is starting or is completed, and whether records of a satisfaction of a criterion that has started are safe to overwrite. The value of the counter 94 at the start or completion of a criterion being satisfied may be referred to as a "match event pointer." Each search criterion may be assigned to several addresses in the match event table 98 in case the search criterion begins to be satisfied while a previous criterion satisfaction is still in progress. Alternatively, each search criterion may be assigned an identification number that is stored in the match event table 98 along with the previously mentioned data. The match event table 98 may include a bit indicating whether the criterion satisfaction that has started is safe to overwrite, either because the criterion satisfaction has been completed and reported, or because the criterion satisfaction has been unmatched by subsequent data.

[0074] The controller 100 or other components of the pattern-recognition processor 14 (FIG. 2) may report completed criterion satisfactions to the CPU 20. In response, the CPU 20 may request a copy of the data that produced the criterion satisfaction. This request may be transmitted through the output bus 26 to the controller 100.
The controller 100 may be configured to read the matched data from the circular buffer 96 based on the start and stop counts from the match event table 98 and report the matching data to the CPU 20 (FIG. 1) on the output bus 26. In other embodiments, the start count and the stop count of the criterion satisfaction may be reported to the CPU 20, and the CPU 20 may retrieve the appropriate data from the circular buffer 96. [0075] The operation of the matching-data reporting module 25 is illustrated in greater detail by FIGS. 14-19. As illustrated by FIG. 14, the circular buffer 96 may be represented as a circle of terms cells 102 with the counter 94 pointing to one of the terms cells 102. The phrase "term cell" refers to a group of memory cells configured to store a single term. As the data stream 12 is received, each term may be stored in one of the terms cells 102, specifically the terms cell 102 to which the counter 94 is pointing. As each term is received from the data stream 12, the counter 94 may increment the count by one and point to the next term cell 102. [0076] FIG. 15 illustrates the circular buffer 96 storing the first term presented by the data stream 12. The letter "e" is received from the data stream 12 and stored by the terms cell 102'. Before the next term is presented by the data stream 12, or while the next term is presented by the data stream 12, the counter 94 increments to point toward the next terms cell 102". This process may repeat with each successive term presented by the data stream 12 as the counter 94 increments to address each successive terms cell 102 in the circular buffer 96. When the counter 94 resets, the circular buffer 96 may begin to overwrite data stored in the terms cells 102. [0077] While the circular buffer 96 is storing terms from the data stream 12, the data stream 12 may be searched. As described above, other portions of the pattern-recognition processor 14 (FIG.
2) may detect the beginning and the satisfaction of various search criteria. Matches produced by this searching may affect the operation of the matching-data reporting module 25. To illustrate this relationship, the operation of the matching-data reporting module 25 (FIG. 13) will be described with an example in which the pattern-recognition processor 14 (FIG. 2) searches the data stream 12 shown in FIGS. 14-19 for two different search criteria. The first criterion, referred to in the figures as criterion A, is the word "virus" followed by the word "download" within the same sentence. Accordingly, the pattern-recognition processor 14 may search for the word "virus" and upon detecting this word, search for both the word "download" and a character that indicates the end of a sentence, such as a period. If the word "download" occurs before the character that indicates the end of a sentence, then criterion A is matched. The second search criterion, referred to in the figures as criterion B, is the word "download" occurring within four words of the word "malware". Once the word "download" is matched, the pattern-recognition processor 14 may both count the number of words, e.g., by counting inter-word characters, and search for the word "malware". Alternatively, if the word "malware" is matched first, the pattern-recognition processor 14 may both count the number of words, e.g., by counting inter-word characters, and search for the word "download". If the word "malware" occurs before four inter-word characters are presented, then criterion B is satisfied. [0078] FIG. 16 illustrates the response of the matching-data reporting module 25 to the first search term in criterion A being matched: the letter "v". As the data stream 12 presents the letter "v", the first feature cell 30 (FIG. 2) of criterion A records the value of the counter 94 in the match event table 98.
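Criterion B, as defined above, can be modeled with ordinary string operations. This is an illustrative sketch only: the pattern-recognition processor 14 evaluates such criteria one term at a time in hardware, and the function name and whitespace-based word splitting here are assumptions:

```python
def criterion_b_satisfied(text):
    """Return True if "download" occurs within four words of "malware".

    Word distance is judged by inter-word characters (here, whitespace),
    loosely following the description above, and either word may come first.
    """
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    for i, first in enumerate(words):
        # Look at most four words ahead of the current word.
        for second in words[i + 1:i + 5]:
            if {first, second} == {"download", "malware"}:
                return True
    return False


criterion_b_satisfied("please download the new malware scanner")   # True
criterion_b_satisfied("download one two three four five malware")  # False
```

The second call fails because more than four words separate the two terms, mirroring the count of inter-word characters that un-matches the criterion in the description.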
The first feature cell 30 may output a signal on the detection bus 38 indicating that it is active, has received a match, and is the first feature cell of search criterion A. This information may be stored in the matching-data reporting module 25 (FIG. 13) along with the value of the counter 94. The match event represented in FIG. 16 has a match event pointer 104 corresponding with the value of the counter 94 at the start of the match. [0079] As illustrated by FIG. 17, the circular buffer 96 may continue to store terms from the data stream 12 as they are received. When the data stream 12 reaches the letter "d" in the word "and", the first feature cell 30 specifying criterion B may indicate the beginning of a match, as the letter "d" is the first letter of the word "download". In response, the match event table 98 may store the value of the counter 94, either in memory associated with the first feature cell 30 in criterion B or in separate memory. If separate memory is used, data that identifies the second criterion B may also be stored. [0080] When the space after the word "and" is presented by the data stream 12, criterion B may be unmatched, as the second search term in criterion B searches for the letter "o", and not a space. The un-matching of criterion B may be recorded in the match event table 98 by overwriting the match event pointer 106 or designating the match event pointer 106 as being overwriteable. As mentioned above, the match event table 98 may include one or more bits of memory that indicate whether a match event pointer points to a match in progress or a failed match that is safe to overwrite. [0081] Another match event pointer 108 may be stored when the data stream 12 presents the next letter "d", at the beginning of the word "download". Again, the value of the counter 94 may be stored in the match event table 98 to form the match event pointer 108.
The match event pointer 108 may overwrite the match event pointer 106, or the match event pointer 108 may be stored in different memory in the match event table 98. [0082] At this stage in the present example, the data stream 12 has begun satisfying two different search criteria at the same time. Criterion A is searching for the rest of the word "download", and criterion B is searching for the rest of the word "download" followed by the word "malware" within the next four words. The circular buffer 96 may store a single copy of the data stream 12, and the match event table 98 may indicate which portions of this copy have satisfied or are in the process of satisfying the different criteria. The different matches of part of criterion B are distinguished by the number following the letter B, e.g., B1 and B2. [0083] As illustrated by FIG. 18, the circular buffer 96 may continue to store terms from the data stream 12 while the pattern-recognition processor 14 (FIG. 2) searches the data stream 12. The last letter of the word "download" may cause the match event table 98 to record another match event 110. The ending letter "d" satisfies the first search term of criterion B. As a result, the match event table 98 may record the value of the counter 94 when this term is presented. When the space after the letter "d" is presented by the data stream 12, this criterion satisfaction fails, and the match event 110 may be designated in the match event table 98 as indicating the start of a failed criterion satisfaction and being safe to overwrite. [0084] As mentioned above, the criteria may be evaluated in parallel. When the space after the word "download" is received, the last search term in criterion A is matched. In response, the final feature cell 30 (FIG. 2) that specifies criterion A may record the value of the counter 94 in the match event table 98.
The final feature cell 30 may output a signal on the detection bus 38 indicating that it is matched, it is active, and it is the last search term of criterion A. The match event table 98 may store a new match event 112 by storing the value of the counter 94 and, in some embodiments, the identity of the criteria. [0085] If more than one criterion satisfaction for criterion A is in progress, the match event table 98 may correlate the match event 112 with the earliest match event in the match event table 98 for criterion A. In other embodiments, the match event table 98 may include data for each match event indicating both which criterion is being matched (e.g., A or B) and which match of the criterion caused the match event (e.g., the first or second match of B), as some criteria may be in multiple stages of being satisfied at the same time. Some matching data sets for a given criterion may be shorter than others for the same criterion, so storing data identifying which criterion satisfaction of a criterion caused a match event may facilitate correlating beginning match events with completed match events. For example, a search criterion may specify two different ways to be satisfied, such as: 1) a word that starts with the letter "m" and ends with "e"; or 2) a word including the sequence of letters "est". A data stream that presents the sequence "milestones" matches both ways to satisfy the criterion. In such an example, the satisfaction that starts second is finished first. Thus, including match identifiers in the match event table 98 may facilitate identifying the matching portion of the data stream 12. [0086] When a criterion is satisfied, the aggregation module 24 (FIG. 2) may report to the CPU 20 through the output bus 26 that a search criterion is satisfied. The aggregation module 24 may also report which search criterion is satisfied upon being further interrogated by the CPU 20, or the aggregation module 24 may identify the criterion without being prompted.
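The per-event bookkeeping described in paragraphs [0080] and [0085] — criterion identity, which satisfaction of that criterion is in progress, and whether the entry is safe to overwrite — can be sketched as a small table of records. The field names and the count values below are illustrative assumptions, not values from this description:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MatchEvent:
    """One modeled row of the match event table 98."""
    criterion: str                    # which criterion, e.g., "A" or "B"
    satisfaction_id: int              # which match of it, e.g., B1 vs. B2
    start_count: int                  # counter 94 value at the match start
    stop_count: Optional[int] = None  # counter 94 value at completion
    safe_to_overwrite: bool = False   # set on failure or after reporting


table = [
    MatchEvent("B", 1, start_count=17),  # "d" in "and": later un-matched
    MatchEvent("B", 2, start_count=21),  # "d" in "download"
]
table[0].safe_to_overwrite = True  # the space after "and" fails match B1
table[1].stop_count = 35           # "malware" later completes match B2
```

Keeping a satisfaction identifier on each row lets a completed event be correlated with the correct start event even when one criterion is in several stages of being satisfied at once, as in the "milestones" example above.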
[0087] When the CPU 20 receives a signal indicating a criterion satisfaction, the CPU 20 may request the matching data from the matching-data reporting module 25 (FIG. 13). The request may be transmitted to the controller 100 via the output bus 26. In response, the controller 100 may transmit the data in the circular buffer 96 that is between the starting match event pointer and the ending match event pointer. In the example illustrated by FIG. 18, the matching-data reporting module 25 may report the data between the match event pointer 104, indicating the start of the match of criterion A, and the match event pointer 112, indicating the end of the match. [0088] The CPU 20 may use this data for a variety of different ends. For example, the CPU 20 may create a log of matching data, or the CPU 20 may report the matching data to a server over a network, e.g., a server operated by an entity that sells subscriptions to search criteria. In some embodiments, the CPU 20 may report the matching data or data based on the matching data to a system administrator or an entity responsible for enforcing copyrighted content that includes the matching data.[0089] Once the matching data is reported to the CPU 20, or once the CPU 20 indicates that it is not requesting the matching data, the entries for the match event pointers 104 and 112 in the match event table 98 may be overwritten or designated as being safe to be overwritten. [0090] Though criterion A is satisfied, criterion B is still midway through a satisfaction in the state illustrated by FIG. 19. Match event pointer 108 has not been designated as indicating the start of a failed match. Thus, as the circular buffer 96 continues to store terms from the data stream 12, the pattern-recognition processor 14 (FIG. 2) may continue to search for the word "malware" within the four words following "download".
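Reading the matching data back out, as the controller 100 does in paragraph [0087], amounts to a modular-arithmetic slice of the circular buffer between the two recorded counts. This is a sketch under the assumption that the matching data has not yet been overwritten; the function name is illustrative:

```python
def read_match(cells, start_count, stop_count):
    """Return the terms between a start and a stop match event pointer.

    `cells` models the terms cells of circular buffer 96; the two counts
    are counter 94 values recorded in the match event table 98. The modulo
    handles matches that wrap past the end of the buffer.
    """
    size = len(cells)
    length = (stop_count - start_count) % size + 1  # inclusive of both ends
    return [cells[(start_count + i) % size] for i in range(length)]


# A buffer whose contents wrap: the match "abcdefg" starts at cell 3,
# runs off the end of the buffer, and finishes at cell 1.
cells = list("fghabcde")
read_match(cells, 3, 1)  # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
```

In the alternative embodiment described above, the CPU 20 would receive only the two counts and perform the equivalent extraction itself.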
[0091] Further, each time the letter "d" is presented by the data stream 12, the match event table 98 may store another match event pointer, as indicated by match event pointers 114 and 116. Each of these match event pointers 114 and 116 may be designated as indicating failed matches when subsequent, non-matching terms are presented by the data stream 12. [0092] After the word "malware" is presented, criterion B may be fully satisfied. In response, the match event table 98 may store the match event pointer 118. The match may be reported to the CPU 20, and the CPU 20 may request the matching data from the matching-data reporting module 25. The controller 100 may transmit the portion of the circular buffer 96 between match event pointer 108 and match event pointer 118. Once this data is reported, the match event pointers 108 and 118 may be designated as being safe to overwrite. In other embodiments, the match event pointers 108 and 118 may persist for some time after the completion of a match. For example, the CPU 20 may have a certain number of search cycles to determine whether to request matching data before the match event pointers 108 and 118 are overwritten or are designated as being safe to overwrite. [0093] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
A device includes a die (100) with a protective overcoat (128) and a substrate (104), the substrate (104) comprising a first region (108) and a second region (112) that are spaced apart. The device also includes an isolation dielectric (124) between the protective overcoat (128) and the die (100). A pre-metal dielectric (PMD) barrier (120) is between the isolation dielectric (124) and the substrate (104), the PMD barrier (120) having a first region (136) that contacts the first region (108) of the substrate and a second region (140) that contacts the second region (112) of the substrate, the first region (136) and the second region (140) of the PMD barrier (120) being spaced apart. A through trench (132) filled with a polymer dielectric (134) extends between the first region (108) and the second region (112) of the substrate (104), and between the first region (136) and the second region (140) of the PMD barrier (120) to contact the isolation dielectric (124).
CLAIMSWhat is claimed is:1. A device comprising: a die with a protective overcoat and a substrate, the substrate comprising a first region and a second region that are spaced apart; an isolation dielectric between the protective overcoat and the die; a pre-metal dielectric (PMD) barrier between the isolation dielectric and the substrate, the PMD barrier having a first region that contacts the first region of the substrate and a second region that contacts the second region of the substrate, the first region and the second region of the PMD barrier being spaced apart; and a through trench filled with a polymer dielectric that extends between the first region and the second region of the substrate, and between the first region and the second region of the PMD barrier to contact the isolation dielectric.2. The device of claim 1, wherein an end of the through trench proximate the isolation dielectric comprises rounded corners.3. The device of claim 2, wherein the through trench protrudes into the isolation dielectric.4. The device of claim 2, wherein the polymer dielectric has a first coefficient of thermal expansion, and the PMD barrier is formed with material with a second coefficient of thermal expansion, the first coefficient of thermal expansion being greater than the second coefficient of thermal expansion.5. The device of claim 2, wherein the PMD barrier is formed with silicon nitride, and the polymer dielectric is parylene.6. The device of claim 2, further comprising metal patches implanted in the isolation dielectric.7. The device of claim 2, further comprising a metal layer extending over the protective overcoat.8. The device of claim 2, wherein the end of the through trench is a first end, the through trench further comprising a notch at a second end of the through trench distal from the first end.9. The device of claim 2, further comprising a void embedded in the polymer dielectric.10. 
The device of claim 9, wherein the polymer dielectric comprises a coating of silicon dioxide and parylene, wherein the void is embedded within the parylene.11. The device of claim 2, further comprising: tiered metal patches implanted in the isolation dielectric, wherein each array of tiered metal patches comprises multiple layers of metal patches of varying lengths; and an array of shallow trenches formed near an interface of the trench filled with the polymer dielectric and the isolation dielectric.12. The device of claim 1, further comprising a first contact on a first corner of an end of the through trench proximate the isolation dielectric and a second contact on a second corner of the end of the through trench proximate the isolation dielectric.13. The device of claim 12, wherein the first contact and the second contact comprise at least one of tungsten, aluminum or copper.14. The device of claim 1, further comprising deep trenches in the through trench, wherein the polymer dielectric extends between the deep trenches in the through trench.15. The device of claim 1, wherein the die further comprises a third region spaced apart from the first region and the second region, and the trench traverses a location on the die where the first region, the second region and the third region are separated by the trench.16. The device of claim 15, wherein the trench at the location is one of a curved shaped connection and a Y shaped connection.17.
A method for forming a device, the method comprising: depositing a patterned coat of resist on a wafer, wherein a metallization stack is on a first surface of the wafer, the metallization stack comprising a pre-metal dielectric (PMD) barrier and an isolation dielectric; etching through trenches in the wafer and removing the coat of resist, such that the through trench protrudes into the isolation dielectric of the metallization stack; depositing a polymer dielectric on a second surface of the wafer to fill the through trenches; and singulating dies from the wafer, such that the dies include a through trench.18. The method of claim 17, wherein a void is formed in the through trenches.19. The method of claim 17, wherein the metallization stack includes contacts for corners of the through trenches.20. The method of claim 17, wherein the metallization stack includes metal patches implanted in the isolation dielectric.
THROUGH TRENCH ISOLATION FOR DIE[0001] This description relates to dies. More particularly, this description relates to dies with a through trench for isolation between regions of the dies.BACKGROUND[0002] In electronics, a wafer (also called a slice) is a thin slice of semiconductor, such as a crystalline silicon (c-Si), used for the fabrication of integrated circuits (ICs). The wafer serves as the substrate for microelectronic devices built in and upon the wafer. A wafer undergoes many microfabrication processes, such as doping, ion implantation, etching, thin-film deposition of various materials and photolithographic patterning. Finally, the individual dies that include microcircuits are separated by wafer dicing and packaged as an integrated circuit.[0003] Parylene is an organic polymer that includes hydrogen (H) and carbon (C) atoms. Parylene is hydrophobic and resistant to most chemicals. Coatings of parylene are often applied to electronic circuits and other equipment as electrical insulation, moisture barriers, or protection against corrosion and chemical attack. Parylene coatings are applied by chemical vapor deposition in an atmosphere of the monomer para-xylylene.SUMMARY[0004] A first example relates to a device that includes a die with a protective overcoat and a substrate, the substrate having a first region and a second region that are spaced apart. The device also includes an isolation dielectric between the protective overcoat and the die. A pre-metal dielectric (PMD) barrier is between the isolation dielectric and the substrate, the PMD barrier having a first region that contacts the first region of the substrate and a second region that contacts the second region of the substrate, the first region and the second region of the PMD barrier being spaced apart. 
A through trench filled with a polymer dielectric extends between the first region and the second region of the substrate, and between the first region and the second region of the PMD barrier to contact the isolation dielectric.[0005] A second example relates to a method for forming a device. The method includes
depositing a patterned coat of resist on a wafer. A metallization stack is on a first surface of the wafer, the metallization stack comprising a pre-metal dielectric (PMD) barrier and an isolation dielectric. The method also includes etching through trenches in the wafer and removing the coat of resist, such that the through trench protrudes into the isolation dielectric of the metallization stack. The method further includes depositing a polymer dielectric on a second surface of the wafer to fill the through trenches and singulating dies from the wafer, such that the dies include a through trench.BRIEF DESCRIPTION OF THE DRAWINGS[0006] FIG. 1 illustrates a cross-section diagram of a region of a first example die employable for an integrated circuit (IC) package.[0007] FIG. 2 illustrates a cross-section diagram of a region of a second example die employable for an IC package.[0008] FIG. 3 illustrates a cross-section diagram of a region of a third example die employable for an IC package.[0009] FIG. 4 illustrates a cross-section diagram of a region of a fourth example die employable for an IC package.[0010] FIG. 5 illustrates a cross-section diagram of a region of a fifth example die employable for an IC package.[0011] FIG. 6 illustrates a cross-section diagram of a region of a sixth example die employable for an IC package.[0012] FIG. 7 illustrates an overhead view of a die employable for an IC package.[0013] FIG. 8 illustrates three examples of architecture for a trench at a tripoint of a die of an IC package.[0014] FIG. 9 illustrates an IC package that includes a die mounted in a first example IC package that is formed of a plastic molding material.[0015] FIG. 10 illustrates an IC package that includes a die mounted in a second example IC package that is formed of a plastic molding material.[0016] FIG. 11 illustrates an IC package that includes a die mounted on a printed circuit board (PCB).
[0017] FIG. 12 illustrates a first stage of a method for processing a wafer for singulation of dies. [0018] FIG. 13 illustrates a second stage of the method for processing a wafer for singulation of dies.[0019] FIG. 14 illustrates a third stage of the method for processing a wafer for singulation of dies.[0020] FIG. 15 illustrates a fourth stage of the method for processing a wafer for singulation of dies.[0021] FIG. 16 illustrates a fifth stage of the method for processing a wafer for singulation of dies.[0022] FIG. 17 illustrates a sixth stage of the method for processing a wafer for singulation of dies.[0023] FIG. 18 illustrates a seventh stage of the method for processing a wafer for singulation of dies.[0024] FIG. 19 illustrates an eighth stage of the method for processing a wafer for singulation of dies.[0025] FIG. 20 illustrates a flowchart of an example method for forming an IC package. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0026] This description relates to a die for an integrated circuit (IC) package (or a device, more generally). The die has been singulated from a wafer. The die has a metallization stack situated on a surface of the die. The metallization stack includes a protective overcoat (PO). The substrate includes a first region and a second region. The first region and the second region have circuit components embedded therein. Moreover, in some examples, the circuit components embedded in the first region have a first voltage rating and the circuit components in the second region have a second voltage rating, different from the first voltage rating. An isolation dielectric of the metallization stack is situated between the protective overcoat and the die, and a pre-metal dielectric (PMD) barrier of the metallization stack is situated between the isolation dielectric and the die. The PMD barrier has a first region that contacts the first region of the substrate and a second region that contacts the second region of the substrate. 
The first region and the second region of the PMD barrier and the substrate are spaced apart. The PMD barrier is formed of a
material such as silicon nitride (SiN), and the isolation dielectric is formed of a material such as silicon dioxide (SiO2).[0027] A through trench filled with a polymer dielectric extends between the first region and the second region of the substrate, and between the first region and the second region of the PMD barrier to contact the isolation dielectric. The through trench filled with polymer dielectric electrically isolates the first region of the die and the second region of the die to avoid unwanted electromagnetic interference (EMI). In some examples, the through trench protrudes into the isolation dielectric. Moreover, in some examples, the through trench has rounded corners on an end that is proximate the isolation dielectric. The polymer dielectric is formed with a material such as parylene. The polymer dielectric filling the through trench has a greater coefficient of thermal expansion than the PMD barrier. Further, the isolation dielectric has a lower elastic modulus than the PMD barrier.[0028] To singulate the die, the wafer is sawn into individual dies for device creation. In some examples, an instance of the individual die is employable in a product as-is, such as in a wafer scale package with solder. In other examples, a bump bond between a die and product board is added to couple the die and the product board. In some examples, there is a thermal cycle involved to attach a die to the wafer scale package or the product board. For instance, for solder connections, the die is attached to the product board with a reflow process where the product board and the die are placed together and then heated above the melting temperature of the solder (e.g., about 250 °C). In some examples, this attachment process has a short time (e.g., about 5 seconds or less) at peak temperature and a fast cool rate (e.g., about 1 second per 5 degrees Celsius or more) as well.
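The thermal mismatch described in paragraphs [0027] and [0028] can be put in rough numbers with the textbook constrained-expansion relation σ ≈ E·Δα·ΔT. All material values below are order-of-magnitude assumptions for a parylene-like trench fill beside a stiff SiN-like barrier, not values taken from this description:

```python
# Back-of-the-envelope constrained thermal expansion estimate:
#   sigma ~ E * (alpha_fill - alpha_substrate) * delta_T
# All numbers are illustrative assumptions, not measured values.
E_barrier = 250e9        # Pa: assumed modulus of a stiff SiN-like PMD barrier
alpha_parylene = 35e-6   # 1/K: assumed CTE of a parylene-like trench fill
alpha_silicon = 2.6e-6   # 1/K: assumed CTE of the silicon substrate
delta_T = 225.0          # K: roughly ambient (~25 C) to solder reflow (~250 C)

mismatch_strain = (alpha_parylene - alpha_silicon) * delta_T
stress = E_barrier * mismatch_strain  # stress concentrated in the stiffest layer
print(f"strain ~ {mismatch_strain:.1e}, stress ~ {stress / 1e6:.0f} MPa")
```

Even as a rough bound, the gigapascal-scale result for a fully constrained expansion illustrates why a compliant isolation dielectric that can deform and absorb part of the polymer's expansion, as described below, reduces the risk of cracking during reflow.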
[0029] In some examples, the multiple instances of the die, after singulation, are placed in a package with a protective plastic layer. In this case, the dies are then mounted on an interconnect (e.g., a lead frame). In some examples, an additional wire connection is employed (wire bond) to electrically couple the die to the interconnect. In other examples, the mounting between the die and the interconnect is the electrical connection (bump or solder bonding). In examples where wire bonding is employed, the die attach is an insulating layer (epoxy-like compound, sometimes with added ceramic fillers) or a conductive material, such as silver (Ag) filled epoxy, etc. Responsive to mounting the die on the interconnect (and attaching the wires, in some examples),
the die and the interconnect are encased in a molding (e.g., plastic) to form the IC package.[0030] In some examples, these IC package formation operations (or some subset thereof) include a thermal anneal which is between about 100 °C and about 200 °C. The first thermal anneal is employed to attach the die to the interconnect (e.g., lead frame). Also, in some examples, to mount the formed IC package, the IC package is soldered to the product board using the aforementioned reflow process.[0031] As noted, the isolation dielectric has a lower elastic modulus than the PMD barrier and also a larger thermal expansion coefficient. The isolation dielectric elastic modulus also has a temperature coefficient whereby the stiffness is lower at elevated temperatures, such as during the annealing of the IC package and during the reflow, than at ambient temperature (e.g., 0 °C to 32 °C). Thus, a force applied by the polymer dielectric filling the through trench due to thermal expansion causes the isolation dielectric to slightly deform. This deformation allows the polymer dielectric to partially compress the isolation dielectric and reduces a transfer of force on the PO. Accordingly, force on the PO due to thermal expansion of the polymer dielectric filling the through trench is curtailed. Curtailment of this force reduces the chances that components of the metallization stack will crack during solder reflow or at another time that the die is heated.[0032] FIG. 1 illustrates a cross-section diagram of a region of a die 100 employable for an integrated circuit (IC) package (a device). The die 100 has been singulated from a wafer. The die 100 includes a substrate 104. The substrate 104 is formed of a semiconductor material such as silicon (Si), in some examples. The substrate 104 includes a first region 108 and a second region 112. A metallization stack 116 overlays the substrate 104.
The metallization stack 116 includes a pre-metal dielectric (PMD) barrier 120, an isolation dielectric 124 and a protective overcoat (PO) 128. In some examples, there are additional layers between the PMD barrier 120 and the substrate, which include silicon dioxide (SiO2) in such examples. In some cases, the metallization is based on aluminum metal layers with tungsten (W) via layers or contacts to Si. In other cases, the metallization is copper with vias of the same material. The contact is likely tungsten, copper, ruthenium or another conductive material. The metallization layers might have diffusion barriers or adhesion layers like Ti, TiN, Ta, TaN, TiAl or TiAlN.[0033] The PMD barrier 120 contacts the substrate 104, the isolation dielectric 124 overlies the
PMD barrier 120 and the PO 128 overlies the isolation dielectric 124. In such a situation, the PO 128 protects the metallization stack 116 from exposure to the environment, except for exposed metal pads, which are not shown. The isolation dielectric 124 includes a PMD layer. In some examples, the PMD layer is formed of a silicon dioxide (SiO2) based material such as phosphorous silicate glass (PSG) or boron phosphorous silicate glass (BPSG). The PMD barrier 120 underlies this portion of the isolation dielectric 124 (the PMD layer), and this PMD barrier 120 includes materials such as silicon nitride (SiN), silicon oxynitride (SiON) and other silicon dioxide (SiO2) layers in some examples. In other examples, the isolation dielectric 124 is formed with SiN, fluorine-doped SiO2 or a low-K dielectric, such as silsesquioxane [RSiO3/2]n.[0034] The first region 108 and the second region 112 contain circuit components (e.g., transistors, resistors, capacitors, etc.) formed with standard processing techniques. The first region 108 and the second region 112 are spaced apart by a through trench 132. The through trench 132 provides dielectric isolation between the first region 108 and the second region 112 of the substrate 104. In this manner, the first region 108 and the second region 112 can have different power domains. As one example, the first region 108 has a high supply voltage (e.g., 80 V or more), wherein some of the components integrated with the first region 108 are rated for the high supply voltage. Conversely, in this example, the second region 112 has a low supply voltage (e.g., 10 V or less), wherein components integrated with the second region 112 of the substrate 104 are rated for the low supply voltage.
Inclusion of the through trench 132 prevents unwanted EMI leaking and/or shorts between the two power domains.[0035] The through trench 132 is filled with a polymer dielectric 134, such as parylene, including some of the functional groups of parylene, such as parylene-F, parylene-HT or parylene-AF4, parylene-VT4, parylene-N, or parylene-C. The polymer dielectric 134 filling the through trench 132 has a greater coefficient of thermal expansion than the PMD barrier 120. Stated differently, the polymer dielectric 134 filling the through trench 132 has a first coefficient of thermal expansion and the PMD barrier 120 has a second coefficient of thermal expansion, and the first coefficient of thermal expansion is greater than the second coefficient of thermal expansion.[0036] Also, in some examples, the isolation dielectric 124 has a lower elastic modulus than the
PMD barrier 120 formed of a material such as silicon nitride (SiN), silicon oxynitride (SiON) or silicon carbon oxynitride (SiCON). Stated differently, the isolation dielectric 124 has a first elastic modulus, and the PMD barrier 120 has a second elastic modulus, greater than the first elastic modulus. The elastic modulus of a material (e.g., the isolation dielectric 124 and the PMD barrier 120) characterizes the material's ability to resist being deformed elastically (e.g., non-permanently) when a stress is applied to the material. In examples where the die 100 is heated, the different materials forming the die 100 attempt to expand based on their respective coefficients of thermal expansion. In some examples, the materials are constrained by surrounding materials, and therefore experience force, which the materials of the die 100 also apply to the surrounding materials. In examples where the polymer dielectric 134 in the trench 132 has a greater coefficient of thermal expansion than the substrate 104, the polymer dielectric 134 expands more than the substrate 104 and therefore applies stress directed away from the trench 132. If there is a stiff layer (one with a greater elastic modulus) near the trench 132, the stress concentrates in that layer relative to the nearby material.[0037] The PMD barrier 120 includes a first region 136 and a second region 140. The first region 136 of the PMD barrier 120 contacts the first region 108 of the substrate 104 and the second region 140 contacts the second region 112 of the substrate 104. Also, the first region 136 and the second region 140 of the PMD barrier 120 are spaced apart. More particularly, a region of the PMD barrier 120 has been etched away.[0038] A first end 144 of the through trench 132 is proximate the isolation dielectric 124. The first end 144 extends in a first direction. The first end 144 includes a first corner 148 and a second corner 152.
The first corner 148 and the second corner 152 are rounded corners with a radius of curvature of about 0.01 micrometers (µm) to about 0.2 µm. Unless otherwise stated, in this description, 'about' preceding a value means +/- 10 percent of the stated value. The through trench 132 is formed such that the polymer dielectric 134 filling the through trench 132 protrudes beyond the PMD barrier 120 and into a region of the isolation dielectric 124.[0039] The through trench 132 also includes a second end 156 that opposes the first end 144. The second end 156 is distal to the isolation dielectric 124. The second end 156 includes a first region 166 and a second region 170 that extend perpendicular to the first direction. Moreover, the first region 166 of the second end 156 of the through trench 132 underlies the first region 108 of
the substrate 104. Similarly, the second region 170 of the second end 156 of the through trench 132 underlies the second region 112 of the substrate 104. The second end 156 of the through trench 132 further includes a third region 174 between the first region 166 and the second region 170. The third region 174 of the second end 156 opposes the portion of the first end 144 that protrudes beyond the PMD barrier 120 and into the isolation dielectric 124. In some examples, the third region 174 of the second end 156 includes a notch 178 (e.g., a void) from the polymer dielectric 134 flowing through the through trench 132.[0040] During fabrication of an IC package, the die 100 needs to survive many thermal cycles, depending on the packaging process. The broad outline for an example packaging process includes wafer-scale encapsulation, gold (Au) solder bump, wire bond package in plastic with solder attach to a printed circuit board (PCB), or bump process to an interconnect (e.g., lead frame) with a plastic package with solder attach to the PCB. In examples where solder attachments are employed, the die 100 is heated above the solder reflow temperature, which for common lead-free solder is around 250 °C. In some examples, this solder reflow temperature is the greatest temperature reached during the packaging process. Heating of the die 100 causes components of the die 100 to expand. However, as noted, the polymer dielectric 134 filling the through trench 132 has a greater coefficient of thermal expansion than the PMD barrier 120. In conventional approaches, the PMD barrier 120 extends over the through trench 132 (e.g., no portion is etched away). Thus, in this conventional approach, thermal expansion of the polymer dielectric 134 causes the PMD barrier 120 to apply force in the direction indicated by the arrow 182. In some cases, this force is large enough to create a crack that can grow through the other layers or some subset thereof.
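As a rough sanity check on the magnitudes involved in this reflow scenario, a simple mismatch-strain estimate can be sketched as below. The material constants (coefficients of thermal expansion and the parylene modulus) are assumed, order-of-magnitude literature values, not figures taken from this description.

```python
# Rough thermal-mismatch estimate for a parylene-filled trench heated from
# room temperature to ~250 C solder reflow. All constants are assumed,
# illustrative values.

ALPHA_PARYLENE = 35e-6  # assumed CTE of a parylene fill, 1/K
ALPHA_SIN = 3e-6        # assumed CTE of a SiN PMD barrier, 1/K
E_PARYLENE = 2.8e9      # assumed elastic modulus of parylene, Pa
DELTA_T = 250 - 25      # temperature rise from room temperature to reflow, K

def mismatch_stress(alpha_fill, alpha_barrier, e_fill, delta_t):
    """Return (strain, stress): the CTE-mismatch strain, and the stress a
    fully constrained polymer fill could transmit, taken as the polymer
    modulus times that mismatch strain."""
    strain = (alpha_fill - alpha_barrier) * delta_t
    return strain, strain * e_fill

strain, stress = mismatch_stress(ALPHA_PARYLENE, ALPHA_SIN, E_PARYLENE, DELTA_T)
print(f"mismatch strain ~ {strain:.2%}, transmitted stress ~ {stress / 1e6:.0f} MPa")
```

Even this crude estimate yields a strain below one percent but a transmitted stress of tens of megapascals concentrated at the stiff barrier, consistent with the cracking risk discussed above.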
The greater stiffness of the PMD barrier 120 compared to the materials around it concentrates the force and makes failure by cracking more likely. This force (in the conventional approach) is transferred to the isolation dielectric 124, and again transferred to the PO 128, such that the PO 128 cracks in some instances.[0041] In contrast to the conventional approach, because a portion of the PMD barrier 120 overlying the first end 144 of the through trench 132 has been etched away, thermal expansion of the polymer dielectric 134 filling the through trench 132 in a direction indicated by an arrow 182 does not apply a force (in that same direction) on the PMD barrier 120. Also, as noted, the isolation
dielectric 124 has a lower elastic modulus than the PMD barrier 120. Thus, a force applied by the polymer dielectric 134 filling the through trench 132 in the direction of the arrow 182 due to thermal expansion causes the isolation dielectric 124 to slightly deform. This deformation allows the polymer dielectric 134 to partially compress the isolation dielectric 124 and reduces a transfer of force on the PO 128. Stated differently, because the isolation dielectric 124 is partially compressed by the thermal expansion of the polymer dielectric 134, the force applied by such thermal expansion of the polymer dielectric 134 is partially absorbed by the isolation dielectric 124, and this absorbed force is not transferred to the PO 128. Accordingly, force in the direction of the arrow 182 on the PO 128 due to thermal expansion of the polymer dielectric 134 filling the through trench 132 is curtailed. Curtailment of this force reduces the chances that components of the metallization stack 116 (including the PO 128) will crack during solder reflow or another time that the die 100 is heated.[0042] FIG. 2 illustrates a cross-section diagram of a region of a die 200 employable for an IC package. The die 200 has been singulated from a wafer, and is employable to implement the die 100 of FIG. 1. Thus, for simplicity, the same reference numbers are employed in FIGS. 1 and 2. Moreover, some features are not reintroduced.[0043] In the example illustrated in FIG. 2, the through trench 132 is filled with a polymer dielectric 204 (e.g., parylene) that is coated with a layer of silicon dioxide (SiO2) or silicon nitride (SiN), which is referred to as a coating 208.
The coating 208 is formed, prior to filling the remaining portion of the through trench 132 with the polymer dielectric 204, by: plasma-enhanced chemical vapor deposition (PECVD) of silicon dioxide (SiO2); atomic layer deposition (ALD) of SiO2 by pulsed deposition of a silicon-containing organic precursor, such as TEOS, with an oxidizer such as H2O, O2, O3, plasma O2, N2O, etc.; chemical vapor deposition (CVD) using TEOS plus ozone to form SiO2; PECVD, ALD or CVD of silicon nitride (SiN); or CVD, ALD or PECVD of Al2O3. With few exceptions, the coating 208 tapers from the second end 156 to the first end 144. That is, the coating 208 is thickest at the second end 156 of the through trench 132 and thinnest at the first end 144 of the through trench 132, such that the polymer dielectric 204 has a dovetail shape within the through trench 132. The polymer dielectric 204 improves the dielectric properties of the trench 132 and also increases the strength by increasing the thickness of the
dielectric overlaying the trench 132. Another technique to create the trench shape is to use a substrate etch process where the diameter of the hole 132 at its opening is narrower than in the middle or at the bottom of the hole. In some cases, the hole is wider only at the bottom (the first end 144) near the PMD barrier 120.[0044] Furthermore, the polymer dielectric 204 includes a void 212 formed when the silicon dioxide (SiO2) or silicon nitride (SiN) coating is deposited prior to filling the remaining portion of the through trench 132 with the polymer dielectric 204. The void 212 is an unfilled region of the through trench 132 that is circumscribed by the polymer dielectric 204.[0045] To attach the die 200 to an interconnect, the die 200 is heated during a die attach bake. Heating of the die 200 causes components of the die 200 to expand. The polymer dielectric 204 has a greater coefficient of thermal expansion than the PMD barrier 120. Because of the presence of the void 212, the polymer dielectric 204 expands in directions indicated by arrows 220 and 224 to fill the void 212. Also, force is generated in a direction indicated by the arrow 182. Because a portion of the PMD barrier 120 overlying the first end 144 of the through trench 132 has been etched away, thermal expansion of the polymer dielectric 204 filling the through trench 132 in a direction indicated by an arrow 182 applies a reduced force (in that same direction) on the PMD barrier 120.[0046] Also, a force applied by the polymer dielectric 204 filling the through trench 132 in the direction of the arrow 182 due to thermal expansion causes the isolation dielectric 124 to slightly deform. This deformation allows the polymer dielectric 204 to partially compress the isolation dielectric 124 and reduces a transfer of force on the PO 128. Also, the expansion of the polymer dielectric 204 in the direction indicated by the arrow 182 is reduced relative to the example illustrated in FIG.
1 because of the presence of the void 212 (which is compressed during the thermal expansion of the polymer dielectric 204). Accordingly, force in the direction of the arrow 182 on the PO 128 due to thermal expansion of the polymer dielectric 204 filling the through trench 132 is curtailed. Curtailment of this force reduces the chances that components of the metallization stack 116 (including the PO 128) will crack during solder reflow or another time that the die 200 is heated.[0047] FIG. 3 illustrates a cross-section diagram of a region of a die 300 employable for an IC
package. The die 300 has been singulated from a wafer, and is employable to implement the die 100 of FIG. 1. Thus, for simplicity, the same reference numbers are employed in FIGS. 1 and 3 to denote the same structure. Moreover, some features are not reintroduced.[0048] In the example illustrated in FIG. 3, the through trench 132 is filled with a polymer dielectric 302 (e.g., parylene). A first contact 304 overlays and contacts the first corner 148 of the first end 144 of the through trench 132. Also, a second contact 308 overlays and contacts the second corner 152 of the through trench 132.[0049] The first contact 304 and the second contact 308 each have a first region 312 and a second region 316. The first region 312 of the first contact 304 and the second contact 308 has a rectangular prism shape. The second region 316 of the first contact 304 and the second contact 308 also has a rectangular prism shape. In the example illustrated, the second region 316 has a larger volume than the first region 312. Also, the first region 312 of the first contact 304 is proximate the first corner 148 and the first region 312 of the second contact 308 is proximate the second corner 152. The second region 316 overlays the first region 312 of the first contact 304 and the second contact 308. The second region 316 of the first contact 304 is distal to the first corner 148 and the second region 316 of the second contact 308 is distal to the second corner 152.[0050] The first region 312 and/or the second region 316 of the first contact 304 and the second contact 308 are formed of a metal, such as tungsten (W), aluminum (Al), copper (Cu) or some combination thereof. The first contact 304 and the second contact 308 are formed of material that is more rigid than the material forming the PMD barrier 120 (e.g., silicon nitride) or the isolation dielectric 124 (e.g., silicon dioxide).
Accordingly, the first contact 304 and the second contact 308 strengthen the first corner 148 and the second corner 152, respectively, of the through trench 132. Thus, the first contact 304 and the second contact 308 resist movement from the application of the force in the direction of the arrow 182 due to thermal expansion of the polymer dielectric 302 (e.g., parylene) filling the through trench 132. Thus, the resistance to movement in the direction of the arrow 182 by the first contact 304 and the second contact 308 further curtails the chances that components of the metallization stack 116 (including the PO 128) will crack during solder reflow or another time that the die 300 is heated.[0051] FIG. 4 illustrates a cross-section diagram of a region of a die 400 employable for an IC
package. The die 400 has been singulated from a wafer, and is employable to implement the die 100 of FIG. 1. Thus, for simplicity, the same reference numbers are employed in FIGS. 1 and 4 to denote the same structure. Moreover, some features are not reintroduced.[0052] In the example illustrated in FIG. 4, the through trench 132 is filled with a polymer dielectric 404 (e.g., parylene). Also, K number of dummy metal patches 408 are embedded in the isolation dielectric 124 (e.g., silicon dioxide), where K is an integer greater than or equal to one. Each of the dummy metal patches 408 is spaced apart from the others. Moreover, additional dummy metal patches 408 are offset (into and/or out of the plane of the diagram illustrated) from the dummy metal patches 408 illustrated in FIG. 4.[0053] The dummy metal patches 408 are situated to extend over a length of the through trench 132. The dummy metal patches 408 add rigidity to the isolation dielectric 124 (e.g., silicon dioxide). Accordingly, inclusion of the dummy metal patches 408 causes the isolation dielectric 124 to resist transfer of force to the PO 128 in response to application of a force in the direction of the arrow 182 due to thermal expansion of the polymer dielectric 404 (e.g., parylene). Thus, the resistance to transfer of force in the direction of the arrow 182 by inclusion of the dummy metal patches 408 in the isolation dielectric 124 curtails the chances that components of the metallization stack 116 (including the PO 128) will crack during solder reflow or another time that the die 400 is heated.[0054] FIG. 5 illustrates a cross-section diagram of a region of a die 450 employable for an IC package. The die 450 has been singulated from a wafer, and is employable to implement the die 100 of FIG. 1. Thus, for simplicity, the same reference numbers are employed in FIGS. 1 and 5 to denote the same structure. Moreover, some features are not reintroduced.[0055] In the example illustrated in FIG.
5, the through trench 132 is filled with a polymer dielectric 454 (e.g., parylene). Also, R number of tiered dummy metal patches 458 are placed in the isolation dielectric 124 (e.g., silicon dioxide), where R is an integer greater than or equal to one. Each of the tiered dummy metal patches 458 is spaced apart from the others. Each of the tiered dummy metal patches 458 includes two (2) or more layers of dummy metal patches having varying lengths. In the example illustrated, each tiered dummy metal patch 458 includes a first dummy metal patch 462, a second dummy metal patch 466 and a third dummy metal patch 470,
but in other examples, there are more or fewer dummy metal patches in each tiered dummy metal patch 458. The first dummy metal patch 462 is proximate the PMD barrier 120 and the third dummy metal patch 470 is proximate the PO 128. In some cases, the first dummy metal patch 462 is even wider than the trench, such that the edge of the trench does not overlap this metal layer even with intrinsic misalignment between the trench and this metal layer. The second dummy metal patch 466 is situated between the first dummy metal patch 462 and the third dummy metal patch 470. The tiered dummy metal patches 458 are situated to extend over a length of the through trench 132. Tiers (layers) of the tiered dummy metal patches 458 are connected with vias 472.[0056] The first dummy metal patch 462 has the shortest length, the third dummy metal patch 470 has the longest length, and the second dummy metal patch 466 has a length between the length of the first dummy metal patch 462 and the third dummy metal patch 470. Accordingly, dummy metal patches of the tiered dummy metal patches 458 have different lengths. Thus, the third dummy metal patches 470 of two (2) different tiered dummy metal patches 458 are closer than the first dummy metal patches 462 of the same two (2) tiered dummy metal patches 458. Stated differently, a gap between the third dummy metal patches 470 of two (2) different tiered dummy metal patches 458 is narrower than a gap between the first dummy metal patches 462 of the same two (2) tiered dummy metal patches 458. The tiered dummy metal patches 458 add rigidity to the isolation dielectric 124 (e.g., silicon dioxide) because the stiffness of the isolation dielectric 124 (e.g., silicon dioxide) is greater than the stiffness of the polymer dielectric 454 (e.g., parylene).
Also, the vias 472 add additional strength by tying the different tiers of the tiered dummy metal patches 458 together.[0057] Further, the die 450 includes an array of shallow trench isolation features 476 that extends over the first end 144 of the trench 132. Stated differently, the array of shallow trench isolation features 476 extends over the trench 132. In some examples, the shallow trench isolation features 476 are only present near a center of the trench 132. In other examples, the shallow trench isolation features 476 extend to provide a specific overlap of the trench 132 (filled with the polymer dielectric 454) and the isolation dielectric 124. The array of shallow trench isolation features 476 is underneath but connected to the PMD barrier 120. The array of shallow trench isolation features 476 is filled with the isolation dielectric 124, such as silicon dioxide, and in some examples, the array of shallow trench isolation features 476 also includes silicon nitride (SiN) or silicon oxynitride (SiON). The array of shallow trench isolation features 476 creates a complicated surface topography at the interface of the polymer dielectric 454 and the isolation dielectric 124 to improve adhesion between the polymer dielectric 454 and the isolation dielectric 124 near the first end 144. The improvement in adhesion enables the polymer dielectric 454 to expand on heating and relax as heating increases, without delamination. Further, as the polymer dielectric 454 cools, the polymer dielectric 454 is under tension, and the rough surface provided by the array of shallow trench isolation features 476 increases surface area to keep the polymer dielectric 454 adhered to the isolation dielectric 124 near the first end 144 under an increasing tensile load. In various examples, the shallow trench isolation features in the array 476 have random sizes (perpendicular to a trench width) and random spacing.
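The adhesion improvement described above is essentially geometric: each shallow trench isolation feature adds sidewall area to an otherwise flat polymer/dielectric interface. A toy estimate follows, using hypothetical feature counts and dimensions chosen only for illustration, not values from this description.

```python
def area_gain(n_features, depth_um, interface_len_um):
    """Ratio of roughened to flat interface length, treating each
    rectangular STI feature as contributing two sidewalls of equal depth."""
    flat = interface_len_um
    rough = interface_len_um + 2.0 * n_features * depth_um
    return rough / flat

# Hypothetical geometry: 20 features, each 0.3 um deep, across a 20 um-wide
# trench opening.
print(f"interface area increase: {area_gain(20, 0.3, 20.0):.2f}x")
```

This also illustrates why, at a fixed feature depth, packing more (smaller) features along the interface raises the area gain further, in line with the size-reduction point in the paragraph above.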
In some examples, the shallow trench isolation features 476 within regions of the trench 132 are smaller than the shallow trench isolation features 476 outside the trench 132, indicating that the shallow trench isolation features 476 have been partially etched. In some such examples, the shallow trench isolation features 476 follow an outline of the trench 132 and are longer in a first direction measured from the PMD barrier 120 toward the second region 170 of the trench 132 (e.g., a height) than in a second direction perpendicular to the first direction (e.g., a width). Also, in some examples, the array of shallow trench isolation features 476 is selected to increase a surface area by reducing sizes of individual shallow trenches in the array of shallow trench isolation features 476.[0058] Inclusion of the tiered dummy metal patches 458 and the array of shallow trench isolation features 476 causes the isolation dielectric 124 to resist transfer of force to the PO 128 in response to application of a force in the direction of the arrow 182 due to thermal expansion of the polymer dielectric 454 (e.g., parylene). Thus, the resistance to transfer of force in the direction of the arrow 182 by inclusion of the tiered dummy metal patches 458 in the isolation dielectric 124 and the array of shallow trench isolation features 476 further curtails the chances that components of the metallization stack 116 (including the PO 128) will crack during solder reflow or another time that the die 450 is heated.[0059] FIG. 6 illustrates a cross-section diagram of a region of a die 500 employable for an IC package. The die 500 has been singulated from a wafer, and is employable to implement the
die 100 of FIG. 1. Thus, for simplicity, the same reference numbers are employed in FIGS. 1 and 6 to denote the same structure. Moreover, some features are not reintroduced.[0060] In the example illustrated in FIG. 6, the through trench 132 is filled with a polymer dielectric 504 (e.g., parylene). Also, R number of deep trenches 508, where R is an integer greater than or equal to one, are situated in the through trench 132. The deep trenches 508 are situated at edges and/or a center of the through trench 132. In the example illustrated, there are three deep trenches 508, such that a deep trench 508 is situated over both edges and over a center of the through trench 132. In contrast to the array of shallow trench isolation features 476 of FIG. 5, the deep trenches 508 are deeper than the array of shallow trench isolation features 476. In various examples, the depth of the deep trenches 508 ranges from about 5 µm to about 40 µm. In some examples, the depth of the deep trenches 508 is less than a thickness of the first region 108 and the second region 112 of the substrate. For instance, in one example, the depth of the deep trenches 508 is about 20% of a thickness of the first region 108 and the second region 112 of the substrate. In this example, the PMD barrier 120 includes portions that extend over each deep trench 508. In some such examples, the PMD barrier 120 includes a third region 516 that contacts a deep trench 508 situated in a middle of the through trench 132. In this situation, the polymer dielectric 504 fills in gaps between each deep trench 508 to improve electrical isolation and add strength to the isolation dielectric 124.[0061] Furthermore, in some examples, a metal layer 520 overlays the PO 128. The metal layer 520 improves the rigidity of the PO 128. Accordingly, adding the metal layer 520 increases the stress that can be applied to the PO 128 before the PO 128 cracks during solder reflow or another time that the die 500 is heated.[0062] FIG.
7 illustrates an overhead view of a layout for a die 600 for an IC package. The die 600 includes a first region 604, a second region 608 and a third region 612. In some examples, the third region 612 has a high supply voltage, wherein components integrated with the third region 612 are rated for the high supply voltage. Conversely, in this example, the second region 608 has a low supply voltage (e.g., 10 V or less), and components in the second region 608 are rated for the low supply voltage. Further in this example, components in the first region 604 are rated for a third supply voltage (e.g., medium voltage) between the low supply voltage and the high supply
voltage. A trench 616 electrically isolates the first region 604, the second region 608 and the third region 612 from each other. The trench 616 has a cross section corresponding to the cross section of the die 100 of FIG. 1, the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG. 5 or the die 500 of FIG. 6. Thus, the trench 616 is filled with a polymer dielectric (e.g., parylene).[0063] In this example, a first coupling capacitor 620 electrically couples the second region 608 to the third region 612. In this manner, components in the second region 608 and the third region 612 communicate. Also, in this example, a second coupling capacitor 624 couples the first region 604 to an external region (not shown) to enable communication between the die 600 and external components. The first coupling capacitor 620 and the second coupling capacitor 624 have a low parasitic capacitance (e.g., about 100 femtofarads or less). These capacitors can be used to transmit power as well as data. For power transmission, the capacitor size is larger, such as 200 fF and above.[0064] At certain locations of the layout of the die 600, the first region 604, the second region 608 and the third region 612 come close together to form a tripoint, such as the particular tripoint 630. Stated differently, at the tripoint 630 (a given location of the die 600), the first region 604, the second region 608 and the third region 612 are separated by the trench 616. There are multiple architectures for the trench 616 at such a junction.[0065] Although the die 600 provides isolation with a single trench 616, in other examples, it is possible to improve the isolation and decrease the capacitive coupling between regions by providing multiple trenches.
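For context on the ~100 fF figure quoted above, an ideal parallel-plate sketch of a trench coupling capacitor is given below. The plate area, dielectric gap and the relative permittivity of parylene are assumed, illustrative values, not dimensions from this description.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m
EPS_R = 3.0       # assumed relative permittivity of a parylene dielectric

def plate_capacitance_fF(area_um2, gap_um):
    """Ideal parallel-plate capacitance in femtofarads for facing plates of
    the given area (um^2) separated by the given dielectric gap (um)."""
    area_m2 = area_um2 * 1e-12
    gap_m = gap_um * 1e-6
    return EPS0 * EPS_R * area_m2 / gap_m * 1e15

# Hypothetical geometry: 100 um x 40 um facing plates across a 1 um gap.
print(f"coupling capacitance ~ {plate_capacitance_fF(100 * 40, 1.0):.0f} fF")
```

With these assumed dimensions the estimate lands near 100 fF, the scale of the data-coupling figure quoted above; roughly doubling the plate area moves it toward the 200 fF scale mentioned for power transmission.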
For example, it is possible to have two rows of trenches and a region (e.g., the third region 612) between the trenches connected to a ground connection with controlled parasitic capacitance and resistance to curtail noise coupling. In examples where both greater isolation and capacitive coupling are needed, providing multiple trenches takes additional area and also employs additional perimeter to achieve the same capacitance, because the increased thickness of the dielectric reduces the capacitance density.[0066] FIG. 8 illustrates a first architecture 650, a second architecture 660 and a third architecture 670 for the trench 616 of FIG. 7 at a location with three (3) regions coming together, such as the tripoint 630 of FIG. 7. The first architecture 650 is referred to as a curved shaped
connection. The second architecture 660 is referred to as a Y shaped connection and the third architecture 670 is referred to as a T shaped connection. Each of the first architecture 650 (the curved connection), the second architecture 660 (the Y shaped connection) and the third architecture 670 (the T shaped connection) creates additional stress relative to a simple straight structure, and thus has an elevated chance of mechanical failure due to cracking.[0067] The second architecture 660 (the Y shaped connection) has the lowest stress and lowest chance of mechanical failure relative to the first architecture 650 (the curved connection) and the third architecture 670 (the T shaped connection) for a given width of the trench 616 of FIG. 7. The second architecture 660 (the Y shaped connection) has the lowest stress and the lowest chance of mechanical failure because the second architecture 660 has smoother corners than the first architecture 650 and the third architecture 670, which reduces mechanical stress. As one example, the second architecture 660 (the Y shaped connection) has 120 degree corners, and the third architecture 670 (the T shaped connection) has 90 degree corners. Moreover, the second architecture 660 is further enhanced by adding a rounded corner (e.g., a circle segment) with additional processing to widen an opening at the connection. In still further examples, dummy metal patches (e.g., the dummy metal patches 408 of FIG. 4 or the tiered dummy metal patches 458 of FIG. 5) are added to further increase strength at the illustrated connections. Additionally or alternatively, additional polymer dielectric (e.g., parylene) is added at an edge of the trench and at the corners of the connection.[0068] FIG. 9 illustrates an IC package 700 that includes a die 704 mounted in an IC package 708 that is formed of a plastic molding material. The die 704 is implemented with the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG.
5 and/or the die 500 of FIG. 6. The IC package 708 is a wire bond package that includes a plurality of wire bonds 712. The wire bonds 712 electrically couple the die 704 (e.g., a voltage region within the die 704) to a pad 720 of an interconnect 724 (alternatively referred to as a lead frame). The wire bonds 712 are coupled to the die 704 with a corresponding solder ball 728 that is formed on a conductive pad 732 formed on a first surface of the die 704.[0069] A second surface of the die 704 that opposes the first surface of the die 704 is mounted on a pad 740 of the interconnect 724 (e.g., a center pad). More specifically, a die attach
material 744 (e.g., solder paste) is sandwiched between the pad 740 of the interconnect 724 and the second surface of the die 704.[0070] The die 704 includes a first trench 760 and a second trench 764 that separate regions of the die 704. In some examples, the first trench 760 and the second trench 764 are employable to implement the through trench 132 of FIGS. 1-6. These regions of the die 704 that are separated by the first trench 760 and/or the second trench 764 are employable to implement different voltage levels, as described herein.[0071] FIG. 10 illustrates an IC package 750 that includes a die 754 mounted in an IC package 758 that is formed of a plastic molding material. The die 754 is implemented with the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4 and/or the die 500 of FIG. 6. The IC package 758 is a solder to interconnect package that includes a plurality of solder balls 762 coupled to a surface of the die 754. The solder balls 762 are coupled to metal contacts 766 (e.g., formed of aluminum or copper) mounted on the surface of the die 754 and to pads 770 of an interconnect 774.[0072] The die 754 includes a first trench 760 and a second trench 764 that separate regions of the die 754. In some examples, the first trench 760 and the second trench 764 are employable to implement the through trench 132 of FIGS. 1-6. These regions of the die 754 that are separated by the first trench 760 and/or the second trench 764 are employable to implement different voltage levels, as described herein.[0073] FIG. 11 illustrates an IC package 800 that includes a die 804 mounted on a printed circuit board (PCB) 808. In some such examples, the die 804 is encased in a molding material (not shown) and mounted on the PCB 808 prior to singulation. The die 804 is implemented with the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG. 5 and/or the die 500 of FIG. 6.
The die 804 is coupled to pads 810 of the PCB 808 through a plurality of solder balls 812 coupled to a first surface of the die 804. The solder balls 812 are coupled to metal contacts 816 (e.g., formed of aluminum) mounted on the first surface of the die 804 and to pads 810 of the PCB 808. The pads 810 are coupled to vias within the PCB 808. In some examples, the IC package 800 includes an additional polymer layer (not shown) between the die 804 and the PCB 808 to improve the mechanical properties and the voltage rating between
different bump regions. This polymer layer is sometimes referred to as an underfill application.[0074] The die 804 includes a first trench 830 and a second trench 834 that separate regions of the die 804. In some examples, the first trench 830 and the second trench 834 are employable to implement the through trench 132 of FIGS. 1-6. These regions of the die 804 that are separated by the first trench 830 and/or the second trench 834 are employable to implement different voltage levels, as described herein.[0075] FIGS. 9-11 illustrate different ways that a die, such as the die 100 of FIG. 1, the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG. 5 and/or the die 500 of FIG. 6, is mounted in a package to form an IC package. As demonstrated in FIGS. 9-11, the dies illustrated throughout this description are process agnostic.[0076] FIGS. 12-19 illustrate stages of a method of processing a wafer for singulation of dies, such as the die 100 of FIG. 1, the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG. 5 and/or the die 500 of FIG. 6, which may be mounted in a package to form an IC package, such as the IC package 700 of FIG. 9, the IC package 750 of FIG. 10 or the IC package 800 of FIG. 11. The method of FIGS. 12-19 illustrates how the wafer is processed to add through trenches for isolation.[0077] In a first stage of the method, as illustrated in FIG. 12 at 900, a wafer 1000 is provided. The wafer 1000 includes a substrate 1004 that has circuit components (e.g., transistors, resistors and/or capacitors) embedded therein. A metallization stack 1008 is situated on a first surface 1010 of the substrate 1004. The metallization stack 1008 includes a PMD barrier 1012 formed of a dielectric material, such as silicon nitride (SiN), and an isolation dielectric 1014, such as silicon dioxide (SiO2). The metallization stack 1008 also includes a protective overcoat 1015.
Moreover, conductive pads 1016 that are formed of a metal, such as aluminum (Al), are formed on the metallization stack 1008.[0078] In a second stage of the method, as illustrated in FIG. 13, at 910, the wafer 1000 is flipped and an adhesive tape 1020 is adhered to the wafer 1000 for further processing. In a third stage of the method, as illustrated in FIG. 14, at 920, the substrate 1004 is ground and polished to a thickness of about 80 to about 1020 micrometers (µm). In a fourth stage of the method, as illustrated in FIG. 15, at 930, a coating of resist 1024 is patterned on the wafer 1000. The coating of resist 1024
includes gaps 1028 that facilitate the forming of through trenches. More particularly, in a fifth stage of the method, as illustrated in FIG. 16 at 940, a first through trench 1032 and a second through trench 1036 are etched in the substrate 1004 and the resist 1024 is removed. The first through trench 1032 and the second through trench 1036 are etched sufficiently to expose the metallization stack 1008. More particularly, the PMD barrier 1012 of the metallization stack 1008 is etched and a relatively small portion of the isolation dielectric is also etched, such that the first through trench 1032 and the second through trench 1036 protrude into the isolation dielectric 1014. As a result of the etching of the substrate 1004, the substrate 1004 is separated into regions, namely a first region 1040, a second region 1044 and a third region 1046. It is possible to deposit an additional dielectric, such as SiO2, SiON, SiN or AlOx, after trench formation and before polymer fill, as described above in connection with FIG. 2.[0079] In a sixth stage of the method, as illustrated in FIG. 17, at 950, a polymer dielectric 1048, such as parylene, is applied to the wafer 1000. The polymer dielectric 1048 fills the first through trench 1032 and the second through trench 1036, and forms a layer overlaying a second surface 1052 of the substrate 1004, wherein the second surface 1052 opposes the first surface 1010. Also, at 950, additional processing actions are included in some examples. For instance, in some examples, voids (e.g., the void 212 of FIG. 2) in the first through trench 1032 and the second through trench 1036 are formed.[0080] In a seventh stage of the method, as illustrated in FIG. 18, at 960, the second region 1044 of the substrate 1004 is cut at a location 1056 to singulate a first die 1060 and a second die 1064. In various examples, the wafer 1000 is cut at the location 1056 by a saw, a laser, an ion beam or a plasma cutter. 
The first die 1060 and/or the second die 1064 are employable to implement the die 100 of FIG. 1, the die 200 of FIG. 2, the die 300 of FIG. 3, the die 400 of FIG. 4, the die 450 of FIG. 5 or the die 500 of FIG. 6. In some examples, the first region 1040 of the first die 1060 and the third region 1046 of the second die 1064 have embedded components rated for different voltage levels than the second region 1044 (split between the first die 1060 and the second die 1064).[0081] In an eighth stage of the method, as illustrated in FIG. 19, at 970, the first die 1060 is flipped and mounted on an interconnect 1068 (e.g., a lead frame). Also, at 970, wire bonds 1072
couple pads of the interconnect 1068 to the conductive pads 1016 of the first die 1060. Further, at 970, the first die 1060 and the interconnect 1068 are encased in a molding 1076 formed of plastic to form an IC package 1080. To mount the first die 1060 on the interconnect 1068 and attach the wire bonds 1072, the first die 1060 is heated for solder reflow. However, because the PMD barrier 1012 has been etched, force caused by thermal expansion of the polymer dielectric 1048 is not transferred to the protective overcoat 1015 through the PMD barrier 1012. Accordingly, the chances of cracking the protective overcoat 1015 during such solder reflow are reduced.[0082] FIG. 20 illustrates a flowchart of an example method 1100 for forming an IC package. At 1110, a patterned coat of resist is deposited on a wafer (e.g., the wafer 1000 of FIGS. 12-18). A metallization stack (e.g., the metallization stack 116 of FIG. 1) is situated on a first surface of the wafer. The metallization stack includes a PMD barrier (e.g., the PMD barrier 120 of FIG. 1) and an isolation dielectric (e.g., the isolation dielectric 124 of FIG. 1).[0083] At 1115, through trenches (e.g., the through trench 132 of FIG. 1) are etched in the wafer, such that the through trenches protrude into the isolation dielectric of the metallization stack, and the coat of resist is removed. At 1120, a polymer dielectric (e.g., parylene) is deposited on a second surface of the wafer to fill the through trenches. At 1125, dies are singulated from the wafer, such that the dies include a through trench. At 1130, a die of the singulated dies is mounted on an interconnect (e.g., the interconnect 1068 of FIG. 19). At 1135, the die and the interconnect are encased in a molding (e.g., the molding 1076 of FIG. 19).[0084] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
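The ordered steps of method 1100 lend themselves to a simple traceability sketch. This is an illustrative model only: the `Step` type and `run_flow` helper are hypothetical names and not part of any real process-control API, though the step numerals and actions are taken from the description of FIG. 20.

```python
# Hypothetical sketch of the trench-isolation packaging flow of method 1100.
# Step numerals come from the text; the types and helper are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    number: int   # step reference numeral from FIG. 20
    action: str   # what happens at this step

METHOD_1100 = [
    Step(1110, "pattern a coat of resist on the wafer"),
    Step(1115, "etch through trenches into the isolation dielectric; strip resist"),
    Step(1120, "deposit polymer dielectric (e.g., parylene) to fill the trenches"),
    Step(1125, "singulate dies so each die includes a through trench"),
    Step(1130, "mount a singulated die on an interconnect (e.g., a lead frame)"),
    Step(1135, "encase the die and interconnect in a molding"),
]

def run_flow(steps):
    """Return the step numerals in execution order (a traceability check)."""
    return [s.number for s in steps]
```

Such a traveler-style listing is useful mainly as a cross-check that every numbered step of the flowchart is accounted for in order.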
A semiconductor interconnect structure having a substrate with an interconnect structure patterned thereon, a barrier layer, a pre-seed layer, a seed layer, a bulk interconnect layer, and a sealing layer. A process for creating such structures is described. The barrier layer is formed using atomic layer deposition techniques. Subsequently, a pre-seed layer is formed to create a heteroepitaxial interface between the barrier and pre-seed layers. This is accomplished using atomic layer epitaxy techniques to form the pre-seed layer. Thereafter, a seed layer is formed by standard deposition techniques to create a homoepitaxial interface between the seed and pre-seed layers. Further bulk deposition of conducting material is then performed upon this layered structure. Excess material is removed from the bulk layer and a sealing layer is formed on top to complete the interconnect structure.
We claim: 1. A method for forming interconnecting conductive lines and vias on a semiconductor substrate during a semiconductor fabrication process, comprising the steps of:(a) providing a semiconductor substrate having an in-laid circuit pattern, corresponding to a conductor wiring pattern, formed thereon; (b) forming a barrier layer over said semiconductor surface, including said in-laid circuit pattern; (c) forming a pre-seed layer over said barrier layer; (d) forming a seed layer over said pre-seed layer; (e) forming a bulk interconnect layer over said seed layer; and (f) subjecting said substrate to further processing. 2. The method, as recited in claim 1, wherein said step (b) of forming said barrier layer includes the step of forming said barrier layer by using atomic layer deposition.3. The method, as recited in claim 2, wherein said barrier layer is formed using a material selected from the group consisting of TiN, WN, TaN, Ta, and the silicide compounds thereof.4. The method, as recited in claim 2, wherein said barrier layer is constructed of a titanium nitride film formed using precursor materials which include TiCl4 and NH3.5. The method, as recited in claim 2, wherein said barrier layer is constructed of a tungsten nitride film formed using precursor materials which include WF6 and NH3.6. The method, as recited in claim 2, wherein said barrier layer is constructed of tungsten nitride film formed using precursor materials which include W(CO)3 and NH3.7. The method, as recited in claim 2, wherein said barrier layer is constructed of tantalum nitride film formed using precursor materials which include TaCl5 and NH3.8. The method, as recited in claim 2, wherein said step (c) of forming a pre-seed layer includes the step of forming said pre-seed layer by using atomic layer epitaxy.9. The method, as recited in claim 8, wherein said pre-seed layer is formed using a highly conductive material.10. 
The method, as recited in claim 8, wherein said pre-seed layer is formed essentially of copper.11. The method, as recited in claim 10, wherein said copper pre-seed layer is formed using Cu(II)β-diketonate precursor materials.12. The method, as recited in claim 11, wherein said Cu(II)β-diketonate precursor materials are selected from the group consisting essentially of Cu(II)-2,2,6,6,-tetramethyl-3,5-heptandionate (Cu(thd)2) and Cu(II)-1,1,1,5,5,5-hexafluoro-2,4-pentanedionate (Cu(hfac)2).13. The method, as recited in claim 8, wherein the step (d) of forming said seed layer includes forming said seed layer by using a technique selected from a group consisting essentially of chemical vapor deposition and metal organic chemical vapor deposition.14. The method, as recited in claim 13, wherein the step (d) of forming said seed layer includes forming a seed layer essentially comprising copper.15. The method, as recited in claim 14, wherein said step (d) of forming a seed layer includes using a Cu(I)β-diketonate as a precursor for forming said copper seed layer.16. The method, as recited in claim 8, wherein said step (e) of forming said bulk interconnect layer includes forming said bulk interconnect layer by using a technique selected from a group consisting essentially of chemical vapor deposition and electroplating.17. The method, as recited in claim 8, wherein said step (f) of subjecting said substrate to further processing includes the step of removing excess material from said bulk interconnect layer.18. The method, as recited in claim 17, wherein said step (f) of removing excess material from said bulk interconnect layer comprises removing said excess material by using chemical mechanical polishing.19. The method, as recited in claim 18, wherein said step (f) of subjecting said substrate to further processing includes forming a top sealing layer over said bulk interconnect layer.20. 
A method for forming a seed layer on a semiconductor substrate during a semiconductor fabrication process, comprising the steps of:(a) providing a semiconductor substrate having an in-laid circuit pattern formed thereon; (b) forming a barrier layer over said semiconductor surface; (c) forming a pre-seed layer over said barrier layer, creating a first interface between said pre-seed layer and said barrier layer; and (d) forming a seed layer over said pre-seed layer, creating a second interface between said seed layer and said pre-seed layer. 21. The method, as recited in claim 20, wherein said step (c) of creating a first interface between said pre-seed layer and said barrier layer creates a heteroepitaxial interface.22. The method, as recited in claim 21, wherein said step (d) of creating a second interface between said seed layer and said pre-seed layer creates a homoepitaxial interface.23. The method, as recited in claim 22, wherein said step (b) of forming a barrier layer over said semiconductor surface comprises forming said barrier layer using atomic layer deposition techniques.24. The method, as recited in claim 23, wherein said step (c) of forming a pre-seed layer comprises forming said pre-seed layer using atomic layer epitaxy techniques.25. The method, as recited in claim 24, wherein said step (d) of forming a seed layer comprises forming said seed layer using a technique selected from a group consisting essentially of chemical vapor deposition and metal organic chemical vapor deposition.26. The method, as recited in claim 25, wherein said barrier layer is formed to a thickness in a range of about 20 to 300 Å.27. The method, as recited in claim 26, wherein said pre-seed layer is formed to a thickness in a range of about 1.5 to 10 Å.28. The method, as recited in claim 26, wherein said seed layer is formed to a thickness in a range of about 50 to 2000 Å.
TECHNICAL FIELD
The present invention relates to methods of semiconductor fabrication. In particular, the present invention relates to methods of forming copper metallization structures.
BACKGROUND OF THE INVENTION
In the field of semiconductor fabrication techniques, an industry-wide transition from aluminum to copper interconnects is in progress. Currently, copper interconnects are formed using a so-called "damascene" or "dual-damascene" fabrication process. Briefly, a damascene metallization process forms conducting interconnects by the deposition of conducting metals in recesses formed on a semiconductor wafer surface. Typically, semiconductor devices (e.g., transistors) are formed on a semiconductor substrate. These devices are typically covered with an oxide layer. Material is removed from selected regions of the oxide layer, creating openings in the semiconductor substrate surface. The openings correspond to a circuit interconnect pattern, forming an "in-laid" circuit pattern. This creates a semiconductor substrate having an in-laid circuit pattern corresponding to a conductor wiring pattern. Once the in-laid patterns have been formed in the oxide layer, a barrier layer is formed, upon which a conducting "seed layer" is fabricated. Such seed layers are frequently constructed of copper. This so-called seed layer provides a conducting foundation for a subsequently formed bulk copper interconnect layer, which is usually formed by electroplating. After the bulk copper has been deposited, excess copper is removed using, for example, chemical-mechanical polishing. The surface is then cleaned and sealed with a sealing layer. Further processing may then be performed.
Currently, the barrier layer is deposited over an etched substrate using physical vapor deposition (PVD) or chemical vapor deposition (CVD) techniques. Commonly used barrier materials are tantalum nitride, tungsten nitride, titanium nitride or silicon compounds of those materials. 
Barrier layer deposition by PVD has the advantage of creating barrier layer films of high purity and uniform chemical composition. The drawback of PVD techniques is the difficulty in obtaining good step coverage (a layer which evenly covers the underlying substrate is said to have good step coverage). On the other hand, CVD techniques or metal organic chemical vapor deposition (MOCVD) techniques provide excellent step coverage, even in narrow trenches having high aspect ratios (aspect ratio is the ratio of trench depth to trench width). The trade-off with CVD and MOCVD techniques is that these processes are "dirty" in comparison to PVD techniques: CVD and MOCVD incorporate large amounts of carbon and oxygen impurities into deposited films. These impurities reduce the adhesion of the barrier layer to the underlying substrate. Similarly, the impurities reduce the adhesion of a subsequently formed seed layer to the barrier layer. This results in reduced film quality, void creation, increased electromigration problems, and reduced circuit reliability. Thus, a process engineer is faced with a delicate balancing act when choosing a deposition technique to form barrier layers.
After barrier layer deposition, a seed layer of conducting material is deposited. Typically, this material is copper, but other conducting materials may be used. The seed layer provides a low resistance conduction path for a plating current used in the electro-deposition of a subsequent bulk copper interconnect layer. Additionally, the seed layer provides a nucleation layer for the initiation of the subsequent electroplating of the bulk copper interconnect layer. Copper is the preferred seed layer material not only because of its high conductivity, but because it is the ideal nucleation layer for the growth of the subsequently electro-deposited copper film. 
The seed layer carries electroplating current from the edge of the wafer to the center, allowing the plating current source to contact the wafer only near the edge. The thickness of the seed layer must be sufficient such that the voltage drop from wafer edge to wafer center does not negatively impact the uniformity of the plating process. Additionally, the seed layer carries current into the bottom of vias and trenches. The thickness of the seed layer must be sufficient such that any voltage drop does not significantly retard the plating process at the bottom of the via or the trench relative to the top.As with the barrier layer, the copper seed layer may be deposited using PVD, CVD, or MOCVD techniques. Seed layer deposition suffers from the same limitations as barrier layer deposition. When using PVD, the uneven step coverage in the seed layer results in an excessively thick copper seed layer near the top of trench structures while trench sidewalls and bottoms have a relatively thinner coating of copper film. This results in a "pinching-off" of the bottom of the trench during subsequent plating steps, leading to the existence of large voids and poor quality interconnect and via structures.As explained above, step coverage problems inherent in PVD processes may be overcome using MOCVD or CVD techniques. MOCVD and CVD of copper are attractive because they are capable of depositing the seed layer at nearly 100 percent step coverage. This results in copper film of nearly uniform thickness throughout a wide range of surface conformations. As with the barrier layer, this advantage is especially useful in narrow trenches with high aspect ratios.Unfortunately, when using a highly reactive substance such as copper, CVD and MOCVD become even "dirtier" processes. MOCVD and CVD processing environments are filled with impurities which readily react with copper. Extraneous materials, such as oxygen and carbon, are readily incorporated into the copper seed layer. 
This degrades the quality and reliability of the seed layer. The impurities reduce seed layer adhesion to the underlying barrier layer. Additionally, the impurities increase the resistivity of the copper seed layer and degrade the uniformity of the subsequently deposited bulk copper interconnect layer. The impurities also lead to poor bonding with the subsequently formed bulk copper interconnect layer.
In summary, existing processes of copper interconnect formation suffer from a number of drawbacks, including difficulties in forming seed and barrier layers in vias and trenches having high aspect ratios (i.e., deep trenches having narrow trench widths), poor step coverage (non-uniform surface coverage), and void formation in the barrier, seed, and bulk interconnect layers of the damascene process. Additionally, existing techniques exhibit poor adhesion between the barrier and seed layers, leading to an increased incidence of void formation at the barrier layer/seed layer interface. This difficulty leads to an increased incidence of electromigration failures and reduced circuit reliability. Additionally, existing processes are not easily extendible into smaller dimensions (i.e., below 0.1 µm). 
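The edge-to-center voltage-drop constraint on the seed layer, discussed in the background above, can be made concrete with a back-of-envelope sheet-resistance estimate. This is a sketch under stated assumptions: the bulk copper resistivity and the "number of squares" geometry factor are illustrative values chosen for the example, not figures taken from this document.

```python
# Back-of-envelope seed-layer IR-drop estimate (illustrative values only).
RHO_CU = 1.7e-8  # bulk Cu resistivity in ohm*m; thin films run higher

def sheet_resistance(thickness_m):
    """Sheet resistance (ohms per square) of a uniform conducting film."""
    return RHO_CU / thickness_m

def edge_to_center_drop(thickness_m, current_a, squares):
    """Rough IR drop from wafer edge to wafer center.
    `squares` approximates how many geometric squares the plating current
    crosses; a real model would integrate the radial current flow."""
    return current_a * sheet_resistance(thickness_m) * squares

# A 100 nm (1000 A) seed layer gives roughly 0.17 ohm/sq, so a few amperes
# of plating current across a few squares drops a substantial fraction of
# a volt -- the reason very thin seed layers plate non-uniformly.
```

The estimate illustrates why seed-layer thickness trades off against trench fill: a thicker film lowers the IR drop and plates more uniformly, but risks pinching off narrow trenches.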
As a result, there is a need for an improved interconnect structure including improved barrier and seed layers, as well as a method of forming these structures and layers. Accordingly, there is a need for improved processes and semiconductor metallization structures that provide:
enhanced step coverage of the seed and barrier layers in deep sub-0.25-µm vias and trenches;
reduced incidence of void formation at via and trench sidewalls during subsequent bulk copper deposition;
enhanced adhesion between the layers of a barrier layer/seed layer/bulk layer structure;
increased electromigration resistance in interconnect structures; and
extension of the copper damascene process to extremely small dimensions below 0.1 µm in width or diameter.
SUMMARY OF THE INVENTION
Accordingly, the present invention discloses improved barrier and seed layers as well as methods for constructing them. The present invention also discloses an improved interconnect structure as well as a method for its construction. In accordance with the principles of the present invention, there is provided a new interconnect structure and a method of forming the interconnect structure. The present invention is an interconnect structure having a barrier layer formed over a patterned semiconductor substrate using atomic layer deposition; a pre-seed layer formed using atomic layer epitaxy; a thick seed layer; a bulk copper interconnect layer; and a top sealing layer. 
The method of the present invention comprises providing a semiconductor substrate having an in-laid circuit pattern on its surface corresponding to a conductor wiring pattern; depositing a layer of barrier material over said surface using atomic layer deposition; depositing a pre-seed layer of conducting material using atomic layer epitaxy; depositing a seed layer of conducting material; depositing a bulk interconnect layer; and further processing, which may include planarizing said interconnect layer and forming a top sealing layer.
Other features of the present invention are disclosed or apparent in the section entitled "DETAILED DESCRIPTION OF THE INVENTION."
BRIEF DESCRIPTION OF DRAWINGS
For fuller understanding of the present invention, reference is made to the accompanying drawings in the section headed Detailed Description of the Invention. In the drawings:
FIG. 1 is a flowchart depicting a method of copper interconnect formation employing the principles of the present invention.
FIG. 2 is a cross-section view of a semiconductor substrate patterned in readiness for the process of the present invention.
FIG. 3 is a magnified view of the semiconductor substrate of FIG. 2.
FIG. 4 is a schematic representation of the semiconductor substrate of FIG. 2 inside a typical process apparatus.
FIG. 5 is the semiconductor substrate of FIG. 3 after barrier layer formation using atomic layer deposition techniques.
FIG. 6 is the semiconductor substrate of FIG. 5 after formation of a pre-seed layer using atomic layer epitaxy techniques.
FIG. 7 is the semiconductor substrate of FIG. 6 after depositing a seed layer.
FIG. 8 is the semiconductor substrate of FIG. 7 after formation of a bulk interconnect layer.
FIG. 9 is the semiconductor substrate of FIG. 
8 after formation of a completed interconnect structure.
Reference numbers refer to the same or equivalent parts of the present invention throughout the several figures of the drawings.
DETAILED DESCRIPTION OF THE INVENTION
A flowchart showing a method of metal interconnect formation in accordance with the principles of the present invention is depicted in FIG. 1. The process begins in Step 102, in which there is provided a semiconductor substrate patterned in readiness for the deposition of a conducting interconnect. The pattern applied to the semiconductor substrate is an in-laid circuit pattern corresponding to a conductor wiring pattern. An exemplar of such a substrate is shown in FIG. 2. FIG. 2 depicts a cross-section view of a semiconductor wafer 200 having circuit elements 210 and "in-laid" regions 220 (the in-laid regions 220 are interchangeably referred to herein as in-laid regions, portions, and in-laid surfaces) where material has been removed from the surface of the wafer 200 to allow the deposition of conducting material to interconnect circuit elements 210. The area "X" is an in-laid portion 220 of the wafer 200. Area "X" is depicted in FIG. 3. Methods of constructing such substrates are known to those having ordinary skill in the art.
Referring to FIG. 1, FIG. 4, and FIG. 5, in Step 104, a barrier layer 401 is formed on the in-laid surface 220. A thin barrier layer is formed using atomic layer deposition (ALD). Typically, the ALD process is performed using a Chemical Vapor Deposition (CVD) process tool 500, for example, a CVD reactor manufactured by Genus, Inc. of Sunnyvale, Calif.
Referring to FIG. 4, in applying the barrier layer 401 according to the invention, the entire wafer 200 is placed in a process tool 500, for example, a CVD machine. 
Gas reactants (also known as precursors) are introduced to a vacuum chamber of the process tool 500. The ALD process is carried out in a vacuum chamber at a pressure in the range of about 1-50 mTorr and at a temperature in the range of about 100° C.-400° C., and preferably 300° C.-400° C. The primary feature of the ALD process is the formation of the barrier layer 401 by a multiplicity of process cycles in which each cycle produces essentially an equivalent monolayer of the barrier material. The number of cycles used depends on the thickness desired but generally exceeds 1,000 cycles. For example, 1,200 cycles form a coating approximately 40 nanometers thick. A typical process of forming the barrier layer 401 is illustrated as follows. A semiconductor wafer 200 having an in-laid circuit pattern corresponding to a conductor wiring pattern is loaded into a process chamber of the process tool 500, and the chamber is heated to a temperature of approximately 160-400° C. The chamber is purged with nitrogen (N2) gas for a period of several minutes to an hour, for example, 1,000 seconds. Once the chamber is evacuated, the precursors are introduced into the chamber of the process tool 500. In the specific example described here, the barrier layer 401 is formed of titanium nitride (TiN) and the precursor gases are titanium chloride (TiCl4) and ammonia (NH3). The precursors are introduced alternately during each cycle of the process so that each process cycle results in an equivalent atomic layer of TiCl4 deposited on all surfaces in the chamber. The TiCl4 source is then turned off and the system is purged with N2 to flush all unreacted TiCl4 from the reaction chamber. Thereafter, NH3 is introduced to convert the deposited TiCl4 compound to TiN. This is followed by an N2 purge. 
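The pulse/purge alternation and the cycle-count arithmetic just described can be sketched as follows. The cycle representation is a hypothetical illustration (real tool recipes also carry flows, times, and pressures); the growth-per-cycle figure is derived from the 1,200-cycle / ~40 nm example in the text.

```python
# Sketch of one TiN ALD cycle as described: a TiCl4 pulse, an N2 purge,
# an NH3 conversion pulse, and a final N2 purge.
TIN_ALD_CYCLE = ["TiCl4 pulse", "N2 purge", "NH3 pulse", "N2 purge"]

def growth_per_cycle_angstrom(total_thickness_nm, cycles):
    """Average growth per cycle implied by a final film thickness."""
    return total_thickness_nm * 10.0 / cycles  # 1 nm = 10 angstroms

def cycles_for_thickness(target_nm, gpc_angstrom):
    """Cycles needed for a target thickness at a given growth per cycle."""
    return round(target_nm * 10.0 / gpc_angstrom)

# The text's example: 1,200 cycles -> ~40 nm, i.e. roughly 0.33 A per
# cycle, consistent with the sub-monolayer growth typical of ALD.
gpc = growth_per_cycle_angstrom(40, 1200)
```

The same arithmetic explains why the cycle count "generally exceeds 1,000": at a fraction of an angstrom per cycle, even a thin barrier film requires many hundreds of cycles.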
For example, the TiCl4 is introduced at a flow rate of about 2 SCCM for about 15 seconds, followed by a nitrogen purge at about 90 SCCM for 30 seconds, and then NH3 at a flow rate of about 25 SCCM for about 20 seconds, followed by a nitrogen purge at about 90 SCCM for six seconds. This procedure is continued for the desired number of cycles. Typically, pressures ranging from about 1-30 mTorr are suitable for these processes. A satisfactory pressure is about 3 mTorr. Further details of the process and the specific processing parameters for alternative materials are given by T. Suntola in "Materials Science Reports," Vol. 4, No. 7, pp. 261-312, December 1989, and U.S. Pat. No. 4,058,430, both of which are incorporated herein by reference. Typically, the barrier layer 401 is formed using tantalum (Ta). Satisfactory compounds used to fabricate the barrier layer 401 also include, but are not limited to, titanium nitride (TiN), tungsten nitride (WN), tantalum nitride (TaN), tantalum (Ta), or the silicide compounds thereof (e.g., TiSiN, WSiN, or TaSiN). Barrier layers 401 may be formed using WN. Such WN layers may be formed using the following precursor materials: WF6 and NH3, or W(CO)3 and NH3. For a TaN barrier layer 401, TaCl5 and NH3 are used as precursors. The barrier layer 401 is ALD deposited to a thickness in the range of about 20-300 Å. For example, a preferred embodiment uses a WN barrier layer 401 deposited to a thickness of approximately 70 Å. After ALD of the barrier layer 401, the barrier layer precursors are evacuated from the CVD process tool 500. The ALD barrier layer 401 exhibits excellent step coverage, being very conformal to the surface topography. Additionally, the ALD barrier layer 401 exhibits excellent adhesion characteristics with the underlying substrate (which is typically a dielectric material having a low dielectric constant). With reference to FIGS. 
1, 4, and 6, a pre-seed layer 402 comprised of conducting material is formed over said barrier layer 401 in Step 106. The pre-seed layer 402 is formed using atomic layer epitaxy (ALE). The pre-seed layer 402 is formed without removing the wafer 200 from the CVD process tool 500. The details of a satisfactory atomic layer epitaxy process are outlined in the article "Atomic Layer Epitaxy of Copper," Journal of the Electrochemical Society, Vol. 145, No. 8, p. 2929, 1998, P. Martensson & J-O. Carlsson, which is incorporated by reference herein. The ALE process is carried out at a pressure in the range of 5-10 Torr and at a temperature in the range of 150-400° C., and preferably in the range of 150-250° C. As disclosed above, the primary feature of atomic layer epitaxy or deposition processes is the formation of layers by a multiplicity of process cycles in which each cycle produces an essentially equivalent monolayer of the appropriate film. As is known in the art, a cycle is considered to be all steps required to produce said equivalent monolayer. The number of cycles used depends on the thickness desired. A typical process of forming the ALE pre-seed layer of Step 106 is described, infra. The wafer 200 remains in the process chamber after the barrier layer 401 is formed. The chamber of the process tool 500 is then heated to approximately 150-250° C. The chamber is then purged with N2 for a period of several minutes to an hour. After the purge, pre-seed layer precursors are introduced into the chamber. A typical pre-seed layer 402 is formed of copper (Cu). Preferred ALE precursors are Cu(II)β-diketonates, such as Cu(II)-2,2,6,6,-tetramethyl-3,5-heptandionate (Cu(thd)2) and Cu(II)-1,1,1,5,5,5-hexafluoro-2,4-pentanedionate (Cu(hfac)2). The ALE process continues until a copper pre-seed layer 402 of between 1.5-10 Å is formed; a preferred thickness is about 5 Å. 
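Because each ALE cycle produces roughly one equivalent monolayer, the cycle count for the 1.5-10 Å pre-seed window is small. A hedged estimate follows; the ~2.1 Å per-cycle value (about one Cu(111) interplanar spacing) is an illustrative assumption, not a figure given in the text.

```python
import math

def ale_cycles_needed(target_angstrom, growth_per_cycle=2.1):
    """Cycles to reach a target pre-seed thickness, assuming roughly one
    equivalent Cu monolayer (~2.1 A, an illustrative value) per cycle."""
    return math.ceil(target_angstrom / growth_per_cycle)

# The preferred ~5 A pre-seed layer needs only a handful of cycles, so
# the ALE step adds little to the overall process time.
```

This is one reason the pre-seed step is practical in-line: unlike the barrier ALD step with its thousand-plus cycles, the whole pre-seed window is covered in single-digit cycle counts.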
The pre-seed layer 402 is largely crystalline and forms an excellent underlayer for the subsequent formation of thicker copper layers. Additionally, the ALE Cu pre-seed layer 402 demonstrates good crystallographic ordering at the ALD/ALE interface. Excellent bond alignment exists between the barrier layer 401 and the pre-seed layer 402. The interface between the ALD barrier layer 401 and the ALE pre-seed layer 402 is not contaminated by a native oxide and is a heteroepitaxial interface. A heteroepitaxial interface is an interface between two different materials (e.g., Ta and Cu) having atomic matching at the interface. The advantages of such heteroepitaxial interfaces are: (1) a reduction in the number of defects in the interface, thereby reducing the number of void nucleation sites; (2) a reduction in interface diffusion; and (3) an enhancement of the crystalline <111> Cu structure, a factor which improves resistance to electromigration. Thus, the interface between the ALD barrier layer 401 and the ALE pre-seed layer 402 is largely defect-free. This leads to surprisingly strong adhesion between the barrier layer 401 and pre-seed layer 402, resulting in a reduced incidence of voiding and increased resistance to electromigration.
Referring to FIGS. 1, 4, and 7, in Step 108 a seed layer 403 is deposited using CVD or MOCVD techniques. Again, this step may be performed without removing the wafer 200 from the CVD process tool 500. The seed layer 403 is formed of conducting material, preferably Cu. To achieve the deposition of Cu, CVD precursors, for example Cu(I)β-diketonates, are used, a preferred precursor being Cu(I)-trimethylvinylsilyl hexafluoroacetylacetonate (Cu(hfac)(tmvs)). Due to the already highly ordered nature of the pre-seed layer 402, the seed layer 403 forms homoepitaxially without interfacial changes. The pre-seed layer 402 and the seed layer 403 have exact atomic matching at the interface between the two layers 402, 403. 
This leaves the pre-seed layer 402/seed layer 403 interface largely defect-free. As a result, good adhesion exists between the pre-seed layer 402 and seed layer 403, creating a strong bond at the interface between these two layers 402, 403 and leading to less voiding and higher reliability. The seed layer 403 is typically quite thick relative to the pre-seed layer 402, being deposited to a thickness of between about 50-2000 Å. In very narrow trenches, this thick Cu seed layer 403 may serve to form the final copper interconnect layer, requiring no further Cu deposition. However, in most applications a thicker bulk copper interconnect layer 404 is subsequently deposited to complete interconnect structures.
Referring to FIGS. 1, 4, and 8, a bulk deposition of copper is performed in Step 110. Typically, a bulk copper interconnect layer 404 is formed, either by CVD or electroplating (EP). If the interconnect layer 404 is formed using CVD techniques, the wafer 200 need not be removed from the CVD process tool 500. Typically, this interconnect layer 404 is formed until the in-laid region 220 is filled with interconnect material.
Referring to FIGS. 1 and 9, after bulk copper deposition the wafer 200 is typically subjected to further processing (as in Step 112). For example, excess copper of the interconnect layer 404 may be removed, typically using chemical mechanical polishing (CMP) to remove the topmost regions of the interconnect layer 404. Typically, this is followed by the formation of a top sealing layer 405. The top sealing layer 405 is typically formed of a material having high resistance to Cu diffusion to prevent the Cu from "poisoning" the wafer 200. A typical material is, for example, Si3N4. In fact, the same materials used in the formation of the barrier layer 401 may be used to form the top sealing layer 405 if selectively deposited on Cu line surfaces. 
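Taken together, the thickness windows stated in the description (barrier 20-300 Å, pre-seed 1.5-10 Å, seed 50-2000 Å) define the layered structure beneath the bulk fill. A small validation sketch follows; the class and function names are illustrative, not part of any real process-control software.

```python
# Hypothetical sketch of the layered interconnect stack with the
# thickness windows stated in the text (all values in angstroms).
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    min_angstrom: float
    max_angstrom: float

STACK = [
    Layer("barrier (ALD: TiN, WN, TaN, Ta, or silicides)", 20, 300),
    Layer("pre-seed (ALE Cu)", 1.5, 10),
    Layer("seed (CVD/MOCVD Cu)", 50, 2000),
]

def check_stack(thicknesses):
    """True when each deposited thickness lies in its layer's window."""
    return len(thicknesses) == len(STACK) and all(
        layer.min_angstrom <= t <= layer.max_angstrom
        for layer, t in zip(STACK, thicknesses)
    )

# Example: the preferred 70 A WN barrier, 5 A pre-seed, and a 500 A seed.
```

A check of this kind would pass the preferred embodiment (70 Å barrier, 5 Å pre-seed) and flag any layer deposited outside its stated window.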
Additionally, other structures may be subsequently formed atop the substrate 200 after the interconnect is completed, for example, to form multi-level circuit structures. A further advantage of the method of the present invention is that it is extendible to extremely small geometries (e.g., less than 0.1 [mu]m) with high aspect ratios. A further advantage is that the method of the present invention can be accomplished in a single chamber of a CVD process tool 500.

Further, the barrier layer 401, pre-seed layer 402, and seed layer 403 provide a novel underlayer upon which high quality interconnect layers 404 may be fabricated. Additionally, the barrier layer 401, pre-seed layer 402, seed layer 403, bulk interconnect layer 404, and sealing layer 405, together, form a structure which demonstrates a low incidence of voiding, high electromigration resistance, and high reliability.

It will be appreciated that many modifications can be made to the embodiments described above without departing from the spirit and scope of the invention. In particular, it should be noted that the barrier layer 401 may be formed of a wide range of materials including, but not limited to, titanium nitride (TiN), tungsten nitride (WN), tantalum nitride (TaN), tantalum (Ta), or silicide compounds thereof. Further, the copper pre-seed layer 402 may be formed using a wide range of precursor materials including, but not limited to, Cu(II)[beta]-diketonates such as Cu(II)-2,2,6,6-tetramethyl-3,5-heptanedionate or Cu(II)-1,1,1,5,5,5-hexafluoro-2,4-pentanedionate.
Still further, the copper seed layer 403 may be formed using a wide range of precursor materials including, but not limited to, Cu(I)[beta]-ketonates such as Cu(I)-trimethylvinylsilyl hexafluoroacetylacetonate (Cu(hfac)(tmvs)).

Information as herein shown and described in detail is fully capable of attaining the above-described object of the invention and the presently preferred embodiment of the invention, and is, thus, representative of the subject matter which is broadly contemplated by the present invention. The scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art, and is to be limited, accordingly, by nothing other than the appended claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described preferred embodiment and additional embodiments that are known to those of ordinary skill in the art are hereby expressly incorporated by reference and are intended to be encompassed by the present claims. Moreover, no requirement exists for a device or method to address each and every problem sought to be resolved by the present invention in order for such to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. However, it should be readily apparent to those of ordinary skill in the art that various changes and modifications in form, semiconductor material, and fabrication material detail may be made without departing from the spirit and scope of the inventions as set forth in the appended claims. No claim herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."
Machine-readable media, methods, and apparatus are described to issue transactions to a memory. In some embodiments, a memory controller may select pending transactions based upon selection criteria and may issue the selected transactions to memory. Further, the memory controller may close a page of the memory accessed by a write transaction in response to determining that the write transaction is the last write transaction of a series of one or more write transactions. |
What is claimed is: 1. A method comprising detecting a last write of a series of writes to the memory, and closing a page of the memory in response to detecting the last write to the memory. 2. The method of claim 1 wherein the last write is detected in response to determining that a write buffer is empty. 3. The method of claim 1 wherein the last write is detected in response to determining that a read to the memory is pending. 4. The method of claim 1 wherein the last write is detected in response to determining that a number of pending writes has a predetermined relationship to a lower threshold. 5. The method of claim 1 wherein closing comprises issuing a command to the memory that closes the page. 6. The method of claim 1 wherein closing comprises issuing the last write to the memory with a qualifier that causes the memory to close the page. 7. A method comprising detecting a last write to a page of memory, and closing the page of the memory in response to detecting the last write to the page of memory. 8. The method of claim 7 wherein the last write is detected in response to determining that the write buffer is empty. 9. The method of claim 7 wherein the last write is detected in response to determining that a read to the memory is pending. 10. The method of claim 7 wherein the last write is detected in response to determining that a number of pending writes has a predetermined relationship to a lower threshold. 11. The method of claim 7 wherein the last write is detected in response to determining that a number of pending writes to the page of memory has a predetermined relationship to a lower threshold. 12. The method of claim 7 wherein closing comprises issuing a command to the memory that closes the page. 13. The method of claim 7 wherein closing comprises issuing the last write to the memory with a qualifier that causes the memory to close the page after the last write. 14.
A memory controller comprising a read buffer to store read transactions to be issued to a memory, a write buffer to store write transactions to be issued to the memory, a memory interface to issue read transactions and write transactions to the memory and to close a page of the memory that was accessed by a last write transaction of a series of write transactions, and control logic to select transactions from the read buffer and the write buffer based upon selection criteria, to cause the memory interface to issue the selected transactions, and to instruct the memory interface to close the page accessed by the last write transaction of the series in response to detecting the last write transaction of the series. 15. The memory controller of claim 14 wherein the control logic determines that a write transaction is the last write transaction of the series in response to determining that a write buffer is empty. 16. The memory controller of claim 14 wherein the control logic determines that a write transaction is the last write transaction of the series in response to determining that the read buffer comprises at least one read transaction. 17. The memory controller of claim 14 wherein the control logic determines that a write transaction is the last write transaction of the series in response to determining that the write buffer comprises a lower threshold of write transactions to the memory. 18. The memory controller of claim 14 wherein the control logic determines that a write transaction is the last write transaction of the series in response to determining that the write buffer comprises a lower threshold of write transactions to the page of memory. 19. The memory controller of claim 14 wherein the control logic determines that a write transaction is the last write transaction of the series in response to determining that the write buffer comprises no write transactions to the page of memory. 20.
The memory controller of claim 14 wherein the memory interface is to issue a command to the memory to close the page accessed by the last write transaction of the series. 21. The memory controller of claim 14 wherein the memory interface is to issue the last write transaction to the memory with a qualifier that causes the memory to close the page accessed by the last write transaction. 22. A system comprising volatile random access memory, a processor to issue read transactions and write transactions, and a memory controller to buffer the read transactions and the write transactions issued by the processor, to issue the read transactions and write transactions to the volatile random access memory based upon selection criteria, and to close a page accessed by a last write transaction of a series of write transactions. 23. The system of claim 22 wherein the memory controller detects the last write in response to determining that no write transactions are buffered. 24. The system of claim 22 wherein the memory controller detects the last write in response to determining that at least one read transaction is pending. 25. The system of claim 22 wherein the memory controller detects the last write in response to determining that the memory controller has a lower threshold of write transactions buffered. 26. The system of claim 22 wherein the memory controller detects the last write in response to determining that the memory controller has a lower threshold of write transactions to the page buffered. 27. The system of claim 22 wherein the memory controller issues a precharge command to the memory to close the page accessed by the last write transaction. 28. The system of claim 22 wherein the memory controller issues the last write to the memory with an auto-precharge qualifier to cause the memory to close the page accessed by the last write. 29.
A machine readable medium comprising a plurality of instructions that in response to being executed result in a computing device detecting a last write of a series of writes to the memory, and signaling that a page of the memory is to be closed in response to detecting the last write to the memory. 30. The machine readable medium of claim 29 wherein the plurality of instructions further result in the computing device determining that a write of the series of writes is the last write of the series in response to determining that a write buffer is empty. 31. The machine readable medium of claim 29 wherein the plurality of instructions further result in the computing device determining that a write of the series of writes is the last write of the series in response to determining that a read to the memory is pending. 32. The machine readable medium of claim 29 wherein the plurality of instructions further result in the computing device determining that a write of the series of writes is the last write of the series in response to determining that a number of pending writes has a predetermined relationship to a lower threshold.
BUFFERED WRITES AND MEMORY PAGE CONTROL

BACKGROUND

Computing devices typically comprise a processor, memory, and a memory controller to provide the processor as well as other components of the computing device with access to the memory. The performance of such computing devices is strongly influenced by the memory latency of the computing device. In general, the "memory read latency" is the length of time between when the processor requests the memory controller to retrieve data from the memory and when the memory controller provides the processor with the requested data. Similarly, the "memory write latency" is generally the length of time between when the processor requests the memory controller to write data to the memory and when the memory controller indicates to the processor that the data has been or will be written to the memory. To reduce the effect of memory latency on the computing device, memory controllers typically buffer write transactions of the processor and later write the data of the transaction to memory at a more appropriate time. As far as the processor is concerned, the write transaction is complete once buffered by the memory controller. The processor, therefore, may continue without waiting for the data of the write transaction to be actually written to memory. Conversely, read transactions are not complete from the standpoint of the processor until the data is read from memory and returned to the processor. Accordingly, performance of a computing device is typically more dependent upon read latency than write latency. In light of this, memory controllers tend to favor servicing read transactions over servicing write transactions. Moreover, memory latency is influenced by the proportion of page-hit, page-miss, and page-empty transactions encountered. Computing devices typically comprise hierarchical memory arrangements in which memory is arranged in channels, ranks, banks, pages, and columns.
In particular, each channel may comprise one or more ranks, each rank may comprise one or more banks, and each bank may comprise one or more pages. Further, each page may comprise one or more columns. When accessing memory, the memory controller typically opens a page of the memory and then accesses one or more columns of the opened page. For a page-hit access, the memory controller may leave a page open after accessing a column of the page for a previous memory request and may access a different column of the open page. For a page-miss access, the memory controller may close an open page of a bank, may open another page of the same bank, and may access a column of the newly opened page. A page-miss access generally has about three times the latency of a page-hit access. For a page-empty access, the memory controller may open a closed page of a bank, and may access a column of the newly opened page for the memory transaction. A page-empty access generally has about twice the latency of a page-hit access.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1 illustrates an embodiment of a computing device. FIG. 2 illustrates an embodiment of a memory controller of the computing device of FIG. 1. FIG. 3 illustrates an embodiment of a method that may be used by the memory controller of FIG. 2 to schedule transactions and close pages of the memory.
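The relative page-access costs described in the background (a page-miss at roughly three times, and a page-empty at roughly twice, the latency of a page-hit) can be sketched as a simple model. This is an illustrative sketch only; the function names and the normalized latency unit are assumptions and do not appear in the specification.

```python
# Illustrative model of the page-access cases described above. The cost
# of a page-hit is normalized to 1; per the description, a page-empty
# access costs about twice and a page-miss about three times a page-hit.

PAGE_HIT, PAGE_EMPTY, PAGE_MISS = "hit", "empty", "miss"

def classify_access(open_page, requested_page):
    """Classify an access to a bank given its currently open page.

    open_page is None when the bank has no open page.
    """
    if open_page is None:
        return PAGE_EMPTY          # open the page, then access the column
    if open_page == requested_page:
        return PAGE_HIT            # page already open; access directly
    return PAGE_MISS               # close the open page, open, then access

def access_latency(kind, hit_cost=1.0):
    """Approximate latency in units of a page-hit access."""
    return {PAGE_HIT: 1.0, PAGE_EMPTY: 2.0, PAGE_MISS: 3.0}[kind] * hit_cost
```

Under this model, a policy that converts likely page-misses into page-empties (by closing a page early) trades a 3x access for a 2x access, which is the arithmetic motivating the detailed description below.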
DETAILED DESCRIPTION

The following description describes techniques that attempt to decrease overall memory latency by intelligently closing pages of the memory. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to "one embodiment", "an embodiment", "an example embodiment", etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc. An example embodiment of a computing device 100 is shown in FIG. 1. The computing device 100 may comprise one or more processors 102. The processors 102 may perform actions in response to executing instructions. For example, the processors 102 may issue transactions such as memory read transactions and memory write transactions on a processor bus 104. The computing device 100 may further comprise a chipset 106. The chipset 106 may comprise one or more integrated circuit packages or chips that couple the processors 102 to memory 108, Basic Input/Output System (BIOS) firmware 110 and other components 112 (e.g., a mouse, keyboard, video controller, hard disk, floppy disk, etc.). The chipset 106 may comprise a processor bus interface 114 to receive transactions from the processors 102 and to issue transactions to the processors 102 via the processor bus 104. The chipset 106 may further comprise a memory controller 116 to issue read and write transactions to the memory 108 via a memory bus 118.
The chipset 106 may further comprise one or more component interfaces (not shown) to access the other components 112 via buses 120 such as, for example, peripheral component interconnect (PCI) buses, accelerated graphics port (AGP) buses, universal serial bus (USB) buses, low pin count (LPC) buses, and/or other I/O buses. In one embodiment, the BIOS firmware 110 comprises routines which the computing device 100 may execute during system startup in order to initialize the processors 102, chipset 106, and other components of the computing device 100. Moreover, the BIOS firmware 110 may comprise routines or drivers which the computing device 100 may execute to communicate with one or more components of the computing device 100. The memory 108 may comprise memory devices providing addressable storage locations that the memory controller 116 may read data from and/or write data to. The memory 108 may comprise one or more different types of memory devices such as, for example, dynamic random access memory (DRAM) devices, synchronous dynamic random access memory (SDRAM) devices, double data rate (DDR) SDRAM devices, quad data rate (QDR) SDRAM devices, or other volatile or non-volatile memory devices. Moreover, the memory 108 may be arranged in a hierarchical manner. For example, the memory 108 may be arranged in channels, ranks, banks, pages, and columns. As depicted in FIG. 2, the memory controller 116 may comprise a write-cache 198 that comprises a read buffer 200 and a write buffer 202. The memory controller 116 may further comprise control logic 204 and a memory interface 206. The read buffer 200 may buffer the address and data of a read transaction until the requested data is retrieved from the memory 108 and returned to the requester (e.g., processor 102). Similarly, the write buffer 202 may buffer the address and data of a write transaction until the data is written to the memory 108.
The read buffer 200 and write buffer 202 may each support buffering of one or more transactions. The control logic 204 may select a transaction from the buffers 200, 202 based upon various criteria and may request the memory interface 206 to service the selected transaction. Computer performance is typically more dependent upon memory read performance than memory write performance. Accordingly, the control logic 204 in one embodiment in general favors read transactions over write transactions and thus generally causes write transactions to wait until the read buffer is empty. In another embodiment, the control logic 204 may further wait until data needs to be evicted from the write-cache 198 before writing data of the write buffer 202 back to the memory 108. The control logic 204, however, may select write transactions over read transactions under certain conditions such as, for example, the write buffer 202 becoming full or the number of pending write transactions in the write buffer 202 having a predetermined relationship to an upper threshold that indicates that the write buffer 202 is nearly full. In that case, the control logic 204 may completely flush the write buffer, thus presenting all pending write transactions to the memory interface 206 for servicing. The control logic 204 may alternatively partially flush the write buffer 202. For example, the control logic 204 may present the memory interface 206 with a predetermined number of write transactions (e.g., 4) or may present the memory interface 206 with write transactions from the write buffer 202 until the number of pending write transactions has a predetermined relationship with a lower threshold. The control logic 204 may further satisfy a read transaction with data stored in the write cache 198.
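The upper/lower threshold behavior just described (begin draining writes once the buffer is nearly full, stop once a lower watermark is reached) can be sketched as follows. The class name, the specific threshold values, and the queue representation are assumptions for illustration, not the controller's implementation.

```python
# Sketch of the partial-flush policy described above: reads are normally
# favored, but once the write buffer reaches an upper threshold the
# controller drains writes until a lower threshold is reached.
from collections import deque

class WriteDrainPolicy:
    def __init__(self, upper=14, lower=8):
        self.writes = deque()       # pending (buffered) write transactions
        self.upper = upper          # nearly-full watermark: start draining
        self.lower = lower          # stop draining at this occupancy
        self.draining = False

    def post_write(self, w):
        self.writes.append(w)

    def issue_write(self):
        """Hand the oldest pending write to the memory interface."""
        return self.writes.popleft()

    def select_write_over_read(self):
        """Return True when a pending write should preempt reads."""
        if len(self.writes) >= self.upper:
            self.draining = True    # buffer nearly full: enter flush mode
        if self.draining and len(self.writes) <= self.lower:
            self.draining = False   # lower watermark reached: back to reads
        return self.draining
```

The hysteresis between the two thresholds avoids flip-flopping between read and write service on every transaction, which matches the "drain until a lower threshold" behavior in the text.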
Satisfying the read transaction with data from the write buffer 202 may reduce the latency of the read transactions since the memory controller 116 is able to satisfy the request without retrieving the data from memory 108. Further, servicing read transactions with cached data of the write buffer 202 may help reduce the latency of other read transactions due to fewer read transactions consuming bandwidth between the memory controller 116 and the memory 108. Furthermore, the control logic 204 may combine, in the write buffer 202, data of write transactions that target the same locations of the memory 108. Again, combining write transactions in the write buffer 202 may reduce the latency of memory transactions since write combining may reduce the number of write transactions between the memory controller 116 and the memory 108. The memory interface 206 may read data from memory 108 in response to read transactions and may write data to memory 108 in response to write transactions. In particular, the memory interface 206 may decode an address of a transaction and may apply memory select signals to the memory in order to open pages of the memory 108 for reading and/or writing. Moreover, the memory interface 206 may close an opened page by issuing a precharge command or by issuing a transaction to the memory 108 with an auto-precharge qualifier that causes the memory 108 to close the page of the transaction after servicing the transaction. As indicated above, the control logic 204 favors read transactions over write transactions. Accordingly, write transactions tend to be interspersed between read transactions that were issued by the processor 102 considerably after the write transactions. Read transactions in such an environment tend to exhibit poor spatial locality of reference to the write transactions due to their temporal separation.
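The two write-cache behaviors described above, combining writes that target the same location and satisfying reads from buffered write data, might be sketched as below. The structure and names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of write combining and read forwarding in the write cache:
# a later write to the same address replaces the buffered data (one
# transaction reaches memory instead of two), and a read that hits a
# buffered write is answered without a memory access.

class WriteCache:
    def __init__(self):
        self.pending = {}           # address -> latest buffered data

    def buffer_write(self, addr, data):
        # A second write to the same address combines with the first:
        # only the most recent data remains to be written back.
        self.pending[addr] = data

    def try_read(self, addr):
        """Return buffered data on a hit, or None to fall through to memory."""
        return self.pending.get(addr)
```

Both behaviors reduce traffic on the memory bus, which is the latency benefit the description attributes to them.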
If pages accessed by write transactions are left open, then read transactions that follow the last write transactions of write transaction series tend to result in a higher proportion of page-miss accesses to page-hit accesses. In an effort to reduce overall memory latency, the control logic 204 in one embodiment closes a page accessed by a last write transaction to reduce the likelihood that a read transaction following the last write transaction results in a page-miss access. As stated previously, a page-empty access has about twice the latency of a page-hit access, but a page-miss access has about thrice the latency of a page-hit access. Therefore, if leaving open the page accessed by the last write transaction of one or more consecutive write transactions would result in more page-miss accesses than page-hit accesses, then closing the page would reduce the latency of read transactions following write transactions. Therefore, as shown in FIG. 3, the memory controller 116 in an attempt to reduce memory latency may schedule transactions and close pages in a manner that attempts to reduce overall memory latency experienced by the processor 102. The method of FIG. 3 in general favors read transactions over write transactions and generally closes pages of the memory 108 that were accessed by the last write transactions of a series of one or more write transactions. However, the memory controller 116 may close pages of the memory 108 based upon additional criteria. In response to determining that there is an available time slot for issuing a memory transaction, the control logic 204 in block 300 may determine whether there are any pending memory transactions. In particular, the control logic 204 may determine that there are no pending memory transactions if neither the read buffer 200 nor the write buffer 202 comprises transactions to be issued to the memory 108.
In response to determining there are no pending transactions, the control logic 204 may enter an idle state or exit the scheduling method of FIG. 3 until the next available time slot for issuing a memory transaction. Otherwise, the control logic 204 in block 302 may select a transaction from the read buffer 200 or the write buffer 202 based upon selection criteria or rules. For example, in one embodiment, the control logic 204 may favor read transactions over write transactions and may select a read transaction if the read buffer 200 comprises a read transaction. In response to the read buffer 200 comprising no read transactions to be issued to the memory 108, the control logic 204 may select a write transaction from the write buffer 202. In another embodiment, the control logic 204 may further select a write transaction from the write buffer 202 even though the read buffer 200 comprises pending read transactions. In particular, the control logic 204 may select a write transaction in response to determining that the write buffer 202 is full or in response to determining that the write buffer 202 comprises an upper threshold of write transactions that indicates the write buffer 202 is nearly full. In yet another embodiment, after detecting that the write buffer 202 is full or nearly full, the control logic 204 may continue to select write transactions over read transactions until a predetermined number (e.g., 4) of write transactions have been selected, until the write buffer 202 is empty, or until the write buffer 202 comprises a lower threshold of write transactions. For example, the lower threshold may correspond to the write buffer 202 being half filled. In block 304, the control logic 204 may determine whether the selected transaction is a read transaction or a write transaction.
In response to determining that the selected transaction is a read transaction, the control logic 204 in block 306 may cause the memory interface 206 to issue the selected read transaction to the memory to obtain the requested data from the memory 108. Otherwise, the control logic 204 in block 308 may determine whether the selected write transaction is the last write transaction of a series of one or more write transactions. For example, in one embodiment, the control logic 204 may determine that the selected write transaction is the last write transaction in response to determining that the write buffer 202 comprises no other pending transactions. In another embodiment, the control logic 204 may further determine that the selected write transaction is the last write transaction in response to determining that the read buffer 200 comprises at least one read transaction to issue to the memory 108. The control logic 204 may further determine that the selected write transaction is the last write transaction of the series in response to determining that the write buffer comprises no more write transactions to the page of memory 108 to be accessed by the selected write transaction. The control logic 204 may also determine that the selected write transaction is the last write transaction in response to the memory interface 206 issuing a predetermined number of consecutive write transactions to the memory 108. Further yet, the control logic 204 may determine that the selected write transaction is the last write transaction of the series in response to determining that the write buffer 202 comprises a lower threshold of write transactions. It should be appreciated that the control logic 204 may determine the last write transaction of a series of one or more write transactions based upon one or more of the above identified criteria and/or other criteria.
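The last-write criteria enumerated above can be collected into a single illustrative predicate. Any one test may suffice on its own, and a real controller might use only a subset; the function name, the dictionary representation of a buffered write, and the threshold values below are assumptions for illustration.

```python
# Sketch of the block-308 decision: is the selected write the last write
# of its series? Each test mirrors one criterion from the description.
# write_buffer holds the writes still pending *after* the selection.

def is_last_write(write_buffer, read_buffer, page,
                  consecutive_writes=0, burst_limit=4, lower_threshold=2):
    if len(write_buffer) == 0:
        return True                                   # buffer now empty
    if len(read_buffer) > 0:
        return True                                   # a read is waiting
    if not any(w["page"] == page for w in write_buffer):
        return True                                   # no more writes to this page
    if consecutive_writes >= burst_limit:
        return True                                   # write burst quota reached
    if len(write_buffer) <= lower_threshold:
        return True                                   # lower watermark reached
    return False
```

Marking a write as "last" is what triggers the page close in blocks 312 and 314, so a permissive predicate closes pages more aggressively and a strict one keeps them open longer.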
In response to determining that the selected write transaction is not the last write transaction of a series of write transactions, the control logic 204 in block 310 may cause the memory interface 206 to issue the selected write transaction to the memory 108 in order to write the data supplied by the write transaction to the memory 108. Otherwise, the control logic 204 in block 312 may instruct the memory interface 206 to close a page accessed by the write transaction. In response to being instructed to close the page accessed by the write transaction, the memory interface 206 in block 314 may issue the selected write transaction to the memory 108 and may close the page accessed by the write transaction. In one embodiment, the memory interface 206 may issue the write transaction to the memory 108 and then may issue a precharge command to the memory 108 to close the page after the write transaction. In another embodiment, the memory interface 206 may issue the write transaction to the memory 108 with an auto-precharge qualifier that causes the memory 108 to close the page accessed by the write transaction after the data of the write transaction is written to the page. The computing device 100 may perform all or a subset of the example method in response to executing instructions of a machine readable medium such as, for example, read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and/or electrical, optical, acoustical or other forms of propagated signals such as, for example, carrier waves, infrared signals, digital signals, analog signals. Furthermore, while the example method is illustrated as a sequence of operations, the computing device 100 in some embodiments may perform operations of the method in parallel or in a different order.
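The two page-closing mechanisms of block 314 (a separate precharge command following the write, or a single write carrying an auto-precharge qualifier) can be sketched as the command sequences sent to the memory. The command mnemonics and the tuple encoding below are illustrative assumptions, not a specific memory protocol.

```python
# Sketch of the two ways described above to close a page after the last
# write of a series: an explicit precharge after the write, or a write
# with an auto-precharge qualifier so the memory closes the page itself.

def issue_last_write(write, use_auto_precharge):
    """Return the command sequence the interface sends for a last write."""
    if use_auto_precharge:
        # Single command; the memory closes the page after writing.
        return [("WRITE_AP", write["page"], write["col"])]
    # Explicit close: the write first, then a precharge for that page.
    return [("WRITE", write["page"], write["col"]),
            ("PRECHARGE", write["page"])]
```

Either sequence leaves the bank closed, so the next access to it is a page-empty rather than a potential page-miss, which is the latency trade the description argues for.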
While certain features of the invention have been described with reference to example embodiments, the description is not intended to be construed in a limiting sense. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention.
Some embodiments include apparatuses and methods of operating the apparatuses. One of the apparatuses includes volatile memory cells located along a pillar that has a length extending in a direction perpendicular to a substrate of a memory device. Each of the volatile memory cells includes a capacitor and at least one transistor. The capacitor includes a capacitor plate. The capacitor plate is either formed from a portion of a semiconductor material of the pillar or formed from a conductive material separated from the pillar by a dielectric. |
1. A memory device comprising:

a pillar including a length extending in a direction perpendicular to a substrate, the pillar including a first segment and a second segment, each of the first segment and the second segment including a portion of semiconductor material of a first conductivity type contacting a portion of semiconductor material of a second conductivity type;

a first volatile memory cell including:

a first conductive material positioned along the first segment and separated from the first segment by a first dielectric; and

a first additional conductive material separated from the first conductive material by a first additional dielectric; and

a second volatile memory cell including:

a second conductive material positioned along the second segment and separated from the second segment by a second dielectric; and

a second additional conductive material separated from the second conductive material by a second additional dielectric, wherein:

the first dielectric and the second dielectric are part of a dielectric region extending continuously along a sidewall of the pillar;

a first conductive material portion is positioned along the portion of semiconductor material of the first conductivity type of the first segment of the pillar, and the first conductive material portion is located between the first volatile memory cell and the second volatile memory cell; and

a second conductive material portion is positioned along the portion of semiconductor material of the first conductivity type of the second segment of the pillar, and the second volatile memory cell is located between the first conductive material portion and the second conductive material portion.

2.
The memory device of claim 1, wherein:

the first conductive material forms a portion of a storage node of the first volatile memory cell; and

the second conductive material forms a portion of a storage node of the second volatile memory cell.

3. The memory device of claim 2, wherein:

the portion of semiconductor material of the second conductivity type of the first segment of the pillar forms a portion of a channel of a transistor included in the first volatile memory cell; and

the portion of semiconductor material of the second conductivity type of the second segment of the pillar forms a portion of a channel of a transistor included in the second volatile memory cell.

4. The memory device of claim 3, wherein:

the first additional conductive material includes a portion surrounding a sidewall of the first additional dielectric; and

the second additional conductive material includes a portion surrounding a sidewall of the second additional dielectric.

5. The memory device of claim 1, wherein:

the first conductivity type is n-type; and

the second conductivity type is p-type.

6. A memory device comprising:

a first pillar including a length extending in a direction perpendicular to a substrate;

a second pillar including a length extending in the direction perpendicular to the substrate;

a first volatile memory cell including:

a first conductive material including a first portion and a second portion, the first portion of the first conductive material positioned along a first segment of the first pillar and separated from the first segment of the first pillar by a first dielectric, and the second portion of the first conductive material contacting a conductive material of a first segment of the second pillar; and

a second volatile memory cell including:

a second conductive material including a first portion and a second portion, the first portion of the second conductive material positioned along a second segment of the first pillar and separated from the second segment of the first pillar by a second dielectric, and the second portion of the second conductive material contacting a conductive material of a second segment of the second pillar.

7. The memory device of claim 6, wherein:

the first conductive material forms a portion of a storage node of the first volatile memory cell; and

the second conductive material forms a portion of a storage node of the second volatile memory cell.

8. The memory device of claim 6, wherein:

the first dielectric includes a portion surrounding a sidewall of the first segment of the first pillar;

the first portion of the first conductive material surrounds a sidewall of the first dielectric;

the second dielectric includes a portion surrounding a sidewall of the second segment of the first pillar; and

the first portion of the second conductive material surrounds a sidewall of the second dielectric.

9. The memory device of claim 8, wherein:

the second portion of the first conductive material surrounds a sidewall of the conductive material of the first segment of the second pillar; and

the second portion of the second conductive material surrounds a sidewall of the conductive material of the second segment of the second pillar.

10.
The memory device of claim 6, further comprising:

a third conductive material including a first portion and a second portion, the first portion of the third conductive material positioned along a third segment of the first pillar and separated from the third segment of the first pillar by a third dielectric, and the second portion of the third conductive material positioned along a third segment of the second pillar and separated from the third segment of the second pillar by a third additional dielectric; and

a fourth conductive material including a first portion and a second portion, the first portion of the fourth conductive material positioned along a fourth segment of the first pillar and separated from the fourth segment of the first pillar by a fourth dielectric, and the second portion of the fourth conductive material positioned along a fourth segment of the second pillar and separated from the fourth segment of the second pillar by a fourth additional dielectric.

11. The memory device of claim 10, wherein:

the third conductive material forms a portion of a word line associated with the first volatile memory cell; and

the fourth conductive material forms a portion of a word line associated with the second volatile memory cell.

12. The memory device of claim 10, wherein:

the third segment and the fourth segment of the first pillar are located between the first segment and the second segment of the first pillar; and

the third segment and the fourth segment of the second pillar are located between the first segment and the second segment of the second pillar.

13. The memory device of claim 10, wherein:

the third segment of the first pillar is located between the first segment and the second segment of the first pillar, and the second segment of the first pillar is located between the third segment and the fourth segment of the first pillar; and

the third segment of the second pillar is located between the first segment and the second segment of the second pillar, and the second segment of the second pillar is located between the third segment and the fourth segment of the second pillar.

14. The memory device of claim 6, further comprising:

an additional conductive material contacting the conductive material of a third segment of the second pillar, wherein the third segment of the second pillar is between the first volatile memory cell and the second volatile memory cell.

15. The memory device of claim 6, further comprising:

an additional conductive material contacting the conductive material of the second pillar, wherein the first volatile memory cell is between the additional conductive material and the second volatile memory cell.

16. A memory device comprising:

a pillar including a length extending in a direction perpendicular to a substrate;

a first volatile memory cell positioned along a first segment of the pillar, the first volatile memory cell containing a first storage node contained in a portion of the first segment of the pillar;

a second volatile memory cell positioned along a second segment of the pillar, the second volatile memory cell containing a second storage node contained in a portion of the second segment of the pillar, each of the portion of the first segment and the portion of the second segment being formed from semiconductor material of a first conductivity type;

the pillar including a third segment positioned between the first segment and the second segment, the third segment including a portion formed from semiconductor material of a second conductivity type, wherein the portion of the third segment contacts each of the portion of the first segment and the portion of the second segment; and

a conductive material contacting the portion of the third segment, wherein:

the pillar includes an additional portion contacting a first side of the portion of the first segment, the additional portion having a first thickness in the direction of the length of the pillar; and

the portion of the third segment contacts a second side of the portion of the first segment, the portion of the third segment having a second thickness in the direction of the length of the pillar, the second thickness being greater than the first thickness.

17. A memory device comprising:

a pillar including a length extending in a direction perpendicular to a substrate;

a first volatile memory cell positioned along a first segment of the pillar, the first volatile memory cell containing a first storage node contained in a portion of the first segment of the pillar;

a second volatile memory cell positioned along a second segment of the pillar, the second volatile memory cell containing a second storage node contained in a portion of the second segment of the pillar, each of the portion of the first segment and the portion of the second segment being formed from semiconductor material of a first conductivity type; and

the pillar including a third segment positioned between the first segment and the second segment, the third segment including a portion formed from semiconductor material of a second conductivity type, wherein:

the first segment of the pillar includes an additional portion contacting the portion of the first segment; and

the second segment of the pillar includes an additional portion contacting the portion of the second segment, and each of the additional portion of the first segment and the additional portion of the second segment is formed from semiconductor material of the second conductivity type.

18. The memory device of claim 17, wherein:

the first conductivity type is n-type; and

the second conductivity type is p-type.

19.
A memory device comprising:

a substrate included in a volatile memory device; and

a pillar included in the volatile memory device, the pillar including a length extending in a direction perpendicular to the substrate, the pillar including a first portion, a second portion contacting the first portion, a third portion contacting the second portion, a fourth portion contacting the third portion, and a fifth portion contacting the fourth portion, wherein each of the first portion, the third portion, and the fifth portion is formed from semiconductor material of a first conductivity type, and each of the second portion and the fourth portion is formed from semiconductor material of a second conductivity type, wherein:

the second portion has a first thickness in the direction of the length of the pillar; and

the fourth portion has a second thickness in the direction of the length of the pillar, and the second thickness is greater than the first thickness.

20. The memory device of claim 19, wherein:

the first conductivity type is n-type; and

the second conductivity type is p-type.

21. The memory device of claim 19, further comprising a conductive material separated from the fifth portion by a dielectric.

22. The memory device of claim 20, further comprising a conductive material contacting the fifth portion.

23. The memory device of claim 19, further comprising a conductive material contacting the first portion.

24. The memory device of claim 23, further comprising a conductive material separated from the fifth portion by a dielectric.

25. The memory device of claim 23, further comprising a conductive material contacting the fifth portion.

26. The memory device of claim 19, wherein the pillar further comprises a sixth portion contacting the fifth portion and a seventh portion contacting the sixth portion, the sixth portion comprising semiconductor material of the second conductivity type, and the seventh portion comprising semiconductor material of the first conductivity type.

27.
The memory device of claim 26, further comprising:

a first conductive material contacting the first portion; and

a second conductive material contacting the seventh portion. |
Volatile Memory Device Containing Stacked Memory Cells

Related Applications

This application claims the benefit of priority from U.S. Application Serial No. 62/551,542, filed August 29, 2017, which is incorporated herein by reference in its entirety.

Background

Memory devices are widely used in computers and many other electronic items to store information. Memory devices are generally divided into two types: volatile memory devices and non-volatile memory devices. Examples of volatile memory devices include dynamic random access memory (DRAM) devices. Examples of non-volatile memory devices include flash memory devices (e.g., flash memory sticks). Memory devices typically have many memory cells. In a volatile memory device, if the power supply is disconnected from the memory device, the information stored in the memory cells is lost. In a non-volatile memory device, the information stored in the memory cells is retained even if the power supply is disconnected from the memory device.

The description herein relates to volatile memory devices. Most conventional volatile memory devices have a planar structure (i.e., a two-dimensional structure) in which memory cells are formed in a single level of the device. As the demand for device storage density increases, many conventional techniques provide ways to shrink the size of the memory cell in order to increase device storage density for a given device area. However, if memory cells are to be scaled down to a certain size, physical limitations and manufacturing constraints may pose challenges to such conventional techniques.
Unlike some conventional memory devices, the memory devices described herein include features that may overcome the challenges faced by conventional techniques.

Brief Description of the Drawings

Figure 1 illustrates a block diagram of an apparatus containing volatile memory cells in the form of a memory device, in accordance with some embodiments described herein.

Figure 2A shows a schematic diagram of a portion of a memory device including a memory array, in accordance with some embodiments described herein.

Figure 2B shows a schematic diagram of a portion of the memory device of Figure 2A.

Figure 2C is a graph illustrating example values of voltages of signals provided to the memory device of Figure 2B during example write and read operations, in accordance with some embodiments described herein.

Figure 2D illustrates a side view (e.g., cross-sectional view) of the structure of a portion of the memory device schematically shown in Figure 2B, in which, in accordance with some embodiments described herein, the memory cell structure of each memory cell can include segments from two pillars.

Figures 2E-2I illustrate different portions (e.g., partial top views) of the memory device of Figure 2D, including some elements of the memory device viewed from different section lines of Figure 2D, in accordance with some embodiments described herein.

Figure 3A shows a schematic diagram of a portion of a memory device that may be a variation of the memory device of Figure 2A, in accordance with some embodiments described herein.

Figure 3B shows a schematic diagram of a portion of the memory device of Figure 3A.

Figure 3C is a graph illustrating example values of voltages of signals provided to the memory device of Figure 3B during example write and read operations, in accordance with some embodiments described herein.

Figure 3D is a graph illustrating example values of voltages of signals provided to the memory device of Figure 3B during additional example write and read operations of the memory device, in accordance with some embodiments described herein.

Figure 3E illustrates a side view (e.g., cross-sectional view) of the structure of a portion of the memory device schematically shown in Figure 3B, in accordance with some embodiments described herein.

Figure 3F illustrates a portion (e.g., partial top view) of the memory device of Figure 3E, in accordance with some embodiments described herein.

Figure 4A shows a schematic diagram of a portion of a memory device including memory cells, in which the memory cell structure of each memory cell may include portions from a single pillar, in accordance with some embodiments described herein.

Figure 4B illustrates a side view (e.g., cross-sectional view) of the structure of a portion of the memory device schematically shown in Figure 4A, in accordance with some embodiments described herein.

Figure 4C shows a portion of the memory device of Figure 4B.

Figures 4D-4F illustrate different portions (e.g., partial top views) of the memory device of Figure 4C, including some elements of the memory device viewed from different section lines of Figure 4C, in accordance with some embodiments described herein.

Figure 4G shows a schematic diagram of a portion of the memory device of Figure 4A.

Figure 4H is a graph illustrating example values of voltages of signals provided to portions of the memory device of Figure 4G during three different example write operations, in accordance with some embodiments described herein.

Figure 4I is a flowchart illustrating different stages of a read operation for the memory device of Figure 4A, in accordance with some embodiments described herein.

Figure 4J shows a schematic diagram of a portion of the memory device of Figure 2A.

Figure 4K is a graph showing the values of the signals in Figure 4J during the pre-read phase based on the impact ionization (II) current mechanism.

Figure 4K' is a graph illustrating the values of the signals in Figure 4J during a pre-read phase using an alternative pre-read scheme based on the gate-induced drain leakage (GIDL) current mechanism.

Figure 4L shows a schematic diagram of a portion of the memory device of Figure 4A.

Figure 4M is a graph showing the values of the signals in Figure 4L during a read phase using a read scheme based on threshold voltage shift.

Figure 4M' is a graph showing the values of the signals in Figure 4L during a read phase using an alternative read scheme based on the properties of a built-in bipolar junction transistor (BJT) (e.g., self-latching).

Figure 4N is a diagram showing the relationship between some of the signals in Figure 4M.

Figure 4O shows a schematic diagram of a portion of the memory device of Figure 4A.

Figure 4P is a graph showing the values of the signals in Figure 4O during the reset phase.

Figure 4Q shows a schematic diagram of a portion of the memory device of Figure 4A.

Figure 4R is a graph showing the values of the signals in Figure 4Q during the recovery phase.

Figure 5A shows a schematic diagram of a portion of another memory device containing memory cells whose memory cell structure comes from a single pillar, in accordance with some embodiments described herein.

Figure 5B illustrates a side view (e.g., cross-sectional view) of the structure of a portion of the memory device schematically shown in Figure 5A, in accordance with some embodiments described herein.

Figure 5C shows a portion of the memory device of Figure 5B.

Figure 5D shows a schematic diagram of a portion of the memory device of Figure 5A including two memory cells.

Figure 5E is a graph illustrating example values of voltages of signals provided to a portion of the memory device of Figure 5D during three different example write operations, in accordance with some embodiments described herein.

Figure 5F is a flowchart illustrating different stages of a read operation of the memory device of Figures 5A-5C, in accordance with some embodiments described herein.

Figure 5G shows a schematic diagram of a portion of the memory device of Figure 5A.

Figure 5H is a graph showing the values of the signals in Figure 5G during the pre-read phase based on the impact ionization current mechanism.

Figure 5H' is a graph showing the values of the signals in Figure 5G during a pre-read phase using an alternative pre-read scheme based on the GIDL current mechanism.

Figure 5I shows a schematic diagram of a portion of the memory device of Figure 5A.

Figure 5J is a graph showing the values of the signals in Figure 5I during a read phase using a read scheme based on threshold voltage shift.

Figure 5J' is a graph showing the values of the signals in Figure 5I during a read phase using an alternative read scheme based on the properties of a built-in bipolar junction transistor (e.g., self-latching).

Figure 5K shows a schematic diagram of a portion of the memory device of Figure 5A.

Figure 5L is a graph showing the values of the signals in Figure 5K during the reset phase.

Figure 5M shows a schematic diagram of a portion of the memory device of Figure 5A.

Figure 5N is a graph showing the values of the signals in Figure 5M during the recovery phase.

Figure 6 illustrates the structure of a portion of a memory cell positioned along a segment of a pillar of a memory device, in accordance with some embodiments described herein.

Detailed Description

The memory devices described herein include volatile memory cells arranged in a 3D (three-dimensional) structure. In a 3D structure, memory cells are stacked vertically on top of each other in multiple levels of the memory device. Because the memory cells are stacked vertically, the storage density of the described memory devices may be higher than that of conventional volatile memory devices for a given device area. The 3D structure also allows the storage density of the described memory devices to be increased without substantially reducing the feature size (e.g., memory cell size). The memory devices described herein may have an effective feature size of 2F² or less. Different variations of the described memory devices are discussed in detail below with reference to Figures 1 to 6.

Figure 1 illustrates a block diagram of an apparatus containing volatile memory cells in the form of a memory device 100, in accordance with some embodiments described herein. Memory device 100 includes memory array 101, which may contain memory cells 102. Memory device 100 is a volatile memory device (e.g., a DRAM device), such that memory cells 102 are volatile memory cells. Therefore, if supply power (e.g., supply voltage VDD) is disconnected from memory device 100, information stored in memory cells 102 may be lost (e.g., become invalid). In the following, VDD refers to a certain voltage level; however, that voltage level is not limited to the supply voltage (e.g., VDD) of the memory device (e.g., memory device 100). For example, if a memory device (e.g., memory device 100) has an internal voltage generator (not shown in Figure 1) that generates an internal voltage based on VDD, such an internal voltage may be used in place of VDD.

In the physical structure of memory device 100, memory cells 102 may be formed vertically (e.g., stacked on top of each other in different layers) in different levels above a substrate (e.g., a semiconductor substrate) of memory device 100.
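The density benefit of vertical stacking can be illustrated with a back-of-the-envelope calculation. A cell with an effective footprint of 2F² occupies 2·F² of die area per level, so stacking L levels multiplies the bits per unit area by L. The numbers below are illustrative assumptions, not figures from this document:

```python
def bits_per_um2(feature_nm, levels, cell_area_factor=2.0):
    """Approximate bits per square micrometre for a stacked array whose
    cells occupy cell_area_factor * F^2 of die area per level.
    (Illustrative model only; ignores array overhead and periphery.)"""
    f_um = feature_nm / 1000.0                      # feature size F in um
    return levels / (cell_area_factor * f_um ** 2)  # L / (2 * F^2)

# At F = 100 nm, one level of 2F^2 cells gives 50 bits/um^2;
# four stacked levels give 200 bits/um^2 in the same area.
```

The point of the model is simply that density scales linearly with the number of stacked levels, without shrinking F.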
The structure of memory array 101, including memory cells 102, may include the structures of the memory arrays and memory cells described below with reference to Figures 2A to 6.

As shown in Figure 1, memory device 100 may include access lines 104 (or "word lines") and data lines (e.g., bit lines) 105. Memory device 100 may use signals (e.g., word line signals) on access lines 104 and data lines 105 to access memory cells 102 in order to provide information (e.g., data) to be stored (e.g., written) in memory cells 102 or to be read (e.g., sensed) from memory cells 102.

Memory device 100 may include an address register 106 for receiving address information ADDR (e.g., row address signals and column address signals) on lines (e.g., address lines) 107. Memory device 100 may include row access circuitry (e.g., an x-decoder) 108 and column access circuitry (e.g., a y-decoder) 109 operable to decode address information ADDR from address register 106. Based on the decoded address information, memory device 100 can determine which memory cells 102 to access during a memory operation. Memory device 100 may perform write operations for storing information in memory cells 102 and read operations for reading (e.g., sensing) information (e.g., previously stored information) from memory cells 102. Memory device 100 may also perform operations (e.g., refresh operations) for refreshing (e.g., keeping valid) the values of information stored in memory cells 102. Each of the memory cells 102 may be configured to store information that may represent a binary zero ("0") or a binary one ("1").

Memory device 100 may receive supply voltages, including supply voltages VDD and Vss, on lines 130 and 132, respectively. Supply voltage Vss may operate at ground potential (e.g., a value of approximately zero volts). Supply voltage VDD may include an external voltage provided to memory device 100 from an external power source such as a battery or alternating-current-to-direct-current (AC-DC) converter circuitry.

As shown in Figure 1, memory device 100 may include a memory control unit 118 for controlling memory operations (e.g., read and write operations) of memory device 100 based on control signals on lines (e.g., control lines) 120. Examples of signals on lines 120 include row access strobe signal RAS*, column access strobe signal CAS*, write enable signal WE*, chip select signal CS*, clock signal CK, and clock enable signal CKE. These signals may be part of the signals provided to a dynamic random access memory (DRAM) device.

As shown in Figure 1, memory device 100 may include lines (e.g., global data lines) 112 that may carry signals DQ0 through DQN. In a read operation, the values (e.g., logic 0 and logic 1) of the information provided to lines 112 (in the form of signals DQ0 through DQN, read from memory cells 102) may be based on the values of signals DL0 and DL0* through DLN and DLN* on data lines 105. In a write operation, the values of information (e.g., "0" (binary 0) or "1" (binary 1)) provided to data lines 105 (to be stored in memory cells 102) may be based on the values of signals DQ0 through DQN.

Memory device 100 may include readout circuitry 103, selection circuitry 115, and input/output (I/O) circuitry 116. Column access circuitry 109 may selectively activate signals on lines (e.g., select lines) 114 based on address signals ADDR. Selection circuitry 115 may select the signals on data lines 105 in response to the signals on lines 114. The signals on data lines 105 may represent the values of information to be stored in memory cells 102 (e.g., during a write operation) or the values of information read (e.g., sensed) from memory cells 102 (e.g., during a read operation).

I/O circuitry 116 may be operable to provide information read from memory cells 102 to lines 112 (e.g., during a read operation) and to provide information from lines 112 (e.g., provided by an external device) to data lines 105 to be stored in memory cells 102 (e.g., during a write operation).
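The row/column decode described above, in which an x-decoder and a y-decoder split the address information ADDR to locate a cell, can be sketched as a simple index split. The row-major layout and the function name are illustrative assumptions, not details from this document:

```python
def decode_address(addr, num_rows, num_cols):
    """Split a flat cell index into (row, column) indices, as row access
    circuitry (x-decoder) and column access circuitry (y-decoder)
    conceptually do. Row-major layout is an illustrative assumption."""
    if not 0 <= addr < num_rows * num_cols:
        raise ValueError("address out of range")
    # divmod gives the row index (quotient) and column index (remainder).
    return divmod(addr, num_cols)
```

For a 4-row by 5-column array, address 13 selects row 2, column 3; the decoders would then drive the corresponding access line and select line.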
Lines 112 may include nodes within memory device 100 or pins (or solder balls) on a package in which memory device 100 may reside. Other devices external to memory device 100 (e.g., a memory controller or processor) may communicate with memory device 100 via lines 107, 112, and 120.

Memory device 100 may include other components, which are not shown so as to help focus on the embodiments described herein. Memory device 100 may be configured to include at least a portion of a memory device having the associated structures and operations described below with reference to Figures 2A-6. Those of ordinary skill in the art will recognize that memory device 100 may include other components, several of which are not shown in Figure 1 in order not to obscure the example embodiments described herein. At least a portion of memory device 100 (e.g., a portion of memory array 101) may include structures similar or identical to any of the memory devices described below with reference to Figures 2A-6.

Figure 2A shows a schematic diagram of a portion of a memory device 200 including a memory array 201, in accordance with some embodiments described herein. Memory device 200 may correspond to memory device 100 of Figure 1. For example, memory array 201 may form part of memory array 101 of Figure 1.

As shown in Figure 2A, memory device 200 may include memory cells 210-217, which are volatile memory cells (e.g., DRAM cells). Each of memory cells 210 to 217 may include two transistors T1 and T2 and a capacitor 202, such that each of memory cells 210 to 217 may be referred to as a 2T1C memory cell. For simplicity, the transistors of different memory cells among memory cells 210 to 217 are given the same labels T1 and T2, and the capacitors of different memory cells among memory cells 210 to 217 are given the same label (i.e., 202).

Memory cells 210 to 217 may be arranged in memory cell groups (e.g., strings) 2010 and 2011. Each of memory cell groups 2010 and 2011 may contain the same number of memory cells.
For example, memory cell group 2010 may include four memory cells 210, 211, 212, and 213, and memory cell group 2011 may include four memory cells 214, 215, 216, and 217. FIG. 2A shows four memory cells in each of memory cell groups 2010 and 2011 as an example; the number of memory cells in memory cell groups 2010 and 2011 may differ from four.

FIG. 2A shows directions x, y, and z that may correspond to directions x, y, and z of the structure (physical structure) of memory device 200 shown in FIGS. 2D-2I. As described in greater detail below with reference to FIGS. 2D-2I, the memory cells in each of memory cell groups 2010 and 2011 may be formed vertically (e.g., stacked on top of each other in the z-direction) over the substrate of memory device 200.

Memory device 200 (FIG. 2A) may perform write operations for storing information in memory cells 210 through 217, and read operations for reading (e.g., sensing) information from memory cells 210 through 217. Each of memory cells 210 through 217 may be randomly selected during a read or write operation. During a write operation of memory device 200, information may be stored in one or more selected memory cells. During a read operation of memory device 200, information may be read from one or more selected memory cells.

As shown in FIG. 2A, memory device 200 may include decoupling components (e.g., isolation components) 281 through 286 that are not memory cells. Particular decoupling components among decoupling components 281 through 286 may prevent current from flowing across those decoupling components (described in greater detail below). In the physical structure of memory device 200, each of decoupling components 281 through 286 may be a component (e.g., a transistor) that is permanently turned off (e.g., always placed in an off state). Alternatively, each of decoupling components 281 through 286 may be a dielectric material (e.g., silicon oxide) that prevents conduction of current through it.

As shown in FIG. 2A, memory device 200 may include a read data line (e.g., a read bit line) 220 that may be shared by memory cell groups 2010 and 2011. Memory device 200 may include a common conductive line 290 coupled to memory cell groups 2010 and 2011. Common conductive line 290 may be coupled to ground during operation of memory device 200 (e.g., read or write operations).

Read data line 220 may carry a signal (e.g., read data line signal) BL_R0. During a read operation of memory device 200, the value of signal BL_R0 (e.g., a current or voltage value) may be used to determine the value of information read (e.g., sensed) from a selected memory cell (e.g., "0" or "1"). The selected memory cell may be from memory cell group 2010 or memory cell group 2011. During a read operation of memory device 200, the memory cells of memory cell groups 2010 and 2011 may be selected one at a time to provide information read from the selected memory cell.

Memory device 200 may include respective plate lines 250 through 257. Plate lines 250, 251, 252, and 253 may carry signals PL00, PL01, PL02, and PL03, respectively. Plate lines 254, 255, 256, and 257 may carry signals PL10, PL11, PL12, and PL13, respectively.

During a read operation of memory device 200, signals PL00, PL01, PL02, and PL03 on corresponding plate lines 250 through 253 may be provided with different voltages. Depending on the value of the information stored in the selected memory cell, a certain amount (e.g., a predetermined amount) of current may or may not flow through memory cells 210, 211, 212, and 213 between read data line 220 and common conductive line 290. Based on the presence or absence of such an amount of current, memory device 200 may determine (e.g., by using detection circuitry, not shown in FIG. 2A) the value of the information stored in the selected memory cell (e.g., "0" or "1").

As shown in FIG. 2A, memory device 200 may include read select lines 260 and 261 coupled to memory cell groups 2010 and 2011, respectively. Read select lines 260 and 261 may carry signals (e.g., read select signals) RSL0 and RSL1, respectively. During a read operation of memory device 200, read select signals RSL0 and RSL1 may be selectively activated to couple the corresponding memory cell group (2010 or 2011) to read data line 220.

Memory device 200 may include select transistors 270 and 271 that may be controlled (e.g., turned on or off) by signals RSL0 and RSL1, respectively. Memory cell groups 2010 and 2011 may be selected one at a time during a read operation to read information from memory cells 210 through 217. For example, during a read operation, if one of memory cells 210, 211, 212, and 213 is selected, signal RSL0 may be activated (e.g., provided with a positive voltage) to turn on select transistor 270 and couple memory cell group 2010 to read data line 220. In this example, when signal RSL0 is activated, signal RSL1 may be deactivated (e.g., provided with zero volts) to turn off select transistor 271 so that memory cell group 2011 is not coupled to read data line 220. In another example, if one of memory cells 214, 215, 216, and 217 is selected, signal RSL1 may be activated (e.g., provided with a positive voltage) to turn on select transistor 271 and couple memory cell group 2011 to read data line 220. In this example, when signal RSL1 is activated, signal RSL0 may be deactivated (e.g., provided with zero volts) so that memory cell group 2010 is not coupled to read data line 220.

Memory device 200 may include write data lines (e.g., write bit lines) 231 and 232 that may be shared by memory cell groups 2010 and 2011. Write data lines 231 and 232 may carry signals BL_WA and BL_WB, respectively.
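The one-at-a-time group selection described above can be sketched as a small function. This is a hypothetical illustration, not from the patent: names, cell ids, and the voltage level 1.0 are stand-ins for "activated".

```python
# Sketch of the one-hot read-select logic: exactly one read select signal
# (RSL0 or RSL1) is activated so that only the memory cell group containing
# the selected cell is coupled to the shared read data line 220.

def read_select_signals(selected_cell, groups):
    """Return RSL signal levels (volts) for a read operation.

    `groups` maps a group index to the list of its memory cell ids,
    e.g. {0: [210, 211, 212, 213], 1: [214, 215, 216, 217]}.
    """
    signals = {}
    for group_index, cells in groups.items():
        active = selected_cell in cells
        # Activated select line gets a positive voltage; others get 0 V.
        signals[f"RSL{group_index}"] = 1.0 if active else 0.0
    return signals

groups = {0: [210, 211, 212, 213], 1: [214, 215, 216, 217]}
```

For example, selecting memory cell 212 yields `RSL0 = 1.0` and `RSL1 = 0.0`, matching the behavior of select transistors 270 and 271 described above.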
During a write operation of memory device 200, signals BL_WA and BL_WB may be provided with voltages whose values are based on the value (e.g., "0" or "1") of the information to be stored in one or more selected memory cells.

Two memory cells within a group may share a write data line. For example, memory cells 210 and 211 may share write data line 231, and memory cells 212 and 213 may share write data line 232. In another example, memory cells 214 and 215 may share write data line 231, and memory cells 216 and 217 may share write data line 232.

Memory device 200 may include write word lines 240 through 247 (which may be part of the access lines of memory device 200). Write word lines 240, 241, 242, and 243 may carry signals WWL00, WWL01, WWL02, and WWL03, respectively. Write word lines 244, 245, 246, and 247 may carry signals WWL10, WWL11, WWL12, and WWL13, respectively.

During write operations of memory device 200, write word lines 240, 241, 242, and 243 (associated with memory cell group 2010) may be used to provide access to memory cells 210, 211, 212, and 213, respectively, in order to store information in one or more selected memory cells of memory cell group 2010. During write operations of memory device 200, write word lines 244, 245, 246, and 247 (associated with memory cell group 2011) may be used to provide access to memory cells 214, 215, 216, and 217, respectively, in order to store information in one or more selected memory cells of memory cell group 2011.

Information stored in a particular memory cell (among memory cells 210 through 217) of memory device 200 may be based on the presence or absence of a certain amount (e.g., a predetermined amount) of charge on capacitor 202 of that particular memory cell. The amount of charge placed on capacitor 202 of a particular memory cell may be based on the value of the voltage provided to signal BL_WA or BL_WB during a write operation. During a read operation for reading information from a selected memory cell, the presence or absence of an amount of current between read data line 220 and common conductive line 290 is based on the presence or absence of an amount of charge on capacitor 202 of the selected memory cell.

FIG. 2A shows read data line 220 and write data lines 231 and 232 shared by two memory cell groups (e.g., 2010 and 2011) as an example. However, read data line 220 and write data lines 231 and 232 may be shared by other memory cell groups (not shown) of memory device 200 that are similar to memory cell groups 2010 and 2011 (e.g., memory cell groups in the y-direction). Write word lines 240, 241, 242, and 243 may be shared by other memory cell groups (not shown) in the x-direction of memory device 200. Plate lines 250, 251, 252, and 253 may likewise be shared by other memory cell groups (not shown) in the x-direction of memory device 200.

As shown in FIG. 2A, two memory cells (e.g., 212 and 213) of the same memory cell group (e.g., 2010) may share a write data line (e.g., 232). Therefore, the number of write data lines (e.g., two data lines in FIG. 2A) may be half the number of memory cells in each memory cell group (e.g., four memory cells in FIG. 2A). For example, if each memory cell group in FIG. 2A had six memory cells, memory device 200 could include three write data lines (similar to write data lines 231 and 232), each shared by a corresponding pair of the six memory cells.

As shown in FIG. 2A, memory device 200 may include other elements such as read data line 221 (and corresponding signal BL_RN), read select lines 262 and 263 (and corresponding signals RSL2 and RSL3), and select transistors 272 and 273. Such other elements are similar to those described above; therefore, for simplicity, a detailed description of them is omitted.

FIG. 2B shows a schematic diagram of a portion of memory device 200 of FIG. 2A, including memory cell group 2010.
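The data-line-sharing rule above (adjacent cells in a group share one write data line, so a group of N cells needs N/2 lines) can be expressed directly. This sketch is illustrative; the function names are not from the patent.

```python
# Sketch of the write-data-line sharing rule: cells at positions 0 and 1
# within a group share line 0, cells 2 and 3 share line 1, and so on.

def write_data_line_for(cell_index):
    """Map a cell's 0-based position within its group to the 0-based index
    of the shared write data line."""
    return cell_index // 2

def num_write_data_lines(cells_per_group):
    """Number of write data lines needed for one memory cell group."""
    return cells_per_group // 2
```

With four cells per group this gives two write data lines (231 and 232 above), and with six cells per group it gives three, matching the example in the text.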
As shown in FIG. 2B, capacitor 202 may include capacitor plates (e.g., terminals) 202a and 202b. Capacitor plate 202a may form part of (or may be the same as) the storage node (e.g., memory element) of the corresponding memory cell of memory device 200. The capacitor plate 202a of a particular memory cell may hold charge that may be used to represent the value (e.g., "0" or "1") of the information stored in that particular memory cell. Capacitor plate 202a may be coupled to a terminal (e.g., source or drain) of transistor T2 through conductive connection 203.

Capacitor plate 202b of capacitor 202 may also be the gate of transistor T1 of the corresponding memory cell; therefore, capacitor plate 202b of capacitor 202 and the gate of transistor T1 are the same element. The combination of capacitor 202 and transistor T1 may be referred to as a storage capacitor-transistor (e.g., a gain cell). During a write operation for storing information in a memory cell (e.g., memory cell 213), the storage capacitor-transistor of memory device 200 may allow a relatively small amount of charge to be stored on capacitor plate 202a to represent the value of the information (e.g., "1"). The relatively small amount of charge may allow the memory cells of memory device 200 to be relatively small in size. During a read operation in which information is read from a memory cell, the storage capacitor-transistor combination may operate to amplify the charge (e.g., current). Because the amount of charge is relatively small, amplification (e.g., gain) of the charge can improve the accuracy of information read from the memory cells of memory device 200.

During a write operation for storing information in a selected memory cell (e.g., memory cell 213), charge may be provided (or not provided) to capacitor plate 202a of the selected memory cell, depending on the value of the information to be stored in that memory cell.
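The gain-cell behavior described above can be modeled with a small class. This is a simplified, hypothetical model (not the patent's implementation): charge levels and voltages are abstract stand-ins, and the on/off rule for T1 follows the read behavior described later in this section (T1 of a selected cell turns on for a stored "0" and stays off for a stored "1").

```python
# Simplified model of the 2-transistor gain cell: write transistor T2
# places charge on storage plate 202a, and the stored charge controls
# whether read transistor T1 conducts.

class GainCell:
    def __init__(self):
        self.charge = 0.0  # charge on capacitor plate 202a

    def write(self, bit, wwl_on):
        # T2 conducts only while its write word line is activated.
        if wwl_on:
            self.charge = 1.0 if bit == 1 else 0.0

    def t1_conducts(self, plate_voltage, v_unselected=1.0):
        # An unselected cell's plate line is driven high so T1 conducts
        # regardless of the stored value; a selected cell's plate line is
        # driven low, so T1 conducts only when the cell stores "0".
        if plate_voltage >= v_unselected:
            return True
        return self.charge == 0.0
```

For example, after `write(1, wwl_on=True)`, the cell's T1 stays off when the cell is selected (low plate voltage) but conducts when unselected (high plate voltage).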
For example, if a "0" (binary 0) is to be stored in memory cell 213 (the selected memory cell), no charge may be provided to capacitor plate 202a. In this example, signal BL_WB on write data line 232 may be provided with zero volts (or alternatively a negative voltage), transistor T2 of memory cell 213 may be turned on, and transistor T2 of memory cell 212 may be turned off. In another example, if a "1" (binary 1) is to be stored in memory cell 213 (the selected memory cell), a certain amount (e.g., a predetermined amount) of charge may be provided to capacitor plate 202a of memory cell 213. In this example, signal BL_WB on write data line 232 may be provided with a positive voltage, transistor T2 of memory cell 213 may be turned on, and transistor T2 of memory cell 212 may be turned off.

During a read operation that reads (e.g., senses) information previously stored in a selected memory cell (e.g., memory cell 212) of a memory cell group (e.g., 2010), a voltage (e.g., V1 > 0) may be applied to the gate of transistor T1 of each unselected memory cell of the memory cell group (e.g., memory cells 210, 211, and 213), such that transistor T1 of each unselected memory cell is turned on regardless of the value of the information stored in that memory cell. Another voltage (e.g., V0 < V1) may be provided to the gate of transistor T1 of the selected memory cell. Depending on the value of the information previously stored in the selected memory cell (e.g., "0" or "1"), that memory cell's transistor T1 may turn on or may remain off.

During a read operation, signal BL_R0 on read data line 220 may have different values depending on the state (e.g., on or off) of transistor T1 of the selected memory cell. Memory device 200 may detect the different values of signal BL_R0 to determine the value of the information stored in the selected memory cell. For example, in FIG. 2B, if memory cell 212 is selected to be read, a voltage (e.g., zero volts) may be provided to signal PL02 (which controls the gate of transistor T1 of memory cell 212), and voltage V1 may be applied to the gates of transistors T1 of memory cells 210, 211, and 213. In this example, transistor T1 of memory cell 212 may turn on or may remain off depending on the value of the information previously stored in memory cell 212 (e.g., binary 0 or binary 1). Memory device 200 may detect the different values of signal BL_R0 to determine the value of the information stored in memory cell 212.

FIG. 2C is a chart illustrating example values of voltages of signals provided to memory device 200 of FIG. 2B during example write and read operations of memory device 200, in accordance with some embodiments described herein. The signals in FIG. 2C (WWL00 through WWL03, PL00 through PL03, BL_WA, BL_WB, RSL0, and BL_R0) are the same as those shown in FIG. 2B. As shown in FIG. 2C, in each of the write and read operations, a signal may be provided with a voltage having a particular value (in volts), depending on which memory cell among memory cells 210, 211, 212, and 213 is selected. In FIG. 2C, it is assumed that memory cell 212 (shown in FIG. 2B) is the selected (target) memory cell during the write and read operations, and that memory cells 210, 211, and 213 are not selected (unselected). The following description refers to FIGS. 2B and 2C.

During a write operation of memory device 200 (FIG. 2C), signal WWL02 (associated with selected memory cell 212) may be provided with voltage V1 (a positive voltage), such that WWL02 = V1, to turn on transistor T2 of memory cell 212. As an example, voltage V1 may have a value greater than the supply voltage (e.g., VDD).
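The write-operation signal assignment described above can be sketched as a function that builds the voltage map for a selected cell. This is a hypothetical illustration: the levels `V0 = 0.0` and `V1 = 1.2` are arbitrary stand-ins (the text only says V1 exceeds the supply voltage), and the data-line-sharing rule follows the pairing described earlier (cells 210/211 on BL_WA, cells 212/213 on BL_WB).

```python
# Sketch of write-operation signal levels: the selected cell's write word
# line gets V1, all other write word lines get V0, and the shared write
# data line carries a voltage encoding the bit to be stored.

V0, V1 = 0.0, 1.2  # stand-in levels

def write_signals(selected, bit):
    """Signal levels for writing `bit` into the cell at 0-based position
    `selected` within a four-cell group (position 2 = memory cell 212)."""
    signals = {f"WWL0{i}": (V1 if i == selected else V0) for i in range(4)}
    vbl_w = V1 if bit == 1 else 0.0       # data-line voltage encodes the bit
    shared_line = "BL_WB" if selected // 2 == 1 else "BL_WA"
    signals["BL_WA"] = signals["BL_WB"] = V0
    signals[shared_line] = vbl_w
    return signals
```

For example, writing a "1" into the cell at position 2 activates WWL02 and drives BL_WB high while BL_WA and the other write word lines stay at V0.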
Signals WWL00, WWL01, and WWL03 (associated with unselected memory cells 210, 211, and 213, respectively) may be provided with voltage V0 (e.g., zero volts), such that WWL00 = WWL01 = WWL03 = V0, to turn off transistors T2 of memory cells 210, 211, and 213. Information (e.g., "0" or "1") may be stored in memory cell 212 (through the turned-on transistor T2 of memory cell 212) by providing voltage VBL_W to signal BL_WB. The value of voltage VBL_W may be based on the value of the information to be stored in memory cell 212. For example, if a "0" is to be stored in memory cell 212, voltage VBL_W may have one value (e.g., VBL_W = 0V or VBL_W < 0V), and if a "1" is to be stored in memory cell 212, voltage VBL_W may have another value (e.g., VBL_W > 0V (e.g., VBL_W = 1V)).

Other signals of memory device 200 during the write operation may be provided with voltages as shown in FIG. 2C. For example, each of signals PL00, PL01, PL02, and PL03 (associated with both selected and unselected memory cells) may be provided with voltage V0, and each of signals BL_WA, RSL0, and BL_R0 may be provided with voltage V0.

The values of the voltages applied to the signals of FIG. 2C may be used for any selected memory cell of memory cell group 2010 (FIG. 2B) during a write operation. For example, if memory cell 213 is selected during a write operation (memory cells 210, 211, and 212 are not selected), the values of the voltages provided to signals WWL02 and WWL03 in FIG. 2C may be exchanged (e.g., WWL02 = V0 and WWL03 = V1), and the other signals may remain at the values shown in FIG. 2C.

In another example, if memory cell 210 is selected during a write operation (memory cells 211, 212, and 213 are not selected), the values of the voltages provided to signals WWL00 and WWL02 in FIG. 2C may be exchanged (e.g., WWL00 = V1 and WWL02 = V0), the values of the voltages provided to BL_WA and BL_WB in FIG. 2C may be exchanged (e.g., BL_WA = VBL_W and BL_WB = V0), and the other signals may remain at the values shown in FIG. 2C.

In another example, if memory cell 211 is selected during a write operation (memory cells 210, 212, and 213 are not selected), the values of the voltages provided to signals WWL01 and WWL02 in FIG. 2C may be exchanged (e.g., WWL01 = V1 and WWL02 = V0), the values of the voltages provided to BL_WA and BL_WB in FIG. 2C may be exchanged (e.g., BL_WA = VBL_W and BL_WB = V0), and the other signals may remain at the values shown in FIG. 2C.

As shown in FIG. 2B, memory cells 210 and 211 may share write data line 231, and memory cells 212 and 213 may share write data line 232 (which is different from write data line 231). In such a configuration, two memory cells associated with different write data lines may be selected in parallel (e.g., concurrently) during the same write operation, so that information can be stored (e.g., in parallel) in both selected memory cells. For example, in a write operation, memory cells 210 and 212 may be selected in parallel; memory cells 210 and 213 may be selected in parallel; memory cells 211 and 212 may be selected in parallel; or memory cells 211 and 213 may be selected in parallel. As an example, if memory cells 210 and 212 are selected (e.g., selected in parallel) in a write operation, the voltages may be provided such that WWL00 = WWL02 = V1 (transistors T2 of memory cells 210 and 212 are turned on) and WWL01 = WWL03 = V0 (transistors T2 of memory cells 211 and 213 are turned off), and the other signals may remain at the values shown in FIG. 2C. In this example, the values of the information to be stored in selected memory cells 210 and 212 may be the same (e.g., by providing the same voltage to signals BL_WA and BL_WB) or may be different (e.g., by providing different voltages to signals BL_WA and BL_WB).

The following description discusses example read operations for memory device 200 of FIG. 2B.
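The parallel-write rule above reduces to a simple condition: two cells can be written in the same operation only if they sit on different shared write data lines. A minimal sketch (names illustrative, not from the patent):

```python
# Sketch of the parallel-write condition for a four-cell group: cells at
# positions 0/1 share one write data line and cells at positions 2/3 share
# the other, so a pair is writable in parallel only across that boundary.

def can_write_in_parallel(cell_a, cell_b):
    """True when the cells at 0-based positions `cell_a` and `cell_b`
    within a four-cell group use different write data lines."""
    return cell_a // 2 != cell_b // 2
```

This reproduces the four valid pairs listed above (210/212, 210/213, 211/212, 211/213 map to positions 0/2, 0/3, 1/2, 1/3) while rejecting pairs that share a data line.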
As assumed above, during the read operation, memory cell 212 (FIG. 2B) is the selected memory cell, and memory cells 210, 211, and 213 are unselected memory cells. In the description herein, specific voltage values are used as examples; however, the voltages may have different values.

During a read operation (FIG. 2C), signals WWL00, WWL01, WWL02, and WWL03 may be provided with voltage V0 (e.g., WWL00 = WWL01 = WWL02 = WWL03 = V0), because transistors T2 of memory cells 210, 211, 212, and 213 may remain off (or need not be turned on) during the read operation. Signal PL02 (associated with selected memory cell 212) may be provided with voltage V0. Signals PL00, PL01, and PL03 (associated with unselected memory cells 210, 211, and 213, respectively) may be provided with voltage V2, such that PL00 = PL01 = PL03 = V2. As an example, voltage V2 may have a value substantially equal to VDD.

Other signals of memory device 200 during the read operation may be provided with voltages as shown in FIG. 2C. For example, signal RSL0 may be provided with voltage V2 (to turn on select transistor 270), and each of signals BL_WA and BL_WB may be provided with voltage V0.

Based on the applied voltage V2 shown in FIG. 2C, transistors T1 of memory cells 210, 211, and 213 may be turned on regardless of (e.g., independent of) the values of the information stored in memory cells 210, 211, and 213. Based on the applied voltage V0, transistor T1 of memory cell 212 may turn on or may remain off (may not conduct). For example, transistor T1 of memory cell 212 may be turned on when the information stored in memory cell 212 is a "0" and may be turned off (or remain off) when the information stored in memory cell 212 is a "1". If transistor T1 of memory cell 212 is turned on, a certain amount of current may flow (through the turned-on transistor T1 of each of memory cells 210, 211, 212, and 213) on the current path between read data line 220 and common conductive line 290. If transistor T1 of memory cell 212 remains off (or is turned off), such an amount of current may not flow between read data line 220 and common conductive line 290 (e.g., because no conductive path is formed through the turned-off transistor T1 of memory cell 212).

In FIG. 2C, signal BL_R0 may have voltage VBL_R. The value of voltage VBL_R may be based on the presence or absence of current (e.g., an amount of current) flowing between read data line 220 and common conductive line 290 (where the presence or absence of current is based on the value of the information stored in memory cell 212). For example, if the information stored in memory cell 212 is a "1", the value of voltage VBL_R may be greater than zero (e.g., 0 < VBL_R ≤ 1V), and if the information stored in memory cell 212 is a "0", the value of voltage VBL_R may be VBL_R = 0. Based on the value of voltage VBL_R associated with signal BL_R0, memory device 200 may determine the value of the information stored in memory cell 212 during this example read operation.

The above description assumes that memory cell 212 is the selected memory cell during the read operation. If another memory cell (210, 211, or 213) of the memory cell group is selected, the values of the signals in the chart shown in FIG. 2C may be similar. For example, if memory cell 210 is selected, signals PL00, PL01, PL02, and PL03 may be provided with voltages V0, V2, V2, and V2, respectively; if memory cell 211 is selected, signals PL00, PL01, PL02, and PL03 may be provided with voltages V2, V0, V2, and V2, respectively; and if memory cell 213 is selected, signals PL00, PL01, PL02, and PL03 may be provided with voltages V2, V2, V2, and V0, respectively. In these examples, the other signals may remain at the values shown in FIG. 2C.

The memory cells of memory device 200 (e.g., memory cells 210, 211, 212, and 213) may be randomly selected during write operations or read operations.
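The read scheme above can be simulated as a series string of T1 transistors: current reaches the read data line only if every T1 in the group conducts, and only the selected cell's T1 depends on stored data. This is a hypothetical simplification (voltage levels are stand-ins, and sensing is reduced to "current flows or not").

```python
# Simulation of the read scheme: the selected cell's plate line gets V0 and
# all other plate lines get V2, so current flows through the series string
# of T1 transistors only when the selected cell stores "0".

V0, V2 = 0.0, 1.0

def read_cell(stored_bits, selected):
    """Return the bit read from `stored_bits[selected]` in one group."""
    plate = [V0 if i == selected else V2 for i in range(len(stored_bits))]
    # Each unselected T1 conducts because its plate line is at V2; the
    # selected T1 conducts only if its cell stores 0.
    conducts = [v >= V2 or stored_bits[i] == 0 for i, v in enumerate(plate)]
    current_flows = all(conducts)  # series path: every T1 must conduct
    return 0 if current_flows else 1
```

Note the inverted sensing: the presence of current on the read data line indicates a stored "0", and its absence indicates a stored "1", as described in the text.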
Alternatively, the memory cells of memory device 200 (e.g., memory cells 210, 211, 212, and 213) may be selected sequentially during write operations, read operations, or both.

FIG. 2D illustrates a side view (e.g., cross-sectional view) of the structure of a portion of memory device 200 schematically shown in FIG. 2B, in which the memory cell structure of each of memory cells 210, 211, 212, and 213 may include portions of two pillars (dual pillars), in accordance with some embodiments described herein. For simplicity, cross-sectional lines (e.g., hatching) are omitted from most of the elements shown in the figures described herein.

As shown in FIG. 2D, memory device 200 may include a substrate 299 over which memory cells 210, 211, 212, and 213 may be formed (e.g., vertically with respect to the z-direction) at different levels (physical internal levels) of memory device 200. Substrate 299 may include single-crystalline (also referred to as monocrystalline) semiconductor material. For example, substrate 299 may include single-crystalline silicon (also referred to as monocrystalline silicon). The single-crystalline semiconductor material of substrate 299 may include impurities, such that substrate 299 can have a particular conductivity type (e.g., n-type or p-type). Substrate 299 may include circuitry 295 formed in substrate 299. Circuitry 295 may include sense amplifiers (which may be similar to sense circuitry 103 of FIG. 1), decoder circuitry (which may be similar to row access circuitry 108 and column access circuitry 109 of FIG. 1), and other circuitry of a memory device such as memory device 100 (e.g., a DRAM device).

Memory device 200 may include pillars (e.g., pillars of semiconductor material) 301 and 302 having lengths extending in the z-direction perpendicular to substrate 299 (e.g., outwardly from the substrate). The z-direction may be the vertical direction of memory device 200, which is the direction between common conductive line 290 and read data line 220.
As shown in FIG. 2D, pillars 301 and 302 are parallel to each other in the z-direction. As described in more detail below, each of memory cells 210, 211, 212, and 213 has a memory cell structure that includes portions of the two pillars (dual pillars) 301 and 302.

In FIG. 2D, the portions labeled "n+" may be n-type semiconductor material portions (n-type semiconductor material regions). The material of the n+ portions includes a semiconductor material (e.g., silicon) that may be doped (e.g., implanted) with dopants (e.g., impurities), such that the n+ portions are conductively doped portions (doped regions) that can conduct current. The portions labeled "P_Si" may be a semiconductor material (e.g., silicon) of a different type (e.g., conductivity type) from the n+ portions. The P_Si portions may be p-type semiconductor material (p-type semiconductor material regions); for example, a P_Si portion may be a p-type polysilicon portion. As described below, when a voltage is applied to a conductive element (e.g., a write word line) adjacent a particular P_Si portion, a channel (e.g., a conductive path) may be formed in that particular P_Si portion, and the channel electrically connects the two n+ portions adjacent that particular P_Si portion.

As shown in FIG. 2D, each of pillars 301 and 302 may include different segments, where each of the segments may include an n+ portion, a P_Si portion, or a combination of n+ and P_Si portions. For example, as shown in FIG. 2D, pillar 301 may have a segment that includes portion 301a (n+ portion) and portion 301d (P_Si portion) adjacent the structure (e.g., material) of capacitor plate 202a of memory cell 213. In another example, pillar 301 may have a segment that includes portion 301c (n+ portion) and portion 301e (P_Si portion) adjacent the structure (e.g., material) of capacitor plate 202a of memory cell 212.
In a further example, pillar 301 may have a segment that includes portion 301b (n+ portion) adjacent portion 301d (P_Si portion). FIG. 2D also shows pillar 302 having portions 302a, 302b, and 302c (n+ portions) and portions 302d and 302e (P_Si portions) included in respective segments of pillar 302.

Each of transistors T1 may include a combination of a particular P_Si portion of pillar 301 and the two n+ portions of pillar 301 adjacent that particular P_Si portion. For example, portion 301d (P_Si portion) and portions 301a and 301b (n+ portions) may form parts of the body, source, and drain, respectively, of transistor T1 of memory cell 213. In another example, portion 301e (P_Si portion) and portions 301b and 301c (n+ portions) may form parts of the body, source, and drain, respectively, of transistor T1 of memory cell 212.

Each of transistors T2 may include a combination of a particular P_Si portion of pillar 302 and the two n+ portions of pillar 302 adjacent that particular P_Si portion. For example, portion 302d (P_Si portion) and portions 302a and 302b (n+ portions) may form parts of the body, source, and drain, respectively, of transistor T2 of memory cell 213. In another example, portion 302e (P_Si portion) and portions 302b and 302c (n+ portions) may form parts of the body, source, and drain, respectively, of transistor T2 of memory cell 212.

As shown in FIG. 2D, the memory cell structures of memory cells 212 and 213 may include conductive materials 312 and 313, respectively. Examples of each of conductive materials 312 and 313 include polysilicon (e.g., conductively doped polysilicon), metal, or other conductive materials.

Conductive material 312 may include a portion that forms part of capacitor plate 202a of memory cell 212, a portion that contacts (e.g., is electrically connected to (directly coupled to)) portion 302c (n+ portion) of pillar 302, and a portion that forms part of conductive connection 203 of memory cell 212. Conductive material 313 may include a portion that forms part of capacitor plate 202a of memory cell 213, a portion that contacts (e.g., is electrically connected to (directly coupled to)) portion 302a (n+ portion) of pillar 302, and a portion that forms part of conductive connection 203 of memory cell 213.

The memory cell structure of each of memory cells 210 and 211 is similar to that of memory cells 212 and 213, as shown in FIG. 2D. For simplicity, a detailed description of the memory cell structures of memory cells 210 and 211 is omitted from the description of FIG. 2D.

As shown in FIG. 2D, memory device 200 may include a dielectric (e.g., dielectric material) 304 that may extend continuously along the length and sidewall of pillar 301. Capacitor plate 202a of each of memory cells 210, 211, 212, and 213 may be separated (e.g., electrically isolated) from pillar 301 by dielectric 304.

Memory device 200 may include dielectrics (e.g., dielectric material) 305. Capacitor plate 202a of each of memory cells 210, 211, 212, and 213 may be separated (e.g., electrically isolated) from the corresponding plate line (among plate lines 250, 251, 252, and 253) by one of dielectrics 305.

Memory device 200 may include dielectrics (e.g., dielectric materials) 306 and 307 positioned at corresponding locations (adjacent corresponding segments) of pillar 302, as shown in FIG. 2D. Each of write word lines 240, 241, 242, and 243 may be separated (e.g., electrically isolated) from pillar 302 by a corresponding one of dielectrics 306. Each of write data lines 231 and 232 may contact (e.g., be electrically connected to) a corresponding n+ portion of pillar 302. Each of plate lines 250, 251, 252, and 253 may be separated (e.g., electrically isolated) from pillar 302 by a corresponding one of dielectrics 307.

Dielectrics 304, 305, 306, and 307 may be formed from the same dielectric material or from different dielectric materials.
For example, dielectrics 304, 305, 306, and 307 may be formed from silicon dioxide. In another example, dielectrics 304, 306, and 307 may be formed from silicon dioxide, and dielectrics 305 may be formed from a dielectric material having a dielectric constant greater than that of silicon dioxide.

As shown in FIG. 2D, the length of each of read select line 260, write word lines 240 through 243, and plate lines 250 through 253 may extend in the x-direction, perpendicular to the z-direction. The length of each of read data line 220 and write data lines 231 and 232 may extend in the y-direction (not shown), perpendicular to the x-direction.

Common conductive line 290 may include a conductive material (e.g., a conductive region) and may be formed over a portion of substrate 299 (e.g., by depositing a conductive material over substrate 299). Alternatively, common conductive line 290 may be formed in or on a portion of substrate 299 (e.g., by doping a portion of substrate 299).

Memory device 200 may include a conductive portion 293, which may include conductively doped polysilicon, metal, or other conductive materials. Conductive portion 293 may be coupled to ground (not shown). Although common conductive line 290 may be coupled to ground, connecting pillar 301 to ground through conductive portion 293 may further improve the conductive path (e.g., the current path) between read data line 220 and ground during a read operation of memory device 200.

As shown in FIG. 2D, each of decoupling components 281, 282, and 283 may include a P_Si portion of pillar 302, a portion of one of dielectrics 307, and a portion of a conductive line among conductive lines 281a, 282a, and 283a. Examples of the materials of conductive lines 281a, 282a, and 283a include conductively doped polysilicon, metal, or other conductive materials.
Decoupling components 281, 282, and 283 are in an "off" state (eg, permanently off (always off)) during operations of memory device 200 (eg, write and read operations).As mentioned above with reference to Figure 2A, each of the decoupling components 281 to 286 may be permanently placed in an off state. The off-state of each of decoupling components 281, 282, and 283 may prevent (e.g., block) current from flowing from one location to another across each of decoupling components 281, 282, and 283. a location. This can create electrical separation between elements associated with post 302, where it is undesirable for current to flow between such elements. For example, decoupling component 282 in Figure 2D can create electrical separation between write data lines 231 and 232. This separation prevents information intended for storage in selected memory cells from being stored in unselected memory cells. For example, decoupling component 282 may prevent information from write data line 231 intended for storage in selected memory cells 211 from being stored in unselected memory cells 212 and prevent information from write data line 232 intended for storage in unselected memory cells 212 . Information to be stored in the selected memory unit 212 is stored in the unselected memory unit 211 .In alternative structures of memory device 200, the structure of decoupling components 281, 282, and 283 may differ from that shown in FIG. 2D as long as each of decoupling components 281, 282, and 283 may be electrically Isolate components. For example, in this alternative configuration, each of decoupling components 281 , 282 , and 283 may include dielectric material in a corresponding portion of post 302 . In this example, each of portions 302f, 302g, and 302h may be a dielectric portion (eg, a silicon oxide portion).In FIG. 
2D, each of read data line 220, write data lines 231 and 232, read select line 260, write word lines 240 to 243, plate lines 250 to 253, and capacitor plate 202a may be formed from a conductive material (or a combination of conductive materials). Examples of such conductive materials include polysilicon (e.g., conductively doped polysilicon), metals, or other conductive materials.

Conductive material 313 and other elements (e.g., plate lines, write word lines, and write data lines) may be positioned along corresponding segments of pillars 301 and 302, as shown in FIG. 2D. For example, conductive material 313 may include portions located along the segment of pillar 301 that includes portions 301a and 301d (portions that form part of capacitor plate 202a of memory cell 213). Conductive material 313 may also include portions that contact portion 302a (an n+ portion) of pillar 302. In another example, conductive material 312 may include portions located along the segment of pillar 301 that includes portions 301c and 301e (portions that form part of capacitor plate 202a of memory cell 212). Conductive material 312 may also include portions that contact portion 302c (an n+ portion) of pillar 302. The conductive materials of plate lines 250 to 253, write word lines 240 to 243, and write data lines 231 and 232 may be positioned along corresponding segments of pillars 301 and 302, as shown in FIG. 2D.

In FIG. 2D, lines 2E, 2F, 2G, 2H, and 2I are cross-sectional lines. As discussed below, portions (e.g., partial top views) of memory device 200 taken along lines 2E, 2F, 2G, 2H, and 2I are shown in FIGS. 2E, 2F, 2G, 2H, and 2I, respectively.

FIG. 2E illustrates a portion (e.g., a partial top view) of memory device 200, including some elements viewed from line 2E of FIG. 2D down to substrate 299 of FIG. 2D, in accordance with some embodiments described herein. For simplicity, detailed descriptions of the same elements shown in FIGS.
2A-2D (and other figures described below) are not repeated.

To illustrate the relative positions of some of the elements of memory device 200 (e.g., memory cells 213 and 217), FIG. 2E shows the locations of some elements of memory device 200 that are schematically shown in FIG. 2C but not structurally shown in FIG. 2D. For example, FIG. 2E shows memory cell 217 (FIG. 2A), read select line 261 (FIG. 2C), plate line 257 (FIG. 2C), and write word line 247 (FIG. 2C), which are schematically shown in FIG. 2C but not structurally shown in FIG. 2D. In another example, FIG. 2E shows an X decoder and a Y decoder that are not shown in FIG. 2D. The X decoder and Y decoder in FIG. 2E may be part of circuitry 295 in substrate 299 of memory device 200 in FIG. 2D. The X decoder and Y decoder (FIG. 2E) may be part of the respective row and column access circuitry of memory device 200.

As shown in FIG. 2E, each of read select line 260, plate line 253 (located below read select line 260 with respect to the z direction), and write word line 243 (located below plate line 253 with respect to the z direction) may have a length extending in the x direction. Not shown in FIG. 2E are write word lines 242, 241, and 240 (FIG. 2D), which are positioned below write word line 243.

Similarly, in FIG. 2E, each of read select line 261, plate line 257 (located below read select line 261 with respect to the z direction), and write word line 247 (located below plate line 257 with respect to the z direction) may have a length extending in the x direction. Not shown in FIG. 2E are write word lines 244, 245, and 246 (FIG. 2A), which are positioned below write word line 247.

As shown in FIG. 2E, each of read data line 220, write data line 232, and write data line 231 (located below write data line 232 with respect to the z direction) may have a length extending in the y direction.

FIG. 2F illustrates a portion (e.g., a partial top view) of memory device 200, including some elements viewed from line 2F of FIG. 2D down to substrate 299 of FIG. 2D, in accordance with some embodiments described herein. As shown in FIG. 2F, portion 301a, which is part of a segment of pillar 301 that includes an n+ portion, may include sidewall 301a' (e.g., a circular sidewall). Dielectric 304 may include sidewall 304' (e.g., a circular sidewall). Capacitor plate 202a (formed from a portion of conductive material 313 in FIG. 2D) may include sidewall 202a' (e.g., a circular sidewall). Dielectric 305 may include sidewall 305' (e.g., a circular sidewall).

Dielectric 304 may include portions surrounding sidewall 301a'. Capacitor plate 202a may include portions surrounding sidewall 304' of dielectric 304. Dielectric 305 may include portions surrounding sidewall 202a' of capacitor plate 202a. The conductive material of plate line 253 may include portions surrounding sidewall 305' of dielectric 305.

FIG. 2G illustrates a portion (e.g., a partial top view) of memory device 200, including some elements viewed from line 2G of FIG. 2D down to substrate 299 of FIG. 2D, in accordance with some embodiments described herein. As shown in FIG. 2G, conductive material 313 may include portions that form capacitor plate 202a and portions that contact (e.g., are electrically connected to) portion 302a (an n+ portion) of pillar 302. Conductive material 313 also includes portions that form part of conductive connection 203.

FIG. 2H illustrates a portion (e.g., a partial top view) of memory device 200, including some elements viewed from line 2H of FIG. 2D down to substrate 299 of FIG. 2D, in accordance with some embodiments described herein. As shown in FIG. 2H, write word line 243 (which is formed from a conductive material) may include a portion separated from portion 301b of pillar 301 by dielectric 304 and a portion separated from portion 302d of pillar 302 by dielectric 306.

FIG. 2I illustrates a portion (e.g., a partial top view) of memory device 200, including some elements viewed from line 2I of FIG. 2D down to substrate 299 of FIG. 2D, in accordance with some embodiments described herein. As shown in FIG. 2I, decoupling component 280 may include a portion of conductive line 281a separated from portion 302f (a P_Si portion) of pillar 302 by dielectric 307. Conductive portion 293 may contact (e.g., be electrically connected to) the n+ portion of pillar 301.

As described above with reference to FIGS. 2A-2I, memory device 200 may include memory cells (e.g., 210, 211, 212, and 213) stacked over a substrate (e.g., substrate 299). The memory cells (e.g., 210, 211, 212, and 213) may be organized into separate memory cell groups, and memory device 200 may include multiple (e.g., two) write data lines (e.g., 231 and 232) associated with each memory cell group for providing information to be stored in corresponding memory cells.

In alternative structures, memory device 200 may have more than two write data lines associated with each of memory cell groups 2010 and 2011. For example, in such an alternative structure, memory device 200 may include four write data lines such that each of the four write data lines may be coupled to a corresponding one of memory cells 210, 211, 212, and 213. The four write data lines may be shared between memory cell groups 2010 and 2011. In such an alternative structure (e.g., four write data lines), memory cell groups 2010 and 2011 may share a read data line, such as read data line 220 shown in FIG. 2A.

Memory device 200 may include other variations (e.g., a single write data line associated with each group of memory cells).
One such variation is described in detail with reference to FIGS. 3A to 3F.

FIG. 3A shows a schematic diagram of a portion of a memory device 300, which may be a variation of memory device 200 of FIG. 2A, in accordance with some embodiments described herein. Memory device 300 may include elements similar or identical to those of memory device 200. For simplicity, similar or identical elements of memory devices 200 and 300 are given the same reference numerals.

As shown in FIG. 3A, memory device 300 includes one (e.g., only a single) write data line (e.g., write data line 330) for each of memory cell groups 2010 and 2011. For comparison, memory device 200 includes more than one write data line (e.g., two write data lines 231 and 232) for each of memory cell groups 2010 and 2011. In FIG. 3A, write data line 330 may carry signal BL_W0. Write data line 330 may be shared by memory cell groups 2010 and 2011 of memory device 300.

FIG. 3B shows a schematic diagram of a portion of memory device 300 of FIG. 3A, including memory cell group 2010. As shown in FIG. 3B, memory cells 210, 211, 212, and 213 may be coupled between write data line 330 and common conductive line 290.

Memory device 300 may perform write operations to store information in memory cells 210, 211, 212, and 213. The write operations in memory device 300 may be sequential write operations, such that information may be stored in memory cells 210, 211, 212, and 213 sequentially. For example, in a sequential write operation, memory cells 210, 211, 212, and 213 may be selected to store information one at a time in an order (e.g., a sequential order) starting with memory cell 210 and ending with memory cell 213. In this sequential order, memory cell 210 may be the first memory cell of memory cell group 2010 selected to store information, and memory cell 213 may be the last memory cell of memory cell group 2010 selected to store information.
This means that memory device 300 may store information in memory cell 211 after (e.g., only after) information has been stored in memory cell 210, may store information in memory cell 212 after (e.g., only after) information has been stored in memory cells 210 and 211, and may store information in memory cell 213 after (e.g., only after) information has been stored in memory cells 210, 211, and 212.

During a write operation of memory device 300, information to be stored in a selected memory cell among memory cells 210, 211, 212, and 213 may be provided from write data line 330. The value of the information to be stored in the selected memory cell (e.g., "0" or "1") may be based on the value of the voltage provided to signal BL_W0.

Memory device 300 may perform read operations to read information from memory cells 210, 211, 212, and 213. Read operations in memory device 300 may be similar to read operations of memory device 200 of FIG. 2A (e.g., random read operations). For example, during a read operation of memory device 300, read data line 220 may be provided with information read from a selected one of memory cells 210, 211, 212, and 213. Signal BL_R0 on read data line 220 may have different values depending on the value of the information stored in the selected memory cell (e.g., binary 0 or binary 1). Memory device 300 may detect the different values of signal BL_R0 to determine the value of the information stored in the selected memory cell.

FIG. 3C is a chart showing example values of voltages of signals provided to memory device 300 of FIG. 3B during example write and read operations of memory device 300, in accordance with some embodiments described herein. The signals in FIG. 3C (WWL00 to WWL03, PL00 to PL03, BL_W0, RSL0, and BL_R0) are the same as those shown in FIG. 3B.
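Before walking through the example of FIG. 3C, the sequential-write ordering constraint described above can be illustrated with a minimal sketch. The ordering itself is from the text; the function and list names are hypothetical illustrations and not part of the disclosed device:

```python
# Sequential write order for memory cell group 2010 of memory device 300:
# cell 210 is written first and cell 213 last, as described above.
WRITE_ORDER = [210, 211, 212, 213]

def can_write(cell, already_written):
    # A cell may store information only after (e.g., only after) every
    # earlier cell in the sequence has stored information.
    earlier = WRITE_ORDER[:WRITE_ORDER.index(cell)]
    return all(c in already_written for c in earlier)

print(can_write(210, set()))               # True: first cell needs no predecessor
print(can_write(212, {210}))               # False: cell 211 has not been written yet
print(can_write(213, {210, 211, 212}))     # True: all earlier cells written
```

The same predicate holds for any group size; only the contents of the order list would change.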
In the example write and read operations of FIG. 3C, it is assumed that memory cell 210 is the selected memory cell and memory cells 211, 212, and 213 are not selected (unselected). As described above with reference to FIG. 3B, write operations in memory device 300 may be sequential write operations. Therefore, in the example write operation associated with FIG. 3C, when memory cell 210 is selected to store information, memory cells 211, 212, and 213 may not yet have information stored in them. The following description refers to FIGS. 3B and 3C.

As shown in FIG. 3C, during the write operation, signals WWL00, WWL01, WWL02, and WWL03 (associated with memory cells 210, 211, 212, and 213, respectively) may be provided with voltage V1, such that WWL00=WWL01=WWL02=WWL03=V1. Based on voltage V1, transistor T2 (FIG. 3B) of each of memory cells 210, 211, 212, and 213 may be turned on. Information from write data line 330 may be stored in memory cell 210 (through the turned-on transistor T2 of memory cell 210 and by providing voltage VBL_W to signal BL_W0). The value of voltage VBL_W (in volts) may be based on the value of the information (e.g., "0" or "1") to be stored in memory cell 210. Other signals of memory device 300 during the write operation may be provided with voltages as shown in FIG. 3C. For example, each of signals PL00, PL01, PL02, and PL03 may be provided with the same voltage V0, and each of signals RSL0 and BL_R0 may also be provided with voltage V0.

During the read operation associated with FIG. 3C (memory cell 210 is the selected memory cell), signals WWL00, WWL01, WWL02, and WWL03 may be provided with voltage V0 (e.g., WWL00=WWL01=WWL02=WWL03=V0). Signal PL00 (associated with selected memory cell 210) may be provided with voltage V0. Signals PL01, PL02, and PL03 (associated with unselected memory cells 211, 212, and 213, respectively) may be provided with voltage V2.
Other signals of memory device 300 during the read operation may be provided with voltages as shown in FIG. 3C. For example, signal RSL0 may be provided with voltage V2 (to turn on select transistor 270), and signal BL_W0 may be provided with voltage V0. Signal BL_R0 may have voltage VBL_R. Based on the value of voltage VBL_R, memory device 300 may determine the value of the information stored in memory cell 210 during the read operation.

FIG. 3D is a chart showing example values of voltages of signals provided to memory device 300 of FIG. 3B during example write and read operations of memory device 300, in accordance with some embodiments described herein. In the example write and read operations of FIG. 3D, it is assumed that memory cell 212 is the selected memory cell and memory cells 210, 211, and 213 are not selected (unselected). As described above with reference to FIG. 3B, write operations in memory device 300 may be sequential write operations. Therefore, when memory cell 212 is selected to store information, other information is already stored in memory cells 210 and 211, and no information is stored in memory cell 213. The following description refers to FIGS. 3B, 3C, and 3D.

During the write operation associated with FIG. 3D, signals WWL00 and WWL01 (associated with memory cells 210 and 211, respectively) may be provided with voltage V0, such that WWL00=WWL01=V0. Signals WWL02 and WWL03 (associated with memory cells 212 and 213, respectively) may be provided with voltage V1, such that WWL02=WWL03=V1. Based on these voltages, transistor T2 of each of memory cells 210 and 211 (FIG. 3B) may be turned off, and transistor T2 of each of memory cells 212 and 213 may be turned on. Information from write data line 330 may be stored in memory cell 212 (through the turned-on transistors T2 of memory cells 212 and 213 and by providing voltage VBL_W to signal BL_W0).
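The write word line assignments of FIGS. 3C and 3D follow one pattern: the word line of the selected cell and of every later cell in the sequence receives V1 (turning transistor T2 on), while earlier cells receive V0. A minimal sketch of that pattern (helper names are hypothetical; voltages are kept symbolic as in the charts):

```python
# Write word line voltages for memory device 300 during a sequential write.
# The selected cell and all later cells in the order get V1 (T2 on);
# cells earlier in the order get V0 (T2 off), protecting stored data.
CELLS = [210, 211, 212, 213]  # write order within group 2010
WWL = {210: "WWL00", 211: "WWL01", 212: "WWL02", 213: "WWL03"}

def write_word_line_voltages(selected_cell):
    sel = CELLS.index(selected_cell)
    return {WWL[c]: ("V1" if i >= sel else "V0") for i, c in enumerate(CELLS)}

print(write_word_line_voltages(210))  # all V1, matching FIG. 3C
print(write_word_line_voltages(212))  # WWL00=WWL01=V0, WWL02=WWL03=V1, matching FIG. 3D
```

Selecting cell 210 reproduces the FIG. 3C case (all word lines at V1), and selecting cell 212 reproduces the FIG. 3D case.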
The value of voltage VBL_W may be based on the value of the information to be stored in memory cell 212. Other signals of memory device 300 during the write operation may be provided with voltages as shown in FIG. 3C. For example, each of signals PL00, PL01, PL02, and PL03 may be provided with voltage V0, and each of signals RSL0 and BL_R0 may be provided with voltage V0.

During the read operation associated with FIG. 3D (memory cell 212 is the selected memory cell), the signals of memory device 300 shown in FIG. 3D may be provided with the same voltages as those shown in FIG. 3C. For simplicity, the details of the read operation associated with FIG. 3D are not repeated here.

FIG. 3E shows a side view (e.g., cross-sectional view) of the structure of the portion of memory device 300 shown schematically in FIG. 3B, in accordance with some embodiments described herein. The structure of memory device 300 shown in FIG. 3E includes elements similar or identical to those of the structure of memory device 200 shown in FIG. 2D. For simplicity, similar or identical elements of memory devices 200 (FIG. 2D) and 300 (FIG. 3E) are given the same reference numerals.

As described above with reference to FIG. 3A, differences between memory devices 200 and 300 include the number of write data lines coupled to the memory cell groups of memory device 300. As shown in FIG. 3E, memory device 300 includes a single write data line 330 associated with memory cells 210, 211, 212, and 213. Unlike memory device 200 of FIG. 2D, memory device 300 of FIG. 3E may exclude (not include) decoupling components 282 and 283 (FIG. 2D). In FIG. 3E, line 3F is a cross-sectional line along which a portion (e.g., a partial top view) of memory device 300 can be viewed.

FIG. 3F illustrates a portion (e.g., a partial top view) of memory device 300, including some elements viewed from line 3F of FIG. 3E down to substrate 299 (FIG. 3E), in accordance with some embodiments described herein.
As shown in FIG. 3F, write data line 330 may have a length extending in the y direction, which is the same direction as the length of read data line 220. The structure of the other elements of memory device 300 shown in FIG. 3E is similar to the structure of memory device 200 shown in FIGS. 2D to 2I. Therefore, detailed descriptions of the other elements of memory device 300 are omitted for simplicity.

FIG. 4A illustrates a schematic diagram of a portion of a memory device 400 including memory cells, in accordance with some embodiments described herein, in which the memory cell structure of each of the memory cells may include parts of a single pillar. The memory cell structures of the memory cells of memory device 400 are described below with reference to FIGS. 4B to 4F. As shown in FIG. 4A, memory device 400 may include memory array 401. Memory device 400 may correspond to memory device 100 of FIG. 1. For example, memory array 401 may form part of memory array 101 of FIG. 1.

As shown in FIG. 4A, memory device 400 may include groups (e.g., strings) of memory cells 401A and 401B. Each of memory cell groups 401A and 401B may contain the same number of memory cells. For example, memory cell group 401A may include four memory cells 410A, 411A, 412A, and 413A, and memory cell group 401B may include four memory cells 410B, 411B, 412B, and 413B. FIG. 4A shows four memory cells in each of memory cell groups 401A and 401B as an example. The memory cells in memory device 400 are volatile memory cells (e.g., DRAM cells).

FIG. 4A shows directions x, y, and z, which may correspond to directions x, y, and z of the structure (physical structure) of memory device 400 shown in FIGS. 4B-4F. As described in greater detail below with reference to FIGS. 4B-4F, the memory cells in each of memory cell groups 401A and 401B may be formed vertically (e.g., stacked on top of each other in the z direction) over the substrate of memory device 400.

As shown in FIG. 4A, memory device 400 may include switches (e.g., transistors) N0, N1, and N2 coupled to the memory cells in each of memory cell groups 401A and 401B. Memory device 400 may include conductive lines 480a, 481a, and 482a, which may carry signals CS0, CS1, and CS2, respectively. During write and read operations of memory device 400, memory device 400 may use signals CS0, CS1, and CS2 to control (e.g., turn on or turn off) switches N0, N1, and N2, respectively.

Memory device 400 may include data lines (bit lines) 430A, 431A, and 432A associated with memory cell group 401A. Data lines 430A, 431A, and 432A may carry signals BL0A, BL1A, and BL2A, respectively, to provide information to be stored in corresponding memory cells 410A, 411A, 412A, and 413A of memory cell group 401A (e.g., during a write operation) or information read from the corresponding memory cells (e.g., during a read operation).

Memory device 400 may include data lines (bit lines) 430B, 431B, and 432B associated with memory cell group 401B. Data lines 430B, 431B, and 432B may carry signals BL0B, BL1B, and BL2B, respectively, to provide information to be stored in corresponding memory cells 410B, 411B, 412B, and 413B of memory cell group 401B (e.g., during a write operation) or information read from the corresponding memory cells (e.g., during a read operation).

Memory device 400 may include word lines 440, 441, 442, and 443, which may be shared by memory cell groups 401A and 401B. Word lines 440, 441, 442, and 443 may carry signals WL0, WL1, WL2, and WL3, respectively. During a write operation or a read operation, memory device 400 may use word lines 440, 441, 442, and 443 to access the memory cells of memory cell groups 401A and 401B.

Memory device 400 may include plate lines 450, 451, 452, and 453 shared by memory cell groups 401A and 401B.
Plate lines 450, 451, 452, and 453 may carry signals PL0, PL1, PL2, and PL3, respectively. Each of plate lines 450, 451, 452, and 453 may serve as a common plate (e.g., may be coupled to ground) for the capacitors (described below) of respective memory cells of memory cell groups 401A and 401B. Memory device 400 may include a common conductive line 490, which may be similar to common conductive line 290 of memory device 200 or 300 described above.

As shown in FIG. 4A, each of memory cells 410A, 411A, 412A, and 413A and each of memory cells 410B, 411B, 412B, and 413B may include a transistor T3 and a capacitor C, such that each of these memory cells may be referred to as a 1T1C memory cell. For simplicity, the transistors of different memory cells of memory device 400 are given the same label T3, and the capacitors of different memory cells of memory device 400 are given the same label C.

As shown in FIG. 4A, capacitor C may include capacitor plate 402a and another capacitor plate that may be part of (e.g., electrically connected to) a respective one of plate lines 450, 451, 452, and 453. Capacitor plate 402a may form part of a storage node (e.g., a memory element) of a corresponding one of the memory cells of memory device 400. Capacitor plate 402a of a particular memory cell may hold a charge that may be used to represent the value (e.g., "0" or "1") of the information stored in that particular memory cell. Capacitor plate 402a of a particular memory cell may be electrically connected (e.g., directly coupled) to a terminal (e.g., source or drain) of transistor T3 of that particular memory cell.

As shown in FIG. 4A, memory device 400 may include other elements, such as memory cell 417A of memory cell group 402A, memory cell 417B of memory cell group 402B, plate line 457 (and associated signal PL7), and conductive line 485a (and associated signal CS5). Such other elements are similar to those described above.
Therefore, for simplicity, detailed descriptions of such other elements of memory device 400 are omitted from the description herein.

FIG. 4B illustrates a side view (e.g., cross-sectional view) of the structure of the portion of memory device 400 schematically shown in FIG. 4A, in which the memory cell structure of each of the memory cells may include parts of a single pillar, in accordance with some embodiments described herein.

As shown in FIG. 4B, memory device 400 may include substrate 499 and pillars (e.g., pillars of semiconductor material) 401A' and 401B' formed over substrate 499. Each of pillars 401A' and 401B' has a length extending in the z direction (e.g., the vertical direction) perpendicular to substrate 499. Each of pillars 401A' and 401B' may include n+ portions and P_Si portions. Memory cells 410A, 411A, 412A, and 413A may be formed along different segments of pillar 401A' (e.g., formed vertically relative to substrate 499). Memory cells 410B, 411B, 412B, and 413B may be formed along different segments of pillar 401B' (e.g., formed vertically relative to substrate 499). Memory device 400 may include circuitry 495 formed in substrate 499. Substrate 499, common conductive line 490, and circuitry 495 may be similar to substrate 299, common conductive line 290, and circuitry 295, respectively, of memory device 200 (FIG. 2D). The signals of memory device 400 shown in FIG. 4B (e.g., signals BL0B, BL1B, BL2B, WL0, WL1, WL2, WL3, PL0, PL1, PL2, PL3, CS0, CS1, and CS2) are the same as those shown in FIG. 4A.

FIG. 4C shows a portion of memory device 400 in FIG. 4B, including memory cells 412A and 413A (of memory cell group 401A) and memory cells 412B and 413B (of memory cell group 401B). The following description discusses the portion of memory device 400 shown in FIG. 4C in greater detail.
Elements in other portions of memory device 400 (e.g., the portion containing memory cells 410A, 410B, 411A, and 411B in FIG. 4B) have structures similar to the elements shown in FIG. 4C and are not described here for simplicity.

As shown in FIG. 4C, memory device 400 may include dielectric (e.g., dielectric material) 405 positioned at corresponding locations (adjacent corresponding segments) of pillars 401A' and 401B'. Dielectric 405 may include silicon oxide or other dielectric materials. Dielectric 405 may separate (e.g., electrically isolate) pillars 401A' and 401B' from word lines 440, 441, 442, and 443, plate lines 450, 451, 452, and 453, and conductive line 482a.

Each of data lines 431A and 432A may contact (e.g., be electrically connected to) a corresponding n+ portion of pillar 401A'. Each of data lines 431B and 432B may contact (e.g., be electrically connected to) a corresponding n+ portion of pillar 401B'.

Capacitor plate 402a, which is part of the storage node (or memory element) of a corresponding memory cell, may comprise (e.g., may be formed from) part of an n+ portion. For example, part of n+ portion 413A' may be the storage node (e.g., memory element) of memory cell 413A. In another example, part of n+ portion 413B' may be the storage node (e.g., memory element) of memory cell 413B.

Transistor T3 may include transistor elements (e.g., body, source, and drain) formed from a combination of a P_Si portion of a particular pillar (pillar 401A' or 401B') and parts of the two n+ portions adjacent to that P_Si portion of the same particular pillar. Transistor T3 may also include a gate that is part of a corresponding word line. For example, a portion of word line 443 may be the gate of transistor T3 of memory cell 413A, parts of n+ portions 413A' and 413A'' may be, respectively, the source and drain (or drain and source) of transistor T3 of memory cell 413A, and P_Si portion 413A''' may be the body (e.g., floating body) of transistor T3 of memory cell 413A (in which a transistor channel may be formed). In another example, a portion of word line 442 may be the gate of transistor T3 of memory cell 412A, parts of n+ portions 412A' and 412A'' may be, respectively, the source and drain (or drain and source) of transistor T3 of memory cell 412A, and P_Si portion 412A''' may be the body (e.g., floating body) of transistor T3 of memory cell 412A (in which a transistor channel may be formed).

Switch N2 may operate as a transistor, such that the structure of switch N2 may include the structure of a transistor. Switch N2 may comprise a P_Si portion of a particular pillar (pillar 401A' or 401B') and parts of the two n+ portions adjacent to that P_Si portion of the same particular pillar. For example, in switch N2 between memory cells 412A and 413A, a portion of conductive line 482a may be the gate, and the adjacent n+ portions of pillar 401A' or 401B' may be the source and drain, respectively, of the transistor in switch N2.

Word lines 442 and 443, data lines 431A, 431B, 432A, and 432B, plate lines 452 and 453, and conductive line 482a may include conductive materials. Examples of such conductive materials include polysilicon (e.g., conductively doped polysilicon), metals, or other conductive materials.

In FIG. 4C, lines 4D, 4E, and 4F are cross-sectional lines. As discussed below, portions (e.g., partial top views) of memory device 400 taken along lines 4D, 4E, and 4F are shown in FIGS. 4D, 4E, and 4F, respectively.

FIG. 4D illustrates a portion (e.g., a partial top view) of memory device 400, including some elements viewed from line 4D of FIG. 4C down to substrate 499 (FIG. 4B), in accordance with some embodiments described herein. For simplicity, detailed descriptions of the same elements shown in FIGS. 4A-4C (and other figures described below) are not repeated.

To illustrate the relative positions of some of the elements of memory device 400, FIGS. 4D-4F show the locations of some elements of memory device 400 that are schematically shown in FIG. 4A but not structurally shown in FIGS. 4B and 4C. For example, FIG. 4D shows memory cells 417A and 417B and word lines 447 and 443, which are schematically shown in FIG. 4A but not structurally shown in FIGS. 4B and 4C. In another example, FIG. 4D shows an X decoder and a Y decoder that are not shown in FIGS. 4A and 4B. The X decoder and Y decoder in FIG. 4D may be part of circuitry 495 of memory device 400 in substrate 499 in FIG. 4B. The X decoder and Y decoder (FIG. 4D) may be part of the respective row and column access circuitry of memory device 400.

As shown in FIG. 4D, each of data lines 432A and 432B may have a length extending in the y direction. Each of word lines 443 and 447 may have a length extending in the x direction and be positioned below data lines 432A and 432B. Other word lines of memory device 400 positioned below respective word lines 443 and 447 are not shown in FIG. 4D.

FIG. 4E illustrates a portion (e.g., a partial top view) of memory device 400, including some elements viewed from line 4E of FIG. 4C down to substrate 499 (FIG. 4B), in accordance with some embodiments described herein. As shown in FIG. 4E, each of plate lines 453 and 457 may have a length extending in the x direction. FIG. 4E does not show other plate lines of memory device 400 that are positioned below respective plate lines 453 and 457.

FIG. 4F illustrates a portion (e.g., a partial top view) of memory device 400, including some elements viewed from line 4F of FIG. 4C down to substrate 499, in accordance with some embodiments described herein.
As shown in FIG. 4F, each of conductive lines 482a and 485a may have a length extending in the x direction. FIG. 4F does not show other conductive lines of memory device 400 that are positioned below respective conductive lines 482a and 485a.

FIG. 4G shows a schematic diagram of a portion of memory device 400 of FIG. 4A, including memory cells 412A and 413A. FIG. 4H is a chart showing example values of voltages of signals provided to memory device 400 of FIG. 4G during three different example write operations 421, 422, and 423, in accordance with some embodiments described herein. The following description refers to FIGS. 4G and 4H.

In write operation 421, memory cell 412A is selected to store information, and memory cell 413A is not selected (e.g., not selected to store information). In write operation 422, memory cell 413A is selected to store information, and memory cell 412A is not selected. In write operation 423, both memory cells 412A and 413A are selected to store information.

As shown in FIG. 4H, during a write operation of memory device 400 (e.g., write operation 421, 422, or 423), signal CS2 may be provided with voltage V3 (to turn off switch N2) regardless of which of memory cells 412A and 413A is selected. Voltage V3 may be 0V (e.g., ground). During a write operation of memory device 400 (e.g., write operation 421, 422, or 423), each of signals PL2 and PL3 may be provided with voltage V4 regardless of which of memory cells 412A and 413A is selected. Voltage V4 may be 0V (e.g., ground).

In write operation 421, signal WL3 (associated with unselected memory cell 413A) may be provided with voltage V5 (to turn off transistor T3 of unselected memory cell 413A). Voltage V5 may be 0V (e.g., ground). Signal WL2 (associated with selected memory cell 412A) may be provided with voltage V6 (to turn on transistor T3 of selected memory cell 412A). The value of voltage V6 is greater than the value of voltage V5 (V6>V5).
The value of voltage V6 may be greater than the supply voltage (e.g., VDD) of memory device 400 (e.g., V6>VDD). Signal BL2A (associated with unselected memory cell 413A) may be provided with voltage Vx, which may be 0V (e.g., Vx=V3 or Vx=V4) or may be some voltage (e.g., an optimal voltage) between 0V and VDD (e.g., half VDD), depending on the memory cell leakage characteristics. Signal BL1A (associated with selected memory cell 412A) may be provided with voltage VBL1. The value of voltage VBL1 may be based on the value of the information to be stored in memory cell 412A. For example, if the information to be stored in memory cell 412A has one value (e.g., "0"), voltage VBL1 may have one value (e.g., VBL1=0V or VBL1<0V), and if the information to be stored in memory cell 412A has another value (e.g., "1"), voltage VBL1 may have another value (e.g., VBL1>0V (e.g., VBL1=1V)). As mentioned above, reference to VDD refers to a certain voltage level; however, that voltage level is not limited to the supply voltage (e.g., VDD) of the memory device (e.g., memory device 400). For example, if an internal voltage generator of a memory device (e.g., memory device 400) generates an internal voltage that is less than VDD and uses the internal voltage as the memory array voltage, VBL1 (FIG. 4H) may be less than VDD but greater than 0V. In write operation 422, the voltages provided to signals WL2 (associated with unselected memory cell 412A) and WL3 (associated with selected memory cell 413A) may be swapped, such that WL2=V5 and WL3=V6. Signal BL1A (associated with unselected memory cell 412A) may be provided with voltage Vx. Signal BL2A (associated with selected memory cell 413A) may be provided with voltage VBL2. The value of voltage VBL2 may be based on the value of the information to be stored in memory cell 413A.
For example, if the information to be stored in memory cell 413A has one value (e.g., "0"), voltage VBL2 may have one value (e.g., VBL2=0V or VBL2<0V), and if the information to be stored in memory cell 413A has another value (e.g., "1"), voltage VBL2 may have another value (e.g., VBL2>0V (e.g., VBL2=1V, VDD, or another voltage greater than 0V)). In write operation 423, both memory cells 412A and 413A are selected to store information. Accordingly, the voltages provided to the signals associated with memory cells 412A and 413A may be the same as those in write operations 421 and 422 for the selected memory cells, such as WL2=WL3=V6, BL1A=VBL1, and BL2A=VBL2. FIG. 4I is a flowchart illustrating the different stages of a read operation 460 of the memory device 400 of FIGS. 4A-4F, in accordance with some embodiments described herein. As shown in FIG. 4I, read operation 460 (for reading information from a selected memory cell) may include different stages, such as a pre-read stage 461, a readout (read) stage 462, a reset stage 463, and a recovery stage 464. These stages (461, 462, 463, and 464) may be executed one after another in the order shown in FIG. 4I, starting from pre-read stage 461. In FIG. 4I, readout stage 462 (for determining the value of the information stored in the selected memory cell) can be performed using two different readout schemes. One readout scheme (e.g., shown in FIG. 4M) is based on the threshold voltage (Vt) shift of the transistor of the selected memory cell (e.g., transistor T3). An alternative readout scheme (e.g., FIG. 4M') is based on the properties (e.g., self-latching) of the bipolar junction transistor that is intrinsically built into the transistor of the selected memory cell (e.g., transistor T3).
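The write-operation biasing described above (operations 421-423) can be sketched as a small lookup. This is an illustrative model, not part of the specification: the function names and the concrete numeric voltages (0V ground, 1V for a stored "1", 1.2V boosted word line, 0.5V for Vx) are assumptions; only the relationships among V3, V4, V5, V6, Vx, VBL1, and VBL2 come from the text.

```python
# Illustrative model of the write-operation biasing for memory cells
# 412A and 413A. Numeric voltage values are assumptions chosen for
# illustration; only the relationships (V6 > V5, V6 possibly > VDD,
# 0V <= Vx <= VDD) come from the text.
V3 = 0.0   # CS2 level that turns off switch N2
V4 = 0.0   # plate-line level for PL2 and PL3
V5 = 0.0   # word-line level of an unselected cell (transistor T3 off)
V6 = 1.2   # boosted word-line level of a selected cell (V6 > V5)
Vx = 0.5   # bit-line level of an unselected cell (between 0V and VDD)

def data_voltage(bit):
    """Bit-line voltage encoding the value to store ('0' -> 0V, '1' -> >0V)."""
    return 0.0 if bit == 0 else 1.0

def write_bias(select_412A, select_413A, bit_412A=0, bit_413A=0):
    """Return the signal voltages for one write operation (421, 422, or 423)."""
    return {
        "CS2": V3,
        "PL2": V4,
        "PL3": V4,
        "WL2": V6 if select_412A else V5,
        "WL3": V6 if select_413A else V5,
        "BL1A": data_voltage(bit_412A) if select_412A else Vx,
        "BL2A": data_voltage(bit_413A) if select_413A else Vx,
    }
```

For write operation 423 (both cells selected), `write_bias(True, True, ...)` yields WL2=WL3=V6 with each bit line carrying its own data voltage, matching the pattern of FIG. 4H.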
The stages (461, 462, 463, and 464) of read operation 460 are described in detail with reference to FIGS. 4J-4R. FIG. 4J shows a schematic diagram of a portion of the memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4K is a graph showing the values of the signals in FIG. 4J during pre-read stage 461 of read operation 460 of FIG. 4I. The following description refers to FIGS. 4J and 4K. Assume that memory cell 413A is the selected memory cell (to be read in this example), and that memory cell 412A is an unselected memory cell (not to be read in this example). Pre-read stage 461 may be performed to store (e.g., temporarily store) information in the body of transistor T3 of memory cell 413A and to store information in the body of transistor T3 of memory cell 412A. In FIG. 4C, the bodies of transistors T3 of memory cells 413A and 412A are contained in P_Si portions 413''' and 412''', respectively. Referring to FIGS. 4J and 4K, the value of the information stored in the body of transistor T3 of memory cell 413A is based on the value of the information stored in capacitor plate 402a of memory cell 413A. The value of the information stored in the body of transistor T3 of memory cell 412A is based on the value of the information stored in capacitor plate 402a of memory cell 412A (FIGS. 4C and 4J). Reading information from a selected memory cell (e.g., memory cell 413A in this example) involves detecting a current (e.g., a certain amount of current) on a conductive path (e.g., a current path) between the data line associated with the selected memory cell and the data line associated with an adjacent unselected memory cell (e.g., memory cell 412A in this example).
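The four stages of read operation 460 run strictly one after another; a minimal sketch of that ordering (the stage handlers are placeholders; only the stage numbers and their fixed order come from the text):

```python
# Ordering of the four stages of read operation 460 (FIG. 4I).
# The stage handlers are placeholders; only the numbering and the
# fixed 461 -> 464 order come from the text.
READ_OPERATION_460 = [
    ("461", "pre-read"),   # temporarily store cell state as body holes
    ("462", "readout"),    # sense the stored values (Vt-shift or BJT scheme)
    ("463", "reset"),      # clear the holes, resetting the threshold voltage
    ("464", "recovery"),   # write the sensed values back to the cells
]

def run_read_operation(handlers):
    """Execute the stage handlers one after another in the fixed order."""
    results = []
    for stage_id, name in READ_OPERATION_460:
        results.append(handlers[stage_id](name))
    return results
```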
For example, in FIG. 4K, reading information from memory cell 413A may involve detecting current on the conductive path between data lines 432A and 431A. The information stored in capacitor plate 402a of a selected memory cell and the information stored in capacitor plate 402a of an unselected memory cell may be lost after the information is read from the selected memory cell. In pre-read stage 461 (FIG. 4K), temporarily storing information in the body of transistor T3 of each of memory cells 412A and 413A allows the information to be restored (written back) to the selected and unselected memory cells after it is read from the selected memory cell. Therefore, in a read operation of a selected memory cell (e.g., memory cell 413A), the body of transistor T3 of the selected memory cell and the body of transistor T3 of an adjacent unselected memory cell (e.g., memory cell 412A) may be used as temporary storage locations. The voltages shown in FIG. 4K may allow information to be stored in the body of transistor T3 in the selected and unselected memory cells. The information temporarily stored in the body of transistor T3 may take the form of holes. The holes in the body of transistor T3 described herein refer to an additional amount of holes that may arise in the material that forms part of the body of transistor T3 (e.g., P_Si material). As shown in FIG. 4K, in pre-read stage 461, signal CS2 may be provided with voltage VL (e.g., 0V) to turn off switch N2. Each of signals PL2 and PL3 may be provided with voltage VPL (e.g., 0V). Each of signals BL1A and BL2A may be provided with voltage VBL_H (e.g., VBL_H=VDD). Each of signals WL2 and WL3 may be provided with voltage VWL. The value of voltage VWL may be selected (e.g., 0<VWL<VBL_H) to slightly turn on transistor T3 of each of memory cells 412A and 413A.
This may allow an impact ionization (II) current at the drain of transistor T3 of memory cell 413A and an II current at the drain of transistor T3 of memory cell 412A. The II current allows the creation of holes in the body of transistor T3 of memory cell 413A and the creation of holes in the body of transistor T3 of memory cell 412A. The presence or absence of holes in the body of transistor T3 of memory cell 413A represents the value ("0" or "1") of the information stored in capacitor plate 402a of memory cell 413A. Similarly, the presence or absence of holes in the body of transistor T3 of memory cell 412A represents the value ("0" or "1") of the information stored in capacitor plate 402a of memory cell 412A. Depending on the value of the information stored in memory cell 413A, pre-read stage 461 in FIG. 4K may or may not generate holes in the body of transistor T3 of memory cell 413A. For example, if a "0" is stored in capacitor plate 402a of memory cell 413A, holes may be generated in (e.g., accumulated in) the body of transistor T3 of memory cell 413A. In another example, if a "1" is stored in capacitor plate 402a of memory cell 413A, holes may not be generated in (e.g., not accumulated in) the body of transistor T3 of memory cell 413A. Similarly, if a "0" is stored in capacitor plate 402a of memory cell 412A, holes may be generated in (e.g., accumulated in) the body of transistor T3 of memory cell 412A. In another example, if a "1" is stored in capacitor plate 402a of memory cell 412A, holes may not be generated in (e.g., not accumulated in) the body of transistor T3 of memory cell 412A. The presence or absence of holes in the body of transistor T3 of memory cell 413A may cause a change (e.g., a shift) in the threshold voltage of transistor T3 of memory cell 413A.
This change (e.g., a temporary change) in the threshold voltage of transistor T3 allows a readout voltage to be provided to the gate of transistor T3 (e.g., as described in greater detail below) to determine the value of the information stored (e.g., stored in capacitor plate 402a) in that particular memory cell. As shown in FIG. 4K', in pre-read stage 461, signal CS2 may be provided with voltage VL (e.g., 0V) to turn off switch N2. Each of signals PL2 and PL3 may be provided with voltage VPL (e.g., 0V). Each of signals BL1A and BL2A may be provided with voltage VBL_L (e.g., VBL_L=0V). Each of signals WL2 and WL3 may be provided with voltage VWL. The value of voltage VWL may be selected (e.g., VWL<0) to induce band-to-band tunneling current in transistor T3 of each of memory cells 412A and 413A. This may allow a gate-induced drain leakage (GIDL) current at the drain of transistor T3 of memory cell 413A and a GIDL current at the drain of transistor T3 of memory cell 412A. The GIDL current allows the creation of holes in the body of transistor T3 of memory cell 413A and the creation of holes in the body of transistor T3 of memory cell 412A. The presence or absence of holes in the body of transistor T3 of memory cell 413A represents the value ("1" or "0") of the information stored in capacitor plate 402a of memory cell 413A. Similarly, the presence or absence of holes in the body of transistor T3 of memory cell 412A represents the value ("1" or "0") of the information stored in capacitor plate 402a of memory cell 412A. Depending on the value of the information stored in memory cell 413A, pre-read stage 461 in FIG. 4K' may or may not generate holes in the body of transistor T3 of memory cell 413A. For example, if a "1" is stored in capacitor plate 402a of memory cell 413A, holes may be generated in (e.g., accumulated in) the body of transistor T3 of memory cell 413A.
In another example, if a "0" is stored in capacitor plate 402a of memory cell 413A, holes may not be generated in (e.g., not accumulated in) the body of transistor T3 of memory cell 413A. Similarly, if a "1" is stored in capacitor plate 402a of memory cell 412A, holes may be generated in (e.g., accumulated in) the body of transistor T3 of memory cell 412A. In another example, if a "0" is stored in capacitor plate 402a of memory cell 412A, holes may not be generated in (e.g., not accumulated in) the body of transistor T3 of memory cell 412A. FIG. 4L shows a schematic diagram of a portion of the memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4M is a graph showing the values of the signals in FIG. 4L during readout stage 462 using a scheme based on threshold voltage shift. Readout stage 462 is performed after pre-read stage 461 (FIG. 4K). FIG. 4N is a diagram illustrating the relationship between the cell current (a certain amount of current) flowing through a memory cell (e.g., 412A or 413A), the value of the information stored in the memory cell (e.g., "0" or "1"), and the voltages VSENSE and VPASS (which may be applied to the gate of transistor T3 of memory cell 412A or 413A). The following description refers to FIGS. 4L, 4M, and 4N. As shown in FIG. 4M, readout stage 462 may include readout interval 462.1 (which may occur from time T1 to time T2) and readout interval 462.2 (which may occur from time T3 to time T4). Readout interval 462.2 occurs after readout interval 462.1 (e.g., times T3 and T4 occur after times T1 and T2). During readout interval 462.1, memory cell 413A is read to determine the value of the information stored in memory cell 413A. During readout interval 462.2 (after reading memory cell 413A), memory cell 412A is read to determine the value of the information stored in memory cell 412A. Thus, in readout stage 462, memory cells 413A and 412A are read out in a sequential manner (cell by cell).
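The two pre-read schemes leave holes in the body of transistor T3 for opposite stored values: the II scheme (FIG. 4K) accumulates holes for a stored "0", while the GIDL scheme (FIG. 4K') accumulates them for a stored "1". A sketch of that mapping (the function name is hypothetical):

```python
def preread_leaves_holes(stored_bit, scheme):
    """Return True if the pre-read stage accumulates holes in the body of T3.

    Under the impact-ionization (II) scheme (FIG. 4K), a stored '0'
    produces holes; under the GIDL scheme (FIG. 4K'), the polarity is
    inverted and a stored '1' produces them.
    """
    if scheme == "II":
        return stored_bit == 0
    if scheme == "GIDL":
        return stored_bit == 1
    raise ValueError("scheme must be 'II' or 'GIDL'")
```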
As an example, FIG. 4M shows that the readout of memory cell 413A (during readout interval 462.1) is performed before the readout of memory cell 412A (during readout interval 462.2). Alternatively, the reverse order may be used, such that the readout of memory cell 412A may be performed before the readout of memory cell 413A. As mentioned above, the information stored in both memory cells 413A and 412A may be lost after reading out one or both of memory cells 413A and 412A. Therefore, although it is assumed that only memory cell 413A is selected (for reading information from memory cell 413A), reading both memory cells 413A and 412A during readout stage 462 allows the value of the information stored in each of memory cells 413A and 412A (e.g., "0" or "1") to be obtained during readout stage 462. The obtained values (the read values) may be stored (e.g., in storage circuitry such as a data buffer, a latch, or other storage element, not shown) and may subsequently be used as the values of the information to be restored (e.g., written back) to both memory cells 413A and 412A during recovery stage 464 (described below with reference to FIG. 4R). Reading out memory cells 413A and 412A during readout stage 462 may be performed using the voltages shown in FIG. 4M. As shown in FIG. 4M, some signals may be provided with the same voltage in readout intervals 462.1 and 462.2. For example, signal CS2 may be provided with voltage VH (VH>0V, e.g., VH=VDD) to turn on switch N2 (FIG. 4L). Each of signals PL2 and PL3 may be provided with voltage VPL (the same voltage as in pre-read stage 461 in FIG. 4K). Signal BL2A may be provided with voltage VBL_H. Signal BL1A may be provided with voltage VBL_L.
The value of voltage VBL_L (e.g., VBL_L=0V) is smaller than the value of voltage VBL_H. Signals WL2 and WL3 may be provided with voltages VSENSE and VPASS, respectively (e.g., during readout interval 462.1), or with voltages VPASS and VSENSE, respectively (during readout interval 462.2), depending on which of memory cells 413A and 412A is being read. The value of voltage VPASS is greater than the value of voltage VSENSE. The value of voltage VPASS may be such that transistor T3 of the memory cell that is not being read (e.g., memory cell 412A during readout interval 462.1) is turned on (e.g., becomes conductive) regardless of the presence of holes in the body of transistor T3 of that memory cell (i.e., regardless of the value of the information stored in capacitor plate 402a of that memory cell (e.g., "0" or "1")). For example, during readout interval 462.1, transistor T3 of memory cell 412A is turned on regardless of whether there are holes in the body of transistor T3 of memory cell 412A. This also means that transistor T3 of memory cell 412A is turned on regardless of the value of the information stored in capacitor plate 402a of memory cell 412A (e.g., "0" or "1"), because during readout stage 462 the presence or absence of holes in the body of transistor T3 of memory cell 412A depends on the value of the information stored in capacitor plate 402a of memory cell 412A prior to readout stage 462, as described above for pre-read stage 461. In FIG. 4M, the value of voltage VSENSE may cause transistor T3 of the memory cell being read (e.g., memory cell 413A during readout interval 462.1) to turn on or off depending on whether there are holes in the body of transistor T3 of the memory cell being read. For example, during readout interval 462.1, if there are holes in the body of transistor T3 of memory cell 413A, transistor T3 of memory cell 413A turns on (e.g., becomes conductive).
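The roles of VPASS and VSENSE can be sketched as a conduction test on transistor T3: at VPASS the transistor conducts unconditionally, while at VSENSE conduction depends on whether the pre-read stage left holes in the body (the FIG. 4N behavior). The numeric voltage values here are assumptions; only VSENSE < VPASS is from the text:

```python
# Assumed numeric gate voltages; only VSENSE < VPASS is from the text.
VSENSE = 0.3
VPASS = 1.0

def t3_conducts(gate_voltage, holes_in_body):
    """Model of the FIG. 4N behavior: at VPASS, T3 conducts regardless of
    the body state; at VSENSE it conducts only if pre-read holes lowered
    its threshold voltage."""
    if gate_voltage >= VPASS:
        return True
    if gate_voltage >= VSENSE:
        return holes_in_body
    return False
```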
This also means that transistor T3 of memory cell 413A is turned on if a "0" (in the case of the II scheme) or a "1" (in the case of the GIDL scheme) was stored in capacitor plate 402a of memory cell 413A before pre-read stage 461 (which precedes readout stage 462) was performed. In another example, during readout interval 462.1, if no holes are present in the body of transistor T3 of memory cell 413A, transistor T3 of memory cell 413A turns off (e.g., does not become conductive). This also means that transistor T3 of memory cell 413A is turned off if a "1" was stored in capacitor plate 402a of memory cell 413A before pre-read stage 461 (which precedes readout stage 462) was performed. The values of voltages VSENSE and VPASS may be based on the current-voltage relationship shown in FIG. 4N for the case in which pre-read stage 461 uses the II current mechanism (FIG. 4K). Curve 410 indicates that if voltage VSENSE is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of a particular memory cell, and a "0" is stored in capacitor plate 402a of that particular memory cell, then current (cell current) may flow through that particular memory cell (e.g., through transistor T3 of that particular memory cell). As described above, if a "0" is stored in capacitor plate 402a of that particular memory cell, holes may be generated in the body of transistor T3 of that particular memory cell. However, if voltage VSENSE is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of that particular memory cell, and a "1" is stored in that particular memory cell, then no current (or only a negligible amount of current, e.g., an undetectable amount) may flow through that particular memory cell.
As described above, if a "1" is stored in capacitor plate 402a of that particular memory cell, no holes may be generated in the body of transistor T3 of that particular memory cell. Curve 411 shows that if voltage VPASS is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of a particular memory cell, then current (cell current) can flow through that particular memory cell (e.g., through transistor T3 of that particular memory cell) regardless of the value of the information stored in that particular memory cell (e.g., "0" or "1"). In the case in which pre-read stage 461 uses the GIDL current mechanism (FIG. 4K'), curve 410 of FIG. 4N may represent the following situation: if a "1" is stored in capacitor plate 402a of that particular memory cell, then holes may be generated in the body of transistor T3 of that particular memory cell; and curve 411 may represent the following situation: if a "0" is stored in capacitor plate 402a of that particular memory cell, then no holes may be generated in the body of transistor T3 of that particular memory cell. Therefore, during readout interval 462.1 (for reading memory cell 413A), if transistor T3 of memory cell 413A conducts (e.g., if holes are present in the body of transistor T3 of memory cell 413A (generated during pre-read stage 461 of FIG. 4K)), then current can flow between data lines 431A and 432A (FIG. 4L) through transistor T3 of memory cell 413A, switch N2 (which is turned on), and transistor T3 of memory cell 412A (which is turned on). During readout interval 462.1, if transistor T3 of memory cell 413A is off (e.g., if there are no holes in the body of transistor T3 of memory cell 413A (none generated during pre-read stage 461 of FIG. 4K)), then current may not flow between data lines 431A and 432A (FIG. 4L) because transistor T3 of memory cell 413A is off (although switch N2 and transistor T3 of memory cell 412A are on). Similarly, during readout interval 462.2 (for reading memory cell 412A), if transistor T3 of memory cell 412A conducts (e.g., if holes are present in the body of transistor T3 of memory cell 412A (generated during pre-read stage 461 of FIG. 4K)), then current can flow between data lines 431A and 432A (FIG. 4L) through transistor T3 of memory cell 413A (which is turned on), switch N2 (which is turned on), and transistor T3 of memory cell 412A. During readout interval 462.2, if transistor T3 of memory cell 412A is off (e.g., if there are no holes in the body of transistor T3 of memory cell 412A (none generated during pre-read stage 461 of FIG. 4K)), then current may not flow between data lines 431A and 432A (FIG. 4L) because transistor T3 of memory cell 412A is off (although switch N2 and transistor T3 of memory cell 413A are on). Memory device 400 may include detection circuitry (not shown) that may be coupled to data line 432A or data line 431A. Memory device 400 may use the detection circuitry to determine the value (e.g., "0" or "1") of the information stored in the memory cell being read, based on the presence or absence of current between data lines 432A and 431A during readout intervals 462.1 and 462.2. For example, during readout interval 462.1, memory device 400 may determine that a "0" is stored in memory cell 413A if current is detected, and that a "1" is stored in memory cell 413A if no current (or only a negligible amount of current) is detected. In another example, during readout interval 462.2, memory device 400 may determine that a "0" is stored in memory cell 412A if current is detected, and that a "1" is stored in memory cell 412A if no current (or only a negligible amount of current) is detected.
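The series current path and the resulting bit decision described above can be modeled as follows. This sketch assumes the II-based pre-read (detected current means a stored "0"); the function names are hypothetical:

```python
def sense_current(read_cell_holes, other_cell_on=True, switch_on=True):
    """Current flows between data lines 431A and 432A only if the whole
    series path conducts: the sensed cell's T3 (on at VSENSE only when
    holes are present), switch N2, and the other cell's T3 (on at VPASS)."""
    return read_cell_holes and switch_on and other_cell_on

def readout_both(holes_413A, holes_412A):
    """Sequential readout (interval 462.1 for 413A, then 462.2 for 412A).
    Under the II-based pre-read, detected current means a stored '0'."""
    bits = {}
    for name, holes in (("413A", holes_413A), ("412A", holes_412A)):
        bits[name] = 0 if sense_current(holes) else 1
    return bits
```

The two obtained bits would then be latched and used as the write-back values in recovery stage 464.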
Memory device 400 may include storage circuitry (e.g., data buffers, latches, or other storage elements) to store the values (e.g., "0" or "1") of the information read from memory cells 412A and 413A during readout stage 462. Memory device 400 may use these stored values as the values of the information to be written back to memory cells 412A and 413A during recovery stage 464 (described below). FIG. 4M' is a graph illustrating the values of the signals in FIG. 4L during readout stage 462 using an alternative readout scheme based on the properties (e.g., self-latching) of the built-in bipolar junction transistor. The voltage values in FIG. 4M' may be the same as those shown in FIG. 4M, except that in FIG. 4M', signal WL3 may be provided with voltage VG (instead of VSENSE) when memory cell 413A is read, and signal WL2 may be provided with voltage VG (instead of VSENSE) when memory cell 412A is read. As shown in FIG. 4M', readout stage 462 may include readout interval 462.1' (which may occur from time T1' to time T2') and readout interval 462.2' (which may occur from time T3' to time T4'). Readout interval 462.2' (when memory cell 412A is read) occurs after readout interval 462.1' (when memory cell 413A is read). Voltage VG can be less than zero volts, such as a slightly negative voltage (e.g., VG<0V). Applying a voltage VG less than zero volts can cause phenomena such as impact ionization current (near data line 413A) and subsequent BJT latching. Memory device 400 may include detection circuitry (not shown) for determining the value (e.g., "0" or "1") of the information stored in memory cell 412A (when it is read) and in memory cell 413A (when it is read), in a manner similar to the current detection described above with reference to FIG. 4M. FIG. 4O shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A.
FIG. 4P is a graph showing the values of the signals of FIG. 4O during reset stage 463, which is performed after readout stage 462 (FIG. 4M). Reset stage 463 may be performed to clear, from the body of transistor T3 of each of memory cells 412A and 413A, the holes that may have been generated during pre-read stage 461 (FIG. 4K). Clearing the holes in reset stage 463 may reset the threshold voltage of transistor T3 of each of memory cells 412A and 413A. Reset stage 463 may thus help maintain the relationship (e.g., FIG. 4N) between the cell current flowing through memory cells 412A and 413A, the value of the information stored in memory cells 412A and 413A (e.g., "0" or "1"), and the voltages VSENSE and VPASS. The following description refers to FIGS. 4O and 4P. As shown in FIG. 4P, signal CS2 may be provided with voltage VL or voltage VH. Each of signals PL2 and PL3 may be provided with voltage VPL. Each of signals BL1A and BL2A may be provided with voltage VBL_X. Each of signals WL2 and WL3 may be provided with voltage VWLy. Voltage VWLy may have a value such that transistor T3 of each of memory cells 412A and 413A may be turned on. For example, voltage VWLy may have a value greater than 0V (e.g., greater than ground) and equal to or less than the supply voltage of memory device 400 (e.g., VDD). Using the signal values shown in FIG. 4P, holes (e.g., generated during pre-read stage 461 in FIG. 4K) can be cleared from the bodies of transistors T3 of memory cells 412A and 413A. The value of voltage VBL_X may be zero volts (e.g., VBL_X=0V), or alternatively less than zero volts, such as a slightly negative voltage (e.g., VBL_X<0V). In a different read operation, memory cells (not shown in FIG. 4O) adjacent to memory cells 412A and 413A may be reset during a particular reset stage (e.g., similar to reset stage 463 in FIG. 4P) while memory cells 412A and 413A are unselected (or unused) in that read operation.
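Reset stage 463 as described above simply clears any body holes left by the pre-read stage, returning each transistor T3 to its nominal threshold voltage. A minimal sketch, modeling each cell body's hole state as a boolean (the function and key names are hypothetical):

```python
def reset_stage(cells):
    """Reset stage 463: clear the holes accumulated in the body of T3 of
    every cell, restoring each transistor's nominal threshold voltage."""
    for cell in cells.values():
        cell["holes_in_body"] = False
    return cells
```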
During the particular reset stage (used to reset the adjacent memory cells, not shown), if a voltage less than zero volts is provided to signals BL1A and BL2A, the voltages on signals WL2, WL3, and CS2 (FIG. 4O) may be less than zero volts (e.g., slightly less than zero volts, such as WL2=WL3=Vn (e.g., Vn=-0.3V)) during that reset stage. However, to avoid transistor leakage that may be caused by GIDL current, the voltages on signals WL2, WL3, and CS2 (FIG. 4O) can have values slightly less than zero volts, such as WL2=WL3=Vn, but not much less than Vn (e.g., -1V<WL2=WL3<-0.3V). FIG. 4Q shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4R is a graph showing the values of the signals in FIG. 4Q during recovery stage 464, which is performed after reset stage 463 (FIG. 4P). As described above, recovery stage 464 may be performed to restore (e.g., write back) information to memory cells 412A and 413A. The following description refers to FIGS. 4Q and 4R. As shown in FIG. 4R, signal CS2 may be provided with voltage VL. Each of signals PL2 and PL3 may be provided with voltage VPL. Each of signals WL2 and WL3 may be provided with voltage V6 (e.g., V6>VDD) so that transistor T3 of each of memory cells 412A and 413A may be turned on. Signal BL2A (associated with memory cell 413A) may be provided with voltage VBL2. The value of voltage VBL2 may be based on the value of the information (e.g., "0" or "1") to be stored (e.g., rewritten) in memory cell 413A. The value of the information to be stored in memory cell 413A during recovery stage 464 is the same as the value of the information read from memory cell 413A during readout stage 462. In FIG. 4R, if the information to be stored in memory cell 413A has one value (e.g., "0"), voltage VBL2 may have one value (e.g., VBL2=0V or VBL2<0V), and if the information to be stored in memory cell 413A has another value (e.g., "1"), voltage VBL2 may have another value (e.g., VBL2>0V (e.g., VBL2=1V)). Based on the voltages in FIG. 4R, the information (read in readout stage 462) can be restored in capacitor plate 402a of memory cell 413A. Similarly, signal BL1A (associated with memory cell 412A) may be provided with voltage VBL1. The value of voltage VBL1 may be based on the value of the information (e.g., "0" or "1") to be stored (e.g., rewritten) in memory cell 412A. If the information was pre-read using the II pre-read stage (associated with FIG. 4K), then the value of the information to be stored in memory cell 412A during recovery stage 464 is the same as the value of the information read from memory cell 412A during readout stage 462. However, if the information was pre-read using the GIDL pre-read stage (associated with FIG. 4K'), the value of the information read from memory cell 412A during readout stage 462 may be inverted during recovery stage 464. In FIG. 4R, if the information to be stored in memory cell 412A has one value (e.g., "0"), voltage VBL1 may have one value (e.g., VBL1=0V or VBL1<0V), and if the information to be stored in memory cell 412A has another value (e.g., "1"), voltage VBL1 may have another value (e.g., VBL1>0V (e.g., VBL1=1V)). Based on the voltages in FIG. 4R, the information (which was read out in readout stage 462) can be restored (e.g., in capacitor plate 402a of memory cell 412A). In the above example read operation (FIGS. 4J-4R), it is assumed that only memory cell 413A is the selected memory cell. However, both memory cells 413A and 412A may be selected in a read operation. In such a read operation (with memory cells 413A and 412A both selected), readout stage 462 (FIG. 4M) may also be performed in the manner described above (e.g., the same manner as when only memory cell 413A is selected), because memory cells 413A and 412A can both be read out in a sequential manner to determine the values of the information stored in memory cells 413A and 412A. FIG. 5A shows a schematic diagram of a portion of a memory device 500 containing memory cells having a single-pillar memory cell structure, in accordance with some embodiments described herein. Memory device 500 may include memory array 501. Memory device 500 may correspond to memory device 100 of FIG. 1. For example, memory array 501 may form part of memory array 101 of FIG. 1. Memory device 500 may be a variation of memory device 400 of FIG. 4A. Therefore, for simplicity, a detailed description of similar or identical elements of memory devices 400 and 500 (which are given the same numbers in FIGS. 4A and 5A) will not be repeated. Differences in structure between memory devices 400 and 500 are described below. As shown in FIG. 5A, memory device 500 may include memory cell groups (e.g., strings) 501A and 501B. Each of memory cell groups 501A and 501B may contain the same number of memory cells. For example, memory cell group 501A may include memory cells 510A, 511A, 512A, and 513A, and memory cell group 501B may include memory cells 510B, 511B, 512B, and 513B. FIG. 5A shows four memory cells in each of memory cell groups 501A and 501B as an example. The memory cells in memory device 500 are volatile memory cells (e.g., DRAM cells). FIG. 5A shows directions x, y, and z, which may correspond to directions x, y, and z of the structure (physical structure) of memory device 500 shown in FIGS. 5B-5H.
The memory cells in each of memory cell groups 501A and 501B may be vertically formed (eg, stacked on top of each other in a vertical stack in the z-direction) over the substrate of memory device 500 .Memory device 500 may omit switches (eg, transistors) N1 and N2 of memory device 400. However, as shown in FIG. 5A , memory device 500 may include transistor T4 in each memory cell in each of memory cell groups 501A and 501B. Memory device 500 also includes conductive lines 580, 581, 582, and 583 that may carry signals RSL0, RSL1, RSL2, and RSL3, respectively. Memory device 500 may use signals RSL0, RSL1, RSL2, and RSL3 to control (eg, turn on or off) transistors T4 of respective memory cells of memory cell groups 501A and 501B. The description herein uses the term "conductive lines" (referring to lines 580, 581, 582, and 583) to facilitate describing the different elements of memory device 500. However, conductive lines 580, 581, 582, and 583 may be word lines of memory device 500 similar to word lines 440, 441, 442, and 443.Memory device 500 may include data lines (bit lines) 520A and 521A associated with memory cell group 501A (in addition to data lines 430A, 431A, and 432A). Data lines 520A and 521A may carry signals BLR0A and BLR1A, respectively, for accessing (eg, during a read operation) corresponding memory cells 510A, 511A, 512A, and 513A of memory cell group 501A.Memory device 500 may include data lines (bit lines) 520B and 521B associated with memory cell group 501B (in addition to data lines 430B, 431B, and 432B). Data lines 520B and 521B may carry signals BLR0B and BLR1B, respectively, for accessing (eg, during a read operation) corresponding memory cells 510B, 511B, 512B, and 513B of memory cell group 501B.As shown in FIG. 
5A, each of the memory cells 510A, 511A, 512A, and 513A and each of the memory cells 510B, 511B, 512B, and 513B may include transistors T3 and T4 and a capacitor C, such that each of these memory cells may be referred to as a 2T1C memory cell. For comparison, each memory cell of memory device 400 (e.g., memory cell 413A) is a 1T1C memory cell.

As shown in Figure 5A, memory device 500 may include other elements, such as memory cell 517A of memory cell group 502A, memory cell 517B of memory cell group 502B, and plate line 457 (and associated signal PL7). Such other elements are similar to those described above. Therefore, for simplicity, a detailed description of such other elements of memory device 500 is omitted from the description herein.

Figure 5B illustrates a side view (e.g., cross-sectional view) of the structure of a portion of memory device 500 shown schematically in Figure 5A, in accordance with some embodiments described herein. The structure of memory device 500 is similar to the structure of memory device 400 in Figure 4B. Therefore, for simplicity, a detailed description of similar or identical elements of memory devices 400 and 500 (which are given the same numbers in Figures 4B and 5B) is not repeated.

As shown in Figure 5B, conductive lines 580, 581, 582, and 583 may be similar (or identical) to word lines 440, 441, 442, and 443, respectively. For example, each of conductive lines 580, 581, 582, and 583 may have a length extending in the x-direction and may be shared by corresponding memory cells of memory cell groups 501A and 501B. Each of the conductive lines 580, 581, 582, and 583 may also have a structure similar (or identical) to that of the word lines 440, 441, 442, and 443, such as the structure of the word line 443 shown in FIG. 4D.

Data lines 520A and 520B may be similar (or identical) to data lines 430A and 430B, respectively. Data lines 521A and 521B may be similar (or identical) to data lines 432A and 432B, respectively.
For example, each of the data lines 520A, 520B, 521A, and 521B may have a length extending in the y-direction perpendicular to the x-direction. Each of the data lines 520A, 520B, 521A, and 521B may have a structure similar (or identical) to the structure of the data line 432A or 432B shown in FIG. 4D.

Figure 5C shows a portion of the memory device 500 of Figure 5B including memory cells 512A, 513A, 512B, and 513B. Some of the elements shown in Figure 5C are similar to some of the elements of the memory device 400 of Figure 4C; such similar (or identical) elements are given the same numerals and, for simplicity, are not described herein again. As shown in Figure 5C, the structure and location of transistor T3 and capacitor plate 402a are the same as those of memory device 400 (Figures 4B and 4C). Transistor T4 in Figure 5C may include elements similar to those of transistor T3. For example, transistor T4 may include transistor elements (e.g., body, source, and drain) that are part of a combination of portion P_Si and two n+ portions adjacent to portion P_Si of the same pillar (pillar 501A or 501B), and a transistor element (e.g., a gate) that is a portion of a corresponding conductive line (one of conductive lines 582 and 583).

Figure 5D shows a schematic diagram of a portion of the memory device 500 of Figure 5A including memory cells 512A and 513A. Figure 5E is a graph illustrating example values of voltages of signals provided to memory device 500 of Figure 5D during three different example write operations 521, 522, and 523, in accordance with some embodiments described herein. The following description refers to Figures 5D and 5E.

In write operation 521, memory cell 512A is selected to store information, and memory cell 513A is not selected (e.g., not selected to store information). In write operation 522, memory cell 513A is selected to store information, and memory cell 512A is not selected.
In write operation 523, both memory cells 512A and 513A are selected to store information.

As shown in Figure 5E, during a write operation of memory device 500 (e.g., any one of write operations 521, 522, and 523), each of signals PL2 and PL3 may be provided with voltage V4, regardless of which of memory cells 512A and 513A is selected. In write operations 521, 522, and 523, each of the signals RSL2 and RSL3 may be provided with a voltage Va (e.g., Va=0V). In write operations 521, 522, and 523, signal BLR1A may be provided with voltage Vb (e.g., Vb=0V).

In write operation 521, signal WL3 (associated with unselected memory cell 513A) may be provided with voltage V5 (to turn off transistor T3 of unselected memory cell 513A). Signal WL2 (associated with selected memory cell 512A) may be provided with voltage V6 (to turn on transistor T3 of selected memory cell 512A). The value of voltage V6 may be greater than the supply voltage (e.g., VDD) of the memory device 500 (e.g., V6>VDD). Signal BL2A (associated with unselected memory cell 513A) may be provided with voltage Vx (e.g., Vx=V4). Signal BL1A (associated with selected memory cell 512A) may be provided with voltage VBL1. The value of voltage VBL1 may be based on the value of the information to be stored in memory cell 512A. For example, if the information to be stored in memory cell 512A has one value (e.g., "0"), voltage VBL1 may have one value (e.g., VBL1=0V or VBL1<0V), and if the information to be stored in memory cell 512A has another value (e.g., "1"), voltage VBL1 may have another value (e.g., VBL1>0V, such as VBL1=1V).

In write operation 522, the voltages provided to signals WL2 (associated with unselected memory cell 512A) and WL3 (associated with selected memory cell 513A) may be swapped, such that WL2 = V5 and WL3 = V6. Signal BL1A (associated with unselected memory cell 512A) may be provided with voltage Vx.
Signal BLR1A (associated with unselected memory cell 513A) may be provided with voltage Vb. Signal BL2A (associated with selected memory cell 513A) may be provided with voltage VBL2. The value of voltage VBL2 may be based on the value of the information to be stored in memory cell 513A. For example, if the information to be stored in memory cell 513A has one value (e.g., "0"), voltage VBL2 may have one value (e.g., VBL2=0V or VBL2<0V), and if the information to be stored in memory cell 513A has another value (e.g., "1"), voltage VBL2 may have another value (e.g., VBL2>0V, such as VBL2=1V).

In write operation 523, both memory cells 512A and 513A are selected to store information. Therefore, the voltages provided to each of signals WL2 and WL3 may be the same as those in write operations 521 and 522 for the selected memory cells, such that WL2=WL3=V6, BL1A=VBL1, and BL2A=VBL2.

Figure 5F is a flowchart illustrating the different stages of a read operation 560 of the memory device 500 of Figures 5A-5C, in accordance with some embodiments described herein. As shown in Figure 5F, a read operation 560 (for reading information from a selected memory cell) may include different stages, such as a pre-read stage 561, a readout stage 562, a reset stage 563, and a recovery stage 564. These stages (561, 562, 563, and 564) may be executed one after another, in the order shown in Figure 5F, starting from the pre-read stage 561. In Figure 5F, the readout stage 562 (for determining the value of the information stored in the selected memory cell) can be performed using either of two different readout schemes. One readout scheme (e.g., Figure 5J) is based on the threshold voltage (Vt) shift of the transistor coupled to the selected memory cell (e.g., transistor T3).
An alternative readout scheme (e.g., Figure 5J') is based on the properties (e.g., self-latching) of a bipolar junction transistor that is intrinsically built into a transistor of the selected memory cell (e.g., into transistor T4).

The stages (561, 562, 563, and 564) of the read operation 560 are described in detail with reference to Figures 5G-5N. Figure 5G shows a schematic diagram of a portion of the memory device 500 of Figure 5A including memory cells 512A and 513A. Figure 5H is a graph showing the values of the signals in Figure 5G during the pre-read stage 561 of the read operation associated with Figure 5F. The following description refers to Figure 5H (impact-ionization pre-read stage) and Figure 5G.

Assume that memory cell 512A is the selected memory cell (to be read in this example), and assume that memory cell 513A is the unselected memory cell (not to be read in this example). In the pre-read stage 561, each of the signals PL2 and PL3 may be provided with a voltage VPL (e.g., 0V). Signal BL2A may be provided with voltage Vc (e.g., Vc=0V). Signal WL3 may be provided with voltage VL (e.g., VL=0V) for turning off transistor T3 of memory cell 513A (the unselected memory cell). Signal RSL3 may be provided with voltage VL (VL=0V). Signals BLR1A and BL1A may be provided with voltage VBL_H. Signal WL2 may be provided with voltage VWL (0<VWL<VBL_H), and RSL2 may be provided with voltage VL (VL<VBL_H). Similar to pre-read stage 461 of Figure 4K, pre-read stage 561 of Figure 5H may store information in the form of holes in the body of transistor T3 of memory cell 512A. The presence or absence of holes in the body of transistor T3 of memory cell 512A depends on the value ("0" or "1") of the information stored on capacitor plate 402a of memory cell 512A.

The following description refers to Figure 5H' (GIDL pre-read stage) and Figure 5G.
Assume that memory cell 512A is the selected memory cell (to be read in this example), and assume that memory cell 513A is the unselected memory cell (not to be read in this example). In the pre-read stage 561 of FIG. 5H', each of signals PL2 and PL3 may be provided with voltage VPL (e.g., 0V). Signal BL2A may be provided with voltage Vc (e.g., Vc=0V). Signal WL3 may be provided with voltage VL (e.g., VL=0V) for turning off transistor T3 of memory cell 513A (the unselected memory cell). Signal RSL3 may be provided with voltage VL (VL=0V). Signals BLR1A and BL1A may be provided with voltage VL. Signal WL2 may be provided with voltage VWL (VWL<0). Signal RSL2 may be provided with voltage VL (VL=0V). Similar to pre-read stage 461 of Figure 4K', pre-read stage 561 of Figure 5H' may store information in the form of holes in the body of transistor T3 of memory cell 512A. The presence or absence of holes in the body of transistor T3 of memory cell 512A depends on the value ("0" or "1") of the information stored on capacitor plate 402a of memory cell 512A.

FIG. 5I shows a schematic diagram of a portion of the memory device 500 of FIG. 5A including memory cells 512A and 513A. Figure 5J is a graph illustrating the values of the signals in Figure 5I during the readout stage 562 using a readout scheme based on threshold voltage shift. Readout stage 562 is performed after pre-read stage 561 (Fig. 5H). The following description refers to Figures 5I and 5J. The voltage values of FIG. 5I may be the same as those shown in FIG. 5H, except that the signals BLR1A, RSL2, WL2, and BL1A may be provided with voltages VBL_H, VPASS, VSENSE, and VBL_L, respectively.

Memory device 500 may include detection circuitry (not shown) that may be coupled to data line 521A or data line 431A.
Memory device 500 may use the detection circuitry to determine the value of the information stored in memory cell 512A (e.g., "0" or "1") based on the presence or absence of current between data lines 521A and 431A during readout stage 562. For example, during readout stage 562, memory device 500 may determine that a "0" is stored in memory cell 512A if current is detected, and that a "1" is stored in memory cell 512A if no current (or a negligible amount of current) is detected. The values "0" and "1" mentioned here apply to the case of the impact-ionization pre-read stage; in the case of the GIDL pre-read stage, the logic can be reversed. Memory device 500 may include memory circuitry for storing the value (e.g., "0" or "1") of the information read from memory cell 512A during readout stage 562. Memory device 500 may use the stored value (e.g., stored in the memory circuitry) as the value of the information to be written back to memory cell 512A in recovery stage 564 (described below). In an alternative version of the readout stage of Figure 5J, the voltages provided to signals BLR1A and BL1A may be switched, such that BLR1A = VBL_L and BL1A = VBL_H.

Figure 5J' is a graph showing the values of the signals in Figure 5I during a readout stage using an alternative readout scheme based on the properties of the built-in bipolar junction transistor (e.g., self-latching). The voltage values of FIG. 5J' may be the same as those shown in FIG. 5J, except that the signals BLR1A, WL2, and BL1A in FIG. 5J' may be provided with voltages VBL_L, VG, and VBL_H, respectively. Voltage VG can be less than zero volts, such as a slightly negative voltage (e.g., VG<0V). Applying a voltage VG less than zero volts can cause phenomena such as impact ionization current (near data line 521A) and subsequent BJT latching.
Memory device 500 may include detection circuitry (not shown) for determining the value (e.g., "0" or "1") of the information stored in memory cell 512A in a manner similar to the current detection described above with reference to FIG. 5J.

Figure 5K shows a schematic diagram of a portion of the memory device 500 of Figure 5A including memory cells 512A and 513A. Figure 5L is a graph showing the values of the signals in Figure 5K during the reset stage 563, which is performed after the readout stage 562 (Figure 5J). The following description refers to Figures 5K and 5L. The voltage values of FIG. 5L may be the same as those shown in FIG. 5J, except that the signals BLR1A and BL1A may be provided with the voltage VBL_X, and the signals RSL2 and WL2 may be provided with the voltage VWLy. The value of voltage VBL_X may be zero volts (e.g., VBL_X=0V). Alternatively, voltage VBL_X may have a value less than zero volts, such as a slightly negative voltage (e.g., VBL_X = -0.3V).

During a specific reset stage of a different read operation, memory cells adjacent to memory cell 513A (both shown and not shown in FIG. 5K) may be reset (e.g., in a manner similar to reset stage 563 of FIG. 5L) while memory cell 513A is not selected (or not used) in that read operation. During the specific reset stage (used to reset the adjacent memory cells, both shown and not shown), if a voltage less than zero volts is provided to signals BLR1A and BL1A, the value of the voltage of signal RSL3 (FIG. 5K) during that reset stage may be less than zero volts (e.g., slightly less than zero volts, such as RSL3 = Vn (e.g., Vn = -0.3V)).
However, to avoid transistor leakage that may be caused by GIDL current, the voltage on signal RSL3 (Figure 5K) can have a value slightly less than zero volts, such as RSL3 = Vn, but not much less than Vn (e.g., -1V < RSL3 < -0.3V).

Figure 5M shows a schematic diagram of a portion of the memory device 500 of Figure 5A including memory cells 512A and 513A. Figure 5N is a graph showing the values of the signals in Figure 5M during recovery stage 564, which is performed after reset stage 563 (Figure 5K). As described above, recovery stage 564 may be performed to restore (e.g., write back) information to memory cells 512A and 513A. The following description refers to Figures 5M and 5N. As shown in FIG. 5N, signal BL2A may be provided with voltage Vx, each of signals WL3, RSL2, and RSL3 may be provided with voltage VL (e.g., VL=0V), signal BLR1A may be provided with voltage Vc (e.g., Vc=0V), signal WL2 may be provided with voltage V6 (e.g., V6>VDD), and signal BL1A may be provided with voltage VBL1. If the information to be stored in memory cell 512A has one value (e.g., "0"), voltage VBL1 may have one value (e.g., VBL1=0V or VBL1<0V), and if the information to be stored in memory cell 512A has another value (e.g., "1"), voltage VBL1 may have another value (e.g., VBL1>0V, such as VBL1=1V). Based on the voltages in Figure 5N, information can be stored (e.g., restored) on capacitor plate 402a of memory cell 512A.

FIG. 6 illustrates the structure of a portion of a memory cell 613 positioned along a segment of a pillar 601 of a memory device 600, in accordance with some embodiments described herein. Memory device 600 may include plate line 653, word line 643, and data line 631, which may correspond to one of the plate lines, one of the word lines, and one of the data lines of memory device 400 (FIG. 4B) or memory device 500 (FIG. 5B), respectively.

As shown in Figure 6, pillar 601 may include n+ portions and P_Si portions.
Pillar 601 may be similar to one of the pillars of memory device 400 (FIG. 4B) (e.g., pillar 401A' in FIG. 4B) or one of the pillars of memory device 500 (FIG. 5B) (e.g., pillar 501A' in FIG. 5B). Portion P_Si is separated from word line 643 by dielectric (e.g., silicon dioxide) 605.

As shown in Figure 6, memory cell 613 may include capacitor C' and transistor T3'. Capacitor C' may include capacitor plate 602a (which is part of the n+ portion), conductive portion 613', conductive contact 613", and a portion of plate line 653. Conductive portion 613' may be formed from a relatively low-resistance material (e.g., conductively doped polysilicon, or a metal). Conductive contact 613" may also be formed from a relatively low-resistance material, which may be similar to the material of conductive portion 613'. Dielectrics 613k and 613o may be different dielectric materials with different dielectric constants. The dielectric constant of dielectric 613k may be greater than the dielectric constant of dielectric 613o. For example, dielectric 613o may be silicon dioxide, and dielectric 613k may be a high-k dielectric, which is a dielectric material with a dielectric constant greater than that of silicon dioxide.

The structure of memory cell 613 may replace the structure of each memory cell of memory device 400 (FIG. 4B) (e.g., memory cell 413A in FIG. 4B) or the structure of each memory cell of memory device 500 (FIG. 5B) (e.g., memory cell 513A in FIG. 5B). For example, the structure of capacitor C' may replace the structure of capacitor C in each of the memory cells of memory device 400 (FIG. 4B) or memory device 500 (FIG. 5B).
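The write biasing of write operations 521-523 and the readout decision of readout stage 562 can be condensed into a short illustrative sketch. This sketch is not part of the disclosure: the function names are invented, the voltage names are symbolic placeholders, and the sense-current threshold is an assumed value.

```python
# Illustrative sketch only; names and the threshold are assumptions,
# not the patent's implementation.

V5, V6, Vx = "V5", "V6", "Vx"   # V5 turns T3 off; V6 (> VDD) turns T3 on

def write_biases(select_512A, select_513A):
    """Word-line and bit-line voltages for write operations 521-523.

    VBL1/VBL2 encode the data value for a selected cell; an unselected
    cell keeps its transistor T3 off (V5) and its data line at Vx.
    """
    return {
        "WL2": V6 if select_512A else V5,
        "WL3": V6 if select_513A else V5,
        "BL1A": "VBL1" if select_512A else Vx,
        "BL2A": "VBL2" if select_513A else Vx,
    }

NEGLIGIBLE_A = 1e-9  # assumed sense threshold in amperes

def sense(cell_current, gidl_preread=False):
    """Readout stage 562: detected current reads as "0" after the
    impact-ionization pre-read; the logic reverses after a GIDL pre-read."""
    value = "0" if cell_current > NEGLIGIBLE_A else "1"
    if gidl_preread:
        value = "0" if value == "1" else "1"
    return value

# Write operation 521 selects only memory cell 512A:
assert write_biases(True, False) == {
    "WL2": "V6", "WL3": "V5", "BL1A": "VBL1", "BL2A": "Vx"}
# Write operation 523 selects both cells:
assert write_biases(True, True)["WL3"] == "V6"
assert sense(1e-6) == "0" and sense(0.0) == "1"
```

Encoding the selections this way makes the symmetry of write operations 521 and 522 (the swapped WL2/WL3 roles) and the reversed GIDL logic explicit.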
A system and method for managing operating modes within a semiconductor chip for optimal power and performance while meeting a reliability target are described. A semiconductor chip includes a functional unit and a corresponding reliability monitor. The functional unit provides actual usage values to the reliability monitor. The reliability monitor determines expected usage values based on a reliability target and the age of the semiconductor chip. The reliability monitor compares the actual usage values and the expected usage values. The result of this comparison is used to increase or decrease current operational parameters. |
1. A semiconductor chip comprising:
a functional unit;
a monitor configured to:
monitor an actual usage of the functional unit;
compare the actual usage of the functional unit to an expected usage of the functional unit, wherein the expected usage is based at least in part on an age of the functional unit; and
provide information corresponding to said compare;
a power manager configured to:
update operating parameters of the functional unit to change power consumption for the functional unit responsive to the information; and
send the updated operating parameters to the functional unit.

2. The semiconductor chip as claimed in claim 1, wherein in response to determining the actual usage is less than the expected usage, the updated operating parameters include maximum values for the operating parameters that are greater than current maximum values for the operating parameters.

3. The semiconductor chip as claimed in claim 1, wherein in response to determining the actual usage is greater than the expected usage, the updated operating parameters include maximum values for the operating parameters that are less than current maximum values for the operating parameters.

4. The semiconductor chip as claimed in claim 1, wherein in response to determining the actual usage is different than the expected usage, the monitor is further configured to change the expected usage of the functional unit.

5. The semiconductor chip as claimed in claim 1, wherein monitoring the actual usage comprises receiving values comprising one or more of an operational voltage and a temperature measurement.

6. The semiconductor chip as claimed in claim 5, wherein the monitor is further configured to:
maintain a reliability metric as an accumulated value over time based at least upon the actual usage of the functional unit and the age of the functional unit; and
compare the reliability metric with a reliability target.

7. The semiconductor chip as claimed in claim 6, wherein the monitor is further configured to store the reliability metric to non-volatile memory responsive to detecting a given time interval has elapsed.

8. The semiconductor chip as claimed in claim 1, wherein the semiconductor chip further comprises:
a plurality of voltage/clock domains, each operating with operating parameters; and
a plurality of monitors, each configured to:
receive actual usage values from a respective one of the plurality of voltage/clock domains; and
compare the actual usage values to expected usage values of the voltage/clock domains.

9. A method comprising:
operating a functional unit;
comparing an actual usage of the functional unit to an expected usage of the functional unit, wherein the expected usage is based at least in part on an age of the functional unit; and
updating operating parameters of the functional unit to change power consumption for the functional unit, in response to determining the actual usage is different from the expected usage.

10. The method as claimed in claim 9,
wherein in response to determining the actual usage is less than the expected usage, the updating comprises changing maximum values of the operating parameters to be greater than current maximum values of the operating parameters; and
wherein in response to determining the received actual usage values are greater than the expected usage values, the updating comprises changing maximum values of the operating parameters to be less than current maximum values of the operating parameters.

11. The method as claimed in claim 9, wherein the operating parameters include one or more of an operational voltage and a power-performance state.

12. The method as claimed in claim 9, wherein in response to determining the actual usage is different than the expected usage, the method further comprises changing the expected usage of the functional unit.

13. The method as claimed in claim 9, wherein the actual usage values comprise one or more of an operational voltage and an on-die temperature measurement.

14. An on-die reliability monitor comprising:
a first interface configured to receive information indicative of an actual usage of a functional unit;
control logic configured to:
compare the actual usage of the functional unit to an expected usage of the functional unit; and
generate information usable to change operating parameters of the functional unit, in response to determining the actual usage is different than the expected usage; and
a second interface configured to convey the information to a power manager.

15. The reliability monitor as claimed in claim 14,
wherein in response to determining the received actual usage is less than the expected usage, the information indicates an increase in maximum values for the operating parameters; and
wherein in response to determining the received actual usage is greater than the expected usage, the information indicates a decrease in maximum values for the operating parameters.

16. The reliability monitor as claimed in claim 14, wherein the information indicative of the actual usage values comprises one or more of an operational voltage and an on-die temperature measurement.

17. A system comprising:
a functional unit;
operational instructions comprising an algorithm for adapting operating parameters over time;
a monitor configured to:
monitor an actual usage of the functional unit;
compare the actual usage of the functional unit to an expected usage of the functional unit, wherein the expected usage is based at least in part on an age of the functional unit; and
provide information corresponding to said compare;
a power manager configured to:
update operating parameters of the functional unit to change power consumption for the functional unit, responsive to the information and the operational instructions; and
send the updated operating parameters to the functional unit.
BACKGROUND

Description of the Relevant Art

The power consumption of modern integrated circuits (ICs) has become an increasingly important design issue with each generation of semiconductor chips. Integrated circuit power dissipation constraints are not only an issue for portable computers and mobile communication devices, but also for high-performance microprocessors, which may include multiple processor cores and multiple pipelines within a core.

A power management unit (PMU) for an IC may reduce power to a portion of the IC when it detects, or is otherwise informed, that the portion is unused for a given period of time. Similarly, power-performance states (P-states) or dynamic voltage and frequency scaling (DVFS) techniques are adjusted based on usage feedback of one or more processing units. Typically, power management algorithms assume worst-case thermal conditions and anticipated usage of an IC over time when estimating a lifetime for the IC. Given these assumptions, lower performance states (on average) are selected than might otherwise have been chosen. However, during typical usage the worst-case thermal conditions may not actually be met. Consequently, the power constraints placed upon the system due to the worst-case assumptions may be more stringent than necessary. Unfortunately, as the use of the IC is predicted in advance and built into the system, the system may provide lower performance during its anticipated life than could otherwise have been achieved.

In view of the above, efficient methods and systems for managing operating modes within a semiconductor chip for optimal power and performance while meeting a reliability target are desired.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a generalized diagram of one embodiment of a computing system.

FIG. 2 is a generalized diagram of one embodiment of a method used for increasing performance and reliability of a computing system.

FIG.
3 is a generalized diagram of one embodiment of a method for adjusting operational parameters to increase reliability of a computing system.

FIG. 4 is a generalized diagram of one embodiment of a system on a chip (SOC).

FIG. 5 is a generalized diagram of one embodiment of a method for increasing performance and reliability of a semiconductor chip.

While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention might be practiced without these specific details. In some instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the present invention. Further, it will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements.

Systems and methods for managing operating parameters within a semiconductor chip for optimal power and performance while meeting a reliability target are contemplated. In various embodiments, a semiconductor chip includes one or more functional units, each of which operates with respective operating parameters. One or more functional units are connected to a corresponding reliability monitor.
These one or more functional units report actual usage values to the reliability monitor. For example, the one or more functional units report the actual usage values each time a given time interval elapses. The actual usage values for a functional unit are based at least upon one or more operating parameters and an age of the functional unit. In some embodiments, the operating parameters include power-performance states (P-states) for a given functional unit, dynamic voltage and frequency scaling (DVFS) parameters for multiple functional units, activity levels, the values of such parameters during given time intervals, the averages of such parameters over given time intervals, and so forth. The actual usage values can also include an operational temperature.

The reliability monitor receives the actual usage values from a functional unit and determines expected usage values for the corresponding functional unit based at least in part on the age of the functional unit. For example, if the reliability target of a semiconductor chip is a lifespan of at least five years, then expected usage values over a selected duration are set based on the reliability target of the five-year lifespan. The selected duration can be hourly, daily, weekly, or otherwise. The distribution of the expected usage values can be set as desired. For example, in one embodiment, a uniform distribution is used, where an approximately equal expected usage value is used for each time interval during the lifetime of the chip. In other embodiments, higher expected usage values are set for earlier stages of the lifespan while lower expected usage values are set for later stages of the lifespan, or vice versa. Other distributions for setting the expected usage values based on the reliability target are possible and contemplated.

The reliability monitor compares the received actual usage values to the expected usage values.
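The distribution choices described above can be sketched briefly. This is a hypothetical illustration, not the patent's implementation: the function name, the notion of a numeric "usage budget", and the linear front-loaded weighting are all assumptions introduced for the example.

```python
# Illustrative sketch (hypothetical, not from the disclosure): deriving
# per-interval expected usage values from a reliability target. A total
# usage budget for the lifespan is spread either uniformly or with a
# front-loaded (higher-early) linearly decreasing distribution.

def expected_usage(total_budget, num_intervals, distribution="uniform"):
    if distribution == "uniform":
        # Equal expected usage value for every interval of the lifetime.
        return [total_budget / num_intervals] * num_intervals
    if distribution == "front_loaded":
        # Linearly decreasing weights: earlier intervals get larger budgets.
        weights = list(range(num_intervals, 0, -1))
        total_weight = sum(weights)
        return [total_budget * w / total_weight for w in weights]
    raise ValueError("unknown distribution")

# Five-year lifespan budgeted weekly (260 intervals), uniform spread:
weekly = expected_usage(100.0, 260)
assert abs(sum(weekly) - 100.0) < 1e-9
assert all(abs(v - weekly[0]) < 1e-12 for v in weekly)

front = expected_usage(100.0, 4, "front_loaded")
assert front[0] > front[-1]   # earlier intervals allow more usage
```

Either scheme conserves the overall budget implied by the reliability target; they differ only in when headroom is made available during the chip's life.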
In various embodiments, the reliability monitor maintains a reliability metric as an accumulated value over time based at least upon the received actual usage values from the functional unit and the age of the functional unit. When the reliability monitor determines the received actual usage values exceed the expected usage values, the reliability monitor generates information (e.g., a command or otherwise) to increase reliability for the functional unit by decreasing power consumption and/or other operating parameters. In such a case, the remaining anticipated lifetime of the functional unit may not reach the target lifetime; by reducing the operating parameters, the remaining lifetime may be extended so that the target lifetime is reached. In contrast, the reliability monitor generates a command to permit a boost in performance of the functional unit responsive to determining the received actual usage values are less than the expected usage values. In this manner, the reliability monitor replaces or modifies an anticipated-use approach by using feedback under real usage conditions. When actual usage is less than originally anticipated usage, the adjustments provided by the reliability monitor increase performance of the chip while still reaching the target lifetime of the functional unit.

Turning to FIG. 1, a generalized block diagram of one embodiment of a computing system 100 is shown. As shown, the computing system 100 includes a functional unit 150, a reliability monitor 110, and a power manager 140. The functional unit 150 can also be representative of any circuitry with its own voltage/clock domain. The functional unit 150 conveys actual usage values 152 to each of the reliability monitor 110 and the power manager 140.
Control logic, such as the parameter selector 142, within the power manager 140 uses the received actual usage values to select one or more operational parameters for the functional unit 150. Additionally, the parameter selector 142 receives and uses information from the reliability monitor 110 to update operational parameters for the functional unit 150. The information includes one or more commands, indications, flags, or computed values used to adjust the operational parameters. For example, without the information from the reliability monitor 110, the parameter selector 142 selects different parameters based on worst-case maximum limits for power consumption. In some embodiments, one or more algorithms used by the parameter selector 142 use a thermal design power (TDP) value. The TDP value represents an amount of power that a cooling system is able to dissipate without exceeding the maximum junction temperature for transistors within the chip.

The reliability monitor 110 also receives the actual usage values from the functional unit 150. The reliability monitor also stores expected usage values 114. One or more of the actual usage values 112 and the expected usage values 114 depend on an age 120 of the functional unit 150. The comparator 130 within the reliability monitor 110 compares the actual usage values 112 and the expected usage values 114. The comparison performed by the comparator 130 determines whether the usage of the functional unit 150 is on target with a reliability target. For example, if a reliability target for the computing system 100 is a lifespan of at least five years, then multiple expected usage values over a duration are set based on the reliability target of the five year lifespan.

Control logic within the reliability monitor 110 receives the comparison result from the comparator 130 and determines the received actual usage values 112 exceed the expected usage values 114.
In response, the control logic within the reliability monitor 110 provides information 154 to direct the power manager 140 to increase reliability for the functional unit 150. In such a case, the information 154 may indicate one or more of the operating parameters should be reduced. In one embodiment, reduced maximum values for operating parameters are indicated. The reduced maximum values used by the parameter selector 142 cause the parameter selector 142 to select parameters which reduce power consumption by the functional unit 150. The reduced maximum values and the resulting selected operating parameters sent to the functional unit 150 reduce wear on the functional unit 150, and thus increase reliability of the functional unit 150.

In contrast to the above, when the comparison result from the comparator 130 indicates the received actual usage values 112 are less than the expected usage values 114, the reliability monitor 110 provides information for use by the power manager 140 indicating a performance boost is available. The information sent from the reliability monitor 110 to the power manager 140 indicates maintaining or increasing maximum values for one or more of the operating parameters. The updated maximum values used by the parameter selector 142 cause the parameter selector 142 to select parameters which increase performance for the functional unit 150.

In some embodiments, the functional unit 150 is representative of a processing unit, a general-purpose central processing unit (CPU) complex, a graphics processing unit (GPU), or another processor such as a digital signal processing (DSP) core, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), and so forth. As described earlier, the functional unit 150 is representative of any circuitry with its own voltage/clock domain. For example, the functional unit 150 can be a memory controller, an input/output (I/O) hub controller, or other circuitry.
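The selection of lower-power or higher-performance parameters described above can be sketched as follows. The P-state table values, the command strings, and the convention that index 0 denotes the highest-performance state are illustrative assumptions:

```python
# Hypothetical P-state table: index 0 is the highest-performance state.
# Each entry is an illustrative (operational voltage, clock frequency) pair.
P_STATES = [(1.20, 3.8), (1.05, 3.0), (0.95, 2.2), (0.85, 1.6)]

def apply_command(current, command, table=P_STATES):
    """Step the current P-state index by one in response to a command,
    clamped to the bounds of the table."""
    if command == "decrease_power":
        return min(current + 1, len(table) - 1)  # toward lower power
    if command == "allow_boost":
        return max(current - 1, 0)               # toward higher performance
    return current                               # maintain
```

A throttle command from P-state 1 selects P-state 2; a boost command while already at P-state 0 leaves the selection unchanged.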
A description of other embodiments with multiple voltage/clock domains is provided later. A single voltage/clock domain is discussed here for ease of illustration. In some embodiments, a reliability monitor 110 is used for a given voltage/clock domain. In other embodiments, a reliability monitor 110 is used for multiple voltage/clock domains.

The functional unit 150 provides actual usage values to the power manager 140 and the reliability monitor 110. The actual usage values for the functional unit 150 are based upon one or more operating parameters, an operational temperature, an operational current, and an age of the functional unit 150. In various embodiments, the functional unit 150 utilizes analog or digital thermal sensors to provide information as to when the die heats up in a particular area due to increased compute activity.

The reliability of the functional unit 150 can be critical, and the actual usage values are used to monitor, track and adjust the usage of the functional unit 150 to satisfy reliability targets. For example, the computing system 100 can be used in medical equipment, automotive systems such as anti-lock braking systems, banking and business-critical storage and processing systems, space travel systems and so forth. Due to the difficulty of testing under real conditions, equations anticipating the use and worst-case conditions were used to predict the life span of integrated circuits (ICs). For example, the ICs in the functional unit 150 may have gone through high temperature operating life testing and the expected life span under real conditions was extrapolated from data gathered during the testing. However, the reliability monitor 110 replaces the anticipated use approach and provides real-time feedback under real usage conditions to monitor and adjust usage to satisfy a reliability target and take advantage of available performance.

The operational temperature over time indicates wear on the functional unit 150.
The on-die sensors in the functional unit 150 provide one or more operational temperature values to both the power manager 140 and the reliability monitor 110. The one or more operational temperature values over time indicate whether particular types of circuit failures are more or less likely. For example, time-dependent dielectric breakdown (TDDB) occurs when the gate oxide breaks down as a result of a relatively low electric field being applied over a long duration. The breakdown is caused by an electron tunneling current that forms a conducting path through the gate oxide to the substrate. Typically, the metal oxide semiconductor (MOS) field effect transistor (FET) is operating near or beyond its specified operating voltage.

Another type of circuit failure occurs when electromigration gradually moves ions in a conductor during application of high current densities. For example, copper or other traces used as long conducting wires for an appreciable amount of time experience diffusing metal atoms. As transistor widths and trace widths decrease, the effects of electromigration increase.

In addition to operational temperature values, the actual usage values sent from the functional unit 150 to the reliability monitor 110 include an operational voltage, an operational current, and a clock frequency. The combination of these values is used in power performance states (P-states). The power manager 140 provides P-state information to the functional unit 150. In some embodiments, the functional unit 150 uses only the operational voltage and clock frequency associated with the received P-state. In other embodiments, the functional unit 150 includes internal power management techniques. For example, the operating system or application-specific processes use dynamic voltage and frequency scaling (DVFS) techniques.
Downloaded drivers use tables supplied by basic input output software (BIOS) to obtain clock frequency, operational voltage, temperature, and current information appropriate for a particular platform. Frequency and voltage transitions can be unavailable if the BIOS does not supply these tables.

In addition to the DVFS scaling techniques, the microarchitecture and circuit-level design techniques for balancing power consumption and performance of the functional unit 150 can be aided by efforts to estimate in real-time the power consumption of circuitry and functional blocks within the functional unit 150. Methods for estimating this power consumption in real-time include measuring an activity level of the circuitry and functional blocks. Any of a variety of techniques can be utilized to determine power consumption of circuitry and functional blocks within the functional unit 150.

In some embodiments, the functional unit 150 samples a number of pre-selected signals. The selection of which signals to sample during a particular clock cycle corresponds to how well the selection correlates to the amount of switching node capacitance within the functional unit 150. For example, in some embodiments, various clock enable signals, bus driver enables, mismatch lines in content-addressable memories (CAM), and CAM word-line (WL) drivers can be chosen for sampling. A corresponding weight can be selected for each of the sampled signals. Multiple samples can be taken during a sample interval. A count can be maintained for such signals during operation. Based on these counts, an estimate of power consumption corresponding to the counts is determined. The estimated power consumption from the sampled signals would not be based on measures of thermal conditions or current draw.
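The weighted signal-sampling estimate described above can be sketched as follows. The signal names, weights, and sample counts are illustrative assumptions; the weights stand in for how strongly each sampled signal correlates with switched node capacitance:

```python
def estimate_dynamic_power(sample_counts, weights, samples_per_interval):
    """Estimate relative dynamic power from counts of pre-selected signals.

    sample_counts[i] is the number of times signal i was observed asserted
    during the sample interval, and weights[i] models the switching node
    capacitance that signal correlates with. The result is a unitless,
    activity-weighted estimate (not a thermal or current measurement).
    """
    return sum((count / samples_per_interval) * weight
               for count, weight in zip(sample_counts, weights))

# Illustrative counts for, e.g., clock enables, bus driver enables,
# and CAM match lines over a 1000-sample interval.
estimate = estimate_dynamic_power([800, 200, 50], [1.0, 2.5, 4.0], 1000)
```

With the illustrative numbers above, the estimate is 0.8 + 0.5 + 0.2 = 1.5, a relative activity figure rather than a power value in watts.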
In addition to or in place of the sampled signals, one or more current draw measurements from on-die current sensors can be sent in the actual usage values from the functional unit 150 to the power manager 140 and the reliability monitor 110.

In some embodiments, the reliability monitor 110 stores the received actual usage values and later processes them. In other embodiments, the reliability monitor 110 pre-processes the received actual usage values to combine them with one another, combine them with other values stored within the reliability monitor 110, index one or more tables to access other values, and so forth. The actual usage values 112 represent values used by control logic within the reliability monitor 110 following one or more pre-processing steps or no pre-processing steps.

The expected usage values correspond to an age 120 of the functional unit 150. In some embodiments, the functional unit 150 provides an indication of age with the actual usage values to the reliability monitor 110. In other embodiments, the reliability monitor 110 maintains the age 120. In some embodiments, the reliability monitor 110 uses a timestamp value to maintain the age. In other embodiments, the reliability monitor 110 uses one or more counters to maintain the age. One counter can be incremented hourly and roll over at the end of a 24 hour period, whereas other counters are incremented daily, weekly, monthly and annually. In such a case, the concatenation of the counter values provides an age of the functional unit 150. In other embodiments, software, such as the operating system, maintains the age of the functional unit 150.

The reliability monitor 110 determines the expected usage values 114 for the functional unit 150 based on the age 120 and a reliability target.
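The counter-based age tracking described above can be sketched as follows. The roll-over moduli (24 hours per day, 7 days per week, 52 weeks per year) are illustrative assumptions for how the counter values concatenate into a single age:

```python
def age_in_hours(hour_ctr, day_ctr, week_ctr, year_ctr):
    """Combine rolling counters into a single age value in hours.

    hour_ctr rolls over every 24, day_ctr every 7, and week_ctr every 52,
    so the concatenation of the counter values yields the total age.
    """
    return hour_ctr + 24 * (day_ctr + 7 * (week_ctr + 52 * year_ctr))
```

For example, counter values of 5 hours, 3 days, 2 weeks and 1 year concatenate to an age of 9,149 hours.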
For example, if a reliability target for the functional unit 150 is a lifespan of at least five years, then multiple expected usage values 114 over a duration are set based on the reliability target being the five year lifespan. For example, expected usage values 114 may be set for each day of a five year period. Alternatively, other embodiments can set expected usage values 114 for different time intervals, such as hourly, weekly, monthly, each quarter of a year, and so forth.

In addition to the above, the distribution of the expected usage values 114 is set as desired. For example, a uniform distribution can be used where a same expected usage value 114 is used for each time interval. Alternatively, higher expected usage values can be set for earlier stages of the lifespan while lower expected usage values are set for later stages of the lifespan. Other distributions for setting the expected usage values 114 based on the reliability target are possible and contemplated. In addition, each of the reliability target, the time intervals, and the distribution can be programmable values stored in control and configuration registers.

In various embodiments, the reliability monitor 110 combines the actual usage values 112 to generate a single reliability metric. The reliability monitor 110 can maintain the reliability metric as an accumulated value over time based at least upon the actual usage values 112 and the age 120 of the functional unit. Similarly, the expected usage values 114 can be combined to generate a single target metric. The comparator 130 compares the reliability metric generated from the actual usage values 112 and the target metric generated from the expected usage values 114.
The comparison result indicates whether the functional unit 150 is overused, underused or on target as compared to an expected usage based on the reliability target.

When the comparator 130 determines the actual usage values 112 exceed the expected usage values 114, the comparator 130 generates information (or an indication) to increase reliability for the functional unit 150. As described earlier, the information includes one or more of commands, indications or flags, computed values, and/or otherwise that are used to adjust operational parameters for the functional unit 150. The information indicates updating maximum values for one or more of the operating parameters to values less than current maximum values for the one or more operating parameters.

As an example, during the initial 6 months of usage, the computing system 100 may experience a relatively high workload that exceeds what was expected. Therefore, during a time interval, such as the next 6 months, the reliability monitor 110 generates information that causes a reduction in the maximum values (and power consumption) in order to increase reliability of the functional unit 150. The information corresponds to upcoming expected actual usage values for the next 6 months based on a given distribution of usage as described earlier.

In some embodiments, the reliability monitor 110 provides the information to the power manager 140. In other embodiments, the reliability monitor 110 provides the information to both the power manager 140 and the functional unit 150. The power manager 140 updates the operational parameters to send to the functional unit 150 based on actual usage values received from the functional unit 150 and the information received from the reliability monitor 110.

In contrast to the above, when the comparator 130 determines the actual usage values 112 are less than the expected usage values 114, the comparator 130 generates information to boost performance for the functional unit 150.
The information indicates updating maximum values for one or more of the operating parameters to values greater than current maximum values for the one or more operating parameters. For example, after the initial 6 months of usage, the computing system 100 may have been utilized less than expected. Therefore, the reliability monitor 110 generates information to increase maximum values for one or more operating parameters in order to allow a boost in performance.

The power manager 140 includes circuitry and logic for processing power management policies for the functional unit 150. The power manager 140 disables, or otherwise reduces power consumption of, portions of the functional unit 150 when it detects or is otherwise informed that the portion is unused for a given period of time. Similarly, power-performance states (P-states) or dynamic voltage and frequency scaling (DVFS) techniques can be adjusted based on usage feedback from the functional unit 150. The initial algorithms for managing power assume worst-case thermal conditions. However, the actual usage and environmental conditions will likely be less severe than the worst-case. Therefore, rather than use lower performance states, the information from the reliability monitor 110 aids the parameter selector 142 in selecting higher performance states when possible (e.g., when usage has been lower than expected) and lower performance states when appropriate (e.g., when usage has been higher than expected).

Referring now to FIG. 2, one embodiment of a method 200 for increasing performance and reliability of a computing system is shown. For purposes of discussion, the steps in this embodiment (as well as in FIGS. 3 and 5) are shown in sequential order. However, in other embodiments some steps occur in a different order than shown, some steps are performed concurrently, some steps are combined with other steps, and some steps are absent.

In block 202, a workload is processed by a functional unit.
Such a workload generally entails execution of software applications, operating system processes, or other processes. When a given time interval has elapsed (conditional block 204), then in block 206, the functional unit provides actual usage values or makes such usage values available for access. In some embodiments, in addition to the elapse of a time interval, such usage values may be provided at other times, for example, responsive to a user command, program code, or the detection of some event. In one embodiment, the functional unit provides the actual usage values to a reliability monitor. The functional unit additionally provides the actual usage values for use by a power manager. The actual usage values include one or more of an operational temperature, a current draw, power performance state (P-state) information, dynamic voltage and frequency scaling (DVFS) parameters, activity levels and other power consumption values. In some embodiments, the actual usage values also use corresponding weights. Alternatively, weights can be associated when the actual usage values are received at the reliability monitor. An age can also be associated with the actual usage values through one or more accumulated sums.

In block 208, expected usage values based on the age of the functional unit are determined. The determination uses the age of the functional unit and a distribution of expected usage values based on a reliability target as described earlier. In various embodiments, the received actual usage values are combined to generate a single reliability metric. The reliability metric can be maintained as an accumulated value over time based at least upon the actual usage values and the age of the functional unit. Similarly, the expected usage values can be combined to generate a single target metric. The reliability metric generated from the actual usage values is compared to the target metric generated from the expected usage values.
The comparison result indicates whether the functional unit has been used more than expected, less than expected, or approximately equal to what is expected based on the reliability target.

If the actual usage values exceed the expected usage values (conditional block 210), then in block 212 a command or other information is provided to the power manager to increase reliability for the functional unit. In some embodiments, providing the information includes storing the information in a location (e.g., a register or memory location) that is then accessed by the power manager. In other embodiments, the power manager may request such information from the monitor or other entity, which then provides the requested information in response to the request. These and other embodiments are possible and are contemplated. The command or information indicates updating maximum values for one or more of the operating parameters to values less than current maximum values for the one or more operating parameters.

The command or information causes a reduction in power consumption by the functional unit. In one embodiment, reducing power consumption includes reducing a maximum allowable power performance state, or a reduction in a maximum allowable average power performance state (P-state) over time, for the functional unit. In one embodiment, a "throttle" of a P-state includes decrementing a currently selected P-state by at least one P-state to a lower power consumption P-state. In some examples, the power manager does not select throttling the P-state if the power manager did not receive additional information from the reliability monitor. For example, the power consumption may not be relatively high, but the functional unit can be currently exceeding a reliability target. Factors such as at least the effects of TDDB, electromigration, and age can be used to determine the functional unit is currently exceeding the reliability target.
Therefore, with the added information from the reliability monitor, the P-state is throttled.

If the actual usage values do not exceed the expected usage values (conditional block 210), but the actual usage values are less than the expected usage values (conditional block 214), then in block 216 a command or other information can be sent to the power manager that allows a boost in performance of the functional unit. In one embodiment, such a command or information indicates updating maximum values for one or more of the operating parameters to values greater than current maximum values for the one or more operating parameters. If the actual usage values do not exceed the expected usage values (conditional block 210), and the actual usage values are approximately equal to the expected usage values (conditional block 214), then in block 218 a command or other information is sent to the power manager to maintain operating parameters selected for the functional unit. Alternatively, no command or information is sent to the power manager. In such a case, the power manager simply maintains its current settings for the operating parameters.

As described earlier, the reliability monitor combines the received actual usage values to generate a single reliability metric. The reliability metric can be maintained as an accumulated value over time based at least upon the actual usage values and the age of the functional unit. By maintaining the reliability metric as an accumulated value, the reliability metric depends on an average of the received actual usage values over time. Similarly, the expected usage values can be combined to generate a single target metric. In some embodiments, the rates at which the reliability metric and the target metric are updated are programmable. For example, the reliability metric can be updated each millisecond, but other durations can be selected and later possibly changed.
The target metric can be updated daily, but other durations can be selected and may later be changed.

Referring now to FIG. 3, one embodiment of a method 300 for adjusting operational parameters to increase reliability of a computing system is shown. In block 302, the actual usage values are received from a functional unit. As described earlier, examples of the actual usage values include one or more of an operational temperature, a current draw, P-state information, DVFS parameters, activity levels and other power consumption values. Additionally, in block 304, a command or other information is received from a reliability monitor.

In block 306, at least one or more operational parameters can be updated based on the received actual usage values. For example, using the received actual usage values, the power manager or other logic determines to throttle or boost a P-state, reschedule high-performance software applications, enable or disable one or more functional blocks within the functional unit, and so forth.

If the functional unit is not active (conditional block 308), then in block 318, the operational parameters and any other directing information are sent to the functional unit. In some embodiments, a minimal activity level may be required before the power manager further considers feedback information from the reliability monitor. In some examples, the minimal activity level can be one or more activity levels above the lowest activity level associated with an inactive or turned-off system.

If the functional unit is active (conditional block 308), and the received feedback from the reliability monitor, such as a command, indicates decreasing power consumption (conditional block 310), then in block 312 one or more updated operational parameters can be adjusted by an amount indicated by the command to reduce power consumption.
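The two-stage flow of method 300 can be sketched as follows: the power manager first updates a parameter from the actual usage values, then further adjusts it using the reliability monitor's feedback only when the functional unit is active. The thermal threshold, the command strings, and the single P-state index standing in for the operational parameters are all illustrative assumptions:

```python
def select_parameters(usage, command, p_state, active=True):
    """Power-manager sketch following the method 300 flow.

    usage is a mapping of actual usage values; command is the reliability
    monitor's feedback; p_state is the current P-state index (0 = highest
    performance).
    """
    # Block 306: update from actual usage values alone (illustrative
    # worst-case thermal limit of 90 degrees Celsius).
    if usage["temperature"] > 90.0:
        p_state = p_state + 1          # throttle on high temperature
    # Block 308: an inactive unit skips the monitor feedback.
    if not active:
        return p_state
    # Blocks 310-316: further adjust per the monitor's command.
    if command == "decrease_power":
        p_state += 1                   # block 312: further throttle
    elif command == "allow_boost":
        p_state = max(p_state - 1, 0)  # block 316: boost
    return p_state                     # block 318: send to functional unit
```

The two adjustments are shown sequentially here; as noted above, other embodiments may apply the actual usage values and the monitor feedback simultaneously.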
In some embodiments, the power manager updates one or more operational parameters as described earlier for block 306 without considering the feedback from the reliability monitor. For example, the algorithms and control logic can be preexisting algorithms and logic that are being reused.

As shown in method 300, in block 312 the one or more operational parameters are further adjusted based on the feedback information from the reliability monitor, but in other embodiments, the power manager updates the one or more operational parameters simultaneously using both the actual usage values and the feedback information from the reliability monitor. For example, using the received command or other information from the reliability monitor, the power manager or other logic determines to further throttle a P-state, further delay scheduling high-performance software applications, further disable one or more functional blocks within the functional unit, and so forth.

If the command or other information from the reliability monitor indicates increasing performance (conditional block 314), then in block 316, one or more updated operational parameters can be adjusted by an amount indicated by the command or other information to increase performance. For example, using the received command or other information from the reliability monitor, the power manager or other logic determines to further boost a P-state, further accelerate scheduling high-performance software applications, further enable one or more functional blocks within the functional unit, and so forth.

Control flow for each of the blocks 312 and 316 moves to block 318, where the adjusted operational parameters are sent from the power manager to the functional unit. The further adjusting in blocks 312 and 316 takes into account at least the effects of TDDB, electromigration and age of the functional unit and a determination of whether the functional unit is currently exceeding the reliability target.

Turning to FIG.
4, a generalized block diagram of one embodiment of a system-on-a-chip (SOC) 400 is shown. The SOC 400 is an integrated circuit (IC) that includes multiple types of IC designs on a single semiconductor die, wherein each IC design provides a separate functionality. In the illustrated embodiment, the SOC 400 includes both an accelerated processing unit (APU) 410 and a platform and input/output (I/O) controller hub (PICH) 420 on a single semiconductor die.

In one embodiment, the APU 410 includes a general-purpose central processing unit (CPU) complex 430 and a graphics processing unit (GPU) 440 on a same semiconductor die. Other various processors may be placed in the SOC 400 in addition to or in place of the CPU 430 and the GPU 440. Other examples of on-die processors the SOC 400 uses include at least digital signal processing (DSP) cores, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and so forth.

The APU 410 utilizes a system management unit (SMU) 480 for controlling the operation of the resources on the SOC 400 and synchronizing communication among the resources. The SMU 480 manages power-up sequencing of the various processors on the SOC 400 and controls multiple off-chip devices via reset, enable and other signals conveyed through ports in the PICH 420. The SMU 480 also manages communication between the various processors on the multiple buses within the SOC 400.

The SOC 400 includes one or more clock sources, such as phase lock loops (PLLs), which are not shown for ease of illustration. The clock sources provide clock signals for each of the components within the SOC 400. The SMU 480 controls these clock sources. The SMU 480 also controls one or more operational voltages used by circuitry across the SOC 400. For example, the SMU 480 includes a power management unit (not shown).

Additionally, the SMU 480 includes one or more reliability monitors 482.
The reliability monitors 482 provide feedback to the power management unit based on factors such as at least the effects of TDDB, electromigration and age. The feedback can be used to determine whether the SOC 400 is currently exceeding a reliability target. In some embodiments, the SMU 480 includes one reliability monitor in the monitors 482 for each voltage/clock domain in the SOC 400.

In some embodiments, the SMU 480 includes a centralized controller for the monitors 482. The centralized controller receives feedback from each of the monitors 482 and determines a final set of commands or other information to send to the power management unit. For example, each one of the monitors 482 can have an associated weight to prioritize its feedback ahead of or behind feedback from other monitors. In various embodiments, the weights are assigned by an amount of on-die real estate associated with the corresponding voltage/clock domain. In other embodiments, the weights can be assigned based on the functionality provided by the voltage/clock domain or any other factor.

In some embodiments, a given reliability monitor of the monitors 482 includes the functionality of a centralized controller, rather than the SMU 480 including a separate centralized controller. In some embodiments, the reliability monitors 482 are dispersed across the SOC 400 near their respective voltage/clock domains. In such embodiments, the reliability monitors 482 provide feedback to the SMU 480, which forwards the feedback to a centralized controller. In other embodiments, the dispersed reliability monitors 482 provide feedback information to a given reliability monitor with the functionality of a centralized controller, and the given reliability monitor sends feedback information to the SMU 480 representative of the feedback received from the monitors 482.

The APU 410 includes an integrated memory controller 450 to directly communicate with off-chip memory and video cards.
The off-chip memory includes at least dynamic random access memory (DRAM). In addition, the memory controller 450 can be connected to off-chip disk memory through an external memory bus. In one embodiment, the SMU 480 includes integrated channel circuitry to directly link signals from the platform and input/output (I/O) controller hub 420 to the CPU complex 430 and the GPU 440 for data control and access. In some embodiments, the crossbar switch 460 is used for this functionality. In other embodiments, the crossbar switch 460 is not used and the functionality is included in the SMU 480.

The SMU 480 utilizes operational instructions such as firmware and/or other microcode for coordinating signal and bus control. In various embodiments, such operational instructions are stored in a non-volatile memory. Similarly, the reliability monitors 482 use operational instructions for characterizing actual usage values received from corresponding voltage/clock domains, configuring reliability target values, or running the algorithm using both actual usage and target values. As described earlier, a reliability monitor of the monitors 482 includes an algorithm, which can be implemented in firmware or otherwise. In some embodiments, the operational instructions may be updated to modify the algorithm. The algorithm filters the received actual usage values and applies them to one or more equations. The one or more equations calculate contributions of the received usage values to a reliability metric via an accumulating value.

The algorithm in a reliability monitor of the monitors 482 also tracks a target reliability metric that accumulates over time of use. The target metric represents an accumulating typical use case over a lifetime specification.
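The accumulating reliability metric and target metric described above can be sketched as follows. The class and method names are illustrative assumptions, not the disclosed firmware implementation:

```python
class ReliabilityMetric:
    """Accumulate actual usage over time and compare it against a target
    metric that accumulates a typical use case over the lifetime
    specification."""

    def __init__(self):
        self.actual = 0.0   # accumulated reliability metric
        self.target = 0.0   # accumulated target metric

    def update(self, usage_sample, target_increment):
        # Each interval contributes its filtered usage to the reliability
        # metric and its expected budget to the target metric; the two
        # update rates may differ in practice (e.g., per-millisecond
        # versus daily) and can be programmable.
        self.actual += usage_sample
        self.target += target_increment

    def overused(self):
        return self.actual > self.target
```

Because both metrics accumulate, a period of heavy usage can be offset by later light usage before the comparison again indicates overuse.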
The algorithm uses a PID (proportional, integral and differential) controller to adapt operational parameters over time based on actual system usage, maintaining overall reliability to meet the target specification.

In some embodiments, the SMU 480 periodically writes one or more of the actual usage values (or corresponding reliability metrics), the expected usage values (or corresponding target metrics), and the updated operational parameters to non-volatile memory to preserve the data in the event of the SOC 400 being shut down. For example, the SMU 480 periodically writes the data to off-chip memory through the memory controller 450. The time interval for writing the data can be programmable.

The platform and I/O controller hub (PICH) 420 can interface with different I/O buses according to given protocols. The PICH 420 can perform I/O functions and communicate with devices and software such as peripherals following the Universal Serial Bus (USB) protocol, peripherals and network cards following the Peripheral Component Interconnect Express (PCIe) protocol, the system basic input/output software (BIOS) stored in a read-only memory (ROM), interrupt controllers, Serial Advanced Technology Attachment (SATA) devices, network interfaces, multichannel high-definition audio codec functionality and interfaces, and so forth. The PICH 420 can perform on-die the operations typically performed off-die by a conventional Southbridge chipset.

The CPU complex 430 includes one or more processing units 435a-435b, each of which includes a processor core 432 and a corresponding cache memory subsystem 434. In some embodiments, the CPU 430 can also include a shared cache memory subsystem 462, which is accessed by each one of the processing units 435a-435b. Each processor core 432 includes circuitry for executing instructions according to a given instruction set. For example, the SPARC® instruction set architecture (ISA) can be selected.
Alternatively, the x86, x86-64®, Alpha®, PowerPC®, MIPS®, PARISC®, or any other instruction set architecture can be selected.

The GPU 440 can directly access both local memories 434 and 462 and off-chip memory via the integrated memory controller 450. Such embodiments can lower latency for memory accesses for the GPU 440, which can translate into higher performance. Since cores within each of the CPU 430 and the GPU 440 can access the same memory, the SMU 480 maintains cache coherency for the CPU 430 and the GPU 440. One or more of the memory controller 450 and the SMU 480 can perform address translations for memory accesses.

In various embodiments, the GPU 440 includes one or more graphics processor cores 442 and data storage buffers 444. The graphics processor core 442 performs data-centric operations for at least graphics rendering and three-dimensional (3D) graphics applications. The graphics processor core 442 has a highly parallel structure, making it more effective than the general-purpose CPU 430 for a range of complex algorithms.

As described earlier, each of the reliability monitors 482 adjusts operational parameters sent from the power management unit to the multiple voltage/clock domains across the SOC 400. The further adjusting by the reliability monitors 482 takes into account at least the effects of TDDB, electromigration and age of the SOC 400 and determines whether the SOC 400 is currently exceeding its reliability target.

Referring now to FIG. 5, one embodiment of a method 500 for increasing performance and reliability of a semiconductor chip is shown. In block 502, one or more software applications are processed. The software applications are processed on a processor, a processing unit, a CPU complex, a GPU, a SOC, or other device. A first time interval can correspond to how often actual usage values of a chip or unit are sent to a corresponding reliability monitor and a power manager.
For example, the first time interval can be a millisecond, although other time intervals can be selected and used. In addition, the first time interval can be programmable.

If the first time interval has elapsed (conditional block 504), then in block 506, actual usage values from voltage/clock domains are sent to respective reliability monitors and a power manager. Starting with the reliability monitors, in block 508, a reliability metric for a respective voltage/clock domain is updated based on the received actual usage values. As described earlier, the reliability monitor combines the received actual usage values to generate a single reliability metric. The reliability metric can be maintained as an accumulated value over time based at least upon the actual usage values and the age of the functional units within the voltage/clock domain. By maintaining the reliability metric as an accumulated value, the reliability metric depends on an average of the received actual usage values over time. Similarly, the expected usage values can be combined to generate a single target metric.

The reliability metric and the target metric can be updated every first time interval, such as the example of a millisecond described earlier. In various embodiments, updating operational parameters in the voltage/clock domains occurs less frequently. For example, a second time interval of a day can be used. Other values for the second time interval can be selected and used. Similar to the first time interval, the second time interval can be programmable.

If the second time interval has elapsed (conditional block 510), then in block 512, the updated reliability metric is compared to the updated target metric. The comparison result(s) indicate whether the particular voltage/clock domain is currently overused, underused or on target regarding expected usage based on the reliability target (the target lifespan for the chip).
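The block-512 comparison, combined with a PID-style correction of the kind the monitor algorithm is described as using, can be sketched as follows; the gains kp/ki/kd and the decision mapping are illustrative assumptions, not values from the design:

```python
# Illustrative block-512 comparison with a PID-style correction;
# the gains are assumptions chosen only for this sketch.
class PStateController:
    def __init__(self, kp=0.5, ki=0.1, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def adjust(self, reliability_metric, target_metric):
        # Positive error = the domain is ahead of its wear budget.
        error = reliability_metric - target_metric
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        correction = (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)
        if correction > 0:
            return "throttle"   # overused: lower P-state / reschedule
        if correction < 0:
            return "boost"      # underused: headroom to raise P-state
        return "on-target"
```

In this sketch a domain whose accumulated metric exceeds the target is told to throttle, and one running below the target curve is allowed to boost, mirroring the overused/underused/on-target outcomes described for block 512.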
The comparison result(s) include information such as one or more of commands, indications or flags, or computed values used to adjust operational parameters for the voltage/clock domains. The information indicates throttling or boosting P-states, rescheduling tasks, threads or processes of one or more software applications, and enabling or disabling particular functional blocks or functional units in the voltage/clock domains.

In block 514, the comparison result(s) can be updated with a weight for the respective voltage/clock domain. As described earlier, a centralized controller can be used for the multiple reliability monitors and can receive feedback from each of the monitors. The centralized controller determines a final set of commands or other information to send to the power manager. For example, each one of the reliability monitors has an associated weight to prioritize its feedback ahead of or behind feedback from other monitors. In various embodiments, the weights are assigned by an amount of on-die real estate associated with the corresponding voltage/clock domain. In other embodiments, the weights can be assigned based on the functionality provided by the voltage/clock domain or any other factor. The centralized controller provides information generated from the feedback from the multiple reliability monitors to the power manager.

In block 516, the power manager determines operational parameters for the voltage/clock domains based on the received actual usage values and information from the multiple reliability monitors. As described earlier, the command(s) or other information indicate throttling or boosting P-states, rescheduling tasks, threads or processes of one or more software applications, and enabling or disabling particular functional blocks or functional units in the voltage/clock domains. In some embodiments, the information from the reliability monitors is received less frequently than the information from the voltage/clock domains.
For example, the power manager receives information from the voltage/clock domains every millisecond, whereas the power manager receives information from the reliability monitors daily. In block 518, the power manager sends the updated operational parameters and any other directives or commands to the voltage/clock domains.

It is noted that one or more of the above-described embodiments include software. In such embodiments, the program instructions that implement the methods and/or mechanisms are conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally speaking, a computer accessible storage medium includes any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium includes storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media further include volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, and non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface.
Storage media include microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.

Additionally, in various embodiments, program instructions include behavioral-level descriptions or register-transfer level (RTL) descriptions of the hardware functionality in a high-level programming language such as C, a hardware design language (HDL) such as Verilog or VHDL, or a database format such as GDS II stream format (GDSII). In some cases the description is read by a synthesis tool, which synthesizes the description to produce a netlist including a list of gates from a synthesis library. The netlist includes a set of gates, which also represent the functionality of the hardware including the system. The netlist is then placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks are then used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. Alternatively, the instructions on the computer accessible storage medium are the netlist (with or without the synthesis library) or the data set, as desired. Additionally, the instructions are utilized for purposes of emulation by a hardware-based emulator from such vendors as Cadence®, EVE®, and Mentor Graphics®.

Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
The invention relates to low-latency remoting to accelerators. A method of offloading performance of a workload includes: receiving a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled to the second computing system over a network; determining the type of the first function call; and generating a list of parameter values for the first function call.
1. A device comprising:
a processor; and
a memory device coupled to the processor, the memory device having stored thereon instructions that, in response to execution by the processor, cause the processor to:
receive a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled by a network to the second computing system;
determine the type of the first function call;
generate a list of parameter values for the first function call;
send a first message to the second computing system, the first message including the name of the first function call, the list of parameter values for the first function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; and
return to the caller when the type of the first function call is an asynchronously callable function with no output dependencies; when the type of the first function call is an asynchronously callable function with replaceable output parameters, assign a newly created symbol to an output parameter and return to the caller; and when the type of the first function call is a synchronous function, block the caller until a response to the first message is received from the second computing system.
2. The device of claim 1, the memory device having stored thereon instructions that, in response to execution by the processor, cause the processor to:
unblock the caller, when the type of the first function call is a synchronous function, when the response to the first message is received from the second computing system.
3. The device of claim 1, wherein the first function call is a request to offload performance of a workload from the first computing system to the accelerator on the second computing system.
4.
The device of claim 1, the memory device having stored thereon instructions that, in response to execution by the processor, cause the processor to:
receive, at the first computing system acting as a target, a second message from a second computing system acting as an initiator, the second message including the name of a second function call, a parameter value list, and one or more new entries for the symbol table, the one or more new entries representing dummy output parameter values;
add the one or more new entries from the second message to the symbol table;
for each input parameter value in the parameter value list of the second function call, if there is a corresponding symbol table index, replace the dummy parameter value with the symbol table entry associated with the corresponding symbol table index;
execute, by an accelerator on the first computing system, the function using the input parameter values;
when the type of the second function call is the asynchronously callable function with replaceable output parameters, for each output parameter in the parameter value list, map the dummy output parameter values of the parameter value list to corresponding output values in the symbol table; and
send, to the second computing system, a message including the name of the second function call and a list of output parameter values.
5. The device of claim 4, wherein the second function call is a request to offload performance of a workload from the second computing system to the accelerator on the first computing system.
6.
A method comprising:
receiving a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled by a network to the second computing system;
determining the type of the first function call;
generating a list of parameter values for the first function call;
sending a first message to the second computing system, the first message including the name of the first function call, the list of parameter values for the first function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; and
returning to the caller when the type of the first function call is an asynchronously callable function with no output dependencies; when the type of the first function call is an asynchronously callable function with replaceable output parameters, assigning a newly created symbol to an output parameter and returning to the caller; and when the type of the first function call is a synchronous function, blocking the caller until a response to the first message is received from the second computing system.
7. The method of claim 6, comprising, when the type of the first function call is a synchronous function, unblocking the caller when the response to the first message is received from the second computing system.
8. The method of claim 6, wherein the first function call is a request to offload performance of a workload from the first computing system to the accelerator on the second computing system.
9.
The method of claim 6, comprising:
receiving, at the first computing system acting as a target, a second message from a second computing system acting as an initiator, the second message including the name of a second function call, a parameter value list, and one or more new entries for the symbol table, the one or more new entries representing dummy output parameter values;
adding the one or more new entries from the second message to the symbol table;
for each input parameter value in the parameter value list of the second function call, if there is a corresponding symbol table index, replacing the dummy parameter value with the symbol table entry associated with the corresponding symbol table index;
executing, by an accelerator on the first computing system, the function using the input parameter values;
when the type of the second function call is the asynchronously callable function with replaceable output parameters, for each output parameter in the parameter value list, mapping the dummy output parameter values of the parameter value list to corresponding output values in the symbol table; and
sending, to the second computing system, a message including the name of the second function call and a list of output parameter values.
10. The method of claim 9, wherein the second function call is a request to offload performance of a workload from the second computing system to the accelerator on the first computing system.
11.
A system comprising:
a first computing system acting as an initiator; and
a second computing system acting as a target, the second computing system being coupled to the first computing system through a network, the second computing system comprising an accelerator;
wherein the first computing system is to: receive a function call from a caller; determine the type of the function call; generate a list of parameter values for the function call; send a first message to the second computing system, the first message including the name of the function call, the list of parameter values for the function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; return to the caller when the type of the function call is an asynchronously callable function with no output dependencies; assign a newly created symbol to an output parameter and return to the caller when the type of the function call is an asynchronously callable function with replaceable output parameters; and, when the type of the function call is a synchronous function, block the caller until a response to the first message is received from the second computing system; and
wherein the second computing system is to: receive the first message; add the one or more new entries from the first message to the symbol table; for each input parameter value in the list of parameter values for the function call, if there is a corresponding symbol table index, replace the dummy parameter value with the symbol table entry associated with the corresponding symbol table index; execute, by the accelerator, the function using the input parameter values; when the type of the function call is the asynchronously callable function with replaceable output parameters, for each output parameter in the list of parameter values, map the dummy output parameter values of the parameter value list to corresponding output values in the symbol table; and send a second message to the first computing system including the name of the function call and the list of output parameter values.
12. The system of claim 11, wherein, when the type of the function call is a synchronous function, the first computing system unblocks the caller when the response to the first message is received from the second computing system.
13. The system of claim 12, wherein the function call is a request to offload performance of a workload from the first computing system to the accelerator on the second computing system.
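The initiator-side handling of the three call types recited in the claims can be illustrated with a minimal sketch; the message format, symbol naming, and transport callbacks are invented for illustration and are not the actual interface:

```python
import itertools

# Illustrative initiator-side dispatch for the three call types in the
# claims; the send/wait callbacks and symbol format are hypothetical.
_symbol_ids = itertools.count()

def make_symbol():
    # A dummy placeholder standing in for a not-yet-computed output value.
    return f"$sym{next(_symbol_ids)}"

def remote_call(name, params, call_type, send, wait_response=None):
    # New symbol-table entries hold dummy output parameter values.
    symbols = {}
    if call_type in ("async_no_deps", "async_replaceable_outputs"):
        symbols = {make_symbol(): None for _ in params.get("outputs", [])}
    send({"name": name, "params": params, "new_entries": symbols})
    if call_type == "async_no_deps":
        return None                 # return to the caller immediately
    if call_type == "async_replaceable_outputs":
        return list(symbols)        # caller receives symbols, not values
    return wait_response()          # synchronous: block for the response

sent = []
result = remote_call("matmul", {"outputs": ["C"]},
                     "async_replaceable_outputs", sent.append)
```

In this sketch only the synchronous path blocks; the two asynchronous paths return before the target has replied, which is what lets message transmission overlap further caller work.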
Low-Latency Remoting to Accelerators

Copyright Notice/Permission

Portions of the disclosure of this patent document may contain copyrighted material. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights. The copyright notice applies to all data described below, to the figures attached to this document, and to any software described below: Copyright © 2021, Intel Corporation. All rights reserved.

Technical Field

The present invention relates to low-latency remoting to accelerators.

Background

In some cloud computing and high-volume data analysis computing environments, computationally intensive workloads are often offloaded from processors to accelerators to achieve higher performance. In one scenario, at least a portion of the workload is offloaded to an accelerator in the same computing system as the processor executing the other portion of the workload. In another scenario, at least a portion of the workload is offloaded to an accelerator (sometimes referred to as a disaggregated accelerator) in another computing system that is coupled via a network to the computing system that includes the processor executing the other portions of the workload.
In this scenario, the latency involved in offloading workloads across the network can negatively impact overall system performance.

Summary

The present invention provides an apparatus comprising: a processor; and a memory device coupled to the processor, the memory device having instructions stored thereon which, in response to execution by the processor, cause the processor to: receive a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled over a network to the second computing system; determine a type of the first function call; generate a list of parameter values for the first function call; send a first message to the second computing system, the first message including the name of the first function call, the list of parameter values for the first function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; and, when the type of the first function call is an asynchronously callable function with no output dependencies, return to the caller; when the type of the first function call is an asynchronously callable function with replaceable output parameters, assign a newly created symbol to an output parameter and return to the caller; and when the type of the first function call is a synchronous function, block the caller until a response to the first message is received from the second computing system.

Brief Description of the Drawings

The concepts described herein are illustrated in the drawings by way of example and not limitation. For simplicity and clarity, elements shown in the figures have not necessarily been drawn to scale.
Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

FIG. 1 is a block diagram of two computing systems in accordance with one or more embodiments.
FIG. 2 is a block diagram of a remote function call from a first computing system to a second computing system, according to one or more embodiments.
FIG. 3 is a timeline diagram of an example of a remote function call in accordance with one or more embodiments.
FIG. 4 is a timeline diagram of an example of a remote function call in accordance with one or more embodiments.
FIG. 5 is a block diagram of an example of caller and remote manager processing in accordance with one or more embodiments.
FIG. 6 is a block diagram of an example of callee and remote manager processing in accordance with one or more embodiments.
FIG. 7 is a flow diagram of an initiator's remote manager processing in accordance with one or more embodiments.
FIG. 8 is a flow diagram of a target's remote manager processing in accordance with one or more embodiments.
FIG. 9 is a schematic diagram of an illustrative electronic computing device performing remoting-to-accelerator processing, according to some embodiments.

Detailed Description

The techniques described herein reduce the impact of network latency associated with running workloads on remote accelerators or other computing devices through remote function calls, such as calls received via an application programming interface (API), to improve overall system performance. This technique enables asynchronous execution of remote functions to overlap network transmission of messages between computing systems.
This helps to offload workloads to disaggregated accelerators, making computing systems more efficient.

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail in this disclosure. It should be understood, however, that there is no intent to limit inventive concepts to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the invention and the appended claims.

References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the described embodiment may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of those skilled in the art that such feature, structure, or characteristic can be implemented in combination with other embodiments, whether or not explicitly described.

Referring now to FIG. 1, an exemplary computing environment 100 for low-area, low-power, low-latency, and high-throughput processing of processor-accelerator communications includes a first computing system 102 and a second computing system 142 coupled by a network 120. In one embodiment, the first computing system 102 is also referred to as an initiator, and the second computing system 142 is also referred to as a target. In another embodiment, the first computing system 102 acts as the target and the second computing system 142 acts as the initiator. The first computing system 102 includes a processor 108 to execute instructions (Instr) 113 stored in a memory 112.
Instructions 113 include at least caller application 104 and remote manager 106. Caller application 104 includes an application having at least one workload to process. In some processing scenarios, caller application 104 offloads one or more workloads to accelerator 120 for more efficient execution than executing the one or more workloads on processor 108. The caller application 104 offloads a workload by making function calls to the remote manager 106 using the API (e.g., instructing the remote manager to send the workload to be processed on an accelerator on another computing system). In at least one embodiment, accelerator 120 is implemented as a field programmable gate array (FPGA). Since the communication between the processor 108 and the accelerator 120 occurs within the first computing system 102, the communication is performed with a first delay.

The second computing system 142 includes a processor 160 for executing instructions (Instr) 153 stored in memory 152. Instructions 153 include at least callee application 144 and remote manager 146. Callee applications 144 include applications having at least one workload to process. In some processing scenarios, callee application 144 accepts one or more workloads offloaded to accelerator 170 in the second computing system 142, to be performed more efficiently than executing the one or more workloads on processor 108. In at least one embodiment, accelerator 170 is implemented as a field programmable gate array (FPGA). This offloading requires that the caller 104 executed by the processor 108 in the first computing system 102 communicate (via the remote manager 106 and the remote manager 146) over the network 120 with the callee 144 executed by the processor 160 in the second computing system 142.
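The latency-hiding idea behind this offloading can be pictured with a small future-based sketch; `offload_to_accelerator` and its timings are invented for illustration and are not the actual remote manager API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical illustration of overlapping caller work with the network
# round trip; the function below stands in for remote manager + accelerator.
def offload_to_accelerator(workload):
    time.sleep(0.05)          # stand-in for network + accelerator time
    return sum(workload)      # stand-in for the accelerated result

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(offload_to_accelerator, [1, 2, 3])
    local = 10 * 2            # caller keeps working while the call is in flight
    remote = future.result()  # block only when the result is needed
```

The caller's local work runs concurrently with the simulated round trip, so the network delay is paid only once, when the result is actually consumed.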
Since the communication between processor 108 and accelerator 170 travels from the first computing system 102 (initiator) to the second computing system 142 (target) over the network 120, the communication is performed with a second delay. The second delay is greater than the first delay, causing overall system performance of the computing environment 100 to degrade. The techniques described herein employ a remote manager 106 executed by a processor 108 in the first computing system 102 interacting with a remote manager 146 executed by a processor 160 in the second computing system 142 to reduce this delay and improve the overall system performance of the computing environment 100.

In various computing environments, there may be any number of processors 108 and accelerators 120 on the first computing system 102, any number of processors 160 and accelerators 170 on the second computing system 142, and any number of first computing systems coupled with any number of second computing systems. In some large-scale cloud computing environments, the number of caller applications 104, callee applications 144, first computing systems 102, second computing systems 142, and associated accelerators 120, 170 may be large (e.g., tens of systems, hundreds of systems, thousands of systems, and thousands or millions of callers and callees). Therefore, any reduction in the second latency can have a significant impact on the overall performance of the computing environment 100.

First computing system 102 and second computing system 142 may be embodied as any type of device capable of performing the functions described herein. For example, computing systems 102, 142 may be implemented as, but not limited to, computers, laptop computers, tablet computers, notebook computers, mobile computing devices, smartphones, wearable computing devices, multiprocessor systems, servers, disaggregated servers, workstations, and/or consumer electronic devices. As shown in FIG.
1, exemplary computing systems 102, 142 include processors 108, 160, input/output (I/O) subsystems 110, 150, memories 112, 152, and data storage devices 114, 154, respectively. Additionally, in some embodiments, one or more of the illustrative components may be incorporated into, or otherwise form part of, another component. For example, in some embodiments, memories 112, 152, or portions thereof, may be incorporated into processors 108, 160, respectively.

Processors 108, 160 may be implemented as any type of processor capable of performing the functions described herein. For example, processors 108, 160 may be embodied as single- or multi-core processors, digital signal processors, microcontrollers, or other processors or processing/control circuits.

The memories 112, 152 may be implemented as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, memories 112, 152 store various data and software used during operation of computing systems 102, 142, such as operating systems, applications, programs, libraries, and drivers. As shown, memories 112, 152 are communicatively coupled to processors 108, 160 via I/O subsystems 110, 150, which are implemented as circuitry and/or components to facilitate input/output operations with processors 108, 160, memories 112, 152, and other components of the computing systems. For example, I/O subsystems 110, 150 may be implemented as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, host controllers, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate input/output operations. In some embodiments, the memories 112, 152 are directly coupled to the processors 108, 160, respectively, such as via an integrated memory controller hub.
Additionally, in some embodiments, the I/O subsystems 110, 150 form part of a system-on-chip (SoC) and are incorporated, together with the processors 108, 160, memories 112, 152, accelerators 120, 170, and/or other components of the computing system, on a single integrated circuit chip. Additionally or alternatively, in some embodiments, processors 108, 160 include integrated memory controllers and system agents, which may be embodied as logic blocks in which data traffic from processor cores and I/O devices is gathered together before being sent to memory 112, 152.

The data storage devices 114, 154 may be implemented as any type of device or devices configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard drives, solid-state drives, non-volatile flash memory, or other data storage devices. Computing systems 102, 142 may also include communication subsystems 116, 156, which may be implemented as any communication circuit, device, or collection thereof that enables communication between computing systems 102, 142 over network 120. Communication subsystems 116, 156 may be configured to implement such communication using any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, WiMAX, 3G, 4G LTE, etc.).

Accelerators 120, 170 may be implemented as FPGAs, application specific integrated circuits (ASICs), coprocessors, or other digital logic devices capable of performing accelerated functions (e.g., accelerated application functions, accelerated network functions, or other accelerated functions). For example, the accelerators 120, 170 may be FPGAs implemented as integrated circuits including programmable digital logic resources that can be configured after manufacture. For example, an FPGA includes a configurable array of logic blocks that communicate through configurable data exchanges.
The accelerators 120, 170 are coupled to the processors 108, 160 via a high-speed connection interface, such as a peripheral bus (e.g., a Peripheral Component Interconnect (PCI) Express bus) or an inter-processor interconnect (e.g., an in-die interconnect (IDI) or QuickPath Interconnect (QPI)), or via any other suitable interconnect. The accelerators 120, 170 receive data and/or instructions from the processors for processing and return resulting data to the processors.

The computing systems 102, 142 further include one or more peripheral devices 118, 158. Peripheral devices 118, 158 include any number of additional input/output devices, interface devices, hardware accelerators, and/or other peripheral devices. For example, in some embodiments, peripheral devices 118, 158 include touch screens, graphics circuitry, graphics processing unit (GPU) and/or processor graphics, audio devices, microphones, cameras, keyboards, mice, network interfaces, and/or other input/output devices, interface devices, and/or peripheral devices.

It should be appreciated that for some implementations, computing systems with fewer or more components than the examples described above may be preferred. Accordingly, the configuration of computing systems 102, 142 may vary in different implementations, depending on many factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples of computing systems 102, 142 include, but are not limited to, mobile devices, personal digital assistants, mobile computing devices, smartphones, cellular phones, handsets, one-way pagers, two-way pagers, messaging devices, computers, personal computers (PCs), desktop computers, laptop computers, notebook computers, handheld computers, tablet computers, servers, disaggregated servers, server arrays or server farms, web servers, network servers,
Internet servers, workstations, microcomputers, mainframe computers, supercomputers, network appliances, web appliances, distributed computing systems, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, televisions, digital televisions, set-top boxes, wireless access points, base stations, subscriber stations, mobile subscriber centers, radio network controllers, routers, hubs, gateways, bridges, switches, machines, or combinations thereof.

The technology described herein may be implemented as one or more microchips interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or an FPGA, or any one or combination of integrated circuits. As used herein, the term "logic" includes software or hardware and/or a combination of software and hardware.

Remoting is the technique of sending commands and data over a network to a computing device or accelerator to perform tasks. For example, an application running on one machine (such as the caller application 104 on the first computing system 102) may wish to run tasks on an accelerator (such as the accelerator 170) on a remote computing system (such as the second computing system 142) in order to accelerate those tasks (for example, machine learning (ML) inference). An application, or a software library supporting the application, communicates with the software device driver interfaced to the accelerator using an API. If the accelerator is attached locally in the computing system running the application (e.g., first computing system 102), the communication between the application/library and the software device driver for the accelerator takes place in the form of local procedure calls.
However, if the accelerator is connected to a remote computing system (such as the second computing system 142), the API function calls have to be "remoted" over the network 120 to a software device driver running on the computing system connected to the accelerator. In the techniques described herein, this remoting is implemented by remote managers on both sides (e.g., remote manager 106, remote manager 146). The first computing system 102 may be referred to as the initiator (where the caller application 104 resides), and the second computing system 142 may be referred to as the target (where the callee application 144 and accelerator 170 reside). If remoting were implemented naively, each API call involving blocking (described below) would be made serially over the network 120 and incur significant network latency overhead (e.g., on today's data center networks, a round trip is about 100 microseconds).

FIG. 2 is a block diagram 200 of a remote function call from the first computing system 102 to the second computing system 142 in accordance with one or more embodiments. A caller application 104 on the first computing system 102 makes a function call 202 to the remote manager 106. In one example, the function call 202 is a request to the accelerator 170 to offload the performance of the workload 202 from the first computing system 102 to the second computing system 142. The function call 202 is handled by remote manager 106, which implements the remote offload request. Remote manager 106 sends the offload request to remote manager 146 on the second computing system 142 via network 120 (not shown in FIG. 2). The remote manager 146 then makes a corresponding function call 202 to the callee application 144. In at least one embodiment, the callee application 144 includes a software device driver (not shown) for interfacing with the accelerator 170. In another embodiment, the software device driver is a separate component from the callee application 144.
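The round trip through the two remote managers (request forwarded to the callee and accelerator, result returned along the same path) can be sketched minimally as follows. This is a hedged illustration only: all class and function names are made up for the sketch, and the "network" is a direct function call standing in for transport over network 120.

```python
# Hedged, minimal sketch of the FIG. 2 path: caller -> initiator remote
# manager -> (network) -> target remote manager -> callee -> accelerator,
# with the result returned along the same path.

def accelerator(workload):
    # Stand-in for accelerator 170 processing a workload.
    return "processed:" + workload

def callee(workload):
    # Stand-in for callee application 144 / the software device driver.
    return accelerator(workload)

class TargetRemoteManager:
    def handle(self, request):
        # Corresponds to remote manager 146 invoking the callee and
        # returning the result.
        return callee(request["workload"])

class InitiatorRemoteManager:
    def __init__(self, network_send):
        # network_send stands in for transport over network 120; here it
        # is a direct call instead of a real network connection.
        self.network_send = network_send

    def function_call(self, workload):
        # Corresponds to remote manager 106 forwarding the offload
        # request and returning the result to the caller application.
        return self.network_send({"workload": workload})

target = TargetRemoteManager()
initiator = InitiatorRemoteManager(target.handle)
result = initiator.function_call("ml-inference")
```

In a real deployment the two managers run on different machines, and the naive version of this path blocks the caller for a full network round trip on every call, which motivates the asynchronous techniques described next.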
Accelerator 170 processes workload 202 and returns results to callee application 144. Callee application 144 returns the result to remote manager 146 via function call 202. Remote manager 146 sends the results to remote manager 106 on first computing system 102 over the network. The remote manager 106 returns the result to the caller application 104 via function call 202. In an embodiment, remote manager 106 and remote manager 146 include the same code and can each operate as an initiator and/or a target.

FIG. 3 is a timeline diagram 300 of an example of a remote function call in accordance with one or more embodiments. FIG. 3 shows the effect of serial remote API function calls over the network. The initiator is the entity (e.g., caller application 104) that makes the API function call. The target side consists of a software device driver (e.g., callee application 144) on a remote machine (e.g., second computing system 142) connected to accelerator 170. The remote managers 106, 146, on the initiator and target respectively, connect the caller application 104 with the callee application 144 over the network. In this example, caller 104 makes four synchronous API function calls (F0 302, F1 310, F2 318, and F3 326). After each function call, the remote manager 106 on the first computing system 102 blocks, waiting for a response from the remote manager 146 on the second computing system 142.

Delays due to network latency are shown as shaded blocks in FIG. 3. When remote manager 106 receives F0 302, remote manager 106 sends F0 302 to remote manager 146, which forwards the function call to callee 144. Remote manager 106 waits for reply F0 308 from callee 144, which incurs delays 304 and 306. When remote manager 106 receives F1 310, remote manager 106 sends F1 310 to remote manager 146, which forwards the function call to callee 144. Remote manager 106 waits for reply F1 316 from callee 144, which incurs delays 312 and 314.
When remote manager 106 receives F2 318, remote manager 106 sends F2 318 to remote manager 146, which forwards the function call to callee 144. Remote manager 106 waits for reply F2 320 from callee 144, which incurs delays 322 and 324. When remote manager 106 receives F3 326, remote manager 106 sends F3 326 to remote manager 146, which forwards the function call to callee 144. Remote manager 106 waits for reply F3 328 from callee 144, which incurs delays 330 and 332. As the number of API function calls for performing a task increases, the overall time to complete the task increases due to network communication overhead (e.g., including at least network delays 304, 306, 312, 314, 322, 324, 330, and 332).

To reduce the impact of network latency, the techniques described herein overlap code execution on the initiator and target with network transfers. The techniques described herein make API function calls as asynchronous as possible so that the initiator (e.g., caller application 104 and associated remote manager 106) does not have to block and wait on each call. This requires exploiting certain characteristics of the functions being remoted. Analysis of functions commonly used in remoting use cases shows that not all function calls need to be synchronous in operation. Some function calls do not return values that are consumed by the initiator before proceeding.

Three types of function calls are recognized: type 0 = an asynchronously callable function with no output dependencies; type 1 = an asynchronously callable function with substitutable output parameters; and type 2 = a synchronous function. Type 0 functions do not return a value used by the caller 104. Although the function must execute correctly without errors, the caller does not expect the function to return a value for use in any future calculations. An example of such a function is a function that initializes a library.
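Before examining the individual call types in more detail, the benefit of overlapping can be illustrated with a toy latency model for the four-call example above. The 100-microsecond round-trip figure comes from the discussion above; the per-call compute time is an assumption made purely for illustration.

```python
# Hedged illustration: total task time for fully serial remoting (every
# call blocks for a network round trip, as in FIG. 3) versus overlapped
# remoting (only the final, synchronous call blocks, as in FIG. 4).

ROUND_TRIP_US = 100   # example data-center round-trip latency from the text
COMPUTE_US = 20       # assumed target-side compute time per function call

def serial_time(num_calls):
    # Serial remoting: each call pays a full round trip plus compute time.
    return num_calls * (ROUND_TRIP_US + COMPUTE_US)

def overlapped_time(num_calls):
    # Overlapped remoting: type 0/1 calls return immediately, so their
    # transfers and execution overlap with caller work; only the last
    # (type 2) call blocks for a round trip after all calls execute.
    return num_calls * COMPUTE_US + ROUND_TRIP_US

print(serial_time(4))      # 480 microseconds
print(overlapped_time(4))  # 180 microseconds
```

In this toy model, the more non-blocking (type 0 and type 1) calls precede the blocking call, the closer the network overhead approaches a single round trip.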
Assuming a type 0 function executes correctly on the target side, it is possible to return immediately to the caller 104 without blocking. Type 1 functions return a value to the caller, which the caller may pass back as input to another function that the callee executes in the future. The returned value is not consumed by the caller in any other way. As an example, consider creating a command list to hold commands to be submitted to the accelerator 170. When the caller creates a list on the target by invoking, for example, the CreateCommandList() function, the callee returns an opaque handle referencing the list. In the future, when the caller sends a command that must be appended to the command list, the caller passes the command list handle back to the callee. In this example, the CreateCommandList() function is a type 1 function. Type 1 function calls may also immediately return dummy output value(s) to the caller, but remote manager 146 and the target (e.g., callee application 144) must keep track of the dummy return value(s) in order to later identify the dummy return value and replace it with the real value (described below). A type 2 function returns to the caller a value that the caller uses in its computation, or is a function that causes some data to be passed from the caller 104 to the callee 144. For example, a function that submits a batch of commands to the accelerator 170 is a type 2 function because the caller 104 may need the result of a computation, or may require the release of a resource used in an earlier function call (e.g., a memory buffer), before proceeding with its execution. Therefore, a type 2 function call must always block the caller 104.

FIG. 4 is a timeline diagram 400 of an example of a remote function call in accordance with one or more embodiments. In the example shown in FIG. 4, caller 104 makes a sequence of four function calls: F0 402, F1 404, F2 406, and F3 408.
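The three call types just described can be captured in a small a-priori classification table, as the text later suggests ("each function in the API can be assigned a priori to one of three types"). This is a hedged sketch: CreateCommandList comes from the example above, but the other function names are illustrative assumptions.

```python
from enum import IntEnum

class CallType(IntEnum):
    ASYNC_NO_OUTPUT = 0      # type 0: no output consumed by the caller
    ASYNC_SUBST_OUTPUT = 1   # type 1: output only passed back as later input
    SYNC = 2                 # type 2: caller consumes the output directly

# Hypothetical a-priori classification table for an accelerator API;
# only CreateCommandList is named in the source, the rest are assumed.
CALL_TYPES = {
    "InitLibrary": CallType.ASYNC_NO_OUTPUT,
    "CreateCommandList": CallType.ASYNC_SUBST_OUTPUT,
    "SubmitCommands": CallType.SYNC,
}

def must_block(func_name):
    # Only type 2 calls block the caller; types 0 and 1 return immediately.
    return CALL_TYPES[func_name] is CallType.SYNC
```

Applied to the example of FIG. 4, F0 402 through F2 406 would be classified as non-blocking (types 0 and 1) and F3 408 as blocking (type 2).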
In this example, the first three calls are type 0 or type 1 calls that do not block the caller 104 (using "no wait" message transfer semantics). Note that these function calls 402, 404, and 406 return (with dummy output parameter values) to the caller immediately, before the function call is relayed to the target (e.g., callee 144) and executed by accelerator 170. The fourth call in this example, F3 408, is a type 2 function, and thus blocks caller 104. While F0 402, F1 404, and F2 406 are relayed to callee 144 and executed there, caller 104 can do useful work without blocking. The caller 104 must wait for a response to F3 408 before resuming execution, because the caller relies on the value returned by F3 to make forward progress. As in FIG. 3, shaded blocks represent network delays. Thus, the delay 418 due to reply F0 410, the delay 420 due to reply F1 412, the delay 422 due to reply F2 414, and the delay 424 due to reply F3 416 are shown. Comparing FIG. 4 with FIG. 3, it can be seen that the effective network latency is reduced because code execution overlaps with network transfers. In general, the greater the ratio of type 0 and type 1 calls to type 2 calls in an application, the greater the reduction in network overhead.

FIG. 5 is a block diagram of an example of a caller 104 and remote manager 106 process 500 in accordance with one or more embodiments. Assume, for example, that caller 104 invokes a sequence of functions F0 402, F1 404, F2 406, and F3 408. As in the example of FIG. 4, assume that F0 402 is a type 0 function; F1 404 and F2 406 are type 1 functions; and F3 408 is a type 2 function. The input and output parameters for each function are shown in FIG. 5 next to the function name on the arrow. For example, F1 404 requires an input parameter A1 and an output parameter A1'. The caller 104 passes in the value of A1 and gets the value of A1' from the callee 144.
Similarly, F3 408 requires two input parameters (A3' and A3") and one output parameter A3. When these functions are invoked into the remote manager 106 from the application (caller 104) or from a library used by the application, the remote manager 106 maintains a data structure called the function call parameter value list 502 to keep track of the sequence of function calls, the various parameters, their types, and their values. In one embodiment, the function call parameter list 502 is a linked list in which each function has a node. Each function node points to a linked list of the function's arguments. A symbol table 504 is maintained to keep track of the dummy output parameter values returned to the caller 104 by the remote manager 106, described further below.

When the remote manager 106 on the first computing system 102 receives the function call 202 from the caller 104, the remote manager determines the type of the function (type 0-2). In one implementation of the technique, each function in the API can be assigned a priori to one of the three types by analyzing its input, output, and execution semantics. The name of the function and its arguments are entered into the linked list data structure 502. Based on the type of the function, there are three cases to consider.

Case 1 (type 0 function): Remote manager 106 immediately returns to caller 104 with a "success" status. (Note: execution of the function has not yet occurred on the target side (e.g., by accelerator 170), but caller 104 need not be blocked.) The remote manager 106 on the caller side relays the function call 202 to the callee 144.

Case 2 (type 1 function): The remote manager 106 notes that the function has one or more output parameters. For example, in the case of F1 404, there is an output parameter A1'. In the case of a normal blocking function call, F1 404 would execute on the target (by the accelerator 170) and return the value of A1'.
Recall that this value is necessary to the caller 104 only because the caller may need to pass the value back to the callee 144 in the future. The exact value returned to the caller is not important, as long as the caller continues to use the same value and the callee knows how to replace that value with the actual value. Therefore, the remote manager 106 on the caller 104 side immediately returns a dummy output value to the caller. The remote manager 106 also records this value in the dummy output parameter symbol (POPS) field 506 of the corresponding symbol table 504 entry. In this example, the dummy output value of A1' is #1. Function F2 406 is handled similarly, creating another dummy output parameter value, #2, for A2'. Note that the linked list node for the output argument points to the corresponding entry in symbol table 504. In addition to the name of the function and its argument values, remote manager 106 also sends the portion of the linked list data structure and the symbol table entry corresponding to the type 1 function to callee 144. For example, in the case of F1 404, remote manager 106 sends the linked list node for F1 (and its arguments) and the first entry 510 in the symbol table 504, corresponding to the parameter A1'. In the case of F2 406, the remote manager 106 sends the linked list of the function call parameters of F2 406 and the second entry 512 in the symbol table 504.

Case 3 (type 2 function): In this case, the remote manager 106 blocks the caller 104. Additionally, if any of the input parameter values match an earlier function's dummy output parameter value, the remote manager adds a pointer from the linked list to the corresponding entry in symbol table 504.
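The initiator-side bookkeeping of Cases 1-3 can be sketched as follows for the F0-F3 sequence of FIG. 5. This is a hedged illustration: the class name, dictionary-based symbol table, and tuple encoding are all assumptions standing in for the linked list 502 and symbol table 504 described above.

```python
# Hedged sketch of initiator-side bookkeeping: type 1 calls get dummy
# output parameter symbols (POPS), and later input values that match an
# earlier dummy symbol are linked back to the symbol table entry.

class InitiatorManager:
    def __init__(self):
        self.symtab = {}      # dummy symbol (POPS) -> symbol table entry
        self.call_list = []   # function call parameter value list

    def record_call(self, name, inputs, num_outputs):
        # Flag inputs whose value is a dummy symbol from an earlier call;
        # these correspond to pointers into the symbol table.
        in_args = [(value, value in self.symtab) for value in inputs]
        # Create a fresh dummy output parameter symbol (POPS) per output
        # and return the dummy values to the caller immediately.
        out_symbols = []
        for _ in range(num_outputs):
            symbol = "#%d" % (len(self.symtab) + 1)
            self.symtab[symbol] = {"pops": symbol}
            out_symbols.append(symbol)
        self.call_list.append({"name": name, "in": in_args, "out": out_symbols})
        return out_symbols

mgr = InitiatorManager()
mgr.record_call("F0", [], 0)              # type 0: nothing to track
(a1,) = mgr.record_call("F1", ["A1"], 1)  # type 1: returns dummy "#1"
(a2,) = mgr.record_call("F2", ["A2"], 1)  # type 1: returns dummy "#2"
mgr.record_call("F3", [a1, a2], 1)        # type 2: inputs match earlier dummies
```

When F3 is recorded, both of its input values are recognized as earlier dummy symbols, which is exactly the matching that Case 3 describes.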
For example, in the case of F3 408, the values of the two input parameters A3' and A3" (from caller 104) match the dummy output parameter symbol values #1 and #2 (from the earlier function calls F1 404 and F2 406), respectively, so the remote manager adds pointers from the linked list to the corresponding symbol table entries. As with the earlier type 1 calls, the remote manager 106 sends the function argument list and corresponding symbol table entries to the callee 144.

FIG. 6 is a block diagram of an example of a callee 144 and remote manager 146 process 600 in accordance with one or more embodiments. Remote manager 146 on the target (e.g., second computing system 142) receives the sequence of function calls F0 402, F1 404, F2 406, and F3 408, along with their input argument values, output arguments, and the symbol table 504 entries for each call. The remote manager 146 on the callee 144 side invokes the functions on the callee 144 software stack (e.g., the software device driver of the accelerator 170) in program order. The first function, F0 402, executes normally. When F1 404 executes, remote manager 146 notices that F1 has an output parameter A1'. After the callee 144 finishes executing F1 (through the processing of the accelerator 170), the value of the output argument A1', referred to herein as V1, is returned to the remote manager 146. Since A1' has entry 510 in symbol table 504 with a dummy output parameter symbol represented by #1, the remote manager on the target (e.g., second computing system 142) adds the value V1 to that entry's actual output parameter value (ROPV) field 602. Therefore, remote manager 146 binds #1 510 to V1 602. Similarly, #2 is mapped/bound to its actual value V2 604. When F3 408 is to be executed, remote manager 146 observes that two of the input parameters (A3' and A3") have values in symbol table 504, namely #1 510 and #2 512.
The symbol table entries also show that the actual values corresponding to #1 and #2 are V1 602 and V2 604, respectively. Therefore, remote manager 146 replaces #1 and #2 with V1 and V2, respectively, before invoking F3 408 on callee 144. Thus, the intent of the caller 104 to pass the values of A1' and A2' as input to F3 408 is carried out by the remote managers 106, 146, and F3 408 is properly invoked. This example shows how the techniques described herein, by taking advantage of the API's semantic characteristics (types 0-2), overlap network transfers with code execution on the initiator (caller 104) and the target (callee 144), effectively performing remoting of the function call 202 across the network 120.

FIG. 7 is a flowchart of an initiator's remote manager process 700 in accordance with one or more embodiments. At block 702, remote manager 106 on the initiator (e.g., first computing system 102) receives a function call from caller 104. At block 704, the remote manager 106 determines the type of the function call. At block 706, the remote manager 106 generates a list of function call parameter values. For each input parameter value found in symbol table 504, the remote manager adds the input parameter value and the symbol table index (of the entry containing the input parameter value) to the list of function call parameter values. For each output parameter, the remote manager creates a new symbol, adds the new symbol to a new entry in the symbol table 504, and adds the new entry's symbol table index (for the new symbol) to the list of function call parameter values. At block 708, the remote manager 106 builds a message including the function call name, the list of function call parameter values, and any new symbol table entries, and sends the message to the remote manager on the target (e.g., the second computing system 142). At block 710, if the function call is of type 0, the remote manager 106 returns immediately to the caller 104 at block 712.
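The target-side POPS-to-ROPV substitution described for FIG. 6 can be sketched as follows. This is a hedged illustration: the fake accelerator, the return values V1/V2-style strings, and all names are assumptions; only the binding and substitution logic follows the description above.

```python
# Hedged sketch of target-side behavior: dummy output parameter symbols
# (POPS) received from the initiator are bound to real output values
# (ROPV) after execution, and later inputs carrying a dummy symbol are
# replaced by the bound real value before the function is invoked.

calls = []  # records (name, inputs) actually executed, for inspection

def fake_accelerator(name, inputs):
    # Stand-in for execution on accelerator 170: one result per call.
    calls.append((name, list(inputs)))
    return ["%s-result" % name]

class TargetManager:
    def __init__(self, executor):
        self.executor = executor
        self.symtab = {}  # POPS symbol -> actual output value (ROPV)

    def handle_call(self, name, inputs, out_symbols):
        # Replace any dummy input symbols with their bound real values.
        real_inputs = [self.symtab.get(v, v) for v in inputs]
        results = self.executor(name, real_inputs)
        # Bind each dummy output symbol to the value just produced.
        for symbol, value in zip(out_symbols, results):
            self.symtab[symbol] = value
        return results

tgt = TargetManager(fake_accelerator)
tgt.handle_call("F1", ["A1"], ["#1"])        # binds #1 -> real F1 output
tgt.handle_call("F2", ["A2"], ["#2"])        # binds #2 -> real F2 output
tgt.handle_call("F3", ["#1", "#2"], ["#3"])  # dummies replaced by reals
```

By the time F3 executes, its dummy inputs #1 and #2 have been replaced by the real outputs of F1 and F2, mirroring the substitution of V1 and V2 described above.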
If the function call type is type 1, the remote manager 106 assigns the newly created symbol to the output parameter at block 714 and returns immediately to the caller 104 at block 712. If the function call type is type 2, remote manager 106 blocks the caller until a response is received from remote manager 146 on the target. When a response is received, remote manager 106 unblocks the caller and returns the received response.

An example of a process for implementing the remote manager 106 (initiator) in pseudo-code in the first computing system 102 is shown in Table 1 below.

Table 1

Lines 1 and 2 of Table 1 initialize two lists: (1) args, which will contain information about the function arguments; and (2) symbols, which will contain the dummy output parameter symbols (POPS) representing output parameters in type 1 functions. Each element in the args list is a structure (struct) containing three fields: type (such as integer, floating point, etc.); value; and symidx (an index into the symbol table 504 (SYMTAB), used if the parameter value matches a SYMTAB entry).

The for loop (lines 3-18) builds up the args and symbols lists. Each argument of the function is considered in one pass of the loop. If the parameter is an INPUT parameter (lines 7-11), its value is looked up in the symbol table 504 (SYMTAB). If an entry with this parameter value is found, it means that the value of the argument from the caller is a dummy output parameter symbol corresponding to the OUTPUT parameter of another function executed earlier. If the argument is an OUTPUT argument of a type 1 function (lines 12-17), a new dummy output parameter value is created and added to SYMTAB. The symidx field of the argument struct is set to the index of the new symbol. This index will be used by the remote manager 146 on the second computing system 142 to bind the actual output parameter value (ROPV) to the POPS symbol after the function execution is complete.
On line 17, the dummy output value is copied to the memory location of the output parameter in preparation for the return to caller 104.

The function, its arguments, and any new symbols added to the SYMTAB are packaged into a message to the remote manager in the second computing system (line 20). Lines 22-33 handle the transmission of the message. Type 0 and type 1 function call request messages are sent asynchronously (async_send) and do not block the caller. Type 2 function calls block the caller (sync_send) and return only when a reply message is received from the target.

When the response to the function call is received by the remote manager 106 on the initiator (e.g., on the first computing system 102 running the caller 104), it may indicate success or an error. Since some functions (type 0 and type 1) execute asynchronously, it is possible to get errors from earlier functions that have already returned to the caller 104 (this is not possible for type 2 functions, which are synchronous). An (unlikely) error from an earlier asynchronous function call can be propagated to the caller as an exception. Errors from type 2 function calls are handled normally. Since type 2 functions can have output parameters, it is necessary to make the values of the output parameters (in memory) consistent between the target and the initiator before returning to the caller.

FIG. 8 is a flow diagram of a target's remote manager process 800 in accordance with one or more embodiments. When the target remote manager 146 receives the message, at block 802 the remote manager 146 adds the new symbol table entries (received from the initiator remote manager 106) to the symbol table 504 on the target. There can be one or more new symbol table entries received.
At block 804, for each input parameter value, if there is a corresponding symbol table index (received in the message from the initiator), the remote manager 146 replaces the dummy output parameter value with the actual output parameter value (ROPV) stored in the symbol table entry associated with that symbol table index. At block 806, the target executes the function using the input parameter values. At block 808, if the function type is type 1, then for each output parameter (of the function), the remote manager 146 maps the dummy output parameter value from the function call parameter list to the corresponding actual output parameter value in the symbol table (i.e., the ROPV returned from the function). At block 810, the remote manager 146 builds a message including the function name and a list of output parameter values, and at block 812 the remote manager 146 sends the message to the remote manager 106 on the initiator.

In response to receiving a message with a function call request from an initiator (e.g., remote manager 106), the target (e.g., remote manager 146), after unpacking the message containing the function, parameter types, values, and symbol table entries, executes the example procedure shown in Table 2 below.

Table 2

Lines 1-2 add the new symbols to the symbol table 504 (SYMTAB) on the target (remote manager 146 on the second computing system 142). The for loop in lines 4-8 processes the input parameters before the function executes. Since some of the input parameter values may be dummy parameter values from earlier function invocations, each INPUT parameter value must be looked up in SYMTAB (line 7) using the symidx index into the table. In line 8, the dummy value is replaced with the actual output parameter value (ROPV) corresponding to the symbol. Line 10 executes the function (using accelerator 170) with the correct argument values. The for loop in lines 12-14 handles the output parameter values of the type 1 function.
In line 14, the symbol corresponding to the output parameter is bound to the actual output value (from executing the function in line 10). Finally, in lines 16-17, the message with the return arguments is prepared and sent asynchronously to the initiator (e.g., remote manager 106).

FIG. 9 is a schematic diagram of an illustrative electronic computing device performing remoting of accelerator processing, according to some embodiments. Electronic computing device 900 is representative of computing systems 102 and 142. In some embodiments, computing device 900 includes one or more processors 910 including one or more processor cores 918 and remote manager 106 (for caller 104) or 146 (for callee 144). In some embodiments, computing device 900 includes accelerator 120 or 170. In some embodiments, the computing device performs remoting as described above in FIGS. 1-8.

Computing device 900 may additionally include one or more of: cache memory 962, graphics processing unit (GPU) 912 (which in some implementations may be hardware accelerator 120/170), wireless input/output (I/O) interface 920, wired I/O interface 930, memory circuitry 940, power management circuitry 950, non-transitory storage device 960, and network interface 970 for connecting to network 120. The following discussion provides a brief, general description of the components forming exemplary computing device 900. Example non-limiting computing devices 900 may include desktop computing devices, blade server devices, workstations, laptop computers, mobile phones, tablet computers, personal digital assistants, or similar devices or systems.

In an embodiment, the processor core 918 is capable of executing a set of machine-readable instructions 914, reading data and/or the set of instructions 914 from one or more storage devices 960, and writing data to one or more storage devices 960.
Those skilled in the relevant art will appreciate that the illustrated embodiments, as well as other embodiments, may be practiced with other processor-based device configurations, including portable or handheld electronic devices, such as smartphones, laptop computers, wearable computers, consumer electronic devices, personal computers ("PCs"), network PCs, microcomputers, server blades, mainframe computers, FPGAs, Internet of Things (IoT) devices, and the like. For example, the set of machine-readable instructions 914 may include instructions to implement remoting, as provided in FIGS. 1-8.

Processor core 918 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements that are partially or completely disposed in a PC, server, mobile phone, tablet computer, or other computing system capable of executing processor-readable instructions.

Computing device 900 includes a bus or similar communication link 916 that communicatively couples and facilitates the exchange of information and/or data between various system components, including processor core 918, cache memory 962, graphics processor circuitry 912, one or more wireless I/O interfaces 920, one or more wired I/O interfaces 930, one or more storage devices 960, and/or one or more network interfaces 970.
Computing device 900 may be referred to herein in the singular, but this is not intended to limit embodiments to a single computing device 900, as in certain embodiments there may be more than one computing device 900 that incorporates, includes, or contains any number of communicatively coupled, co-located, or remotely networked circuits or devices.

Processor core 918 may include any number, type, or combination of currently available or future developed devices capable of executing sets of machine-readable instructions. Processor core 918 may include (or be coupled to), but is not limited to, any current or future developed single-core or multi-core processor or microprocessor, such as: one or more systems-on-chip (SoCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs); application specific integrated circuits (ASICs); programmable logic units; field programmable gate arrays (FPGAs); and the like. Unless otherwise described, the construction and operation of the various blocks shown in FIG. 9 are of conventional design. Accordingly, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant arts. Bus 916, interconnecting at least some of the components of computing device 900, may employ any currently available or future developed serial or parallel bus structure or architecture.

System memory 940 may include read only memory ("ROM") 942 and random access memory ("RAM") 946. A portion of ROM 942 may be used to store or otherwise hold a basic input/output system ("BIOS") 944. BIOS 944 provides basic functionality to computing device 900, such as by causing processor core 918 to load and/or execute one or more sets of machine-readable instructions 914.
In an embodiment, at least a portion of the one or more sets of machine-readable instructions 914 cause at least a portion of the processor core 918 to provide, create, generate, transform into, and/or act as a special-purpose, particular machine, such as a word processor, digital image capture machine, media player, gaming system, communication device, smartphone, neural network, machine learning model, or similar device. Computing device 900 may include at least one wireless input/output (I/O) interface 920. The at least one wireless I/O interface 920 may be communicatively coupled to one or more physical output devices 922 (haptic devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wireless I/O interface 920 may be communicatively coupled to one or more physical input devices 924 (pointing devices, touch screens, keyboards, haptic devices, etc.). The at least one wireless I/O interface 920 may include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to, Near Field Communication (NFC) and similar wireless I/O interfaces. Computing device 900 may include one or more wired input/output (I/O) interfaces 930. The at least one wired I/O interface 930 may be communicatively coupled to one or more physical output devices 922 (haptic devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wired I/O interface 930 may be communicatively coupled to one or more physical input devices 924 (pointing devices, touch screens, keyboards, haptic devices, etc.). Wired I/O interface 930 may include any currently available or future developed I/O interface. Example wired I/O interfaces include, but are not limited to, Universal Serial Bus (USB), IEEE 1394 ("FireWire"), and similar wired I/O interfaces. Computing device 900 may include one or more communicatively coupled non-transitory data storage devices 960.
Data storage 960 may include one or more hard disk drives (HDDs) and/or one or more solid state storage devices (SSDs). The one or more data storage devices 960 may include any current or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 960 may include, but are not limited to, any current or future developed non-transitory machine-readable storage medium, storage appliance, or device, such as one or more magnetic storage devices, one or more optical storage devices, one or more resistive memory devices, one or more molecular memory devices, one or more quantum memory devices, or various combinations thereof. In some implementations, the one or more data storage devices 960 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash memory units, or similar appliances or devices capable of being communicatively coupled to and decoupled from the computing device 900. The one or more data storage devices 960 may include an interface or controller (not shown) that communicatively couples the respective storage device or system to bus 916. The one or more data storage devices 960 may store, retain, or otherwise contain sets of machine-readable instructions, data structures, program modules, data stores, databases, logical structures, and/or other data useful to processor core 918 and/or graphics processor circuitry 912, and/or to one or more applications executing on or by processor core 918 and/or graphics processor circuitry 912.
In some cases, one or more data storage devices 960 may be communicatively coupled to processor core 918, such as via bus 916 or via one or more wired communication interfaces 930 (e.g., Universal Serial Bus or USB), one or more wireless communication interfaces 920 (e.g., Near Field Communication or NFC), and/or one or more network interfaces 970 (IEEE 802.3 or Ethernet, IEEE 802.11, or the like). Processor-readable instruction set 914 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in system memory 940. Such instruction sets 914 may be transferred in whole or in part from one or more data storage devices 960. Instruction set 914 may be loaded, stored, or otherwise retained in system memory 940 in whole or in part during execution by processor core 918 and/or graphics processor circuitry 912. Computing device 900 may include power management circuitry 950 that controls one or more operational aspects of energy storage device 952. In an embodiment, energy storage device 952 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In an embodiment, energy storage device 952 may include one or more ultracapacitors or supercapacitors. In an embodiment, power management circuitry 950 may alter, adjust, or control the flow of energy from external power source 954 to energy storage device 952 and/or to computing device 900. Power source 954 may include, but is not limited to, a solar power system, a commercial grid, a portable generator, an external energy storage device, or any combination thereof. For convenience, processor core 918, graphics processor circuitry 912, wireless I/O interface 920, wired I/O interface 930, storage device 960, and network interface 970 are shown communicatively coupled to each other via bus 916 to provide connections between the above components.
In alternative embodiments, the components described above may be communicatively coupled in a manner different from that shown in FIG. 9. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other via one or more intermediate components (not shown). In another example, one or more of the components described above may be integrated into processor core 918 and/or graphics processor circuitry 912. In some embodiments, all or a portion of bus 916 may be omitted, and the components coupled directly to each other using suitable wired or wireless connections. For example, flowcharts representing example hardware logic, machine-readable instructions, hardware-implemented state machines, and/or any combination thereof for implementing the computing device 900 are shown in FIGS. 3-8. The machine-readable instructions may be one or more executable programs, or portion(s) of executable programs, for execution by a computer processor, such as the processor 910 shown in the example computing device 900 discussed. The program may be embodied in software stored on a non-transitory computer-readable storage medium, such as a CD-ROM, floppy disk, hard drive, DVD, Blu-ray disc, or memory associated with processor 910, but the entire program and/or portions thereof may alternatively be executed by a device other than processor 910 and/or embodied in firmware or dedicated hardware. Furthermore, although the example procedures are described with reference to the flowcharts illustrated in FIGS. 3-8, many other methods of implementing the example computing device 900 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, FPGAs, ASICs, comparators, operational amplifiers (op amps), logic circuits, etc.) configured to perform the corresponding operations without executing software or firmware. Machine-readable instructions described herein may be stored in one or more of a compressed, encrypted, fragmented, compiled, executable, packed, or similar format. Machine-readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or generate machine-executable instructions. For example, machine-readable instructions may be partitioned and stored on one or more storage devices and/or computing devices (e.g., servers). Machine-readable instructions may require one or more of installing, modifying, adapting, updating, combining, supplementing, configuring, decrypting, decompressing, unpacking, distributing, redistributing, compiling, etc., in order to make them directly readable, interpretable, and/or executable by computing devices and/or other machines. For example, machine-readable instructions may be stored in multiple parts that are separately compressed, encrypted, and stored on different computing devices, where the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement functions such as the procedures described herein. In another example, machine-readable instructions may be stored in a state in which they can be read by a computer but require an additional library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), or the like, in order to execute the instructions on a particular computing device or other device.
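As a purely hypothetical illustration of the storage scheme just described — an instruction payload partitioned into parts that are separately compressed and later recombined before execution — the following sketch uses Python's standard `zlib` module. The function names and the three-part split are illustrative assumptions, not part of the disclosure.

```python
import zlib

def split_and_compress(payload: bytes, n_parts: int) -> list[bytes]:
    """Partition an instruction payload into roughly equal parts and
    compress each part separately, as if stored on different devices."""
    size = -(-len(payload) // n_parts)  # ceiling division
    return [zlib.compress(payload[i:i + size])
            for i in range(0, len(payload), size)]

def combine_and_decompress(parts: list[bytes]) -> bytes:
    """Decompress each stored part and concatenate them back into the
    executable payload."""
    return b"".join(zlib.decompress(p) for p in parts)

payload = b"machine-readable instruction stream" * 10
parts = split_and_compress(payload, 3)
assert combine_and_decompress(parts) == payload
```

Encryption and distribution across devices are omitted here; the sketch shows only the decompress-and-combine step.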
In another example, before the machine-readable instructions and/or corresponding program(s) can be executed in whole or in part, the machine-readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.). Accordingly, the disclosed machine-readable instructions and/or corresponding program(s) are intended to encompass such machine-readable instructions and/or program(s) regardless of the particular format or state of the machine-readable instructions and/or program(s) while stored or otherwise at rest or in transit. Machine-readable instructions described herein may be expressed in any past, present, or future instruction language, scripting language, programming language, or the like. For example, machine-readable instructions may be expressed in any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, Hypertext Markup Language (HTML), Structured Query Language (SQL), Swift, and the like. As noted above, the example processes of FIGS. 3-8 may be implemented using executable instructions (e.g., computer- and/or machine-readable instructions) stored on a non-transitory computer- and/or machine-readable medium, such as a hard drive, SSD, flash memory, read-only memory, compact disc, digital versatile disc, cache memory, random-access memory, and/or any other storage device or disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term non-transitory computer-readable medium is expressly defined to include any type of computer-readable storage device and/or storage disk, and to exclude propagating signals and transmission media. "Including" and "comprising" (and all forms and tenses thereof) are used herein as open-ended terms. Accordingly, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.)
as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as a transition term, such as in the preamble of a claim, it is open-ended in the same manner as the terms "including" and "comprising" are open-ended. The term "and/or," when used, for example, in a form such as A, B, and/or C, refers to any combination or subset of A, B, and C, such as (1) A only, (2) B only, (3) C only, (4) A and B, (5) A and C, (6) B and C, and (7) A and B and C. As used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A and B" means an embodiment that includes any of: (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing a structure, component, item, object, and/or thing, the phrase "at least one of A or B" means an embodiment that includes any of: (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the execution or performance of a process, instruction, action, activity, and/or step, the phrase "at least one of A and B" means an embodiment that includes any of: (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the execution or performance of a process, instruction, action, activity, and/or step, the phrase "at least one of A or B" means an embodiment that includes any of: (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein, references in the singular (e.g., "a," "an," "first," "second," etc.) do not preclude a plurality.
The term "a" or "an," as used herein to refer to an entity, means one or more of that entity. The terms "a" (or "an"), "one or more," and "at least one" are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or method actions may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous. The descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components that may be referred to individually. Unless otherwise specified or understood based on the context of their use, such descriptors are not intended to impute any meaning of priority, physical order, arrangement in a list, or ordering in time, but are merely used as labels for referring to multiple elements or components individually in order to facilitate understanding of the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor, such as "second" or "third." In such cases, it should be understood that such descriptors are used merely for convenience in referring to multiple elements or components. The following examples relate to further embodiments.
Example 1 is an apparatus comprising: a processor; and a memory device coupled to the processor, the memory device having stored thereon instructions that, in response to execution by the processor, cause the processor to: receive a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled by a network to the second computing system; determine the type of the first function call; generate a list of parameter values for the first function call; send a first message to the second computing system, the first message including the name of the first function call, the list of parameter values for the first function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; and, when the type of the first function call is an asynchronously callable function with no output dependencies, return to the caller; when the type of the first function call is an asynchronously callable function with replaceable output parameters, assign a newly created symbol to an output parameter and return to the caller; and, when the type of the first function call is a synchronous function, block the caller until a response to the first message is received from the second computing system. In Example 2, the subject matter of Example 1 may optionally include instructions that, in response to execution by the processor, cause the processor to: when the type of the first function call is a synchronous function, unblock the caller when the response to the first message is received from the second computing system. In Example 3, the subject matter of Example 1 can optionally include wherein the first function call is a request to offload performance of a workload from the first computing system to the accelerator on the second computing
system. In Example 4, the subject matter of Example 1 can optionally include instructions that, in response to execution by the processor, cause the processor to: receive, on the first computing system acting as a target, from the second computing system acting as an initiator, a second message including the name of a second function call, the list of parameter values for the second function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; add said one or more new entries from said second message to said symbol table; for each input parameter value in said list of parameter values for said second function call, if there is a corresponding symbol table index, replace the symbol table entry associated with the corresponding symbol table index with a dummy output parameter value; execute the function by the accelerator on the first computing system using the input parameter values; when the type of the second function call is the asynchronously callable function with replaceable output parameters, for each output parameter in the list of parameter values, map the dummy output parameter value from the parameter value list of the second function call to the corresponding output value in the symbol table; and send to the second computing system a second message including the name of the second function call and a list of output parameter values. In Example 5, the subject matter of Example 4 can optionally include wherein the second function call is a request to offload performance of a workload from the second computing system to the accelerator on the first computing system. Example 6 is a method comprising: receiving a first function call from a caller on a first computing system acting as an initiator, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system being coupled to the second computing system through a network; determining the type of the first function call; generating a list of parameter values for the first function call; sending a first
message to the second computing system, the first message including the name of the first function call, the list of parameter values for the first function call, and one or more new entries of a symbol table, the one or more new entries representing dummy output parameter values; and returning to the caller when the type of the first function call is an asynchronously callable function with no output dependencies; when the type of the first function call is an asynchronously callable function with replaceable output parameters, assigning a newly created symbol to an output parameter and returning to the caller; and, when the type of the first function call is a synchronous function, blocking the caller until a response to the first message is received from the second computing system. In Example 7, the subject matter of Example 6 may optionally include, when the type of the first function call is a synchronous function, unblocking the caller when the response to the first message is received from the second computing system. In Example 8, the subject matter of Example 6 can optionally include wherein said first function call is a request to offload performance of a workload from said first computing system to said accelerator on said second computing system. In Example 9, the subject matter of Example 6 can optionally include receiving, on said first computing system acting as a target, from said second computing system acting as an initiator, a second message comprising the name of a second function call, the list of parameter values for the second function call, and one or more new entries in a symbol table, the one or more new entries representing dummy output parameter values; adding said one or more new entries to said symbol table; for each input parameter value in said parameter value list of said second function call, if there is a corresponding symbol table index, replacing the symbol table entry associated with the corresponding symbol table index with a dummy output parameter value; the
function is executed by an accelerator on the first computing system using the input parameter values; when the type of the second function call is the asynchronously callable function with replaceable output parameters, for each output parameter in the parameter value list, mapping the dummy output parameter value from the parameter value list of the second function call to the corresponding output value in the symbol table; and sending a second message to the second computing system including the name and list of output parameter values for the second function call. In Example 10, the subject matter of Example 9 can optionally include wherein said second function call is a request to offload performance of a workload from said second computing system to said accelerator on said first computing system. Example 11 is at least one non-transitory machine-readable storage medium comprising instructions that, when executed, cause at least one processor to: receive a first function call from a caller on a first computing system acting as an initiator, said first function call to be executed by an accelerator on a second computing system acting as a target, said first computing system being coupled to said second computing system by a network; determine the type of said first function call; generate a list of parameter values for said first function call; send a first message to the second computing system that includes the name of the first function call, the list of parameter values for the first function call, and one or more new entries of a symbol table, the one or more new entries representing dummy output parameter values; and, when the type of the first function call is an asynchronously callable function with no output dependencies, return to the caller; when said type of said first function call is an asynchronously callable function with replaceable output parameters, assign a newly created symbol to an output parameter and return to
said caller; and, when the type of said first function call is a synchronous function, block the caller until a response to the first message is received from the second computing system. In Example 12, the subject matter of Example 11 can optionally include instructions that, when executed, further cause the at least one processor to: when the type of the first function call is a synchronous function, unblock the caller when the response to the first message is received from the second computing system. In Example 13, the subject matter of Example 11 can optionally include wherein the first function call is a request to offload performance of a workload from the first computing system to the accelerator on the second computing system. In Example 14, the subject matter of Example 11 can optionally include instructions that, when executed, cause at least one processor to: receive, on said first computing system acting as a target, from said second computing system acting as an initiator, a second message, the second message including a name of a second function call, the list of parameter values for the second function call, and one or more new entries for a symbol table, the one or more new entries representing dummy output parameter values; add said one or more new entries from said second message to said symbol table; for each input parameter value in said list of parameter values for said second function call, if there is a corresponding symbol table index, replace the symbol table entry associated with the corresponding symbol table index with a dummy output parameter value; execute said function by an accelerator on the first computing system using the input parameter values; when said type of said second function call is said asynchronously callable function with replaceable output parameters, for each output parameter in said list of parameter values, map the dummy output parameter values
of the parameter value list for the call to corresponding output values in the symbol table; and send to the second computing system a second message. In Example 15, the subject matter of Example 14 can optionally include wherein the second function call is a request to offload performance of a workload from the second computing system to the accelerator on the first computing system. Example 16 is a system comprising: a first computing system acting as an initiator; and a second computing system acting as a target, the second computing system being coupled to the first computing system through a network, the second computing system comprising an accelerator; wherein the first computing system is to: receive a function call from a caller; determine the type of the function call; generate a list of parameter values for the function call; send a first message to the second computing system including the name of the function call, the list of parameter values for the function call, and one or more new entries of a symbol table, the one or more new entries representing dummy output parameter values; and, when the type of the function call is an asynchronously callable function with no output dependencies, return to the caller; when the type of the function call is an asynchronously callable function with replaceable output parameters, assign a newly created symbol to the output parameter and return to the caller; and, when the type of the function call is a synchronous function, block the caller until a response to the first message is received from the second computing system; and wherein said second computing system is to: receive said first message; add said one or more new entries from said first message to said symbol table; for each input parameter value in said list of parameter values for said function call, if there is a corresponding symbol table index, replace the symbol table entry associated with the corresponding symbol table index with
a dummy output parameter value; execute said function by the accelerator using the input parameter values; when said type of said function call is said asynchronously callable function with replaceable output parameters, for each output parameter in said list of parameter values, map said dummy output parameter values of the parameter value list to corresponding output values in the symbol table; and send a second message to the first computing system including the name of the function call and the list of output parameter values. In Example 17, the subject matter of Example 16 can optionally include wherein, when said type of said function call is a synchronous function, the first computing system unblocks the caller when said response to said first message is received from said second computing system. In Example 18, the subject matter of Example 16 can optionally include wherein said function call is a request to offload performance of a workload from said first computing system to said accelerator on said second computing system.
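The initiator-side behavior described in Examples 1, 6, 11, and 16 — build a first message carrying the function name, the parameter values, and new dummy symbol-table entries, then return immediately, return with placeholder symbols, or block, depending on the call type — can be sketched as follows. This is a minimal illustrative model, not the disclosed implementation; the names `dispatch`, `CallType`, and the message layout are assumptions.

```python
import enum
import itertools

class CallType(enum.Enum):
    ASYNC_NO_OUTPUT_DEPS = 1       # return to the caller immediately
    ASYNC_REPLACEABLE_OUTPUTS = 2  # assign new symbols to outputs, then return
    SYNC = 3                       # block until the target responds

_symbol_ids = itertools.count(1)

def dispatch(name, call_type, params, send, wait_for_response):
    """Initiator-side dispatch: build the first message, send it to the
    target, and return or block according to the function-call type."""
    new_entries = {}  # new symbol-table entries holding dummy output values
    outputs = {}
    if call_type is CallType.ASYNC_REPLACEABLE_OUTPUTS:
        for p in params.get("out", []):
            sym = f"sym{next(_symbol_ids)}"  # newly created symbol
            new_entries[sym] = None          # dummy output parameter value
            outputs[p] = sym                 # assign the symbol to the output
    message = {"name": name,
               "params": params.get("in", []),
               "symbols": new_entries}
    send(message)                            # first message to the target
    if call_type is CallType.SYNC:
        return wait_for_response()           # block the caller
    return outputs                           # async paths return immediately

sent = []
result = dispatch("matmul", CallType.ASYNC_REPLACEABLE_OUTPUTS,
                  {"in": [2, 3], "out": ["c"]},
                  send=sent.append, wait_for_response=lambda: None)
```

On the target side (Examples 4, 9, and 14), the received `symbols` entries would be added to the local symbol table and the dummy values later mapped to real outputs; that half is omitted here for brevity.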
The present disclosure includes apparatuses and methods for in-memory operations. An example apparatus includes a memory device including a plurality of subarrays of memory cells, where the plurality of subarrays includes a first subset of the respective plurality of subarrays and a second subset of the respective plurality of subarrays. The memory device includes sensing circuitry coupled to the first subset, the sensing circuitry including a sense amplifier and a compute component. The apparatus also includes a controller configured to direct a first movement of a number of data values from a subarray in the second subset to a subarray in the first subset and performance of a sequential plurality of operations in-memory on the number of data values by the first sensing circuitry coupled to the first subset.
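The claimed data flow — move data values from a storage subarray in the second subset into the sensing circuitry coupled to the first subset, perform a plurality of sequential operations in-memory, then move the result back — can be modeled as a toy software simulation. This sketch is an illustrative assumption for clarity only; real PIM hardware performs these steps in the sense amplifiers and compute components, not in Python lists.

```python
def pim_sequence(storage_subarray, ops):
    """Toy model of the disclosed flow: a first movement brings data values
    from a second-subset (storage) subarray into first-subset sensing
    circuitry, sequential operations execute in place, and a second
    movement writes the result back."""
    sensing = list(storage_subarray)        # first movement via shared I/O line
    for op in ops:                          # sequential in-memory operations;
        sensing = [op(v) for v in sensing]  # intermediate results stay local
    storage_subarray[:] = sensing           # second movement: result stored back
    return storage_subarray

data = [1, 2, 3]
pim_sequence(data, [lambda v: v + 1, lambda v: v * 2])
# data is now [4, 6, 8]
```

Note how intermediate results never leave the simulated sensing circuitry until the sequence completes, mirroring claims 3 and 4 below.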
1.An apparatus comprising:A memory device comprising:a plurality of sub-arrays of memory cells, the plurality of sub-arrays comprising a first subset of the respective plurality of sub-arrays and a second subset of the respective plurality of sub-arrays;a first sensing circuit coupled to the first subset, the first sensing circuit comprising a sense amplifier and a computing component;A controller configured to boot:a first number of data values from a sub-array of the second subset to a first movement of the sub-array of the first subset; andThe plurality of sequential in-memory operations are performed on the number of data values by the sense amplifier and the computing component of the first sensing circuit coupled to the first subset.2.The device of claim 1 wherein the controller is further configured to:a second movement of data values from the sub-array in the first subset to sub-arrays in the second subset;Wherein the data value is a result of the plurality of sequential operations performed on the number of data values moved from the sub-array in the second subset.3.Apparatus according to any of claims 1 to 2, wherein the results of each of said respective plurality of sequential operations are stored by said sub-array in said first subset until said plurality of The execution of the sequential operations is performed to calculate the result of the last of the plurality of sequential operations.4.Apparatus according to any of claims 1 to 2, wherein the results of each of said respective plurality of sequential operations are stored by said first sensing circuit coupled to said first subset until The execution of the plurality of sequential operations is completed to calculate a result of the last of the plurality of sequential operations.5.The apparatus according to any one of claims 1 to 2, wherein said memory device further comprises:a second sensing circuit coupled to the second subset; and wherein:The second sensing circuit includes a sense amplifier 
and does not include a computing component;The second subset takes the number of data values as a certain number of senses before the first movement of the number of data values of the plurality of sequential operations to be performed by the first sensing circuit The measured data value is stored in the second sensing circuit.6.The apparatus according to any one of claims 1 to 2, after relaying the second movement of the data value, the sub-array storage in the second subset has been performed by the first sensing circuit Result data values for multiple sequential operations.7.The apparatus according to any one of claims 1 to 2, after relaying the second movement of the data value, the sub-array storage in the first subset has been performed by the first sensing circuit Result data values for multiple sequential operations.8.An apparatus comprising:A controller coupled to the memory device to execute a command to perform a plurality of sequential operations, wherein the memory device comprises:a first subset of the plurality of memory cell sub-arrays;a second subset of the plurality of memory cell sub-arrays;a sensing circuit selectively coupleable to the first subset and the second subset, the sensing circuit comprising a sense amplifier coupled to respective sense lines of the first subset and calculating Component; andAn I/O line shared by the second subset and the sensing circuitry of the first subset, the shared I/O line configured to be selectively coupled to the first subset The sensing circuit is configured to enable a certain number of data values stored in the second subset to be moved to the sensing circuit of the selected sub-array in the first subset;Wherein the controller is configured to direct the plurality of sequential in-memory operations on the number of data values in the sensing circuitry of the selected sub-array in the first subset.9.The apparatus of claim 8, wherein the controller is further configured to direct the selection of data 
values generated by performing the plurality of sequential operations to be moved, via the shared I/O line, from the sensing circuit of the selected sub-array of the first subset to a selected sub-array of the second subset.

10. The apparatus of any one of claims 8 to 9, wherein:

a plurality of shared I/O lines is configured to be selectively coupleable to the sensing circuitry of the plurality of sub-arrays to selectively enable a plurality of data values stored in the second subset to be moved to a corresponding plurality of sense amplifiers and/or computing components in the correspondingly coupled sensing circuitry of the first subset.

11. The apparatus of any one of claims 8 to 9, wherein:

a plurality of shared I/O lines is configured to be selectively coupleable to the sensing circuitry of the plurality of sub-arrays to selectively enable a plurality of data values stored in the second subset to be moved in parallel from a corresponding plurality of sense amplifiers to the selectively coupled sensing circuitry of the first subset; and

wherein the plurality of sense amplifiers is included in the sensing circuitry of the second subset.

12. The apparatus of any one of claims 8 to 9, wherein the memory device further comprises:

a plurality of sense component strips, wherein each of the plurality of sense component strips is coupled to a corresponding sub-array of the first subset and of the second subset of the plurality of sub-arrays; and

wherein the I/O line is selectively shareable by the sensing circuitry in a coupled pair of the plurality of sense component strips.

13. The apparatus of any one of claims 8 to 9, wherein:

the first subset of the plurality of sub-arrays is a number of sub-arrays of processing-in-memory (PIM) dynamic random access memory (DRAM) cells; and

the second subset of the plurality of sub-arrays is a number of sub-arrays of memory cells other than PIM DRAM cells.

14. The apparatus of any one of claims 8 to 9, wherein a first length
of a sense line of a first sub-array in the first subset is at most half of a second length of a sense line of a first sub-array in the second subset.

15. An apparatus, comprising:

a controller coupled to a memory device, wherein the memory device comprises:

a first subset of a plurality of sub-arrays of memory cells;

a second subset of the plurality of sub-arrays of memory cells;

a sensing circuit coupled to the first subset and the second subset, the sensing circuit including a sense amplifier and a computing component coupled to respective ones of a plurality of sense lines of the first subset; and

a number of bank registers selectively coupled to the controller;

wherein the controller is configured to direct:

performance of a plurality of sequential in-memory operations on a number of data values in the sensing circuit of a selected sub-array in the first subset; and

movement of a data value generated by the performance of the plurality of sequential operations from the sensing circuit to a selected destination;

wherein the selected destination comprises a selected row of the selected sub-arrays of the first subset, a selected row of selected sub-arrays of the second subset, and a selected row of selected bank registers.

16. The apparatus of claim 15, wherein the memory device further comprises:

an I/O line shared by the sensing circuit of the selected sub-array of the first subset, the sensing circuit of a selected sub-array of the second subset, and the selected bank register; and wherein:

the shared I/O line is configured to be selectively coupleable to the sensing circuit of the first subset to enable a number of result data values stored in the first subset to be moved to the selected destination; and

the selected destination includes the selected row of the selected sub-array of the second subset and the selected row of the selected bank register.

17. The apparatus of claim 15, wherein the memory device further comprises:

a number of vector registers selectively
coupled to the controller; and

an I/O line shared by the sensing circuit of the selected sub-array of the first subset, the sensing circuit of a selected sub-array of the second subset, the selected bank register, and the selected vector register; and wherein:

the shared I/O line is configured to be selectively coupleable to the sensing circuit of the first subset to enable a number of result data values stored in the first subset to be moved to the selected destination; and

wherein the selected destination comprises the selected row of the selected sub-array of the second subset, the selected row of the selected bank register, and a selected row of the selected vector register.

18. The apparatus of any one of claims 15 to 17, wherein the memory device further comprises:

an I/O line shared by the sensing circuit of the selected sub-array of the first subset, the sensing circuit of a selected sub-array of the second subset, the selected bank register, and a selected vector register; and

a microcode engine configured to execute an instruction set to direct:

the shared I/O line to be selectively coupled to the sensing circuitry of the first subset and the second subset to selectively enable a number of result data values stored in the first subset and the second subset to be moved to the selected destination;

wherein the selected destination includes the selected row of the selected bank register and the selected row of the selected vector register.

19. The apparatus of any one of claims 15 to 17, wherein the memory device further comprises:

a connection circuit configured to connect a sensing circuit coupled to a particular one of a number of sub-arrays in the second subset to a number of corresponding columns in a first sub-array of the first subset; and

a microcode engine configured to execute an instruction set to direct:

the connection circuit to move a plurality of data values from the number of sub-arrays in the second
subset to a plurality of selected rows and the corresponding columns in the first sub-array of the first subset for performance of the plurality of sequential operations;

the plurality of selected rows and the corresponding columns in the first sub-array of the first subset to receive the plurality of data values; and

the controller to direct performance of the plurality of sequential operations on the plurality of data values in the sensing circuit of the first sub-array in the first subset.

20. The apparatus of claim 19, wherein:

the connection circuit is further configured to be selectively coupleable to the sensing circuitry of the first subset and the second subset to selectively enable a number of result data values stored in the first subset and the second subset to be moved to a selected destination;

wherein the selected destination includes a selected row of a selected bank register and a selected row of a selected vector register.

21. A method for operating a memory device, comprising:

performing a plurality of sequential in-memory operations on a plurality of data values by a first sense component strip coupled to a selected first sub-array of the memory device after:

sensing the plurality of data values in a selected second sub-array in the memory device; and

moving the plurality of sensed data values to the first sense component strip coupled to the selected first sub-array;

wherein the selected first sub-array includes in a column a number of memory cells that is at most half of a number of memory cells in a column of the selected second sub-array.

22. The method of claim 21, wherein the method further comprises:

sequentially storing the plurality of sensed data values in a second sense component strip coupled to the selected second sub-array;

moving the plurality of sensed data values from the second sense component strip to the first sense component strip coupled to the selected first sub-array; and

moving a first data value generated by the performing of the plurality
of sequential operations from the first sense component strip to a selected first row of the selected first sub-array.

23. The method of claim 22, wherein the method further comprises:

performing another operation, by the first sense component strip coupled to the selected first sub-array, on the resulting first data value moved from the selected first row; and

storing a second data value generated by performing the other operation in a selected second row of the selected first sub-array.

24. The method of any one of claims 21 to 23, wherein the method further comprises:

performing the plurality of sequential operations on the plurality of sensed data values in the first sense component strip coupled to the selected first sub-array; and

moving a data value generated by the performing of the plurality of sequential operations from the first sense component strip to a selected row of the second sub-array.

25. The method of any one of claims 21 to 23, wherein the method further comprises:

selectively coupling an I/O line, shared by the first sense component strip coupled to the selected first sub-array and a second sense component strip coupled to the selected second sub-array, to the first sense component strip and the second sense component strip;

moving the plurality of sensed data values from the second sense component strip to the first sense component strip via the shared I/O line;

performing the plurality of sequential operations by the first sense component strip without moving results of the respective plurality of operations to the second sense component strip or a memory cell of the second sub-array prior to completing a last one of the plurality of sequential operations;

moving a data value generated by completing the last one of the plurality of sequential operations from the first sense component strip, via the shared I/O line, to the second sense component strip or the memory cell of the second sub-array;
and

writing the data values generated by the completion of the plurality of sequential operations to memory cells of a selected row of the second sub-array.
Apparatus and methods for in-memory operations

Technical Field

The present invention relates generally to semiconductor memory and methods, and more particularly to apparatus and methods for in-memory operations.

Background

Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory, including volatile and non-volatile memory. Volatile memory may require power to maintain its data (e.g., host data, error data, etc.) and includes, among others, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM). Non-volatile memory can provide persistent data by retaining stored data when not powered, and can include, among others, NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM).

Electronic systems typically include a number of processing resources (e.g., one or more processors) that can retrieve and execute instructions and store the results of the executed instructions in a suitable location. A processor may include a number of functional units, such as arithmetic logic unit (ALU) circuitry, floating point unit (FPU) circuitry, and combinatorial logic blocks, which may be used to execute instructions by performing operations on data (e.g., one or more operands). As used herein, an operation may be, for example, a Boolean operation (e.g., AND, OR, NOT, NAND, NOR, and XOR) and/or another operation (e.g., inversion, shift, arithmetic, statistics, and many other possible operations).
For example, functional unit circuitry can be used to perform arithmetic operations on operands, such as addition, subtraction, multiplication, and division, via a number of logical operations.

A number of components in an electronic system may be involved in providing instructions to the functional unit circuitry for execution. The instructions may be executed, for example, by a processing resource such as a controller and/or host processor. Data (e.g., the operands on which the instructions will be executed) may be stored in a memory array that is accessible by the functional unit circuitry. The instructions and/or data may be retrieved from the memory array, and the instructions and/or data may be sequenced and/or buffered before the functional unit circuitry begins executing instructions on the data. Furthermore, because different types of operations may be performed by the functional unit circuitry in one or multiple clock cycles, intermediate results of the instructions and/or data may also be sequenced and/or buffered. A sequence of operations completed in one or more clock cycles may be referred to as an operation cycle. The time consumed to complete an operation cycle can be costly in terms of processing and computing performance and/or power consumption of a computing device and/or system.

In many instances, the processing resources (e.g., processor and associated functional unit circuitry) may be external to the memory array and access data via a bus between the processing resources and the memory array in order to execute a set of instructions. Processing performance may be improved in a processing-in-memory device, in which a processor may be implemented internal to and/or near a memory (e.g., directly on a same chip as the memory array).
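As a concrete illustration (a hypothetical software model, not part of the disclosure), a sequence of the Boolean operations named above, executed one per cycle with the intermediate result buffered between cycles, might look like the following; the function name, operand widths, and operation encoding are all assumptions for illustration:

```python
# Model a functional unit applying a sequence of Boolean operations
# (AND, OR, XOR, NOT) to 8-bit operands, one operation per "cycle".
MASK = 0xFF  # 8-bit data path

def run_sequence(a, b, ops):
    """Apply each named operation in order; 'acc' plays the role of the
    intermediate result that would be sequenced/buffered between cycles."""
    acc = a
    for op in ops:
        if op == "AND":
            acc &= b
        elif op == "OR":
            acc |= b
        elif op == "XOR":
            acc ^= b
        elif op == "NOT":
            acc = ~acc & MASK
        else:
            raise ValueError(f"unknown operation: {op}")
    return acc

result = run_sequence(0b1100_1010, 0b1010_0101, ["AND", "OR", "XOR", "NOT"])
```

Each pass through the loop corresponds to one operation cycle; the cost the text describes is precisely the buffering of `acc` between cycles.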
An in-memory processing device can save time and can also conserve power by reducing or eliminating external communications.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a block diagram of an apparatus in the form of a computing system including a memory device in accordance with a number of embodiments of the present invention.

FIG. 1B is a block diagram of a bank section of a memory device in accordance with a number of embodiments of the present invention.

FIG. 1C is a block diagram of a bank of a memory device in accordance with a number of embodiments of the present invention.

FIG. 2 is a schematic diagram illustrating sensing circuitry of a memory device in accordance with a number of embodiments of the present invention.

FIG. 3 is a schematic diagram illustrating circuitry for data movement in a memory device in accordance with a number of embodiments of the present invention.

FIGS. 4A and 4B are additional schematic diagrams illustrating circuitry for data movement in a memory device in accordance with a number of embodiments of the present invention.

DETAILED DESCRIPTION

In some implementations, a memory device can be configured to move (e.g., copy, transfer, and/or transport) data values from a storage memory cell into a cache in order to perform a single operation on the data values, and may then move the data value resulting from performance of the single operation back to the storage memory cell. In such an implementation, if another operation is to be performed on the resulting data value, the resulting data value is moved back to the cache for performance of the other operation and is moved to the storage memory cell again after the second operation. As such, performance of a plurality of sequential operations as described herein (e.g., a sequence of a plurality of Boolean operations performed by sensing circuitry associated with memory cells of a cache sub-array) may involve original and/or partial-result data
values being moved to a number of memory cells in a first sub-array and repeatedly moved from that number of memory cells to the cache sub-array. Such repeated movement of original and/or partial-result data values may reduce the speed, rate, and/or efficiency of data processing and/or may increase power consumption.

In contrast, the present invention includes apparatus and methods for in-memory operations (e.g., for processing-in-memory (PIM) structures). In at least one embodiment, an apparatus includes a memory device that includes a plurality of sub-arrays of memory cells, where the plurality of sub-arrays includes a first subset of the respective plurality of sub-arrays and a second subset of the respective plurality of sub-arrays. The memory device includes sensing circuitry coupled to the first subset, the sensing circuitry including a sense amplifier and a computing component. The apparatus also includes a controller configured to direct a first movement of a number of data values from a sub-array of the second subset to a sub-array of the first subset. The controller is also configured to direct the sense amplifier and/or the computing component of the first sensing circuitry coupled to the first subset to perform a plurality of sequential in-memory operations on the number of data values.

The controller can also be configured to direct a second movement of a data value from the sub-array in the first subset to the sub-array in the second subset. For example, the controller can be configured to direct performance of the second movement of the data value that is the result of the plurality of sequential operations performed on the number of data values moved from the sub-array in the second subset. In some embodiments, the resulting data value can be moved back to the memory cells in the sub-array in the second subset in which the data values were previously stored.
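The flow just described — moving operands from a storage sub-array to a compute-capable sub-array, running the whole sequence of operations there, and then moving only the final result back — can be sketched as a minimal Python model. The names `storage`, `cache`, and the operation list are illustrative assumptions, not the device's actual interface:

```python
# Minimal model: data values live in a "storage" sub-array (second subset);
# the controller moves them to a "cache" sub-array (first subset), performs
# all sequential operations there, and moves only the final result back.

def controller_run(storage, row, ops):
    cache = list(storage[row])          # first movement: storage -> cache
    for op in ops:                      # sequential in-memory operations;
        cache = [op(v) for v in cache]  # intermediate results stay in cache
    storage[row] = cache                # second movement: result -> storage
    return storage[row]

storage = {0: [1, 2, 3]}
result = controller_run(storage, 0, [lambda v: v + 1, lambda v: v * 2])
# each value becomes (v + 1) * 2
```

Note that `storage` is touched exactly twice per row, however many operations run in between, which is the point of the contrast drawn above.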
For example, the plurality of sequential operations may be performed by the sense amplifiers and/or the computing components of the cache sub-arrays of the first subset without the sense amplifiers and/or the computing components of the cache sub-arrays moving the results of the plurality of sequential operations to the storage sub-arrays of the second subset prior to completion of a last one of the plurality of sequential operations.

This sequence of data movements and/or of operations performed on the data values in the first subset (e.g., the cache) rather than in the second subset (e.g., the storage) may be directed by a controller configured to operate independently of a host during the data processing operations. For example, although the data processing operations may have been commanded by the host (e.g., 110 in FIG. 1A), which may be located on a same pitch as and/or on a same chip as the memory device containing the controller (e.g., 140 in FIG. 1A), and although the commands may have been executed by a processor/sequencer of controller 140, the data movements and/or operations just described may be delegated to the controller for performance. In some embodiments, controller 140 can be formed on chip and operated, for example, to perform the operations, as shown in and described in connection with FIG. 1A. As used herein, being on chip with something else is intended to mean being formed on the same chip as the memory cells in the corresponding sub-arrays. However, embodiments are not so limited. For example, in some embodiments, controller 140 can be located in and/or perform operations associated with host 110, for example, where the host can instruct the controller with regard to the data values on which the operations are to be performed.

Ordinal numbers such as first and second are used herein to assist in distinguishing between similar components (e.g., sub-arrays of memory cells, subsets thereof, etc.) and are not used to indicate a particular ordering and/or relationship between the components,
unless the context clearly dictates otherwise (e.g., through use of terms such as adjacent). For example, a first sub-array can be sub-array 4 relative to sub-array 0 in a bank of sub-arrays and a second sub-array can be any other subsequent sub-array, e.g., sub-array 5, sub-array 8, or sub-array 61, among other possibilities, or the second sub-array can be any other preceding sub-array, e.g., sub-array 3, 2, 1, or 0. Moreover, movement of data values from a first sub-array to a second sub-array is provided as a non-limiting example of such data movement. For example, in some embodiments, data values can be moved sequentially and/or in parallel from each sub-array to another sub-array in the same bank (e.g., an adjacent sub-array or one separated by a number of other sub-arrays) or in a different bank.

The host system and the controller can perform address resolution on program instructions (e.g., PIM command instructions) and entire blocks of data and direct (e.g., control) allocation, storage, and/or movement (e.g., flow) of data and commands into allocated locations (e.g., sub-arrays and portions of sub-arrays) within a destination (e.g., target) bank. As described herein, writing data and executing commands (e.g., performing operations) may utilize a normal DRAM write path to the DRAM device. As the reader will appreciate, while a DRAM-style PIM device is discussed with regard to the examples presented herein, embodiments are not limited to a PIM DRAM implementation.

As described herein, the embodiments can allow a host system to initially allocate a number of locations, e.g., sub-arrays (or "subarrays") and portions of sub-arrays, in one or more DRAM banks to hold (e.g., store) data, for example, in a second subset of the sub-arrays.
However, for increased speed, rate, and/or efficiency of data processing (e.g., of operations performed on the data values), the data values may be moved (e.g., copied, transferred, and/or transported) to another sub-array, for example, in a first subset of the sub-arrays, that is configured to enable the increased speed, rate, and/or efficiency of data processing, as described herein.

The performance of a PIM system can be affected by memory access time (e.g., row cycle time). The operations for data processing can include opening (accessing) a row of memory cells in a bank, reading and/or writing to the memory cells, and then closing the row. The time period consumed by such operations may depend on the number of memory cells per computing component (e.g., computing component 231 in sensing circuitry 250 in FIG. 2) and/or the length of the digit line connecting all the memory cells in a column to the respective computing component. A shorter digit line may provide relatively improved performance per computing component, but may also result in more computing components per memory cell and therefore, due to the shorter digit lines, a lower density of memory cells. Such lower density can contribute to relatively higher power and/or die area requirements. By comparison, longer digit lines can have fewer computing components for the same memory cell density, but the longer digit lines can contribute to relatively lower performance per computing component. Therefore, combining the performance benefits of short digit lines with the memory cell density benefits of long digit lines can be advantageous.

A memory device (e.g., a PIM DRAM memory device) is described herein as including a plurality of sub-arrays, where at least one of the sub-arrays is configured with digit lines that are short relative to those of other sub-arrays within the memory device (e.g., within the same memory bank), for example, having fewer memory cells per column and/or columns with a shorter physical length.
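The tradeoff described above can be put into a toy model: shortening the digit line reduces the line-length-driven cycle time but increases the number of computing components needed for the same total cell count. Every constant here (the linear scaling, the 0.1 ns-per-cell figure, the array sizes) is a made-up assumption for illustration only, not a figure from the disclosure:

```python
# Toy model of the digit-line tradeoff: shorter digit lines -> faster row
# cycles but more computing components (lower density); longer lines -> the
# reverse. Linear scaling and base constants are illustrative assumptions.
def digit_line_tradeoff(cells_per_line, total_cells, ns_per_cell=0.1):
    cycle_time_ns = cells_per_line * ns_per_cell       # grows with line length
    compute_components = total_cells // cells_per_line  # shrinks with line length
    return cycle_time_ns, compute_components

short = digit_line_tradeoff(512, 1 << 20)   # short-digit-line (cache) sub-array
long_ = digit_line_tradeoff(1024, 1 << 20)  # long-digit-line (storage) sub-array
```

Under these assumed numbers the short-line sub-array cycles twice as fast but needs twice the computing components, matching the qualitative argument in the text.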
A sub-array with shorter digit lines can yield faster access times to its memory cells, and its sensing circuitry can be configured with the PIM functionality (described herein) that will be used in combination with the faster access times.

As such, a sub-array with shorter digit lines and PIM functionality can be used as a cache to increase the speed, rate, and/or efficiency of performing operations for sub-arrays configured with longer digit lines (and thus having slower access times). The sub-arrays with longer digit lines can be used for data storage, to utilize the relatively high number of memory cells on their longer digit lines. In some embodiments, a sub-array with longer digit lines can be further configured for a higher density of memory cells to enable more efficient data storage. For example, the higher density can be facilitated by not having PIM functionality in the sensing circuitry, because operations are performed after the data values are moved to the cache rather than on the data values in the storage. Alternatively or in combination, a higher-density memory architecture (e.g., 1T1C memory cells) may be used to configure (e.g., form) the longer digit line sub-arrays, while a lower-density architecture (e.g., 2T2C memory cells) may be used to configure the shorter digit line sub-arrays. Other changes can be made to the architecture of the shorter digit line sub-arrays, relative to the longer digit line sub-arrays, to increase the speed, rate, and/or efficiency of data access, for example, use of different memory array architectures (e.g., DRAM, SRAM, etc.) in the short and long digit line sub-arrays, variation of word line lengths, and other possible changes.

Accordingly, a plurality of sub-arrays may be included in a bank of a memory device, for example, intermixed with each other in various embodiments, where a first subset of the plurality of sub-arrays has relatively short digit lines and a second subset of the plurality of sub-arrays has relatively long digit lines, as described herein.
A sub-array with shorter digit lines can be used as a cache for performing operations for sub-arrays with longer digit lines. Computation, e.g., performance of operations, may occur primarily or only in the sub-arrays having the shorter digit lines, thereby providing increased performance relative to the sub-arrays having the longer digit lines. The sub-arrays with longer digit lines can be used primarily or only for data storage and, as such, can be configured for memory density. In some embodiments, a sub-array with longer digit lines may be configured with at least some PIM functionality, for example, to provide an alternative to movement of large amounts of data on which several accumulation operations would otherwise be performed in a sub-array of the first subset, among other reasons. However, regardless of whether the longer digit lines are configured with at least some PIM functionality, it may be preferable to move (e.g., copy, transfer, and/or transport) the data to and from the shorter digit line sub-arrays in order to perform a single operation and/or a sequence of operations at relatively high speed. As such, in some embodiments, only the first subset of short digit line sub-arrays may have any PIM functionality, thereby potentially saving die area and/or power consumption.

For example, rows of memory cells in the short digit line sub-arrays can be utilized as a number of caches for the long digit line (e.g., storage) sub-arrays. The controller can manage the data movement between the two types of sub-arrays and can store information to document data moved from a source row of a particular storage sub-array to a destination row of a particular cache sub-array, and vice versa.
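A hedged sketch of that bookkeeping follows: a controller tracks which storage row each cache row holds, performs a whole series of operations in the cache, and defers write-back until the series completes. The class and method names are assumptions for illustration, not the device's actual control interface:

```python
# Sketch of a controller managing a write-back cache sub-array: it records
# which storage row each cache row holds, performs a whole series of
# operations in the cache, and writes back only the final result.
class CacheController:
    def __init__(self, storage):
        self.storage = storage       # storage row -> list of data values
        self.cache = {}              # cache row -> data values
        self.tag = {}                # cache row -> source storage row

    def load(self, src_row, cache_row):
        self.cache[cache_row] = list(self.storage[src_row])
        self.tag[cache_row] = src_row

    def run_series(self, cache_row, ops):
        # Intermediate results stay in the cache row; no write-back yet.
        for op in ops:
            self.cache[cache_row] = [op(v) for v in self.cache[cache_row]]

    def write_back(self, cache_row):
        self.storage[self.tag[cache_row]] = list(self.cache[cache_row])

ctl = CacheController({7: [2, 4]})
ctl.load(src_row=7, cache_row=0)
ctl.run_series(0, [lambda v: v * v, lambda v: v - 1])  # (v * v) - 1
ctl.write_back(0)
```

The `tag` map plays the role of the stored information documenting source-row-to-destination-row movement that the paragraph above describes.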
In some embodiments, a short digit line sub-array is operable as a write-back cache, with the controller automatically returning a data value or a series of data values from the write-back cache after completion of an operation on the data value or series of data values. However, as described herein, the controller can be configured to direct a plurality of sequential operations performed by the sensing circuitry associated with a short digit line sub-array operating as a cache without the results of the respective plurality of operations being moved back to a long digit line (e.g., storage) sub-array prior to completion of the last one of the plurality of sequential operations.

A bank in a memory device can include a plurality of sub-arrays of memory cells in a plurality of partitions, where the partitions each comprise a respective grouping of the plurality of sub-arrays. In various embodiments, I/O lines shared by multiple partitions (e.g., serving as data buses for data movement between and/or within partitions), as described herein, may be configured to divide the plurality of sub-arrays into the plurality of partitions by isolation circuitry associated with the shared I/O lines selectively connecting and disconnecting portions of the shared I/O lines to form separate sections thereof. As such, shared I/O lines with isolation circuitry at a plurality of locations along their lengths can be used to divide the sub-arrays into various combinations (e.g., numbers) of sub-arrays in effectively separate partitions of the array, depending on whether the various sub-array and/or partition portions are connected via the shared I/O lines, etc., as directed by the controller.
This may enable block data movement within individual partitions to occur substantially in parallel.

Isolating the partitions may increase the speed, rate, and/or efficiency of data movement in each partition and in multiple partitions by enabling data movement to be performed in parallel (e.g., at substantially the same point in time) in each partition or combination of partitions, e.g., in some or all of the partitions in combination. For example, this may reduce the time otherwise consumed by sequential movement (e.g., copying, transferring, and/or transporting) of data between the various short and/or long digit line sub-arrays selectively coupled along a shared I/O line in an array of memory cells. The parallel nature of this data movement enables some or most of the data values to be moved locally within the sub-arrays of a partition, such that the movement can be several times faster. For example, the movement may be faster by approximately a multiple of the number of partitions, e.g., with four partitions, the data values may be moved in parallel in the sub-arrays of each partition in approximately one quarter of the time consumed without use of the partitions described herein.

In the following detailed description of the present invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, how a number of embodiments of the invention may be practiced. The embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this invention.

As used herein, designators such as "X", "Y", "N", "M", etc. (particularly with respect to reference numerals in the drawings) indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
As used herein, the singular forms "a", "an", and "the" include singular and plural referents unless the context clearly dictates otherwise. In addition, "a number of", "at least one", and "one or more" (e.g., a number of memory arrays) can refer to one or more memory arrays, whereas "a plurality of" is intended to refer to more than one of such things. Furthermore, the words "can" and "may" are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term "include", and derivations thereof, means "including, but not limited to". The terms "coupled" and "coupling" mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and data, as appropriate to the context. The terms "data" and "data values" are used interchangeably herein and can have the same meaning, as appropriate to the context.

As used herein, data movement is an inclusive term that includes, for instance, copying, transferring, and/or transporting data values from a source location to a destination location. Data can, for example, be moved from a long digit line (e.g., storage) sub-array to a short digit line (e.g., cache) sub-array via an I/O line shared by respective sense component strips of the long and short digit line sub-arrays, as described herein. Copying the data values can indicate that the data values stored (cached) in a sense component strip are copied and moved to another sub-array via the shared I/O line and that the original data values stored in the row of the sub-array may remain unchanged. Transferring the data values can indicate that the data values stored (cached) in the sense component strip are copied and moved to another sub-array via the shared I/O line and that at least one of the original data values stored in the row of the sub-array may be changed, for example, by being erased and/or by a subsequent write operation, as described herein.
Transporting the data values can be used to indicate the process by which the copied and/or transferred data values are moved, for example, by placement of the data values from the source location on the shared I/O line and transport to the destination location.

The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures can be identified by the use of similar digits. For example, 108 can reference element "08" in FIG. 1, and a similar element can be referenced as 208 in FIG. 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the invention. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the invention and should not be taken in a limiting sense.

FIG. 1A is a block diagram of an apparatus in the form of a computing system 100 including a memory device 120 in accordance with a number of embodiments of the present invention. As used herein, the memory device 120, controller 140, channel controller 143, memory array 130, sensing circuitry 150 (including sense amplifiers and compute components), and peripheral sense amplifiers and logic 170 might each also be separately considered an "apparatus".

In previous approaches, data may be transferred from the array and sensing circuitry (e.g., via a bus comprising input/output (I/O) lines) to processing resources such as a processor, microprocessor, and/or compute engine, which may comprise ALU circuitry and other functional unit circuitry configured to perform the appropriate operations. However, transferring data from the memory array and sensing circuitry to such processing resource(s) can involve significant power consumption.
Even if the processing resource is located on a same chip as the memory array, significant power can be consumed in moving data out of the array to the compute circuitry, which can involve performing a sense line (which may be referred to herein as a digit line or data line) address access (e.g., firing of a column decode signal) in order to transfer data from sense lines onto I/O lines (e.g., local and global I/O lines), moving the data to the array periphery, and providing the data to the compute function.

Furthermore, the circuitry of the processing resource(s) (e.g., a compute engine) may not conform to pitch rules associated with a memory array. For example, the cells of a memory array may have a 4F² or 6F² cell size, where "F" is a feature size corresponding to the cells. As such, the devices (e.g., logic gates) associated with ALU circuitry of previous PIM systems may not be capable of being formed on pitch with the memory cells, which can affect chip size and/or memory density, for example.

In contrast, the sensing circuitry 150 described herein can be formed on a same pitch as a pair of complementary sense lines. As an example, a pair of complementary memory cells may have a cell size with a 6F² pitch (e.g., 3F × 2F). If the pitch of a pair of complementary sense lines for the complementary memory cells is 3F, then the sensing circuitry being on pitch indicates that the sensing circuitry (e.g., a sense amplifier and corresponding compute component per respective pair of complementary sense lines) is formed to fit within the 3F pitch of the complementary sense lines.

Furthermore, the circuitry of the processing resource(s) of various prior systems (e.g., a compute engine, such as an ALU) may not conform to pitch rules associated with a memory array. For example, the memory cells of a memory array may have a 4F² or 6F² cell size.
As such, the devices (e.g., logic gates) associated with ALU circuitry of prior systems may not be capable of being formed on pitch with the memory cells (e.g., on a same pitch as the sense lines), which can affect chip size and/or memory density, for example. In the context of some computing systems and subsystems (e.g., a central processing unit (CPU)), data may be processed in a location that is not on pitch and/or on chip with the memory (e.g., the memory cells in the array), as described herein. Such data may be processed by a processing resource associated with a host, for instance, rather than on pitch with the memory.

In contrast, a number of embodiments of the invention can include the sensing circuitry 150 (e.g., including sense amplifiers and/or compute components) being formed on pitch with the memory cells of the array. The sensing circuitry 150 can be configured for (e.g., capable of) performing compute functions (e.g., logical operations).

A PIM-capable device can perform operations that are bit vector based operations. As used herein, the term "bit vector" is intended to mean a number of bits on a bit vector memory device (e.g., a PIM device) stored in a row of an array of memory cells and/or in sensing circuitry. Thus, as used herein, a "bit vector operation" is intended to mean an operation that is performed on a bit vector that is a portion of virtual address space and/or physical address space (e.g., used by a PIM device). In some embodiments, the bit vector may be a physically contiguous number of bits on the bit vector memory device, stored physically contiguously in a row and/or in the sensing circuitry, such that the bit vector operation is performed on a bit vector that is a contiguous portion of the virtual address space and/or physical address space. For example, a row of virtual address space in the PIM device may have a bit length of 16K bits (e.g., corresponding to 16K complementary pairs of memory cells in a DRAM configuration).
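As a rough model of the bit vector terminology above, a row can be treated as a contiguous sequence of bits, and a bit vector operation as a logical operation performed elementwise across that row, one bit per compute component. The 16-bit row length and the AND operation below are illustrative assumptions standing in for, e.g., a 16K-bit row:

```python
# Hedged model: a "bit vector" as a contiguous row of bits, and a
# "bit vector operation" applied elementwise. ROW_BITS = 16 is an
# illustrative stand-in for, e.g., a 16K-bit DRAM row.
ROW_BITS = 16

def bit_vector_and(row_a, row_b):
    # Each position models one compute component operating on a single
    # bit of the row sensed by its paired sense amplifier.
    assert len(row_a) == len(row_b) == ROW_BITS
    return [a & b for a, b in zip(row_a, row_b)]

row_a = [1, 0] * (ROW_BITS // 2)
row_b = [1, 1, 0, 0] * (ROW_BITS // 4)
print(bit_vector_and(row_a, row_b))  # [1, 0, 0, 0] repeated four times
```

The elementwise form mirrors the one-bit-per-processing-element arrangement described in the following paragraph.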
Sensing circuitry 150 as described herein for such a 16K bit row may include a corresponding 16K processing elements (e.g., compute components, as described herein) formed on pitch with the sense lines selectably coupled to corresponding memory cells in the 16K bit row. A compute component in the PIM device may operate as a one bit processing element on a single bit of the bit vector of the row of memory cells sensed by the sensing circuitry 150 (e.g., sensed by and/or stored in a sense amplifier paired with the compute component), as described herein.

In various embodiments described herein, a number of bit vectors may be stored in the memory of the memory device 120. In some embodiments, a bit vector may include the result of performance of a plurality of sequential in-memory operations in the memory array 130 of the memory device 120. For example, instead of and/or in addition to being stored in the memory array 130, the resultant data values from performance of the plurality of sequential operations may be moved from the memory array 130 for storage in a vector register 159 (e.g., in a particular row and/or register of a plurality of vector registers 159). In some embodiments, the vector registers 159 can be associated with (e.g., selectably coupled to) the controller 140. In some embodiments, the vector registers 159 can represent virtual and/or physical registers accessible by the host 110, for example, via the controller 140. A particular file in the vector registers 159 may store a virtual address (e.g., a base virtual address) of an element of the memory device 120. A memory element (also referred to as a compute element) may store an amount of data to be operated upon in one of the plurality of sequential operations, as described herein. The memory element also may refer to a number of memory cells that store that amount of data.
In various embodiments, the vector registers may be configured to enable performance of operations on the resultant data values in addition to the operations performed by the plurality of sequential operations. For example, storage of the resultant data values (e.g., bits forming a bit vector) in a selected destination in the vector registers may be selectably offset by a number of memory cells relative to the storage of the respective data values in a corresponding number of memory cells in a source row of the memory array 130 (e.g., of a first subset, such as cache sub-array 125-0).

A number of embodiments of the invention include sensing circuitry formed on pitch with sense lines of a corresponding array of memory cells. The sensing circuitry may be capable of performing data sensing and/or compute functions (e.g., depending upon whether the sensing circuitry is associated with a short digit line sub-array or a long digit line sub-array) and storing data local to the array of memory cells.

In order to appreciate the improved data movement (e.g., copy, transfer, and/or transport) techniques described herein, a discussion of an apparatus for implementing such techniques (e.g., a memory device having PIM capabilities and an associated host) follows. According to various embodiments, program instructions (e.g., PIM commands) involving a memory device having PIM capabilities can distribute implementation of the PIM commands and/or data over multiple sensing circuitries that can implement operations and can move and store the PIM commands and/or data within the memory array, for example, without having to transfer such PIM commands and/or data back and forth over an address and control (A/C) and data bus between the host and the memory device. Thus, data for a memory device having PIM capabilities can be accessed and used in less time and/or using less power.
For example, a time and/or power advantage can be realized by increasing the speed, rate, and/or efficiency of data being moved around and stored in a computing system in order to process requested memory array operations (e.g., reads, writes, logical operations, etc.).

The system 100 illustrated in FIG. 1A can include a host 110 coupled (e.g., connected) to the memory device 120, which includes the memory array 130. Host 110 can be a host system such as a personal laptop computer, a desktop computer, a tablet, a digital camera, a smart phone, and/or a memory card reader, among various other types of hosts. Host 110 can include a system motherboard and/or backplane and can include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of controlling circuitry). The system 100 can include separate integrated circuits, or both the host 110 and the memory device 120 can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high performance computing (HPC) system and/or a portion thereof. Although the example shown in FIG. 1A illustrates a system having a Von Neumann architecture, embodiments of the invention can be implemented in non-Von Neumann architectures, which may not include one or more components (e.g., CPU, ALU, etc.) often associated with a Von Neumann architecture.

For clarity, description of the system 100 has been simplified to focus on features of particular relevance to the present invention. For example, in various embodiments, the memory array 130 can be a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash array, and/or NOR flash array, for instance. The memory array 130 can include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as digit lines or data lines). Although a single memory array 130 is shown in FIG.
1A, embodiments are not so limited. For instance, the memory device 120 may include a number of memory arrays 130 (e.g., a number of banks of DRAM cells, NAND flash cells, etc.), in addition to a number of sub-arrays, as described herein.

The memory device 120 can include address circuitry 142 to latch address signals provided over a data bus 156 (e.g., an I/O bus from the host 110) by I/O circuitry 144 (e.g., provided to external ALU circuitry and to DRAM data lines (DQs) via local I/O lines and global I/O lines). As used herein, DRAM DQs can enable input of data to and output of data from a bank, for example, via a bus (e.g., data bus 156), from the controller 140 and/or the host 110 to the bank and from the bank to the controller 140 and/or the host 110. During a write operation, a voltage and/or current variation, for example, can be applied to a DQ (e.g., a pin). These variations can be translated into an appropriate signal and stored in a selected memory cell. During a read operation, a data value read from a selected memory cell can appear at the DQ once access is complete and the output is enabled. At other times, the DQs can be in a state such that the DQs do not source or sink current and do not present a signal to the system. This also may reduce DQ contention when two or more devices (e.g., banks) share the data bus, as described herein.

Status and exception information can be provided from the controller 140 on the memory device 120 to a channel controller 143, for example, through an out-of-band bus 157, which in turn can be provided from the channel controller 143 to the host 110.
The channel controller 143 can include a logic component 160 to allocate a plurality of locations (e.g., controllers for sub-arrays) in the arrays of each respective bank to store bank commands, application instructions (e.g., as sequences of operations), and arguments (PIM commands) for the various banks associated with operation of each of a plurality of memory devices (e.g., 120-0, 120-1, ..., 120-N). The channel controller 143 can dispatch commands (e.g., PIM commands) to the plurality of memory devices 120-0, ..., 120-N to store those program instructions within a given bank of a memory device.

Address signals are received through the address circuitry 142 and decoded by a row decoder 146 and a column decoder 152 to access the memory array 130. Data can be sensed (read) from the memory array 130 by sensing voltage and/or current changes on sense lines (digit lines), using a number of sense amplifiers of the sensing circuitry 150, as described herein. A sense amplifier can read and latch a page (e.g., a row) of data from the memory array 130. Additional compute components, as described herein, can be coupled to the sense amplifiers and can be used in combination with the sense amplifiers to sense, store (e.g., cache and/or buffer), perform compute functions (e.g., operations), and/or move data. The I/O circuitry 144 can be used for bi-directional data communication with the host 110 over the data bus 156 (e.g., a 64-bit wide data bus). Write circuitry 148 can be used to write data to the memory array 130. However, the functionality of the column decoder 152 circuitry is to be distinguished from that of the column select circuitry 358 described herein, which is configured to implement data movement operations with respect to particular columns of a sub-array and its corresponding sensing component strip, for example.

The controller 140 (e.g., bank control logic and/or sequencer) can decode signals (e.g., commands) provided by the control bus 154 from the host 110.
These signals can include chip enable signals, write enable signals, and/or address latch signals that can be used to control operations performed on the memory array 130, including data sensing, data storage, data movement, data write, and/or data erase operations, among other operations. In various embodiments, the controller 140 can be responsible for executing instructions from the host 110 and accessing the memory array 130. The controller 140 can be a state machine, a sequencer, or some other type of controller. The controller 140 can control shifting data (e.g., right or left) in a row of the array (e.g., memory array 130).

Examples of the sensing circuitry 150 are described further below (e.g., in connection with FIGS. 2 and 3). For instance, in a number of embodiments, the sensing circuitry 150 can include a number of sense amplifiers and a number of compute components, which may serve as an accumulator and can be used to perform operations, for example, on data associated with complementary sense lines, as directed by the controller 140 and/or a respective sub-array controller (not shown) of each sub-array.

In a number of embodiments, the sensing circuitry 150 can be used to perform operations using data stored in the memory array 130 as input and to participate in movement of the data for copy, transfer, write, logic, and/or storage operations to a different location in the memory array 130 without transferring the data via a sense line address access (e.g., without firing a column decode signal). As such, various compute functions can be performed using, and within, the sensing circuitry 150 rather than (or in association with) being performed by processing resources external to the sensing circuitry 150, for example, by a processor associated with the host 110 and/or other processing circuitry, such as ALU circuitry, located on the device 120 (e.g., on the controller 140 or elsewhere).

In various previous approaches, data associated with an operand, for instance, would be read from memory via the sensing circuitry and provided to external ALU circuitry via I/O lines (e.g., via local I/O lines and/or global I/O lines). The external ALU circuitry could include a number of registers and would perform compute functions using the operands, and the result would be transferred back to the array via the I/O lines.

In contrast, in a number of embodiments of the invention, the sensing circuitry 150 is configured to perform operations on data stored in the memory array 130 and to store the result back to the memory array 130 without enabling a local I/O line and a global I/O line coupled to the sensing circuitry 150. The sensing circuitry 150 can be formed on pitch with the sense lines of the memory cells of the array. Additional peripheral sense amplifiers and/or logic 170 (e.g., sub-array controllers that each execute instructions for performing a respective operation) can be coupled to the sensing circuitry 150. The sensing circuitry 150 and the peripheral sense amplifiers and logic 170 can cooperate in performing operations, according to some embodiments described herein.

As such, in a number of embodiments, circuitry external to the memory array 130 and the sensing circuitry 150 is not needed to perform compute functions, because the sensing circuitry 150 can perform the appropriate operations in order to execute such compute functions in an instruction sequence without the use of an external processing resource.
Therefore, the sensing circuitry 150 may be used to complement or to replace, at least to some extent, such an external processing resource (or at least reduce the bandwidth consumption of transferring data to and/or from such an external processing resource).

In a number of embodiments, the sensing circuitry 150 may be used to perform operations (e.g., to execute an instruction sequence) in addition to operations performed by an external processing resource (e.g., the host 110). For instance, either of the host 110 and the sensing circuitry 150 may be limited to performing only certain operations and/or a certain number of operations.

Enabling a local I/O line and a global I/O line can include enabling (e.g., turning on, activating) a transistor having a gate coupled to a decode signal (e.g., a column decode signal) and a source/drain coupled to the I/O line. However, embodiments are not limited to not enabling a local I/O line and a global I/O line. For instance, in a number of embodiments, the sensing circuitry 150 can be used to perform operations without enabling column decode lines of the array. However, the local I/O line(s) and global I/O line(s) may be enabled in order to transfer a result to a suitable location other than back to the memory array 130 (e.g., to an external register).

FIG. 1B is a block diagram of a bank section 123 of a memory device in accordance with a number of embodiments of the present invention. Bank section 123 can represent an example section of a number of bank sections of a bank of a memory device (e.g., bank section 0, bank section 1, ..., bank section M). As shown in FIG. 1B, the bank section 123 can include a plurality of memory columns 122, shown horizontally as X (e.g., 16,384 columns in an example DRAM bank and bank section). Additionally, the bank section 123 can be divided into sub-array 0, sub-array 1, ..., and sub-array N-1 (e.g., 32, 64, 128, or various other even numbers of sub-arrays), shown at 125-0 and 125-1 as examples of two short digit line (e.g., cache) sub-arrays and at 126-0, ..., 126-N-1 as examples of a number of long digit line (e.g., storage) sub-arrays in the same bank section. The configuration of the embodiment illustrated in FIG. 1B (e.g., the number and/or positioning of the short and long digit line sub-arrays) is shown for purposes of clarity, and embodiments are not limited to such configurations.

The short and long digit line sub-arrays are separated by respective amplification regions that are configured to be coupled to a data path (e.g., the shared I/O line described herein). As such, the short digit line sub-arrays 125-0 and 125-1 and the long digit line sub-arrays 126-0, ..., 126-N-1 can each have amplification regions 124-0, 124-1, ..., 124-N-1, shown as corresponding to sensing component strip 0, sensing component strip 1, ..., and sensing component strip N-1, respectively.

Each column 122 can be configured to be coupled to sensing circuitry 150, as described in connection with FIG. 1A and elsewhere herein. As such, each column in a sub-array can be coupled individually to a sense amplifier and/or compute component that contributes to the sensing component strip for that sub-array. For example, as shown in FIG. 1B, the bank section 123 can include sensing component strip 0, sensing component strip 1, ..., sensing component strip N-1, each with sensing circuitry 150 that, in various embodiments, can have at least sense amplifiers usable as registers, cache, and/or data buffers, or the like, coupled to each column 122 in the sub-arrays 125-0 and 125-1 and 126-0, ..., 126-N-1.
That is, a sense amplifier can be provided for each of the columns 122. In some embodiments, a compute component can be coupled to each sense amplifier within the sensing circuitry 150 of each respective sensing component strip coupled to a short digit line sub-array (e.g., within the sensing component strips 124-0 and 124-1 coupled to the short digit line sub-arrays 125-0 and 125-1, respectively). However, embodiments are not so limited. For example, in some embodiments there may not be a 1:1 correlation between the number of sense amplifiers and the number of compute components (e.g., there may be more than one sense amplifier per compute component or more than one compute component per sense amplifier, which may vary between sub-arrays, partitions, banks, etc.).

Each of the short digit line sub-arrays 125-0 and 125-1 can include a plurality of rows 119, shown vertically as Y (e.g., each sub-array may include 512 rows in an example DRAM bank). Each of the long digit line sub-arrays 126-0, ..., 126-N-1 can include a plurality of rows 118, shown vertically as Z (e.g., each sub-array may include 1024 rows in an example DRAM bank). Example embodiments are not limited to the example horizontal and vertical orientation of columns and rows described herein or to the example numbers thereof.

Implementations of PIM DRAM architecture may perform processing at the sense amplifier and compute component level, for example, within a sensing component strip. Implementations of PIM DRAM architecture may allow only a finite number of memory cells to be connected to each sense amplifier (e.g., around 1K or 1024 memory cells). A sensing component strip may include, for example, from around 8K to around 16K sense amplifiers.
For example, a sensing component strip for a long digit line sub-array may include 16K sense amplifiers and may be configured to couple to an array of 1K rows and around 16K columns, with a memory cell at each intersection of a row and a column, thereby yielding 1K (1024) memory cells per column. By comparison, a sensing component strip for a short digit line sub-array may include 16K sense amplifiers and compute components and may be configured to couple to an array of, for example, as few as half the 1K rows of the long digit line sub-array, thereby yielding 512 memory cells per column. In some embodiments, the number of sense amplifiers and/or compute components in respective sensing component strips (e.g., corresponding to the number of memory cells in a row) may vary between at least some of the short digit line sub-arrays and the long digit line sub-arrays.

The numbers of rows, columns, and memory cells per column, and/or the ratios of the numbers of memory cells in rows of the long and short digit line sub-arrays just presented, are provided by way of example and not by way of limitation. For example, a long digit line sub-array may have columns each with a respective 1024 memory cells, whereas a short digit line sub-array may have columns each with a respective 512, 256, or 128 memory cells, among other possible numbers of fewer than 512. In various embodiments, the long digit line sub-arrays may have fewer or more than 1024 memory cells per column, with the number of memory cells per column in the short digit line sub-arrays configured accordingly, as just described. Alternatively or in addition, a cache sub-array may be formed with a digit line length that is less than, equal to, or greater than the digit line length of the long digit line sub-arrays (storage sub-arrays), such that the cache sub-array is other than the short digit line sub-array just described.
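The per-column cell counts just given follow directly from the row counts, since one memory cell sits at each row/column intersection. A minimal arithmetic check, using the example figures from this description (the counts are the examples above, not requirements):

```python
# Arithmetic sketch using the example figures from this description;
# the specific counts are illustrative examples, not requirements.
SENSE_AMPS = 16 * 1024  # sense amplifiers per sensing component strip
LONG_ROWS = 1024        # rows in a long digit line (storage) sub-array
SHORT_ROWS = 512        # rows in a short digit line (cache) sub-array

def cells_per_digit_line(rows):
    # One memory cell at each row/column intersection, so the number of
    # cells coupled to a digit line equals the sub-array's row count.
    return rows

print(cells_per_digit_line(LONG_ROWS))   # 1024 cells per long digit line
print(cells_per_digit_line(SHORT_ROWS))  # 512 cells per short digit line
```

The 2:1 ratio between the two results corresponds to the short digit line sub-array coupling to half the rows of the long digit line sub-array.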
For example, the configuration of the digit lines and/or memory cells of the cache sub-array may enable faster computation than the configuration of the storage sub-array (e.g., 2T2C rather than 1T1C, SRAM rather than DRAM, etc.).

As such, the number of rows of memory cells in the cache sub-array and/or the corresponding number of memory cells per digit line may be less than, equal to, or greater than the number of rows of memory cells in the storage sub-array and/or the corresponding number of memory cells per digit line of the storage sub-array. In some embodiments, the number of memory cells in a row of a long digit line sub-array may differ from the number of memory cells in a row of a short digit line sub-array. For example, a memory cell of a short digit line sub-array configured as 2T2C may be around twice as wide as a memory cell of a long digit line sub-array configured as 1T1C, because the 2T2C memory cell has two transistors and two capacitors whereas the 1T1C memory cell has one transistor and one capacitor. To make the widths of these two sub-array configurations consistent in a chip and/or bank architecture, the number of memory cells in a row can be adjusted, for example, such that a short digit line sub-array may have around half as many memory cells in a row as a row of a long digit line sub-array. The controller can have, or be directed by, instructions to accommodate movement of data values between the two configurations of sub-arrays.

In some embodiments, the long digit line sub-array 126-N-1 can be sub-array 32 of 128 sub-arrays and can be the last sub-array in a first direction in a first partition of four partitions of the sub-arrays, as described herein. An isolation strip (not shown) can include a number of isolation transistors configured to selectably (e.g., as directed by the controller 140) connect and disconnect portions of a selected shared I/O line.
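The width-matching adjustment described above (a 2T2C cell being around twice as wide as a 1T1C cell, so a 2T2C row holds around half as many cells for the same physical row width) can be expressed as simple arithmetic. The unit cell widths below are illustrative assumptions only:

```python
# Hedged sketch: matching row widths between a 1T1C storage sub-array
# and a 2T2C cache sub-array. The unit widths are illustrative only.
CELL_WIDTH_1T1C = 1  # arbitrary width units: one transistor, one capacitor
CELL_WIDTH_2T2C = 2  # around double: two transistors, two capacitors

def cells_per_row(row_width, cell_width):
    # Number of cells that fit across a row of the given physical width.
    return row_width // cell_width

ROW_WIDTH = 16 * 1024 * CELL_WIDTH_1T1C  # width of an example 16K-cell 1T1C row
print(cells_per_row(ROW_WIDTH, CELL_WIDTH_1T1C))  # 16384 cells
print(cells_per_row(ROW_WIDTH, CELL_WIDTH_2T2C))  # 8192 cells (about half)
```

This halving is the adjustment that a controller accommodating data movement between the two configurations would account for.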
The isolation transistors can be selectably enabled (e.g., activated and deactivated) to connect and disconnect movement of data, for example, to and from the sense amplifiers and/or compute components of the sensing component strips via the shared I/O lines within a partition, as described herein.

As such, the plurality of sub-arrays 125-0 and 125-1 and 126-0, ..., 126-N-1, the plurality of sensing component strips 124-0, 124-1, ..., 124-N-1, and the isolation strip 172 can be considered a single partition 128. In some embodiments, however, a single isolation strip may be shared by two adjacent partitions, depending upon the direction of the data movement.

As shown in FIG. 1B, the bank section 123 can be associated with the controller 140. In various examples, the controller 140 shown in FIG. 1B can represent at least a portion of the functionality embodied by and contained in the controller 140 shown in FIG. 1A. The controller 140 can direct (e.g., control) input of commands and/or data 141 to the bank section 123 and output of data from the bank section 123 (e.g., to the host 110), along with control of data movement within the bank section 123, as described herein. The bank section 123 can include a data bus 156 (e.g., a 64-bit wide data bus) to DRAM DQs, which can correspond to the data bus 156 described in connection with FIG. 1A. For example, responsive to a command, the controller 140 is delegated responsibility for directing the movements and/or operations performed on data values in the in-memory operations described herein.

FIG. 1C is a block diagram of a bank 121 of a memory device in accordance with a number of embodiments of the present invention. Bank 121 can represent an example bank of a memory device (e.g., bank 0, bank 1, ..., bank M-1). As shown in FIG. 1C, the bank 121 can include an address/control (A/C) path 153 (e.g., a bus) coupled to the controller 140. Again, in various examples, the controller 140 shown in FIG.
1C can represent at least a portion of the functionality embodied by and contained in the controller 140 shown in FIGS. 1A and 1B.

As shown in FIG. 1C, the bank 121 can include multiple bank sections (e.g., bank section 123). As further shown in FIG. 1C, a bank section 123 can be subdivided into a plurality of sub-arrays (e.g., sub-array 0, sub-array 1, ..., sub-array N-1), shown at 125-0, 125-1, and 125-3 as short digit line sub-arrays and at 126-0, 126-1, ..., 126-N-1 as long digit line sub-arrays. The configuration of the number and/or positioning of the short and long digit line sub-arrays illustrated in FIG. 1C is shown for purposes of clarity, and embodiments are not limited to such configurations. The bank section 123 may, as shown, be configured with a short digit line sub-array 125-0 on top of a long digit line sub-array 126-0, followed by another short digit line sub-array 125-1 on top of another long digit line sub-array 126-1, with a total of four such sub-arrays evenly distributed at a 1:1 ratio, for example, in partition 128-0; however, other numbers and/or ratios of short and/or long digit line sub-arrays are also possible. For example, any feasible number of short and/or long digit line sub-arrays, in any ordered arrangement determined to be appropriate for a particular implementation (e.g., ratios of short digit line sub-arrays to long digit line sub-arrays of 1:1, 1:2, 1:4, 1:8, etc., with each group of one or more short digit line sub-arrays positioned adjacent a group of one or more long digit line sub-arrays), among other configurations, can be included in the bank section 123 and/or a partition 128 thereof.
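The interleaved arrangements just described (e.g., short to long digit line sub-array ratios of 1:1, 1:2, etc., with each group of short sub-arrays adjacent a group of long sub-arrays) can be sketched by generating an ordering. The labels and grouping rule below are illustrative assumptions, not a constraint of the embodiments:

```python
# Hedged sketch: ordering short ("S") and long ("L") digit line
# sub-arrays in a bank section at a given short:long ratio, with each
# group of short sub-arrays adjacent a group of long sub-arrays.
# The labels and grouping rule are illustrative only.

def layout(num_short, num_long, short_per_group=1):
    # Interleave: short_per_group short sub-arrays, then a
    # proportionally sized group of long sub-arrays, repeated.
    long_per_group = short_per_group * num_long // num_short
    order = []
    while num_short > 0:
        order += ["S"] * short_per_group
        order += ["L"] * long_per_group
        num_short -= short_per_group
    return order

print(layout(2, 2))  # 1:1 ratio, e.g. partition 128-0 in FIG. 1C
print(layout(1, 2))  # 1:2 ratio
```

For the four-sub-array 1:1 example of partition 128-0, this yields the alternating short/long arrangement described above.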
Thus, in some embodiments, more than one short digit line sub-array can be positioned in succession adjacent one another, and/or more than one long digit line sub-array can be positioned in succession adjacent one another.

The plurality of sub-arrays, shown at 125-0, 125-1, and 125-3 as short digit line sub-arrays and at 126-0, 126-1, ..., 126-N-1 as long digit line sub-arrays, can each be coupled to and/or separated by sensing component strips 124-0, 124-1, ..., 124-N-1, which can include sensing circuitry 150 and logic circuitry 170. As described, the sensing component strips 124-0, 124-1, ..., 124-N-1 each include sensing circuitry 150 with at least sense amplifiers configured to couple to each column of memory cells in each sub-array, as shown in FIG. 2 and described further in connection with FIGS. 3, 4A, and 4B. The sub-arrays and associated sensing component strips can be divided into a number of partitions (e.g., 128-0, 128-1, ..., 128-M-1) sharing I/O lines 155, as described further herein.

As shown schematically in FIG. 1C, the bank 121 and each section 123 of the bank can include a plurality of control/data registers 151 coupled in an instruction (e.g., program instructions (PIM commands)) and/or data read path and coupled to a shared I/O line 155 serving as a data path (e.g., bus) for a plurality of bank sections (e.g., bank section 123) in a particular bank 121. The controller 140 can be configured to receive a command to start performance of an operation in a given bank (e.g., bank 121-1). The controller 140 can be configured to retrieve instructions and/or constant data, for example, from a plurality of locations for the particular bank using the shared I/O line 155 coupled to the control and data registers 151, and to perform the operation using the compute components of the sensing circuitry 150.
The controller 140 may cache retrieved instructions and/or constant data locally to the particular bank, for example, in instruction cache 171 and/or logic circuitry 170.

As described herein, an I/O line can be selectively shared by a plurality of partitions, sub-arrays, rows, and/or particular columns of memory cells via the sensing component strip coupled to each of the sub-arrays. For example, the sense amplifier and/or compute component of each of a selectable subset of a number of columns (e.g., a subset of eight columns of a total number of columns) can be selectively coupled to each of a plurality of shared I/O lines to move (e.g., copy, transfer, and/or transport) data values stored (cached) in the sensing component strip to each of the plurality of shared I/O lines. Because the singular forms "a", "an", and "the" can include both singular and plural referents herein, "shared I/O line" can be used to refer to "a plurality of shared I/O lines" unless the context clearly dictates otherwise. Moreover, "shared I/O lines" is an abbreviation of "plurality of shared I/O lines".

In some embodiments, the controller 140 can be configured to direct (e.g., provide) instructions (commands) and data to a plurality of locations of a particular bank 121 in the memory array 130 and to the sensing component strips 124-0, 124-1, ..., 124-N-1 via the shared I/O line 155 coupled to the control and data registers 151. For example, the control and data registers 151 can relay instructions to be executed by the sense amplifiers and/or compute components of the sensing circuitry 150 in the sensing component strips 124-0, 124-1, ..., 124-N-1. For example, FIG.
1C illustrates the controller 140 as being associated with instruction cache 171 and coupled via write path 149 to each of the short digit line sub-arrays 125-0, 125-1, etc., the long digit line sub-arrays 126-0, 126-1, ..., 126-N-1, and/or the sensing component strips 124-0, 124-1, ..., 124-N-1 in bank 121.

Moreover, the shared I/O lines 155 and/or connection circuitry 232 described herein can be configured (e.g., formed and/or enabled) to move the results of performance of a plurality of sequential operations to suitable locations other than the first subset 125 and/or the second subset 126 of the sub-arrays of memory array 130. For example, in various embodiments, the resulting data values can be moved via the shared I/O lines 155 and/or the connection circuitry 232 to external registers. As shown in FIG. 1C, embodiments of such external registers may be a number of bank registers 158 and/or vector registers 159 associated with the controller 140 of a bank 121 of the memory device 120 (e.g., selectively coupled to the controller 140 of the bank 121 of the memory device 120).

As described in connection with FIG. 1B, a plurality of sub-arrays (e.g., the four sub-arrays 125-0, 125-1, 126-0, and 126-1 shown by way of example in FIG. 1C) and their respective sensing component strips may form a first partition 128-0. An isolation strip (not shown) can be positioned between sub-array 3 (126-1) and sub-array 4 (125-2) such that sub-array 126-1 is the last sub-array of the first partition 128-0 in a first direction (e.g., downward in the context of FIG. 1C) and sub-array 125-2 is the first sub-array of the second partition 128-1 in the first direction. A number of sub-arrays and their respective sensing component strips may extend further in the first direction, with a second isolation strip (not shown) positioned between the second partition 128-1 and the third partition 128-M-1, up to the first sub-array 126-N-1 of the third partition.
As indicated previously, the sub-arrays may be arranged in any order in each of the bank sections 123 and/or partitions 128 such that, for example, the short digit line sub-arrays 125-0 and 125-2 may be the first sub-arrays in partitions 128-0 and 128-1, respectively, whereas the long digit line sub-array 126-N-1 may be the first sub-array in partition 128-M-1, among other possible configurations.

However, embodiments are not so limited. For example, in various embodiments, there may be any number of short digit line sub-arrays 125 and any number of long digit line sub-arrays 126 in the bank section 123, which may be divided into any number of partitions by the isolation strips, for example, with a combination of at least one short digit line sub-array and at least one long digit line sub-array in various partitions. In various embodiments, the partitions may each include the same number or a different number of short and/or long digit line sub-arrays, sensing component strips, etc., depending on the implementation.

FIG. 2 is a schematic diagram illustrating sensing circuitry 250 in accordance with a number of embodiments of the present invention. Sensing circuitry 250 may correspond to sensing circuitry 150 shown in FIG. 1A.

A memory cell can include a storage element (e.g., a capacitor) and an access device (e.g., a transistor). For example, a first memory cell can include transistor 202-1 and capacitor 203-1, a second memory cell can include transistor 202-2 and capacitor 203-2, and so on. In this embodiment, memory array 230 is a DRAM array of 1T1C (one transistor, one capacitor) memory cells, although other configurations may be used, for example, 2T2C cells with two transistors and two capacitors per memory cell.
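A 1T1C read can be sketched behaviorally as follows. This is an illustrative charge-sharing model only (the constants and names are hypothetical, not from the patent): precharging the digit line to VCC/2 and then opening the access transistor shares charge between the cell capacitor and the much larger digit line capacitance, producing only a small voltage shift for the sense amplifier to resolve.

```python
# Illustrative 1T1C DRAM cell read model: charge sharing between the cell
# capacitor and a precharged digit line produces only a small voltage shift,
# and reading disturbs the stored charge (destructive read), so the full
# value must be written back (refreshed) afterward.

VCC = 1.0
C_CELL = 1.0      # cell capacitance (arbitrary units)
C_DIGIT = 9.0     # digit line capacitance is typically much larger

def read_cell(cell_voltage: float, digit_precharge: float = VCC / 2) -> float:
    """Return the digit line voltage after charge sharing with the cell."""
    total_charge = cell_voltage * C_CELL + digit_precharge * C_DIGIT
    return total_charge / (C_CELL + C_DIGIT)

v1 = read_cell(VCC)   # stored 1 -> digit line ends slightly above VCC/2
v0 = read_cell(0.0)   # stored 0 -> digit line ends slightly below VCC/2
```

Note that both results land close to the VCC/2 precharge level, which is why the cross-coupled sense amplifier described below is needed to restore full-rail values.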
In a number of embodiments, the memory cells may be destructive read memory cells, for example, such that the data stored in a cell is destroyed by reading the data, and the data originally stored in the cell is refreshed after being read.

The cells of memory array 230 may be arranged in rows coupled by access (word) lines 204-X (row X), 204-Y (row Y), etc., and in columns coupled by pairs of complementary sense (digit) lines (e.g., digit lines DIGIT(D) and DIGIT(D)_ shown in FIG. 2 and DIGIT_0 and DIGIT_0* shown in FIGS. 3 and 4A-4B). The individual sense lines corresponding to each pair of complementary sense lines may also be referred to as digit line 205-1 for DIGIT(D) and digit line 205-2 for DIGIT(D)_, or by the corresponding reference numbers in FIGS. 3 and 4A-4B. Although only one pair of complementary digit lines is shown in FIG. 2, embodiments of the present invention are not so limited, and a memory cell array can include additional columns of memory cells and digit lines (e.g., 4,096, 8,192, 16,384, etc.).

Although rows and columns are illustrated as being orthogonally oriented in a plane, embodiments are not so limited. For example, the rows and columns can be oriented relative to each other in any feasible three-dimensional configuration. For example, the rows and columns may be oriented at any angle relative to each other, may be oriented in a substantially horizontal plane or a substantially vertical plane, and/or may be oriented in a folded topology, among other possible three-dimensional configurations.

Memory cells can be coupled to different digit lines and word lines. For example, a first source/drain region of transistor 202-1 can be coupled to digit line 205-1 (D), a second source/drain region of transistor 202-1 can be coupled to capacitor 203-1, and a gate of transistor 202-1 can be coupled to word line 204-Y.
A first source/drain region of transistor 202-2 can be coupled to digit line 205-2 (D)_, a second source/drain region of transistor 202-2 can be coupled to capacitor 203-2, and a gate of transistor 202-2 can be coupled to word line 204-X. As shown in FIG. 2, a cell plate can be coupled to each of capacitors 203-1 and 203-2. The cell plate can be a common node to which a reference voltage (e.g., ground) can be applied in various memory array configurations.

Memory array 230 is configured to couple to sensing circuitry 250 in accordance with a number of embodiments of the present invention. In this embodiment, sensing circuitry 250 includes a sense amplifier 206 and a compute component 231 corresponding to a respective column of memory cells (e.g., coupled to a respective pair of complementary digit lines in a short digit line sub-array). Sense amplifier 206 can be coupled to the pair of complementary digit lines 205-1 and 205-2. Compute component 231 can be coupled to sense amplifier 206 via pass gates 207-1 and 207-2. The gates of pass gates 207-1 and 207-2 can be coupled to operation selection logic 213.

Operation selection logic 213 can be configured to include pass gate logic for controlling pass gates that couple the pair of complementary digit lines un-transposed between sense amplifier 206 and compute component 231, and swap gate logic for controlling swap gates that couple the pair of complementary digit lines transposed between sense amplifier 206 and compute component 231. Operation selection logic 213 can also be coupled to the pair of complementary digit lines 205-1 and 205-2. Operation selection logic 213 can be configured to control the continuity of pass gates 207-1 and 207-2 based on a selected operation.

Sense amplifier 206 can be operated to determine a data value (e.g., logic state) stored in a selected memory cell.
Sense amplifier 206 can include a cross-coupled latch, which can be referred to herein as a primary latch. In the example illustrated in FIG. 2, the circuitry corresponding to sense amplifier 206 comprises a latch 215 that includes four transistors coupled to the pair of complementary digit lines D 205-1 and (D)_ 205-2. However, embodiments are not limited to this example. Latch 215 can be a cross-coupled latch, for example, with the gates of a pair of transistors, such as n-channel transistors (e.g., NMOS transistors) 227-1 and 227-2, cross-coupled with the gates of another pair of transistors, such as p-channel transistors (e.g., PMOS transistors) 229-1 and 229-2. The cross-coupled latch 215 comprising transistors 227-1, 227-2, 229-1, and 229-2 may be referred to as the primary latch.

In operation, when a memory cell is being sensed (e.g., read), the voltage on one of the digit lines 205-1 (D) or 205-2 (D)_ will be slightly greater than the voltage on the other of digit lines 205-1 (D) or 205-2 (D)_. The ACT signal can be driven high and the RNL* signal can be driven low to enable (e.g., fire) the sense amplifier 206. The digit line 205-1 (D) or 205-2 (D)_ having the lower voltage will turn on one of the PMOS transistors 229-1 or 229-2 to a greater extent than the other of PMOS transistors 229-1 or 229-2, thereby driving high the digit line 205-1 (D) or 205-2 (D)_ having the higher voltage to a greater extent than the other digit line 205-1 (D) or 205-2 (D)_ is driven high.

Similarly, the digit line 205-1 (D) or 205-2 (D)_ having the higher voltage will turn on one of the NMOS transistors 227-1 or 227-2 to a greater extent than the other of NMOS transistors 227-1 or 227-2, thereby driving low the digit line 205-1 (D) or 205-2 (D)_ having the lower voltage to a greater extent than the other digit line 205-1 (D) or 205-2 (D)_ is driven low.
Accordingly, after a short delay, the digit line 205-1 (D) or 205-2 (D)_ having the slightly greater voltage is driven to the voltage of the supply voltage VCC through a source transistor, and the other digit line 205-1 (D) or 205-2 (D)_ is driven to the voltage of a reference voltage (e.g., ground) through a sink transistor. Therefore, the cross-coupled NMOS transistors 227-1 and 227-2 and PMOS transistors 229-1 and 229-2 serve as a sense amplifier pair that amplifies the differential voltage on digit lines 205-1 (D) and 205-2 (D)_ and operates to latch the data value sensed from the selected memory cell. As used herein, the cross-coupled latch of sense amplifier 206 may be referred to as primary latch 215.

Embodiments are not limited to the sense amplifier 206 configuration illustrated in FIG. 2. As an example, sense amplifier 206 can be a current-mode sense amplifier and/or a single-ended sense amplifier (e.g., a sense amplifier coupled to one digit line). Moreover, embodiments of the present invention are not limited to a folded digit line architecture such as that shown in FIG. 2.

The sense amplifier 206 can be operated, in conjunction with the compute component 231, to perform various operations using data from the array as input. In a number of embodiments, the result of an operation can be stored back to the array without transferring the data via a digit line address access (e.g., without firing a column decode signal), such that the data is not transferred to circuitry external to the array and sensing circuitry via local I/O lines. As such, a number of embodiments of the present invention can enable performing operations and associated compute functions using less power than various prior approaches.
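The regenerative behavior of the primary latch described above can be sketched as a simple behavioral model. The names and thresholds here are illustrative assumptions, not circuit-accurate values from the patent: once fired, whichever digit line starts slightly higher is driven to VCC and its complement is driven to ground, latching the sensed data value.

```python
# Minimal behavioral sketch of the cross-coupled sense amplifier: amplify a
# small differential on the complementary digit line pair (D, D_) to
# full-rail levels, latching the sensed data value.

VCC = 1.0

def fire_sense_amp(d: float, d_bar: float) -> tuple:
    """Return the full-rail (D, D_) levels after the latch regenerates."""
    if d == d_bar:
        raise ValueError("no differential to sense")
    # The line that starts higher is pulled to VCC; the other is pulled to ground.
    return (VCC, 0.0) if d > d_bar else (0.0, VCC)

# A stored 1 leaves D slightly above the VCC/2 precharge while D_ stays at VCC/2
d, d_bar = fire_sense_amp(0.55, 0.50)
```

Driving the line back to full rail is also what restores (refreshes) the destructively read cell when the word line is still active.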
In addition, because a number of embodiments reduce or eliminate transferring data across local and global I/O lines in order to perform operations and associated compute functions (e.g., between memory and a discrete processor), a number of embodiments can enable increased (e.g., faster) processing capability as compared to previous approaches.

The sense amplifier 206 can further include equilibration circuitry 214, which can be configured to equilibrate the digit lines 205-1 (D) and 205-2 (D)_. In this example, the equilibration circuitry 214 comprises a transistor 224 coupled between digit lines 205-1 (D) and 205-2 (D)_. The equilibration circuitry 214 also comprises transistors 225-1 and 225-2 each having a first source/drain region coupled to an equilibration voltage (e.g., VDD/2), where VDD is a supply voltage associated with the array. A second source/drain region of transistor 225-1 can be coupled to digit line 205-1 (D), and a second source/drain region of transistor 225-2 can be coupled to digit line 205-2 (D)_. Gates of transistors 224, 225-1, and 225-2 can be coupled together and to an equilibration (EQ) control signal line 234. As such, activating EQ enables the transistors 224, 225-1, and 225-2, which effectively shorts digit lines 205-1 (D) and 205-2 (D)_ together and to the equilibration voltage (e.g., VCC/2).

Although FIG. 2 shows sense amplifier 206 comprising the equilibration circuitry 214, embodiments are not so limited, and the equilibration circuitry 214 may be implemented discretely from the sense amplifier 206, implemented in a different configuration than that shown in FIG. 2, or not implemented at all.

As described further below, in a number of embodiments, the sensing circuitry 250 (e.g., sense amplifier 206 and compute component 231) can be operated to perform a selected operation and initially store the result in one of the sense amplifier 206 or the compute component 231 without transferring data from the sensing circuitry via a local or global I/O line, for example, without performing a sense line address access via, for example, activation of a column decode signal.

Performance of various types of operations can be implemented. For example, Boolean operations (e.g., Boolean logic functions involving data values) are used in many higher-level applications. Consequently, the speed and power efficiency that can be realized with improved performance of the operations can provide improved speed and/or power efficiency for these applications.

As shown in FIG. 2, the compute component 231 can also comprise a latch, which can be referred to herein as a secondary latch 264. The secondary latch 264 can be configured and operated in a manner similar to that described above with respect to the primary latch 215, with the exception that the pair of cross-coupled p-channel transistors (e.g., PMOS transistors) included in the secondary latch can have their respective sources coupled to a supply voltage (e.g., VDD), and the pair of cross-coupled n-channel transistors (e.g., NMOS transistors) of the secondary latch can have their respective sources selectively coupled to a reference voltage (e.g., ground), such that the secondary latch is continuously enabled. The configuration of the compute component 231 is not limited to that shown in FIG. 2, and various other embodiments are feasible.

In various embodiments, connection circuitry 232-1 can be coupled to the primary latch 215, for example, at 217-1, and connection circuitry 232-2 can be coupled to the primary latch 215 at 217-2, to enable movement of sensed and/or stored data values.
The sensed and/or stored data values can be moved to a selected memory cell in a particular row and/or column of another sub-array via a shared I/O line, as described herein, and/or moved directly to the selected memory cell in the particular row and/or column of the other sub-array via the connection circuitry 232-1 and 232-2. Although FIG. 2 shows connection circuitry 232-1 and 232-2 coupled at 217-1 and 217-2, respectively, of the primary latch 215, embodiments are not so limited. For example, connection circuitry 232-1 and 232-2 can be coupled to the secondary latch 264 to enable movement of sensed and/or stored data values, among other possible locations for coupling the connection circuitry 232-1 and 232-2.

In various embodiments, the connection circuitry (e.g., 232-1 and 232-2) can be configured to connect the sensing circuitry coupled to a particular column in a first sub-array to a number of rows in a corresponding column of a second sub-array, for example, where the second sub-array may be an adjacent sub-array and/or separated from the first sub-array by a number of other sub-arrays. As such, the connection circuitry can be configured to move (e.g., copy, transfer, and/or transport) data values from, for example, a selected row and a particular column to a selected row and a corresponding column in the second sub-array, for example, where data values can be copied to selected memory cells therein for performance of operations in a short digit line sub-array and/or for storage of data values in a long digit line sub-array.
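The row-to-row, column-aligned movement described above can be sketched with a toy model. All names here are hypothetical, and the two nested lists stand in for a storage (long digit line) sub-array and a cache (short digit line) sub-array:

```python
# Hedged sketch of data movement: a value latched by the sensing circuitry of
# one sub-array is carried over a "shared I/O" hop to a selected row in the
# corresponding column of another sub-array. The movement is a copy; the
# source sub-array retains its data.

def move_value(src, dst, row_src, row_dst, col):
    """Copy src[row_src][col] into dst[row_dst][col] through a shared-I/O hop."""
    shared_io = src[row_src][col]        # source sense amplifier drives the shared I/O line
    dst[row_dst][col] = shared_io        # destination sense amp latches it and writes back
    return shared_io

storage = [[1, 0, 1, 1]]                 # one row of a long digit line (storage) sub-array
cache = [[0, 0, 0, 0]]                   # one row of a short digit line (cache) sub-array
moved = move_value(storage, cache, 0, 0, 2)
```

The column index is the same on both sides, reflecting that the connection circuitry and shared I/O line link a particular column to the corresponding column of the other sub-array.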
In some embodiments, movement of a data value may be directed by the controller 140 executing a set of instructions to store the data value in the sensing circuitry 250 (e.g., in the sense amplifier 206 and/or the coupled compute component 231), and the controller may select a particular row and/or a particular memory cell at the intersection with the corresponding column in the second sub-array to receive the data value by movement (e.g., copy, transfer, and/or transport) of the data value.

FIG. 3 is a schematic diagram illustrating circuitry for data movement in a memory device in accordance with a number of embodiments of the present invention. FIG. 3 shows eight sense amplifiers, for example, sense amplifiers 0, 1, ..., 7 shown at 306-0, 306-1, ..., 306-7, respectively, each coupled to a respective pair of complementary sense lines, for example, digit lines 305-1 and 305-2. FIG. 3 also shows eight compute components, for example, compute components 0, 1, ..., 7 shown at 331-0, 331-1, ..., 331-7, each coupled to a respective sense amplifier (e.g., as shown at 306-0 for sense amplifier 0) via respective pass gates 307-1 and 307-2 and digit lines 305-1 and 305-2. For example, the pass gates can be connected as shown in FIG. 2 and can be controlled by an operation selection signal, Pass. The output of the selection logic can be coupled to the gates of pass gates 307-1 and 307-2 and to digit lines 305-1 and 305-2. Corresponding pairs of the sense amplifiers and compute components can contribute to formation of the sensing circuitry indicated at 350-0, 350-1, ..., 350-7.

Data values present on the pair of complementary digit lines 305-1 and 305-2 can be loaded into the compute component 331-0, as described in connection with FIG. 2.
For example, when pass gates 307-1 and 307-2 are enabled, data values on the pair of complementary digit lines 305-1 and 305-2 can be passed from the sense amplifier to the compute component, for example, from 306-0 to 331-0. The data values on the pair of complementary digit lines 305-1 and 305-2 can be the data values stored in sense amplifier 306-0 when the sense amplifier is fired.

The sense amplifiers 306-0, 306-1, ..., 306-7 of FIG. 3 can each correspond to sense amplifier 206 shown in FIG. 2. The compute components 331-0, 331-1, ..., 331-7 shown in FIG. 3 can each correspond to compute component 231 shown in FIG. 2. The sizes of the sense amplifiers 306 and compute components 331 illustrated in FIG. 3 are shown for purposes of clarity. However, as shown in FIG. 2, a sense amplifier 306 and/or compute component 331 can be formed to fit on a pitch of the corresponding complementary digit lines 305-1 and 305-2, for example, within the same spacing as the corresponding complementary digit lines 305-1 and 305-2. A combination of one sense amplifier with one compute component can contribute to the sensing circuitry (e.g., 350-0, 350-1, ..., 350-7) of a portion of a DRAM memory sub-array 325 (e.g., a short digit line sub-array as shown at 125 in FIGS. 1B and 1C) configured to couple to an I/O line 355 shared by a number of sub-arrays and/or partitions, as described herein. The paired combinations of sense amplifiers 306-0, 306-1, ..., 306-7 and compute components 331-0, 331-1, ..., 331-7 shown in FIG. 3 can be included in a sensing component strip, as shown at 124 in FIGS. 1B and 1C and at 424 in FIGS. 4A and 4B.

The configuration of the embodiment illustrated in FIG. 3 is shown for purposes of clarity and is not limited to this configuration.
For example, the configuration illustrated in FIG. 3 for the combinations of sense amplifiers 306-0, 306-1, ..., 306-7 and compute components 331-0, 331-1, ..., 331-7 with the shared I/O line 355 is not limited to half of the combinations of sense amplifiers and compute components of the sensing circuitry being formed above a column 322 of memory cells (not shown) and half being formed below the column 322 of memory cells. Nor is the number of such combinations of sense amplifiers and compute components forming the sensing circuitry configured to couple to a shared I/O line limited to eight. In addition, the configuration of the shared I/O line 355 is not limited to being split into two for separately coupling each of the two sets of complementary digit lines 305-1 and 305-2, nor is the positioning of the shared I/O line 355 limited to the middle of the combinations of sense amplifiers and compute components forming the sensing circuitry (e.g., rather than at either end of those combinations).

The circuitry illustrated in FIG. 3 also shows column select circuitry 358-1 and 358-2 that is configured to implement data movement operations with respect to a particular column 322 of sub-array 325, the complementary digit lines 305-1 and 305-2 associated therewith, and the shared I/O line 355 (e.g., as directed by controller 140 shown in FIGS. 1A-1C). For example, column select circuitry 358-1 has select lines 0, 2, 4, and 6 that are configured to couple to the corresponding columns, such as column 0, column 2, column 4, and column 6. Column select circuitry 358-2 has select lines 1, 3, 5, and 7 that are configured to couple to the corresponding columns, such as column 1, column 3, column 5, and column 7. In various embodiments, the column select circuitry 358 described in connection with FIG.
3, can represent at least a portion of the functionality embodied by and included in the multiplexers 460 illustrated in connection with FIGS. 4A and 4B.

The controller 140 can be coupled to the column select circuitry 358 to control select lines (e.g., select line 0) to access data values stored in the sense amplifiers, in the compute components, and/or present on the pairs of complementary digit lines (e.g., 305-1 and 305-2) when selection transistors 359-1 and 359-2 are activated via signals from select line 0. Activating the selection transistors 359-1 and 359-2 (e.g., as directed by controller 140) enables coupling of sense amplifier 306-0, compute component 331-0, and/or the complementary digit lines 305-1 and 305-2 of column 0 (322-0) to move the data values on digit line 0 and digit line 0* to the shared I/O line 355. For example, the moved data values may be data values from a particular row 319 stored (cached) in sense amplifier 306-0 and/or compute component 331-0 of the sensing component strip of a short digit line sub-array. Data values from each of columns 0 through 7 can similarly be selected by the controller 140 activating the appropriate selection transistors.

Moreover, enabling (e.g., activating) the selection transistors (e.g., selection transistors 359-1 and 359-2) can enable a particular sense amplifier and/or compute component (e.g., 306-0 and/or 331-0, respectively) to couple with the shared I/O line 355 such that data values stored by the sense amplifier and/or compute component can be moved to (e.g., placed on and/or transferred to) the shared I/O line 355. In some embodiments, one column at a time (e.g., column 322-0) is selected to couple to a particular shared I/O line 355 to move (e.g., copy, transfer, and/or transport) the stored data values. In the example configuration of FIG.
3, the shared I/O line 355 is illustrated as a shared differential I/O line pair, for example, shared I/O line and shared I/O line*. Hence, selection of column 0 (322-0) can yield two data values (e.g., two bits having values of 0 and/or 1) from a row (e.g., row 319) and/or as stored in the sense amplifier and/or compute component associated with complementary digit lines 305-1 and 305-2. These data values can be input in parallel to each shared differential I/O pair (e.g., shared I/O and shared I/O*) of the shared differential I/O line 355.

As described herein, a memory device (e.g., 120 in FIG. 1A) can be configured to couple to a host (e.g., 110) via a data bus (e.g., 156) and a control bus (e.g., 154). A bank 121 in the memory device (e.g., bank section 123 in FIG. 1B) can include a plurality of sub-arrays of memory cells (e.g., 125-0 and 125-1 and 126-0, ..., 126-N-1 in FIGS. 1B and 1C). The bank 121 can include sensing circuitry (e.g., 150 in FIG. 1A and corresponding reference numbers in FIGS. 2, 3, 4A, and 4B) coupled to the plurality of sub-arrays via a plurality of columns of the memory cells (e.g., 122 in FIG. 1B). The sensing circuitry can include a sense amplifier and/or a compute component (e.g., 206 and 231, respectively, in FIG. 2) coupled to each of the columns.

The bank 121 can include a plurality of partitions (e.g., 128-0, 128-1, ..., 128-M-1 in FIG. 1C), each including a respective group of the plurality of sub-arrays. The controller 140 coupled to the bank can be configured to direct a first data movement from a first sub-array to a second sub-array in a first partition (e.g., from sub-array 125-0 to sub-array 126-0 in partition 128-0 in FIG. 1C) in parallel with a second data movement from a first sub-array to a second sub-array in a second partition (e.g., from sub-array 125-2 in partition 128-1 in FIG. 1C to sub-array 126-2 (not shown)).

In various embodiments, the sensing circuitry of the first sub-array (e.g., 150 in FIG. 1A and corresponding reference numbers in FIGS. 2, 3, 4A, and 4B) in the first partition can be coupled to the sensing circuitry of the second sub-array via a first portion of the shared I/O line 355, and the sensing circuitry of the first sub-array in the second partition can be coupled to the sensing circuitry of the second sub-array via a second portion of the shared I/O line 355. For example, as described in connection with FIGS. 3, 4A, and 4B, the sense amplifiers and/or compute components in a sensing component strip 124 can be selectively coupled via the select circuitry 358 and/or a multiplexer 460. The controller 140 can be configured to direct movement of a plurality of data values from the first sub-array of the first partition to a plurality of memory cells in the second sub-array of the first partition in parallel with movement of a plurality of data values from the first sub-array of the second partition to a plurality of memory cells in the second sub-array of the second partition.

In some embodiments, the plurality of short digit line sub-arrays 125 can each be configured to include the same number of rows of memory cells (e.g., 119 in FIG. 1B and 319 in FIG. 3), the plurality of long digit line sub-arrays 126 can each be configured to include the same number of rows of memory cells (e.g., 118 in FIG. 1B), and/or the plurality of partitions can each be configured to include the same number of the plurality of short and long digit line sub-arrays in each group. However, embodiments are not so limited.
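The column selection described in connection with FIG. 3 can be sketched behaviorally. This is an illustrative model with hypothetical names: eight sense-amplifier/compute-component pairs serve columns 0 through 7, and activating one column's selection transistors couples its complementary digit line pair, one column at a time, onto the shared differential I/O pair.

```python
# Illustrative sketch of FIG. 3 column selection: one column at a time is
# coupled to the shared differential I/O pair, yielding two complementary
# bits (shared I/O and shared I/O*) for the selected column.

def select_column(latched: list, column: int) -> tuple:
    """Return (shared_io, shared_io_star) for the selected column 0..7."""
    if not 0 <= column < len(latched):
        raise IndexError("select line out of range")
    d = latched[column]                  # data value latched on DIGIT for that column
    return (d, 1 - d)                    # complementary pair driven onto shared I/O, I/O*

# data values latched in sense amplifiers 306-0 ... 306-7 for one row
latched = [1, 0, 0, 1, 1, 1, 0, 1]
pair = select_column(latched, 0)         # activate select line 0 for column 0
```

Selecting columns one at a time in this way is what lets a single shared I/O pair serve all eight columns of the subset.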
For example, in various embodiments, depending on the implementation, the number of rows in at least one of the sub-arrays and/or the number of sub-arrays in at least one of the partitions may differ from those of the other sub-arrays and/or partitions.

The memory device 120 can include a shared I/O line (e.g., 155 in FIG. 1C) configured to couple to the sensing circuitry of the plurality of sub-arrays, for example, to selectively enable movement of data values from memory cells in a first sub-array to memory cells in a second sub-array. In various embodiments, the memory device 120 can include a plurality of I/O lines shared by the partitions (e.g., 355 in FIG. 3 and 455-1, 455-2, ..., 455-M in FIGS. 4A and 4B), for example, to selectively enable parallel movement of a plurality of data values from first sub-arrays to second sub-arrays, for example, in the same partition or in different partitions. The controller 140 can be configured to move (e.g., copy, transfer, and/or transport) data values between sub-arrays in a bank of memory cells using a DRAM protocol and DRAM logical and electrical interfaces (using parallel partitioned data movement as described herein), for example, in response to commands from host 110. For example, the controller 140 can be configured to use stored instructions to implement the DRAM protocol and the DRAM logical and electrical interfaces.

As described herein, the array of memory cells can include an implementation of DRAM memory cells, where the controller 140 is configured, in response to a command, to move data from a source location to a destination location via a shared I/O line. The source location may be in a first bank of the memory device and the destination location may be in a second bank, and/or the source location may be in a first sub-array of one bank in the memory device and the destination location may be in a second sub-array of the same bank.
The first sub-array and the second sub-array may be in the same partition of the bank, or the sub-arrays may be in different partitions of the bank.

The memory device 120 can include a plurality of sub-arrays of memory cells. In various embodiments, the plurality of sub-arrays includes a first subset of the respective plurality of sub-arrays (e.g., the short digit line sub-arrays 125 in FIGS. 1B and 1C and the corresponding reference numbers in FIGS. 3, 4A, and 4B) and a second subset of the respective plurality of sub-arrays (e.g., the long digit line sub-arrays 126 in FIGS. 1B and 1C and the corresponding reference numbers in FIGS. 4A and 4B). The memory device can include first sensing circuitry (e.g., 150 in FIG. 1A and corresponding reference numbers in FIGS. 2, 3, 4A, and 4B) coupled to the first subset 125, the first sensing circuitry including a sense amplifier and a compute component (e.g., 206 and 231, respectively, in FIG. 2 and corresponding reference numbers in FIGS. 3, 4A, and 4B). The first subset 125 can be configured, for example, as a number of cache sub-arrays to perform a plurality of sequential in-memory operations on data moved from the second subset 126.

The memory device 120 can also include a controller (e.g., 140 in FIGS. 1A-1C) configured to direct a first movement of a number of data values (e.g., initial data values and/or additional data values) from a sub-array (e.g., one or more sub-arrays) in the second subset (e.g., from the long digit line (storage) sub-array 126-0 in FIGS. 1B and 1C and corresponding reference numbers in FIGS. 4A and 4B) to a first sub-array in the first subset (e.g., to the short digit line (cache) sub-array 125-0 in FIGS. 1B and 1C and corresponding reference numbers in FIGS. 3, 4A, and 4B).
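The cache-sub-array workflow described above can be sketched as follows. The names are hypothetical and the bitwise operations are stand-ins for whatever in-memory operations the sensing circuitry performs: operands move from a storage (long digit line) sub-array into a cache (short digit line) sub-array, the cache's sense amplifiers and compute components perform a sequence of operations, and only the final result moves back.

```python
# Hedged sketch of the cache workflow: first movement (storage -> cache),
# a plurality of sequential in-memory operations performed in the cache
# sub-array's sensing circuitry, then a second movement of the result.

def run_sequential_ops(storage_row, operations):
    """Move a row to the cache, apply each operation in sequence, return the result row."""
    cache_row = list(storage_row)            # first movement: storage -> cache (a copy)
    for op in operations:                    # each performed by sense amp / compute component
        cache_row = [op(bit) for bit in cache_row]
    return cache_row                         # ready for second movement: cache -> storage

invert = lambda b: 1 - b                     # stand-in for one in-memory operation
result = run_sequential_ops([1, 0, 1, 1], [invert, invert, invert])
```

Note that intermediate values after each operation never leave the cache row; only the final result would be moved back to the storage sub-array, mirroring the description below of deferring the second movement until the last operation completes.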
The controller 140 can also be configured to direct the sense amplifiers 206 and/or compute components 231 of the first sensing circuitry coupled to the first subset 125 to perform the plurality of sequential operations on the number of data values.

The controller 140 can also be configured to direct a movement of data values from a sub-array in the first subset (e.g., from short digit line (cache) sub-array 125-0 in FIGS. 1B and 1C and corresponding reference numbers in FIGS. 3, 4A, and 4B) to a sub-array in the second subset (e.g., to long digit line (storage) sub-array 126-0 in FIGS. 1B and 1C and corresponding reference numbers in FIGS. 4A and 4B). For example, controller 140 can be configured to direct performance of a second movement of data values that are a result of the plurality of sequential operations performed on the number of data values moved from the sub-array in the second subset. For example, the plurality of sequential operations can be performed by the sense amplifiers and compute components of a cache sub-array of the first subset without results of the plurality of sequential operations being moved to the storage sub-array of the second subset prior to completion, by the sense amplifiers and compute components of the cache sub-array, of the last operation in the plurality of sequential operations.

In some embodiments, as described herein, the controller 140 can be configured to direct a second movement of result data values, on which the plurality of sequential operations have been performed, from the cache sub-array to the original memory cells in the sub-array of the second subset from which the number of data values was transmitted in the first movement and/or in which the number of data values was previously stored. However, embodiments are not so limited.
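The flow described above (a first movement into the cache sub-array, a sequence of in-memory operations whose intermediate results stay in the sensing circuitry, and a single second movement of only the final result) can be modeled with a minimal Python sketch. The function and variable names here are illustrative assumptions, not the patent's terminology or an actual device interface.

```python
# Hypothetical software model of the cache-sub-array compute flow:
# data moves once from storage to cache, all sequential operations run
# "in place" (as in the sense amplifiers / compute components), and only
# the final result is subject to the second movement.
def run_sequence_in_cache(storage_row, operations):
    cache = list(storage_row)               # first movement: storage -> cache
    for op in operations:                   # sequential in-memory operations;
        cache = [op(bit) for bit in cache]  # intermediate results never leave
                                            # the (modeled) sensing circuitry
    return cache                            # second movement: final result only

# Example: invert twice, then AND with 1 -- three sequential operations,
# but only one value per column leaves the cache at the end.
result = run_sequence_in_cache(
    [0, 1, 1, 0],
    [lambda b: 1 - b, lambda b: 1 - b, lambda b: b & 1])
```

The point of the sketch is the data-movement count: N operations cost one inbound and one outbound movement, not N round trips to the storage sub-array.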
For example, in various embodiments described herein, the controller 140 can also be configured to direct movement of result data values, on which the plurality of sequential operations have been performed, from a particular location (e.g., the sensing circuitry and/or a particular row of a cache sub-array (e.g., 125-0)) to a number of alternative destination locations. The alternative destination locations may include different rows in cache sub-array 125-0, and/or particular rows in a different cache sub-array (e.g., 125-1), and/or different rows in the original storage sub-array (e.g., 126-0), and/or particular rows in a different storage sub-array (e.g., 126-1). In various embodiments, the alternative destination locations may further include a number of particular registers and/or rows in a number of bank registers 158 and/or vector registers 159 associated with (e.g., selectively coupled to) the controller 140.

In some embodiments, the sensing circuitry 150 can be coupled to a first sub-array 125 of the first subset via columns 122 of memory cells, the sensing circuitry including a sense amplifier 206 and a compute component 231 coupled to each column. In some embodiments, the number of memory cells in a column of a first sub-array 125 of the first subset may be at most half the number of memory cells in a column of a first sub-array 126 of the second subset. Alternatively or additionally, in some embodiments, a first physical length of the sense lines (e.g., a pair of complementary sense lines) of a first sub-array 125 of the first subset may be at most half a second physical length of the sense lines of a first sub-array 126 of the second subset. Alternatively or additionally, in some embodiments, a first physical length of the columns of a first sub-array 125 of the first subset may be at most half a second physical length of the columns of a first sub-array 126 of the second subset.
The comparative number of memory cells in the short digit line sub-arrays relative to the long digit line sub-arrays, and/or the comparative physical length of the columns of the short digit line sub-arrays relative to the long digit line sub-arrays, is indicated by the span of the corresponding rows 119 and 118 in FIG. 1B and is represented by the comparative lengths of the sub-arrays and/or digit lines in FIGS. 1C, 4A, and 4B.

In various embodiments, the result of each of the respective plurality of sequential operations may be stored in a sub-array of the first subset (e.g., short digit line sub-array 125-0 as shown in and described in connection with FIG. 1C) until performance of the plurality of sequential operations is completed by computation of the result of the last of the plurality of sequential operations. The result of each of the respective plurality of sequential operations may be stored by the first sensing circuitry 150 coupled to the first subset (e.g., the sensing circuitry of sensing component stripe 124-0) until performance of the plurality of sequential operations is completed by computation of the result of the last of the plurality of sequential operations.

Memory device 120 can include sensing circuitry coupled to the second subset of sub-arrays (e.g., long digit line sub-array 126-0 as shown in and described in connection with FIG. 1C). In some embodiments, the sensing circuitry coupled to the second subset can include sense amplifiers but no compute components (e.g., as shown at 206 and 231, respectively, and described in connection with FIG. 2).
Although the sensing circuitry of the second subset may, in some embodiments, include both sense amplifiers and compute components, to distinguish embodiments that do not include compute components, the sensing circuitry of the second subset is referred to herein as second sensing circuitry and the sensing circuitry of the first subset, which includes compute components, is referred to as first sensing circuitry. As such, the second subset of sub-arrays can be used to store a number of data values on which a plurality of sequential operations may be performed by the first sensing circuitry. For example, a number of sensed data values may be stored in the second sensing circuitry prior to a first movement of the data values to the first sensing circuitry of the first subset of sub-arrays.

The first sensing circuitry and the second sensing circuitry of the memory device may be formed on pitch (e.g., at equal intervals) with the sense lines of the respective first and second subsets of the plurality of sub-arrays, for example, as shown in FIGS. 1B, 1C, 3, 4A, and 4B. In some embodiments, column select circuitry (e.g., 358-1 and 358-2 in FIG. 3) can be used to selectively sense data in a particular column (e.g., 322-0) of memory cells of a sub-array 325 in either of the first and second subsets by being selectively coupleable to at least a sense amplifier coupled to the respective sense lines of the particular column (e.g., 305-1 and 305-2).

A second subset of the sub-arrays (e.g., memory cells of the long digit line sub-arrays 126) can be used to store data values that can be operated on by the first sensing circuitry following a first movement of the data values to the first subset of sub-arrays.
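The distinction drawn above between the two kinds of sensing circuitry (first sensing circuitry pairing each sense amplifier with a compute component, second sensing circuitry having sense amplifiers only) can be sketched as a small data structure. The class and field names are hypothetical and exist only to make the structural difference concrete.

```python
# Illustrative model (not the patent's API): a sensing component stripe
# for the cache (short digit line) subset carries compute components;
# a stripe for the storage (long digit line) subset does not.
class SensingStrip:
    def __init__(self, columns, has_compute):
        self.sense_amps = [0] * columns                    # one per column
        self.compute = [0] * columns if has_compute else None

first_sensing = SensingStrip(columns=8, has_compute=True)    # first subset
second_sensing = SensingStrip(columns=8, has_compute=False)  # second subset

can_operate = first_sensing.compute is not None   # operations run here
only_stores = second_sensing.compute is None      # this subset only senses/stores
```

In this model, "performing a sequential operation" is only legal on a strip whose `compute` field is populated, mirroring why the first subset hosts the in-memory operations.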
Additionally, a second subset of the sub-arrays (e.g., the same or different memory cells of the same or a different long digit line sub-array 126) can be used to store, following a second movement, the result data values of the plurality of sequential operations that have been performed by the first sensing circuitry. Alternatively or additionally, a sub-array of the first subset (e.g., short digit line sub-array 125) may store, following the second movement of the data values, the result data values of the plurality of sequential operations that have been performed by the first sensing circuitry. For example, instead of or in addition to the result data values being moved, e.g., from sensing component stripe 124-0 of short digit line sub-array 125-0 via sensing component stripe 124-1 to memory cells in a row 118 of long digit line sub-array 126-0, the result data values can also be moved from sensing component stripe 124-0 of short digit line sub-array 125-0 to memory cells in one or more rows 119 of one or more of the short digit line sub-arrays (e.g., 125-0, 125-1, ..., 125-N-1).

The controller 140 described herein can be configured to direct a first movement of the number of data values from a selected row of a first sub-array in the second subset (e.g., a long digit line sub-array 126) to a selected row of a first sub-array in the first subset (e.g., a short digit line sub-array 125). The controller 140 described herein may be further configured to direct a second movement of the data values that are a result of performance of the plurality of sequential operations from the first sub-array in the first subset (e.g., short digit line sub-array 125) to a selected row in the first sub-array of the second subset.
For example, in some embodiments, the data values can be moved from a selected row (or selected memory cells) of a sub-array of the second subset to a selected row (or selected memory cells) of a sub-array of the first subset, the sensing circuitry of the sub-array of the first subset can perform a plurality of sequential operations on the data values, and the result data values can then be moved, e.g., from the sensing circuitry and/or rows of the first sub-array after the plurality of sequential operations have been performed, back to the same selected row (or the same selected memory cells) of the sub-array of the second subset.

Alternatively or additionally, the controller may be further configured to direct a second movement of the data values that are a result of performance of the plurality of sequential operations, e.g., from the sensing circuitry and/or rows of the first sub-array, to a second sub-array of the second subset different from the sub-array from which the number of data values was moved by the first movement. For example, the number of data values may have been moved from one or more rows of long digit line sub-array 126-0 by the first movement, and the result data values may be moved by the second movement to any of long digit line sub-arrays 126-1, 126-2, ..., 126-N-1. Alternatively or additionally, the controller may be further configured to direct the second movement to a number of the bank registers 158 and/or vector registers 159 shown in and described in connection with FIG. 1C.

Memory device 120 can include a controller (e.g., 140 in FIGS. 1A-1C). Controller 140 can be coupled to a bank 121 of the memory device.
The controller can be configured to receive a set of instructions from the host 110 to perform a plurality of sequential data processing operations, and to issue commands to direct performance of the plurality of sequential data processing operations in a bank of the memory device 120.

In some embodiments, memory device 120 can include connection circuitry (e.g., as shown at 232-1 and 232-2 and described in connection with FIG. 2) configured to couple the sensing circuitry of a particular column in a first sub-array of the first subset to a number of rows in a corresponding column of a first sub-array in the second subset. For example, the connection circuitry can be configured to move data values to one or more selected rows and corresponding columns in a first sub-array of the first subset (e.g., a short digit line sub-array 125) for performance of the plurality of sequential operations, e.g., in a respective sensing component stripe.

Movement of data values via, for example, a shared I/O line and/or the connection circuitry may be directed by controller 140 executing the set of instructions for moving the data values from a first sub-array of the second subset (e.g., a long digit line sub-array 126) to a selected row and corresponding column in a first sub-array of the first subset. The selected row and corresponding column in the first sub-array of the first subset can be configured to receive (e.g., cache) the data values.
The controller 140 can then direct performance of the plurality of sequential operations on the data values in the sensing circuitry of the first sub-array of the first subset.

The controller 140 can be further configured to direct movement of the data values on which the plurality of sequential operations have been performed, e.g., via the shared I/O lines and/or connection circuitry, from the selected row and corresponding column of the first sub-array of the first subset (e.g., short digit line sub-array 125) to a number of rows in a corresponding column of a first sub-array of the second subset (e.g., long digit line sub-array 126). In various embodiments, the rows, columns, and/or sub-arrays to which the data values are moved after the plurality of sequential operations have been performed may differ from the rows, columns, and/or sub-arrays from which the data values originated when sent from the long digit line sub-array to the short digit line sub-array. For example, the data values can be moved to different rows, columns, and/or sub-arrays in one or more long digit line sub-arrays and/or to different rows, columns, and/or sub-arrays in one or more short digit line sub-arrays.

In some embodiments, when a controller executing, for example, a PIM command in a short digit line (e.g., cache) sub-array attempts to access a row that is not cached in the short digit line sub-array, the controller can move data from an appropriate long digit line (e.g., storage) sub-array into a number of rows of the cache sub-array. When no row is free and/or usable for moving data values into the cache sub-array, data values in one or more rows can be moved at least temporarily out of the cache sub-array, e.g., stored in another location, and the data values of the incoming rows then loaded (e.g., written) in their place. This may also involve moving data values from the short digit line (e.g., cache) sub-array into a long digit line (e.g., storage) sub-array.
In some embodiments, the data values may be retrieved directly from the long digit line sub-array, for example, when no operations are to be performed on the data values beforehand. Alternatively or additionally, a memory request for a row cached in a short digit line sub-array may, for example, trigger a write-back to the long digit line sub-array after an operation has been performed, which may then be followed by retrieval of the data values from the long digit line sub-array.

Attempted host, controller, and/or other accesses to data values stored in rows of long digit line sub-arrays that have been moved to (e.g., cached in) short digit line sub-arrays can be serviced by the cached version in the short digit line sub-array to achieve consistency, efficiency, speed, and the like. A particular short digit line (e.g., cache) sub-array may also be associated with one or more (e.g., a set of) long digit line (e.g., storage) sub-arrays. For example, the same row from a storage sub-array may be cached, across several corresponding groups (e.g., partitions) of the partitioned sub-arrays, in corresponding identical rows of the cache sub-arrays. This may reduce the complexity of the controller's determination of source and destination locations for data movement and/or may allow parallel performance of data movement between long digit line sub-arrays and short digit line sub-arrays in one or more of the partitions, as described herein.

In various embodiments, memory device 120 can include isolation circuitry (not shown) configured to disconnect a first portion of a shared I/O line 355 corresponding to a first partition from a second portion of the same shared I/O line 355 corresponding to a second partition. The controller 140 can be configured to direct the isolation circuitry to disconnect the first portion of the shared I/O line 355 from the second portion during parallel movement of data values within the first partition and within the second partition.
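The cache-miss handling described above (a PIM access to an uncached row triggers a load from the storage sub-array, with a write-back eviction when no cache row is free) can be sketched in a few lines. This is a hedged software analogy; the row-replacement policy, capacity, and names below are assumptions for illustration, not the device's actual mechanism.

```python
# Hypothetical model of row caching between a long digit line (storage)
# sub-array and a short digit line (cache) sub-array.
def access_row(row, cache, storage, capacity):
    """Return the cached copy of `row`; on a miss, evict/write back a
    cached row if the cache is full, then load the requested row."""
    if row in cache:                       # hit: row already cached
        return cache[row]
    if len(cache) >= capacity:             # no free cache row: evict one
        victim, data = cache.popitem()
        storage[victim] = data             # write back to the storage sub-array
    cache[row] = storage[row]              # load (write) the requested row
    return cache[row]

storage = {0: [1, 0], 1: [0, 1], 2: [1, 1]}
cache = {}
access_row(0, cache, storage, capacity=1)  # miss: loads row 0
access_row(1, cache, storage, capacity=1)  # miss: writes row 0 back, loads row 1
```

The eviction step corresponds to the passage's "moved at least temporarily from the cache sub-array" before the incoming row's data values are loaded.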
Disconnecting the portions of the shared I/O line 355 isolates movement of data values within the first partition from the parallel movement of data values within the second partition.

FIGS. 4A and 4B are another schematic diagram illustrating circuitry for data movement in a memory device in accordance with a number of embodiments of the present disclosure. As illustrated in FIGS. 1B and 1C, and shown in more detail in FIGS. 4A and 4B, a bank section of a DRAM memory device can include a plurality of sub-arrays, indicated in FIGS. 4A and 4B at 425-0 as short digit line sub-array 0 and at 426-N-1 as long digit line sub-array N-1.

FIGS. 4A and 4B, which are to be considered as horizontally connected, illustrate that each sub-array (e.g., short digit line sub-array 425-0 and long digit line sub-array 426-N-1, each shown partially in FIG. 4A and partially in FIG. 4B) can have at least a number of associated sense amplifiers 406-0, 406-1, ..., 406-X-1. Additionally, at least the short digit line sub-array 425-0 can have associated compute components 431-0, 431-1, ..., 431-X-1. In some embodiments, each sub-array 425-0, ..., 426-N-1 can have one or more associated sensing component stripes (e.g., 124-0, ..., 124-N-1 in FIGS. 1B and 1C). According to embodiments described herein, each sub-array 425-0, ..., 426-N-1 can be divided into portions 462-1 (shown in FIG. 4A), 462-2, ..., 462-M (shown in FIG. 4B). The portions 462-1, ..., 462-M can each include a particular number (e.g., 2, 4, 8, 16, etc.) of sense amplifiers and/or compute components (e.g., of sensing circuitry 150), along with the corresponding columns (e.g., 422-0, 422-1, ..., 422-7) among columns 422-0, ..., 422-X-1 that can be selectively coupled to a given shared I/O line (e.g., 455-1, 455-2, ..., 455-M).
At least for the short digit line sub-array 425-0, corresponding pairs of the sense amplifiers and compute components can contribute to the formation of the sensing circuitry indicated at 450-0, 450-1, ..., 450-X-1 in FIGS. 4A and 4B.

In some embodiments, as shown in FIGS. 3, 4A, and 4B, the particular number of sense amplifiers and/or compute components, along with the corresponding columns, that can be selectively coupled to a shared I/O line 455 (which can be a pair of shared differential lines) can be eight. The number of portions 462-1, 462-2, ..., 462-M of a sub-array can be the same as the number of shared I/O lines 455-1, 455-2, ..., 455-M that can be coupled to the sub-array. The sub-arrays can be arranged according to various DRAM architectures for coupling of the shared I/O lines 455-1, 455-2, ..., 455-M between sub-arrays 425-0, ..., 426-N-1.

For example, portion 462-1 of sub-array 0 (425-0) in FIG. 4A can correspond to the portion of the sub-array illustrated in FIG. 3. As such, sense amplifier 0 (406-0) and compute component 0 (431-0) can be coupled to column 422-0. As described herein, a column can be configured to include a pair of complementary digit lines referred to as digit line 0 and digit line 0*. However, alternative embodiments can include a single digit line 405-0 (sense line) for a single column of memory cells. Embodiments are not so limited.

As illustrated in FIGS. 1B and 1C, and shown in more detail in FIGS. 4A and 4B, in various embodiments a sensing component stripe can extend from one end of a sub-array to an opposite end of the sub-array.
For example, as shown for sub-array 0 (425-0), sensing component stripe 0 (424-0), which is shown schematically above and below the DRAM columns in a folded sense line architecture, can include and extend from sense amplifier 0 (406-0) and compute component 0 (431-0) in portion 462-1 of sub-array 0 (425-0) to sense amplifier X-1 (406-X-1) and compute component X-1 (431-X-1) in portion 462-M.

As described in connection with FIG. 3, the configuration of the sense amplifiers 406-0, 406-1, ..., 406-X-1 in combination with the compute components 431-0, 431-1, ..., 431-X-1 and shared I/O line 0 (455-1) through shared I/O line M-1 (455-M) illustrated in FIGS. 4A and 4B is not limited to half of the combination of the sense amplifiers and compute components of the sensing circuitry (450) being formed above the columns of memory cells and half being formed below the columns 422-0, 422-1, ..., 422-X-1 of memory cells, as in a folded DRAM architecture. For example, in various embodiments, the sensing component stripe 424 of a particular short digit line sub-array 425 can be formed with any number of the sense amplifiers and compute components of the sensing component stripe above and/or below the columns of memory cells. Similarly, in various embodiments, the sensing component stripe 424 of a particular long digit line sub-array 426 can be formed with any number of the sense amplifiers of the sensing component stripe above and/or below the columns of memory cells. Accordingly, in some embodiments, as illustrated in FIGS. 1B and 1C, all of the sense amplifiers and/or compute components of a sensing component stripe can be formed above or below the columns of memory cells.

As described in connection with FIG.
3, each sub-array can have column select circuitry (e.g., 358) configured to implement data movement operations with respect to particular columns 422 of a sub-array (e.g., sub-array 425-0), by coupling stored data values from the sense amplifiers 406 and/or compute components 431 of the complementary digit lines of those columns to a given shared I/O line 455-1, ..., 455-M (e.g., complementary shared I/O lines 355 in FIG. 3). For example, controller 140 can direct that data values of memory cells in a particular row (e.g., selected from rows 118 in FIG. 1B) of long digit line sub-array 426-N-1 be sensed and moved to a same or different numbered row in same or different numbered columns of one or more short digit line sub-arrays 425. For example, in some embodiments, the data values can be moved from a portion of a first sub-array to a different portion of a second sub-array, e.g., not necessarily from portion 462-1 of long digit line sub-array N-1 to portion 462-1 of short digit line sub-array 0. In some embodiments, the data values can be moved from a column in portion 462-1 to a column in portion 462-M using shifting techniques.

Column select circuitry (e.g., 358 in FIG. 3) can direct movement (e.g., sequential movement) of data values for each of the eight columns (e.g., digit/digit*) in a portion of a sub-array (e.g., portion 462-1 of short digit line sub-array 425-0 or of long digit line sub-array 426-N-1), such that the sense amplifiers and/or compute components of the sensing component stripe 424-0 of that portion can store (cache) all the data values and move them to the shared I/O line in a particular order (e.g., in the order in which the columns were sensed). With complementary digit lines digit/digit* and complementary shared I/O lines 355 for each of the eight columns, there can be 16 data values (e.g., bits) moved from one portion of the sub-array to the shared I/O line.
One data value (e.g., bit) at a time can thereby be input from each of the sense amplifiers and/or compute components to each of the complementary shared I/O lines.

As such, with 2048 portions of sub-arrays (e.g., a sub-array portion 462-1 for each of sub-arrays 425-0, ..., 426-N-1) each having eight columns, and with each portion configured to couple to a different shared I/O line (e.g., 455-1 through 455-M), 2048 data values (e.g., bits) can be moved to the plurality of shared I/O lines at substantially the same point in time (e.g., in parallel). Accordingly, the plurality of shared I/O lines can be, for example, at least a thousand bits wide (e.g., 2048 bits wide) to increase the speed, rate, and/or efficiency of data movement in a DRAM implementation, e.g., relative to a 64-bit wide data path.

As illustrated in FIGS. 4A and 4B, for each sub-array (e.g., short digit line sub-array 425-0 and long digit line sub-array 426-N-1), one or more multiplexers 460-1 and 460-2 can be coupled to the sense amplifiers and/or compute components of each portion 462-1, 462-2, ..., 462-M of the sensing component stripe 424 of the sub-array. In various embodiments, the multiplexers 460 illustrated in connection with FIGS. 4A and 4B can include at least the functionality embodied by and included in the column select circuitry 358 illustrated in connection with FIG. 3. Multiplexers 460-1 and 460-2 can be configured to access, select, receive, coordinate, and combine the data values (e.g., bits) stored (cached) by a number of selected sense amplifiers and/or compute components in a portion (e.g., portion 462-1) of a sub-array, and to move (e.g., copy, transfer, and/or transport) the data values to a shared I/O line (e.g., shared I/O line 455-1). The multiplexers can be formed between the sense amplifiers and/or compute components and the shared I/O line.
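The sequential column-select behavior described above (eight columns of a portion moved one at a time onto a shared I/O line, with complementary digit/digit* signals giving 16 signals per eight-column portion) can be sketched as follows. The function name and representation are illustrative assumptions; a real digit line pair carries differential voltages, modeled here simply as a bit and its complement.

```python
# Hypothetical model of sequentially coupling the eight columns of one
# sub-array portion to a (complementary) shared I/O line.
def move_portion(portion_bits):
    """Place each column's complementary pair on the shared I/O line in
    sensed-column order, returning the resulting stream."""
    shared_io_stream = []
    for bit in portion_bits:                  # one column at a time
        shared_io_stream.append((bit, 1 - bit))  # digit and digit* values
    return shared_io_stream

stream = move_portion([1, 0, 1, 1, 0, 0, 1, 0])  # eight columns per portion
signals_moved = 2 * len(stream)                  # complementary pair per column
```

With eight columns and a complementary pair per column, 16 signals leave one portion, matching the count in the passage above.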
As such, a shared I/O line, as described herein, can be configured to couple a source location and a destination location between pairs of sub-arrays of a bank section to achieve improved data movement.

As described herein, the controller 140 can be coupled to a bank (e.g., 121) of a memory device (e.g., 120) to execute a command to move data in the bank, e.g., from a source location (e.g., long digit line sub-array 426-N-1) to a destination location (e.g., short digit line sub-array 425-0) after performing operations on the data, and vice versa. In various embodiments, a bank section can include a plurality of sub-arrays of memory cells in the bank section, e.g., sub-arrays 125-0 through 126-N-1 and 425-0 through 426-N-1. In various embodiments, the bank section can further include sensing circuitry (e.g., 150) coupled to the plurality of sub-arrays via a plurality of columns (e.g., 322-0, 422-0, and 422-1) of the memory cells. The sensing circuitry can include a sense amplifier and/or a compute component (e.g., 206 and 231, respectively, in FIG. 2 and corresponding reference numbers in FIGS. 3, 4A, and 4B) coupled to each of the columns and configured to implement the command to move the data.

In various embodiments, the bank section can further include shared I/O lines (e.g., 155, 355, 455-1, and 455-M) to couple the source location to the destination location to move the data.
Additionally, the controller 140 can be configured to direct the plurality of sub-arrays and the sensing circuitry to perform a data write operation on the moved data to selected memory cells at the destination location in the bank section (e.g., a particular row and/or column of a different selected sub-array).

In various embodiments, the apparatus can include a sensing component stripe (e.g., 124 and 424) including a number of sense amplifiers and/or compute components that corresponds to the number of columns of the memory cells, e.g., where each column of memory cells is configured to couple to a sense amplifier and/or a compute component. The number of sensing component stripes (e.g., 424-0 through 424-N-1) in the bank section can correspond to the number of sub-arrays (e.g., 425-0 through 426-N-1) in the bank section.

The number of sense amplifiers and/or compute components can be selectively (e.g., sequentially) coupled to a shared I/O line by column select circuitry (e.g., as shown at 358-1, 358-2, 359-1, and 359-2 in FIG. 3). The column select circuitry can be configured to selectively couple the shared I/O line to, for example, one or more of eight sense amplifiers and compute components in the source location (e.g., as in sub-array 325 in FIG. 3 and sub-array portions 462-1 through 462-M in FIGS. 4A and 4B). As such, the eight sense amplifiers and/or compute components in the source location can be sequentially coupled to the shared I/O line. According to some embodiments, the number of shared I/O lines formed in an array can correspond to the number of columns in the array divided by the number of sense amplifiers and/or compute components (e.g., eight) that can be selectively coupled to each of the shared I/O lines.
For example, when there are 16,384 columns in an array (e.g., a bank section), or in each of its sub-arrays, and one sense amplifier and/or compute component per column, 16,384 columns divided by eight yields 2048 shared I/O lines.

A source sensing component stripe (e.g., 124 and 424) can include a number of sense amplifiers and/or compute components that can be selected and configured to move (e.g., copy, transfer, and/or transport) data values (e.g., a number of bits) sensed from a row of the source location in parallel to the plurality of shared I/O lines. For example, in response to commands for sequential sensing through the column select circuitry, the data values stored in memory cells of selected columns of a row of the sub-array can be sensed by and stored (cached) in the sense amplifiers and/or compute components of the sensing component stripe until the number of data values (e.g., number of bits) reaches the number of data values stored in the row and/or a threshold (e.g., the number of sense amplifiers and/or compute components in the sensing component stripe), after which the data values can be moved via the plurality of shared I/O lines. In some embodiments, the threshold amount of data can correspond to the at least a thousand bit width of the plurality of shared I/O lines.

As described herein, the controller 140 can be configured to move the data values from a selected row and a selected column in the source location to a selected row and a selected column in the destination location via the shared I/O lines. In various embodiments, the data values can be moved in response to commands by the controller 140 coupled to a particular sub-array 425-0, ..., 426-N-1 and/or a particular sensing component stripe 424-0, ..., 424-N-1 of a respective sub-array. The data values in rows of a source (e.g., first) sub-array can be sequentially moved to corresponding rows of a destination (e.g., second) sub-array.
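The worked numbers in the example above can be checked with a line of arithmetic: one sense amplifier and/or compute component per column, eight of them selectively coupled to each shared I/O line. The comparison against a 64-bit path is taken from the earlier discussion of the at-least-a-thousand-bit-wide movement; the variable names are illustrative.

```python
# Shared I/O line count and parallel transfer width from the figures above.
columns_per_subarray = 16384
columns_per_shared_io = 8          # sense amps/compute components per line
shared_io_lines = columns_per_subarray // columns_per_shared_io  # 2048 lines

# Moving one data value per shared I/O line at the same point in time
# gives a 2048-bit-wide parallel transfer, versus e.g. a 64-bit data path.
parallel_bits = shared_io_lines
width_ratio_vs_64bit = parallel_bits // 64
```

This is the sense in which the plurality of shared I/O lines is "at least a thousand bits wide": 2048 bits per parallel movement, 32 times the width of a conventional 64-bit path.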
In various embodiments, each sub-array can include 128, 256, 512, or 1024 rows, among other numbers of rows, depending in part on whether the particular sub-array is a short digit line sub-array or a long digit line sub-array. For example, in some embodiments, the data values can be moved from a first row of the source sub-array to a corresponding first row of the destination sub-array, then from a second row of the source sub-array to a corresponding second row of the destination sub-array, then from a third row of the source sub-array to a corresponding third row of the destination sub-array, and so on until, for example, a last row of the source sub-array or a last row of the destination sub-array is reached. As described herein, the respective sub-arrays can be in the same partition or in different partitions.

In various embodiments, a selected row and a selected column in the source location (e.g., first sub-array) input to controller 140 can be different from a selected row and a selected column in the destination location (e.g., second sub-array). As such, the location of the data in the memory cells of the selected row and selected column of the source sub-array can be different from the location of the memory cells of the selected row and selected column of the destination sub-array to which the data is moved. For example, the source location can be a particular row and digit lines of portion 462-1 of long digit line sub-array 426-N-1 in FIG. 4A, and the destination can be a different row and digit lines of portion 462-M of short digit line sub-array 425-0 in FIG. 4B.

As described herein, a destination sensing component stripe (e.g., 124 and 424) can be the same as a source sensing component stripe.
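The sequential row-to-row movement described above (first row to corresponding first row, second to second, stopping when either sub-array runs out of rows, since short and long digit line sub-arrays may have different row counts) can be sketched as a simple loop. The function name and list-of-lists representation are illustrative assumptions.

```python
# Hypothetical model of sequential row movement between sub-arrays with
# possibly different numbers of rows (e.g., storage vs. cache sub-array).
def move_rows(source, destination):
    """Copy source rows to corresponding destination rows in order,
    stopping at the last row of whichever sub-array is shorter."""
    rows_moved = min(len(source), len(destination))
    for r in range(rows_moved):           # first row, then second, then third...
        destination[r] = list(source[r])
    return rows_moved

src = [[1, 0], [0, 1], [1, 1]]            # e.g. three rows of a source sub-array
dst = [[0, 0], [0, 0]]                    # e.g. two rows of a destination sub-array
moved = move_rows(src, dst)
```

Here movement stops after the destination's last row, matching the "until reaching the last row of the source sub-array or the last row of the destination sub-array" condition.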
For example, a number of sense amplifiers and/or compute components can be selected and configured (e.g., depending on commands and/or direction from the controller 140) to selectively move sensed data onto the coupled shared I/O lines and to selectively receive data to be moved to the destination location from one of a plurality of coupled shared I/O lines. Selection of the sense amplifiers and/or compute components in the destination sensing component strip may be performed using the column select circuitry described herein (e.g., 358-1, 358-2, 359-1, and 359-2 in FIG. 3) and/or the multiplexers described herein (e.g., 460-1 and 460-2 in FIGS. 4A and 4B).

In some embodiments, the controller 140 can be configured to write an amount of data (e.g., a number of data bits) selectively received by the plurality of selected sense amplifiers and/or compute components in the destination sensing component strip to a selected row and columns of the destination location in the destination sub-array. In some embodiments, the amount of data to be written corresponds to at least a one-kilobit width of the plurality of shared I/O lines. According to some embodiments, the destination sensing component strip may include a plurality of selected sense amplifiers and/or compute components configured to store received data values (e.g., bits) when the amount of received data values (e.g., the number of data bits) exceeds at least a one-kilobit width of the plurality of shared I/O lines. In various embodiments, the controller 140 can be configured to write the stored data values (e.g., the number of data bits) to the selected row and columns of the destination location as a plurality of subsets. In some embodiments, the amount of data values in at least a first subset of the written data may correspond to at least a one-kilobit width of the plurality of shared I/O lines.
According to some embodiments, the controller 140 may be configured to write the stored data values (e.g., the number of data bits) to the selected row and columns of the destination location as a single set, e.g., rather than as subsets of the data values.

As described herein, the controller 140 can be coupled to a bank (e.g., 121) of a memory device (e.g., 120) to execute commands for parallel partitioned data movement in the bank. A bank in the memory device can include a plurality of partitions (e.g., 128-0, 128-1, . . . , 128-M-1 in FIG. 1C), each partition including a respective plurality of sub-arrays (e.g., 125-0 and 125-1 and 126-0, . . . , 126-N-1 shown in FIGS. 1B and 1C and 425-0, . . . , 426-N-1 shown in FIGS. 4A and 4B).

The bank may include sensing circuitry (e.g., 150 in FIG. 1A and 250 in FIG. 2) on pitch with the sense lines of the plurality of sub-arrays and coupled to the plurality of sub-arrays via a plurality of sense lines (e.g., 205-1 and 205-2 and 305-1 and 305-2 in FIGS. 2 and 3 and corresponding reference numbers in FIGS. 4A and 4B). The sensing circuitry, including sense amplifiers and/or compute components (e.g., 206 and 231, respectively, in FIG. 2 and corresponding reference numbers in FIGS. 3, 4A, and 4B), can be coupled to the sense lines. The bank may also include a plurality of shared I/O lines (e.g., 355 in FIG. 3 and 455-1, 455-2, . . . , 455-M in FIGS. 4A and 4B) configured to be selectively coupled to the sensing circuitry of the plurality of sub-arrays to selectively enable movement of a plurality of data values between sub-arrays of a first partition (e.g., between short digit line sub-array 125-0 and long digit line sub-array 126-0 of partition 128-0 in FIG. 1C) in parallel with movement of a plurality of data values between sub-arrays of a second partition (e.g., between short digit line sub-array 125-2 and long digit line sub-array 126-2 (not shown) of partition 128-1). An isolation circuit (not shown) can be configured to selectively connect or disconnect portions of the shared I/O line(s) corresponding to the various partitions (e.g., the first partition 128-0 and the second partition 128-1).

A row may be selected (e.g., opened) by the controller 140 for a first sensing component strip via an appropriate select line, and the data values of the memory cells in the row may be sensed. After sensing, the first sensing component strip can be coupled to the shared I/O line, along with coupling a second sensing component strip to the same shared I/O line. The second sensing component strip can still be in a pre-charged state, e.g., ready to accept data. After the data from the first sensing component strip has been moved (e.g., driven) into the second sensing component strip, the second sensing component strip can be fired (e.g., latched) to store the data in its corresponding sense amplifiers and/or compute components. A row coupled to the second sensing component strip can be opened, e.g., after the data is latched, and the data resident in the sense amplifiers and/or compute components can be written to the destination location in that row.

In some embodiments, 2,048 shared I/O lines can be configured as a 2,048-bit-wide shared I/O line. According to some embodiments, the number of cycles for moving the data from a first row in the source location to a second row in the destination location may be determined by dividing the number of columns in the array intersected by a row of memory cells by the 2,048-bit width of the plurality of shared I/O lines.
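The open-sense-couple-fire-write sequence described above can be sketched as a small simulation. The `Strip` class and its attribute names are illustrative assumptions for the example, not the patent's circuit elements:

```python
class Strip:
    """Minimal stand-in for a sensing component strip."""
    def __init__(self, row):
        self.row = row        # memory cells of the currently open row
        self.latch = None     # sense-amplifier/compute-component latches

def move_row(src: Strip, dst: Strip):
    src.latch = list(src.row)   # open the source row and sense its data values
    driven = src.latch          # both strips coupled to the same shared I/O line
    dst.latch = list(driven)    # destination strip fires (latches) the driven data
    dst.row = list(dst.latch)   # open the destination row and write the data back
    return dst.row

moved = move_row(Strip([1, 0, 1, 1]), Strip([0, 0, 0, 0]))
```

Note that the destination strip holds the data in its latches before the destination row is opened, matching the ordering described above.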
For example, an array (e.g., a bank, a bank section, or a sub-array thereof) can have 16,384 columns (which can correspond to 16,384 data values in a row), and dividing 16,384 columns by the 2,048-bit width of the plurality of shared I/O lines intersecting the row yields eight cycles, each separate cycle being performed at substantially the same point in time (e.g., in parallel) for a respective 2,048-bit subset of the data in the row, such that all 16,384 data bits in the row are moved after the eight cycles are completed. For example, only one of a plurality of subsets of the sense amplifiers and/or compute components of the sensing circuitry at the source location (e.g., one of eight subsets, as shown in FIGS. 4A and 4B) can be coupled to its corresponding shared I/O line at any one time. In an embodiment with 16,384 shared I/O lines, all 16,384 data bits could be moved in parallel.

Alternatively or additionally, the bandwidth for moving the data from a first row in the source location to a second row in the destination location may be determined by dividing the number of columns in the array intersected by a row of memory cells by the 2,048-bit width of the plurality of shared I/O lines and multiplying the result by the clock rate of the controller. In some embodiments, the number of data values in a row of the array can be determined based on the plurality of sense (digit) lines in the array.

In some embodiments, the source location in the first sub-array and the destination location in the second sub-array may be in a single bank section of the memory device, e.g., as shown in FIGS. 1B through 1C and FIGS. 4A through 4B.
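The cycle-count determination above reduces to simple arithmetic. The sketch below is illustrative; the helper names and the derivation of a per-row transfer time from the controller clock rate are assumptions for the example:

```python
def transfer_cycles(columns: int, io_width: int = 2048) -> int:
    """Cycles to move one full row when io_width bits move per cycle."""
    return -(-columns // io_width)  # ceiling division

def row_move_seconds(columns: int, clock_hz: float, io_width: int = 2048) -> float:
    """Time to move one row, with one transfer per controller clock cycle."""
    return transfer_cycles(columns, io_width) / clock_hz

cycles = transfer_cycles(16_384)                    # 8 cycles of 2,048 bits each
single = transfer_cycles(16_384, io_width=16_384)   # 1 cycle with 16,384 lines
```

Widening the shared I/O lines to the full column count collapses the transfer to a single parallel cycle, as noted above.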
Alternatively or additionally, the source location in the first sub-array and the destination location in the second sub-array may be in separate banks and bank sections of the memory device coupled to a plurality of shared I/O lines and/or connection circuitry (e.g., as shown at 232-1 and 232-2 and described in connection with FIG. 2). As such, the data values may be moved (e.g., in parallel) from the first sensing component strip for the first sub-array to the second sensing component strip for the second sub-array via the plurality of shared I/O lines and/or the connection circuitry.

In various embodiments, the controller 140 can select (e.g., open) via an appropriate select line a first row of memory cells for the first sensing component strip (which corresponds to the source location) to sense data stored therein, couple the plurality of shared I/O lines to the first sensing component strip, and couple the second sensing component strip to the plurality of shared I/O lines, e.g., via the column select circuitry 358-1, 358-2, 359-1, and 359-2 and/or the multiplexers 460-1 and 460-2. As such, the data values can be moved in parallel from the first sensing component strip to the second sensing component strip via the plurality of shared I/O lines. The first sensing component strip can store (e.g., cache) the sensed data and the second sensing component strip can store (e.g., cache) the moved data.

The controller 140 can select (e.g., open) via an appropriate select line a second row of memory cells for the second sensing component strip (which corresponds to the destination location), e.g., via the column select circuitry 358-1, 358-2, 359-1, and 359-2 and/or the multiplexers 460-1 and 460-2. The controller 140 can then direct the data moved to the second sensing component strip to the destination location in the second row of memory cells. The shared I/O lines can be shared between some or all of the sensing component strips.
In various embodiments, only one sensing component strip, or one pair of sensing component strips (e.g., coupling a source location and a destination location), can communicate via the shared I/O lines at any given time. As described herein, a source row of the source sub-array (e.g., any one of 512 rows) may be different from (e.g., need not match) a destination row of the destination sub-array, where in various embodiments the source and destination sub-arrays can be in the same or different banks and bank sections of memory cells. Moreover, a selected source column (e.g., any one of eight columns) configured to couple to a particular shared I/O line can be different from (e.g., need not match) a selected destination column of the destination sub-array.

As described herein, I/O lines 455 can be shared by the sensing circuitry 424 of a second subset (e.g., long digit line sub-arrays 426) and a first subset (e.g., short digit line sub-arrays 425). The shared I/O lines can be configured to be selectively coupled to the sensing circuitry of the first subset to enable data values stored in a selected row of a selected sub-array in the second subset to be moved to the sensing circuitry of a selected sub-array in the first subset.

The controller 140 can be configured to direct performance of a plurality of sequential operations on the data values in the sensing circuitry of the selected sub-array in the first subset. In some embodiments, the controller can be configured to direct movement of the data values from the sensing circuitry 450 of the selected sub-array 425 of the first subset to selected memory cells in a selected row of the selected sub-array before the plurality of sequential operations are performed on the data values by the sensing circuitry. For example, the data values may be moved from the sensing circuitry 450 to be stored in memory cells in the short digit line sub-array 425 before the plurality of sequential operations have been performed on the data values.
In some embodiments, the controller can be configured to direct movement of the data values from the sensing circuitry 450 of the selected sub-array 425 of the first subset to selected memory cells in a selected row of the selected sub-array after the plurality of sequential operations are performed on the data values by the sensing circuitry. For example, the data values may be moved from the sensing circuitry 450 to be stored in memory cells in the short digit line sub-array 425 after the plurality of sequential operations have been performed on the data values in the sensing circuitry 450. This may be the first time the data values are stored in memory cells in the short digit line sub-array 425, or the data values on which the plurality of sequential operations have been performed may be stored by overwriting data values previously stored in those memory cells.

The controller 140 can be configured to direct movement of the data values on which the plurality of sequential operations have been performed from the sensing circuitry 450 of the selected sub-array in the first subset (e.g., a selected short digit line sub-array 425) to a selected sub-array in the second subset (e.g., a selected long digit line sub-array 426) via the shared I/O line 455. The plurality of shared I/O lines 455-1, 455-2, . . . , 455-M can be configured to be selectively coupled to the sensing circuitry 450 of the plurality of sub-arrays to selectively enable a plurality of data values stored in a row of the second subset to be moved in parallel to a corresponding plurality of sense amplifiers and/or compute components of the selectively coupled sensing circuitry in the first subset. In some embodiments, the plurality of shared I/O lines 455-1, 455-2, . . . , 455-M can be configured to be selectively coupled to the sensing circuitry 450 of the plurality of sub-arrays to selectively enable the plurality of data values to be moved in parallel from a corresponding plurality of sense amplifiers that sense the plurality of data values stored in a row of the second subset to the selectively coupled sensing circuitry of the first subset. In some embodiments, the plurality of sense amplifiers can be included in the sensing circuitry of the second subset without coupled compute components. In some embodiments, the number of shared I/O lines may correspond to the bit width of the shared I/O lines.

The sensing circuitry 450 described herein can be included in a plurality of sensing component strips 424-0, . . . , 424-N-1, and each sensing component strip can be physically associated with a respective sub-array 425-0, . . . , 426-N-1 of the first subset and the second subset of the plurality of sub-arrays in the bank. The number of sensing component strips in a bank of the memory device may correspond to the number of the plurality of sub-arrays in the first subset and the second subset in the bank. Each sensing component strip can be coupled to a respective sub-array of the first subset and the second subset of the plurality of sub-arrays, and the I/O lines can be selectively shared by the sensing circuitry 450 of the plurality of sensing component strips.

As shown for the sensing component strip 424-0 associated with the short digit line sub-array 425-0, a sensing component strip can be configured to include a number of sense amplifiers 406 and compute components 431 corresponding to the number of the plurality of columns 422 of memory cells in the first subset configured for in-memory operations.
The number of sense amplifiers and compute components in the sensing component strip 424-0 can be selectively coupled to a shared I/O line, e.g., each of the respective sense amplifiers and/or compute components can be selectively coupled to one of the shared I/O lines 455-1, 455-2, . . . , 455-M. As shown for the sensing component strip 424-N-1 associated with the long digit line sub-array 426-N-1, a sensing component strip can be configured to include a number of sense amplifiers 406 (e.g., without compute components) corresponding to the number of the plurality of columns 422 of memory cells in the second subset configured for data storage. The number of sense amplifiers in the sensing component strip 424-N-1 can be selectively coupled to a shared I/O line, e.g., each of the respective sense amplifiers can be selectively coupled to one of the shared I/O lines 455-1, 455-2, . . . , 455-M.

In some embodiments, the first subset of the plurality of sub-arrays (e.g., short digit line sub-arrays 425) can be a number of sub-arrays of PIM DRAM cells. By comparison, in some embodiments, the second subset of the plurality of sub-arrays (e.g., long digit line sub-arrays 426) can be or can include a number of sub-arrays of memory cells other than PIM DRAM cells. For example, as previously described, the memory cells of the second subset can be associated with sensing circuitry that is not formed with compute components, such that processing functionality is reduced or eliminated. Alternatively or additionally, one or more types of memory cells other than DRAM may be utilized in a long digit line sub-array for storing data.

In various embodiments, as shown in FIGS. 1B and 1C, the number of sub-arrays in the first subset may correspond to the number of sub-arrays in the second subset, e.g., in a 1:1 ratio. For example, as shown in FIG. 1C, each of the number of sub-arrays in the first subset can be physically associated with a respective sub-array in the second subset. Alternatively or additionally, as shown in FIG. 1B, the number of sub-arrays in the first subset may be physically associated with each other as a first block and the number of sub-arrays in the second subset may likewise be physically associated with each other as a second block. These alternative configurations can vary between banks and/or between partitions of a bank. In some embodiments, each of the number of sub-arrays in the first subset may correspond to a respective plurality of the sub-arrays in the second subset, e.g., where the sub-arrays in the first subset are configured relative to the plurality of sub-arrays in the second subset in a 1:2, 1:4, and/or 1:8 ratio. For example, each of the number of sub-arrays in the first subset may be physically associated with a respective plurality of sub-arrays in the second subset, e.g., one sub-array in the first subset may be adjacent to four sub-arrays in the second subset, which may be followed by another sub-array in the first subset adjacent to another four sub-arrays in the second subset, and so on.

The memory device 120 described herein can include a first subset of a plurality of sub-arrays, a second subset of the plurality of sub-arrays, and a plurality of partitions (e.g., 128-0, 128-1, . . . , 128-M-1 in FIG. 1C), where in some embodiments each of the plurality of partitions may include at least one sub-array from the respective first subset 125 and at least one sub-array from the respective second subset 126. The memory device 120 can include an I/O line 155 shared by the partitions. The shared I/O line 155 can include a plurality of portions, e.g., the plurality of portions can correspond to the lengths of the partitions 128-0, 128-1, . . . , 128-M-1.
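The 1:1 through 1:8 physical associations described above can be enumerated with a short helper. The labels below are illustrative only, not the patent's reference numbers:

```python
def subarray_layout(n_compute: int, ratio: int):
    """Interleave each first-subset (short digit line) sub-array with `ratio`
    adjacent second-subset (long digit line) sub-arrays."""
    layout = []
    for i in range(n_compute):
        layout.append(f"short-{i}")
        layout.extend(f"long-{i}.{j}" for j in range(ratio))
    return layout

# A 1:4 ratio: one short digit line sub-array followed by four long digit line
# sub-arrays, repeated for each compute sub-array.
layout_1_to_4 = subarray_layout(2, 4)
```

A 1:1 ratio yields strictly alternating sub-arrays, while 1:2, 1:4, or 1:8 ratios devote more of each partition to storage.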
The isolation circuit can be configured to selectively connect a first portion of the plurality of portions of the shared I/O line to a second portion of the shared I/O line, where the first portion corresponds to a first partition (e.g., 128-0) of the plurality of partitions and the second portion corresponds to a second partition (e.g., 128-1) of the plurality of partitions.

In some embodiments, the resultant data values on which the plurality of sequential operations have been performed in a short digit line cache sub-array can be sent back to the same long digit line storage sub-array from which the data values were originally sent, and/or the resultant data values can be sent for storage to a long digit line sub-array different from the storage sub-array from which the data values were originally sent. Thus, the resultant data values on which the plurality of sequential operations have been performed can be sent for storage to more than one long digit line sub-array. Alternatively or additionally, original data values may be obtained from at least one of the number of bank registers 158 and/or vector registers 159 described herein, and/or resultant data values may be sent to at least one of the number of bank registers 158 and/or vector registers 159.

As described herein, the controller 140 can be coupled to a bank (e.g., 121) of a memory device (e.g., 120) to execute commands to perform a plurality of sequential operations. I/O lines (e.g., 455-1, 455-2, . . . , 455-M in FIGS. 4A and 4B) may be shared by the sensing circuitry 450 of the second subset 426 and the first subset 425. The shared I/O lines can be configured to be selectively coupled to the sensing circuitry of the first subset to enable a number of data values stored in the second subset to be moved to the sensing circuitry of a selected sub-array of the first subset.
As described herein, the controller 140 is configured to direct performance of a plurality of sequential in-memory operations on the number of data values in the sensing circuitry 450 of a selected sub-array (e.g., 425-0) in the first subset.

In some embodiments, the controller 140 can be configured to direct movement of the number of data values from the sensing circuitry (e.g., 450-0, 450-1, . . . , 450-X-1) of the selected sub-array (e.g., 425-0) in the first subset to a number of selected rows 119 of the selected sub-array before the plurality of sequential operations are performed on the data values by the sensing circuitry. Alternatively or additionally, the controller 140 can be configured to direct movement of the number of data values from the sensing circuitry of the selected sub-array to a number of selected rows of the selected sub-array after the plurality of sequential operations are performed on the data values by the sensing circuitry.

In some embodiments, the controller 140 can be configured to direct movement of the data values resulting from performance of the plurality of sequential operations from the sensing circuitry (e.g., 450-0, 450-1, . . . , 450-X-1) of the selected sub-array (e.g., 425-0) in the first subset to a selected sub-array (e.g., 426-N-1) in the second subset via a shared I/O line (e.g., 455-1). The plurality of shared I/O lines (e.g., 455-1, 455-2, . . . , 455-M) can be configured to be selectively coupled to the sensing circuitry of the plurality of sub-arrays (e.g., the sensing circuitry 450-0, 450-1, . . . , 450-X-1 of sub-arrays 425-0 and 426-N-1) to selectively enable a plurality of data values stored in the second subset to be moved in parallel to a corresponding plurality of sense amplifiers and/or compute components of the selectively coupled sensing circuitry in the first subset.
The plurality of shared I/O lines can be configured to be selectively coupled to the sensing circuitry of the plurality of sub-arrays to selectively enable the plurality of data values to be moved from the plurality of sense amplifiers (e.g., 406-0, 406-1, . . . , 406-X-1) that sense the plurality of data values stored in the second subset 426 to the selectively coupled sensing circuitry (e.g., including sense amplifiers 406 and compute components 431) of the first subset 425. The plurality of sense amplifiers (e.g., 406-0, 406-1, . . . , 406-X-1) may be included in the sensing circuitry (e.g., 450-0, 450-1, . . . , 450-X-1) of the second subset 426. In some embodiments, the sensing circuitry of the second subset 426 may not include the compute components 431 included in the sensing circuitry of the first subset 425.

In some embodiments, the memory device 120 can include a number (e.g., one or more) of bank registers 158 that can be selectively coupled to the controller 140. As described herein, the controller 140 can be configured to direct performance of a plurality of sequential in-memory operations on the number of data values in the sensing circuitry of a selected sub-array in the first subset and to direct movement of the data values resulting from performance of the plurality of sequential operations from the sensing circuitry to a selected destination. For example, the selected destination may be a selected row 119 of a selected sub-array of the first subset 425, a selected row 118 of a selected sub-array of the second subset 426, and/or a selected row in a selected bank register 158 (not shown).

In some embodiments, the memory device 120 can include an I/O line (e.g., as shown at 155 and described in connection with FIG. 1C) shared by the sensing circuitry of a selected sub-array of the first subset (e.g., sensing component strip 424-0 in FIGS. 4A and 4B), the sensing circuitry of a selected sub-array of the second subset (e.g., sensing component strip 424-N-1 in FIGS. 4A and 4B), and a selected bank register 158. The shared I/O lines can be configured to be selectively coupled to the sensing circuitry of the first subset to enable a number of resultant data values stored in the first subset 425 to be moved to a selected destination (e.g., a selected row 118 of a selected sub-array of the second subset 426 and/or a selected row of a selected bank register 158).

In some embodiments, the memory device 120 can include a number (e.g., one or more) of vector registers 159 that can be selectively coupled to the controller 140. Thus, as shown in and described in connection with FIG. 1C, the I/O lines shared by the sensing circuitry of the selected sub-array of the first subset, the sensing circuitry of the selected sub-array of the second subset, and the selected bank registers can be further shared by selected vector registers 159. In some embodiments, the number of resultant data values stored in the first subset can be moved to a selected destination that, in addition to a selected row 118 of a selected sub-array of the second subset 426, may include a selected row in a selected bank register 158 and/or a selected row in a vector register 159 (not shown).

In some embodiments, control logic (e.g., connected to the controller 140 and/or logic circuitry 170, and/or a portion of the controller 140 and/or logic circuitry 170) may be in the form of a microcode engine (not shown) that fetches and executes machine instructions (e.g., microcode instructions) from the instruction cache memory 171, the array 130, and/or the host 110 in FIGS. 1A and 1B. The microcode engine may also be in the form of a number of microcode engines and/or ALU circuitry.
The microcode engine can be configured to execute a set of instructions to direct movement of a number of data values from a corresponding number of memory cells of a source row selected from a first subset of the plurality of sub-arrays (e.g., row 119 in subset 425) or of a source row in a second subset (e.g., row 118 in subset 426) to a corresponding number of memory cells in a selected row of a selected bank register 158 and/or a selected row of a selected vector register 159.

The microcode engine can be further configured to execute a set of instructions to selectively direct storage of the data values in a selected sub-array of the second subset 426, a selected row of a selected bank register 158, and/or a selected row of a vector register 159. The storage of the respective data values may be selectively offset by a number of memory cells in the selected destination relative to the storage of the corresponding data values in the memory cells of the source row 119 of the first subset 425. In some embodiments, a first number of memory cells in the selected source row 119 in the first subset may be different from a second number of memory cells in at least one of a source row 118 in the second subset 426, a selected row in a selected bank register 158, and/or a selected row in a vector register 159.

In some embodiments, the memory device 120 can include I/O lines (e.g., 455-1, 455-2, . . . , 455-M) shared by the sensing circuitry 450 of a selected sub-array of the first subset (e.g., 425-0) and a selected sub-array of the second subset (e.g., 426-N-1), a selected bank register 158, and a selected vector register 159.
The microcode engine can be configured to execute a set of instructions to direct the shared I/O lines to be selectively coupled to the sensing circuitry of the first subset and the second subset to selectively enable a number of resultant data values stored in the first subset 425 and/or the second subset 426 to be moved to a selected destination. In various embodiments, the selected destination may be a selected row of a selected bank register 158 and/or a selected row of a selected vector register 159.

In some embodiments, the memory device 120 may include connection circuitry (e.g., as shown at 232-1 and 232-2 and described in connection with FIG. 2) in addition to or in place of the shared I/O lines described herein. The connection circuitry can be configured to connect the sensing circuitry (e.g., as shown at 217-1 and 217-2) of particular columns (e.g., columns 422-0, 422-1, . . . , 422-X-1 in FIGS. 4A and 4B) of a number of sub-arrays in the second subset (e.g., long digit line sub-arrays 126-0, 126-1, . . . , 126-N-1 as shown in and described in connection with FIG. 1C) to a number of rows in corresponding columns of a first sub-array (e.g., short digit line sub-array 125-0) in the first subset. The microcode engine can be configured to execute a set of instructions to direct the connection circuitry to move a plurality of data values from the number of sub-arrays in the second subset to a corresponding plurality of selected rows 119 and corresponding columns in the first sub-array of the first subset for performance of the plurality of sequential operations. The plurality of selected rows and corresponding columns in the first sub-array of the first subset may be configured (e.g., opened) to receive the plurality of data values.

The controller 140 can direct performance of the plurality of sequential operations on the plurality of data values in the sensing circuitry 250 of the first sub-array (e.g., 125-0) in the first subset.
For example, in some embodiments, the memory device 120 can be configured to move (e.g., sequentially or in parallel) a plurality of data values from one or more of the long digit line sub-arrays 126-0, 126-1, . . . , 126-N-1 to selected rows 119 of a selected short digit line sub-array 125-0 to enable the plurality of sequential operations to be performed on the data values.

The connection circuitry 232 can be further configured to be selectively coupled to the sensing circuitry (e.g., sense amplifier 206 and compute component 231) of the first subset 425 and the sensing circuitry (e.g., sense amplifier 206) of the second subset 426 to selectively enable a number of resultant data values stored in the first subset 425 and the second subset 426 to be moved to a selected destination. As with the shared I/O lines, the selected destination may be a selected row of a selected bank register 158 and/or a selected row of a selected vector register 159.

As such, after the plurality of sequential operations are performed on first data values by the sensing circuitry of a first sub-array, a directed data movement via a first portion of the shared I/O line (e.g., corresponding to partition 128-0) may be from the first sub-array in the first subset (e.g., short digit line sub-array 125-0) to a third sub-array in the second subset (e.g., long digit line sub-array 126-1). In some embodiments, after the plurality of sequential operations are performed on second data values by the sensing circuitry of a second sub-array, a directed data movement via a second portion of the shared I/O line (e.g., corresponding to partition 128-1) may be from the second sub-array in the first subset (e.g., short digit line sub-array 125-2) to a fourth sub-array in the second subset (e.g., long digit line sub-array 126-2 (not shown)).
For example, the directed data movements can be performed within a first partition (e.g., 128-0) and/or, e.g., in parallel, within a second partition (e.g., 128-1).

In various embodiments, the controller 140 can be configured to selectively direct an isolation circuit (not shown) to connect the first portion (e.g., corresponding to partition 128-0) to the second portion (e.g., corresponding to any of partitions 128-1, . . . , 128-M-1) during a directed data movement. A directed data movement via the connected first and second portions of the shared I/O line may be from a sub-array in the second subset of the second portion (e.g., long digit line sub-array 126-N-1) to a sub-array in the first subset of the first portion (e.g., short digit line sub-array 125-0). In various embodiments, the controller 140 can also be configured to selectively direct the isolation circuit to connect the first portion to the second portion during a directed data movement after performance of the plurality of sequential operations on the data values, such that a directed data movement via the connected first and second portions of the shared I/O line may be from a sub-array in the first subset of the first portion (e.g., short digit line sub-array 125-0) to a sub-array in the second subset of the second portion (e.g., the long digit line sub-array 126-N-1 from which the data values were originally sent) and/or to any other long digit line sub-array in partitions 128-1, . . . , 128-M-1.

In various embodiments, the number of sub-arrays may differ between multiple partitions in a bank and/or between banks.
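The selective connection and disconnection of shared I/O line portions by the isolation circuit can be modeled as a row of switches between adjacent partitions. This sketch is an illustrative assumption for the example, not the disclosed circuit:

```python
class SharedIOLine:
    """Portions of one shared I/O line, one per partition, joined by
    isolation switches between adjacent portions."""
    def __init__(self, n_partitions: int):
        # switch[i] connects portion i to portion i + 1; all start isolated
        self.switch = [False] * (n_partitions - 1)

    def connect(self, i: int):
        self.switch[i] = True

    def disconnect(self, i: int):
        self.switch[i] = False

    def coupled(self, a: int, b: int) -> bool:
        """True if portions a..b form one continuous conductive path."""
        lo, hi = sorted((a, b))
        return all(self.switch[lo:hi])

io = SharedIOLine(4)
io.connect(0)                      # partitions 0 and 1 now share one path
isolated = not io.coupled(2, 3)    # 2 and 3 stay isolated for a parallel move
```

With the switches open, each portion can carry an independent movement in parallel; closing a run of switches lets data cross from any partition to any other, including non-adjacent ones.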
The ratio of long digit line sub-arrays to short digit line sub-arrays, or whether either type of sub-array is present in a partition before the partitions are connected, may also differ between the multiple partitions in a bank and/or between banks.

As described herein, a sensing component stripe (e.g., 424-N-1) can include a number of sense amplifiers configured to move an amount of data from a row (e.g., one or more of rows 118) of a first sub-array of the second subset (e.g., long digit line sub-array 426-N-1) in parallel to a plurality of shared I/O lines (e.g., 455-1, 455-2, . . ., 455-M), where the amount of data corresponds to at least a thousand bit width of the plurality of shared I/O lines. A sensing component stripe (e.g., 424-0) associated with a first sub-array of the first subset (e.g., short digit line sub-array 425-0) can include a number of sense amplifiers 406 and compute components 431 configured to receive (e.g., cache) the amount of data moved in parallel via the plurality of shared I/O lines from the row of the first sub-array of the second subset. The controller 140 can be configured to direct at least one of the sensing component stripes associated with the short digit line sub-arrays to perform the plurality of sequential operations on at least one of the received data values.

Although, for clarity, the description herein refers to a small number of portions and partitions, the apparatuses and methods presented herein can be scaled to any number of shared I/O lines, portions, partitions, sub-arrays, and/or rows.
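As a rough illustration of the parallel width involved, the following back-of-the-envelope sketch uses assumed numbers (2,048 shared I/O lines and a 16,384-column row; neither figure comes from the specification) to show how a full row can move in a handful of parallel transfer steps:

```python
# Assumed, illustrative numbers only: neither value is stated in the text.
shared_io_lines = 2048          # parallel width, at least a thousand bits
row_width_bits = 16384          # columns (sense amplifiers) in one row
transfers = row_width_bits // shared_io_lines
print(transfers)                # 8 parallel transfer steps for the row
```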
For example, controller 140 can send signals to direct the isolation circuitry to connect and disconnect corresponding portions of the shared I/O lines, e.g., to connect the first sub-array in a bank with the last sub-array in the bank, to enable data to be moved from a sub-array in any one of the partitions to a sub-array in any other partition; the partitions may be adjacent and/or separated by a number of other partitions. Additionally, although two disconnected portions of a shared I/O line are described for achieving parallel data movement within two respective pairs of partitions, the controller 140 can send signals to direct the isolation circuitry to connect and disconnect any number of portions of the shared I/O lines to achieve parallel data movement within any number of corresponding pairs of partitions. Moreover, data can be selectively moved in parallel in the first direction and/or the second direction in respective portions of the shared I/O lines.

As described herein, a method is provided for operating memory device 120 to perform in-memory operations, e.g., by a processing resource executing non-transitory instructions. The method can include performing a plurality of sequential in-memory operations on a plurality of data values. The number of the plurality of data values may correspond to the number of sense amplifiers 406 and/or compute components 431 in a first sensing component stripe (e.g., 424-0), the sense amplifiers 406 and/or compute components 431 being coupled to receive the plurality of data values moved from a selected second sub-array (e.g., 426-0) to a selected first sub-array (e.g., 425-0) and/or to operate on the plurality of data values.
The plurality of sequential operations may be performed after sensing the plurality of data values in the selected second sub-array and moving the plurality of sensed data values to the first sensing component stripe coupled to the selected first sub-array.

For example, data values are sensed in selected memory cells of a selected first row (e.g., one or more of rows 118) of a selected second sub-array (e.g., long digit line sub-array 426-N-1) in bank 121 of the memory device. The sensed data values can be moved to a first sensing component stripe (e.g., 424-0) coupled to a selected first sub-array (e.g., short digit line sub-array 425-0) in the bank. In some embodiments, the selected first sub-array may be configured with up to half as many memory cells in a column of the selected first sub-array as in a column of the selected second sub-array. A plurality of sequential operations may be performed on the sensed data values in the first sensing component stripe coupled to the selected first sub-array.
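The sense, move, and operate sequence described above can be sketched as a simple software model. Everything below is a hypothetical illustration (the class names, methods, and sizes are invented, not an API from the specification); a row is modeled as a list of bits and a sensing component stripe as a per-column cache that operates in place.

```python
# Illustrative model only: Subarray and SensingStripe are invented names.

class Subarray:
    def __init__(self, rows, cols):
        # Each row holds one data value (bit) per column/sense amplifier.
        self.rows = [[0] * cols for _ in range(rows)]

class SensingStripe:
    """Caches one row of data values and operates on them in place."""
    def __init__(self, cols):
        self.data = [0] * cols

    def load(self, subarray, row):
        # Sense a selected row (data values move in parallel).
        self.data = list(subarray.rows[row])

    def apply(self, op):
        # One in-memory operation performed on every cached data value.
        self.data = [op(v) for v in self.data]

    def store(self, subarray, row):
        # Write result data values back to a selected row.
        subarray.rows[row] = list(self.data)

# The long digit line sub-array holds the operands; the stripe coupled to
# the short digit line sub-array performs the sequential operations.
long_sub = Subarray(rows=512, cols=8)
short_stripe = SensingStripe(cols=8)

long_sub.rows[3] = [1, 0, 1, 1, 0, 0, 1, 0]   # source row, as sensed
short_stripe.load(long_sub, 3)                 # move to the stripe
short_stripe.apply(lambda v: v ^ 1)            # first sequential op: invert
short_stripe.apply(lambda v: v ^ 1)            # second op: invert again
short_stripe.store(long_sub, 7)                # return results to a row
print(long_sub.rows[7])                        # [1, 0, 1, 1, 0, 0, 1, 0]
```

Inverting twice restores the original values, so the stored row matches the sensed row, which makes the round trip easy to check.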
As described herein, the result data values on which the plurality of sequential operations has been performed may be moved from the first sensing component stripe (e.g., 424-0) to memory cells in a selected row of a selected sub-array (e.g., of the short digit line sub-arrays 425 and/or the long digit line sub-arrays 426) and/or to memory cells in a selected row of a register (e.g., bank registers 158 and/or vector registers 159).

In various embodiments, the method can include storing the plurality of sensed data values in a second sensing component stripe (e.g., 424-N-1) coupled to the selected second (e.g., long digit line) sub-array 426-N-1 of the bank and moving the plurality of sensed data values from the second sensing component stripe to the first sensing component stripe coupled to the selected first sub-array.

A first data value resulting from performance of the plurality of sequential operations may be moved from the first sensing component stripe (e.g., 424-0) of short digit line sub-array 425-0 to a selected first row 119 of the selected first sub-array. The resulting first data value can be stored in the selected first row 119 of the selected first sub-array (e.g., short digit line sub-array 425-0).

In some embodiments, the method can further include performing another operation, by the first sensing component stripe coupled to the selected first sub-array, on the resulting first data value moved from the selected first row. A second data value resulting from performance of the other operation may be stored in a selected second row of the selected first sub-array. In some embodiments, the method can further include moving the resulting first data value from the selected first row of the selected first sub-array to the selected second row of the selected first sub-array.
Following movement of the resulting first data value to the selected second row of the selected first sub-array, the first sensing component stripe coupled to the selected first sub-array (e.g., sensing component stripe 424-0 of short digit line sub-array 425-0) can perform another operation on the resulting first data value.

Alternatively or additionally, the method can further include performing the plurality of sequential operations on the plurality of sensed data values in the first sensing component stripe coupled to the selected first sub-array (e.g., sensing component stripe 424-0 of short digit line sub-array 425-0). Data values resulting from performance of the plurality of sequential operations may be moved from the first sensing component stripe to a selected row of the second sub-array (e.g., row 118 in long digit line sub-array 426-N-1).

In various embodiments, the result data values on which the plurality of sequential operations has been performed are selectively movable to a number of locations, where moving the result data values to one location does not exclude moving them to one or more other locations. For example, a result data value can be moved from the sensing component stripe (e.g., 424-0) to selected memory cells in a selected first row of the selected first sub-array in the same bank of the memory device. For example, the result data value on which the plurality of sequential operations has been performed can be returned to the memory cells from which the data value was originally sent. The result data value can be moved from the sensing component stripe to selected memory cells in a selected second row of the selected second sub-array in the same bank. For example, the result data values can be returned to memory cells in a different row of the sub-array from which the data values were sent.
The result data value can be moved from the sensing component stripe to selected memory cells in a selected row of a selected second sub-array in the same bank. For example, the result data values can be returned to memory cells in a row of a sub-array different from the sub-array from which the data values were sent.

The result data value can be moved from the sensing component stripe to selected memory cells in a plurality of selected rows of the selected second sub-array in the same bank. For example, the result data value can be returned to memory cells in each of more than one row of the sub-array from which the data values were sent. The result data value can be moved from the sensing component stripe to selected memory cells in each of a plurality of selected rows, where each selected row is in a respective one of a plurality of sub-arrays in the same bank. For example, the result data values can be returned to memory cells in each of more than one row, where each row is in a different sub-array of the bank from which the data values were sent.

In some embodiments, the result data values can be moved from the sensing component stripe to selected memory cells in a selected row of a selected sub-array in a different bank. For example, the result data values on which the plurality of sequential operations has been performed may be returned to memory cells in a sub-array in a different bank of the memory device from the bank from which the data values were sent. Although data value movement via the shared I/O lines may be within the same bank, the connection circuits 232-1 and 232-2 described in connection with FIG. 2 can be utilized for data movement between banks.

As described herein, in some embodiments, the method can include storing the sensed data values in a second sensing component stripe (e.g., 424-N-1) coupled to the selected second sub-array (e.g., 426-N-1).
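The fan-out of destinations described above (the originating row, another row, another sub-array, another bank, or several of these at once) can be modeled with a short sketch. The dictionary-based bank layout and the `broadcast_result` helper are hypothetical illustrations, not structures from the specification.

```python
# Illustrative sketch: a result data value may be written to several
# destinations, and writing to one does not exclude writing to the others.

def broadcast_result(result, destinations, banks):
    # destinations: iterable of (bank, sub-array, row) triples.
    for bank, sub, row in destinations:
        banks[bank][sub][row] = list(result)

banks = {
    "bank0": {"short_425_0": {}, "long_426_N1": {}},
    "bank1": {"long_426_N1": {}},
}

# Return the result to the row the operands came from, to a row of the
# compute (short digit line) sub-array, and to a different bank.
broadcast_result(
    result=[1, 1, 0, 1],
    destinations=[
        ("bank0", "long_426_N1", 118),   # originating row
        ("bank0", "short_425_0", 119),   # row in the compute sub-array
        ("bank1", "long_426_N1", 118),   # same row index, different bank
    ],
    banks=banks,
)
print(banks["bank1"]["long_426_N1"][118])   # [1, 1, 0, 1]
```

Each destination receives its own copy of the result, matching the text's point that the movements are independent of one another.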
The sensed data values are movable from the second sensing component stripe to the first sensing component stripe (e.g., 424-0) coupled to the selected first sub-array (e.g., 425-0). The sensed data values may be stored in memory cells in a selected second row (e.g., one or more of rows 119) of the selected first sub-array. In various embodiments, the sensed data values may be stored in the selected first sub-array prior to performance of an operation on the sensed data values and/or subsequent to performance of an operation on the sensed data values.

The method can include performing a plurality of operations (e.g., a sequence of operations) on the sensed data values in the sensing component stripe coupled to the selected first sub-array. For example, a number of data values can be moved from a row of a long digit line sub-array (e.g., 426-N-1) to a short digit line sub-array (e.g., 425-0) such that the sequence of operations can be performed, and the result of each operation in the sequence returned to the long digit line sub-array, with improved speed, rate, and/or efficiency. Each operation can be performed at an improved speed, rate, and/or efficiency in the sensing component stripe coupled to the short digit line sub-array, and the advantage can increase proportionally with each additional operation in the sequence of operations. The result data values on which the plurality of operations has been performed may be moved from the sensing component stripe to selected memory cells in a selected row of a number of locations and/or in a selected row of a register, as described herein.

In some embodiments, the method can include selectively coupling the first sensing component stripe (e.g., 424-0) coupled to the selected first sub-array (e.g., 425-0) and the second sensing component stripe (e.g., 424-N-1) coupled to the selected second sub-array (e.g., 426-N-1) via an I/O line (e.g., 455-1) shared by the first and second sensing component stripes.
The method can include moving the plurality of sensed data values from the second sensing component stripe coupled to the selected second sub-array to the first sensing component stripe coupled to the selected first sub-array via the shared I/O line. In various embodiments, the method can include performing the plurality of sequential operations by the first sensing component stripe without moving the results of the respective operations to the second sensing component stripe or to memory cells of the second sub-array prior to completion of a last one of the plurality of sequential operations. The method can include moving a data value resulting from completion of the last one of the plurality of sequential operations from the first sensing component stripe (e.g., 424-0), via a shared I/O line (e.g., which can be different from a previously used shared I/O line), to the second sensing component stripe (e.g., 424-N-1) or to memory cells of a second sub-array (e.g., one or more sub-arrays selected from 426-0, . . ., 426-N-1). Data values resulting from completion of the plurality of sequential operations may be written to at least one selected memory cell of at least one selected row 118 of the selected first sub-array.

Although example embodiments including various combinations and configurations of controllers, short digit line sub-arrays, long digit line sub-arrays, bank registers, vector registers, sensing circuitry, sense amplifiers, compute components, sensing component stripes, shared I/O lines, column select circuitry, multiplexers, connection circuitry, etc., have been illustrated and described herein, embodiments of the present disclosure are not limited to those combinations explicitly recited herein.
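The behavior described above, where intermediate results stay in the sensing component stripe and only the value produced by the last operation in the sequence is moved back, can be sketched as follows. The `run_sequence` helper and `write_log` are hypothetical stand-ins; the write-back callback models the single movement over the shared I/O line.

```python
# Hedged sketch of "no intermediate write-back": results of earlier
# operations never leave the stripe, only the final value is moved.

def run_sequence(value, ops, write_back):
    for op in ops:
        value = op(value)        # intermediate results stay in the stripe
    write_back(value)            # single movement after the last operation
    return value

write_log = []                   # stand-in for shared I/O line traffic
final = run_sequence(
    value=5,
    ops=[lambda v: v + 1, lambda v: v * 2, lambda v: v - 3],
    write_back=write_log.append,
)
print(final)       # 9
print(write_log)   # [9]  (one write-back, not one per operation)
```

Three operations run in sequence, yet the log records exactly one movement, which is the efficiency point the text makes about chaining operations in the stripe.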
Other combinations and configurations of the controllers, short digit line sub-arrays, long digit line sub-arrays, bank registers, vector registers, sensing circuitry, sense amplifiers, compute components, sensing component stripes, shared I/O lines, column select circuitry, multiplexers, connection circuitry, and the like disclosed herein are expressly included within the scope of the present disclosure.

Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
The present disclosure relates to the field of integrated circuit package design and, more particularly, to packages using bumpless build-up layer (BBUL) designs. Embodiments of the present description relate to the field of fabricating microelectronic packages, wherein a first microelectronic device having through-silicon vias may be stacked with a second microelectronic device and used in a bumpless build-up layer package.
CLAIMS What is claimed is: 1. A microelectronic package comprising: a first microelectronic device having an active surface, an opposing back surface, and at least one side, wherein the first microelectronic device includes at least one through-silicon via extending into the first microelectronic device from the first microelectronic device back surface; a second microelectronic device having an active surface, an opposing back surface, and at least one side; at least one interconnect electrically connecting the second microelectronic device active surface and the at least one first microelectronic device through-silicon via proximate the first microelectronic device back surface; and an encapsulation material adjacent the at least one first microelectronic device side and proximate the at least one second microelectronic device side. 2. The microelectronic package of claim 1, wherein the encapsulation material includes a back surface substantially planar to the second microelectronic device back surface. 3. The microelectronic package of claim 1, wherein the encapsulation material includes a front surface proximate the first microelectronic device active surface. 4. The microelectronic package of claim 3, further including a build-up layer formed adjacent the encapsulation material front surface. 5. The microelectronic package of claim 4, wherein the build-up layer is electrically connected to the first microelectronic device active surface. 6. The microelectronic package of claim 1, wherein the encapsulation material comprises a silica-filled epoxy. 7. The microelectronic package of claim 1, further including an underfill material disposed between the first microelectronic device and the second microelectronic device. 8. The microelectronic package of claim 1, wherein the first microelectronic device comprises a microprocessor. 9. The microelectronic package of claim 1, wherein the second microelectronic device comprises a memory device. 10.
A microelectronic package comprising: a first microelectronic device having an active surface, an opposing back surface, and at least one side, wherein the first microelectronic device includes at least one through-silicon via extending into the first microelectronic device from the first microelectronic device back surface; a second microelectronic device having an active surface, an opposing back surface, and at least one side; at least one interconnect electrically connecting the first microelectronic device active surface and the second microelectronic device active surface; and an encapsulation material adjacent the at least one first microelectronic device side and proximate the at least one second microelectronic device side. 11. The microelectronic package of claim 10, wherein the encapsulation material includes a back surface substantially planar to the second microelectronic device back surface. 12. The microelectronic package of claim 11, wherein the encapsulation material includes a front surface proximate the first microelectronic device back surface. 13. The microelectronic package of claim 12, further including a build-up layer formed adjacent the encapsulation material front surface. 14. The microelectronic package of claim 13, wherein the build-up layer is electrically connected to the at least one first microelectronic device through-silicon via proximate the first microelectronic device back surface. 15. The microelectronic package of claim 10, wherein the encapsulation material comprises a silica-filled epoxy. 16. The microelectronic package of claim 10, further including an underfill material disposed between the first microelectronic device and the second microelectronic device. 17. The microelectronic package of claim 10, wherein the first microelectronic device comprises a microprocessor. 18. The microelectronic package of claim 10, wherein the second microelectronic device comprises a memory device.
BUMPLESS BUILD-UP LAYER PACKAGE WITH PRE-STACKED MICROELECTRONIC DEVICES BACKGROUND [0001] Embodiments of the present description generally relate to the field of microelectronic device package designs and, more particularly, to a microelectronic device package having pre-stacked microelectronic devices in a bumpless build-up layer (BBUL) design. BRIEF DESCRIPTION OF THE DRAWINGS [0002] The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. It is understood that the accompanying drawings depict only several embodiments in accordance with the present disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings, such that the advantages of the present disclosure can be more readily ascertained, in which: [0003] FIGs. 1-9 illustrate side cross-sectional views of a process of forming a microelectronic device package having pre-stacked microelectronic devices in a bumpless build-up layer design. [0004] FIG. 10 illustrates a side cross-sectional view of another embodiment of a microelectronic device package having pre-stacked microelectronic devices in a bumpless build-up layer design. DETAILED DESCRIPTION [0005] In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the subject matter. It is to be understood that the various embodiments, although different, are not necessarily mutually exclusive.
For example, a particular feature, structure, or characteristic described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the claimed subject matter. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the claimed subject matter. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the subject matter is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the appended claims are entitled. In the drawings, like numerals refer to the same or similar elements or functionality throughout the several views; elements depicted therein are not necessarily to scale with one another, and individual elements may be enlarged or reduced in order to more easily comprehend the elements in the context of the present description. [0006] Embodiments of the present description relate to the field of fabricating microelectronic packages, wherein a first microelectronic device having through-silicon vias may be stacked with a second microelectronic device and used in a bumpless build-up layer package. [0007] FIGs. 1-9 illustrate cross-sectional views of an embodiment of a process of forming a bumpless build-up layer coreless (BBUL-C) microelectronic package. As shown in FIG. 1, a first microelectronic device 102 may be provided, wherein the first microelectronic device 102 includes an active surface 104, an opposing back surface 106 that is substantially parallel to the first microelectronic device active surface 104, and at least one side 108 extending from the first microelectronic device active surface 104 to the first microelectronic device back surface 106.
The first microelectronic device 102 may have an active portion 105 proximate the first microelectronic device active surface 104 and a substrate portion 107 extending from the first microelectronic device active portion 105 to the first microelectronic device back surface 106. As will be understood to those skilled in the art, the first microelectronic device active portion 105 comprises the integrated circuitry and interconnections (not shown) of the first microelectronic device 102. The first microelectronic device 102 may be any appropriate integrated circuit device including but not limited to a microprocessor (single or multi-core), a memory device, a chipset, a graphics device, an application specific integrated circuit, or the like. In one embodiment, the first microelectronic device 102 is a microprocessor. [0008] The first microelectronic device 102 may have at least one conductive via extending through the first microelectronic device substrate portion 107 from the first microelectronic device back surface 106 to the first microelectronic device active portion 105. Such a conductive via configuration is known as a through-silicon via 112. The first microelectronic device through-silicon via(s) 112 may be in electrical communication with the integrated circuitry (not shown) in the first microelectronic device active portion 105. Each first microelectronic device through-silicon via 112 may have a contact land 116 on the first microelectronic device back surface 106. Although the first microelectronic device back surface contact lands are shown directly adjacent the first microelectronic device through-silicon vias 112, it is understood that they may be positioned at any appropriate location on the first microelectronic device back surface with conductive traces forming electrical contact therebetween.
The first microelectronic device through-silicon vias 112 and the first microelectronic device back surface contact lands 116 may be fabricated by any technique known in the art, including, but not limited to drilling (laser and ion), lithography, plating, and deposition, and may be made of any appropriate conductive material, including but not limited to copper, aluminum, silver, gold, or alloys thereof. [0009] As shown in FIG. 2, a second microelectronic device 122 may be aligned with the first microelectronic device 102. The second microelectronic device 122 may have an active surface 124, a back surface 126 that is substantially parallel to the second microelectronic device active surface 124, and at least one side 128 extending from the second microelectronic device active surface 124 to the second microelectronic device back surface 126. The second microelectronic device 122 may further include at least one contact land 132 adjacent the second microelectronic device active surface 124, wherein the second microelectronic device contact lands 132 may be connected to integrated circuits (not shown) within the second microelectronic device 122. The second microelectronic device 122 may be any appropriate integrated circuit device including but not limited to a microprocessor (single or multi-core), a memory device, a chipset, a graphics device, an application specific integrated circuit, or the like. In one embodiment, the second microelectronic device 122 is a memory device. The second microelectronic device contact lands 132 may be any appropriate conductive material, including but not limited to copper, aluminum, silver, gold, or alloys thereof. [0010] As further shown in FIG.
2, the second microelectronic device 122 may be attached to the first microelectronic device 102 through a plurality of interconnects 136 (shown as solder balls) connecting the second microelectronic device contact lands 132 to the first microelectronic device back surface contact lands 116, thereby forming a stacked structure 140. An underfill material 138, such as an epoxy material, may be disposed between the first microelectronic device back surface 106 and the second microelectronic device active surface 124, and around the plurality of interconnects 136. The underfill material 138 may enhance the structural integrity of the stacked structure 140. [0011] As shown in FIG. 3, the second microelectronic device back surface 126 may be attached to a carrier 150, such as with a DBF (die backside film) or an adhesive (not shown), as known to those skilled in the art. An encapsulation material 152 may be disposed adjacent the second microelectronic device side(s) 128, the first microelectronic device side(s) 108, and over the first microelectronic device active surface 104 including the first microelectronic device active surface contact land(s) 114, thereby forming a front surface 154 of the encapsulation material 152, as shown in FIG. 4. The placement of the second microelectronic device back surface 126 on the carrier 150 may result in a back surface 156 of the encapsulation material 152 being formed substantially planar with the second microelectronic device back surface 126, thereby forming substrate 160. [0012] The encapsulation material 152 may be disposed by any process known in the art, including a laminated process, as will be understood to those skilled in the art, and may be any appropriate dielectric material, including, but not limited to silica-filled epoxies, such as are available from Ajinomoto Fine-Techno Co., Inc., 1-2 Suzuki-cho, Kawasaki-ku, Kawasaki-shi, 210-0801, Japan (Ajinomoto GX13, Ajinomoto GX92, and the like).
[0013] Vias 162 may be formed through the encapsulation material front surface 154 to expose at least a portion of each first microelectronic device active surface contact land 114, as shown in FIG. 5. The vias 162 of FIG. 5 may be formed by any technique known in the art, including but not limited to laser drilling, ion drilling, and lithography, as will be understood to those skilled in the art. A patterning and plating process may be used to fill the vias 162 to form conductive vias 164 and to simultaneously form first layer conductive traces 172, as will be understood by those skilled in the art, as shown in FIG. 6. [0014] As shown in FIG. 7, a build-up layer 170 may be formed on the encapsulation material front surface 154. The build-up layer 170 may comprise a plurality of dielectric layers with conductive traces formed on each dielectric layer with conductive vias extending through each dielectric layer to connect the conductive traces on different layers. Referring to FIG. 7, the build-up layer 170 may comprise the first layer conductive traces 172 with a dielectric layer 174 formed adjacent the first layer conductive traces 172 and the encapsulation material front surface 154. At least one trace-to-trace conductive via 176 may extend through the dielectric layer 174 to connect at least one first layer conductive trace 172 to a second layer conductive trace 178. A solder resist material 180 may be patterned on the dielectric layer 174 and second layer conductive traces 178 having at least one opening 182 exposing at least a portion of the second layer conductive traces 178. [0015] As shown in FIG. 8, at least one external interconnect 184 may be formed on the second layer conductive traces 178 through patterned openings 182 in the solder resist material 180. The external interconnects 184 may be a solder material and may be used to connect the build-up layer 170 to external components (not shown). 
[0016] It is understood that although only one dielectric layer and two conductive trace layers are shown, the build-up layer 170 may comprise any appropriate number of dielectric layers and conductive trace layers. The dielectric layer(s), such as the dielectric layer 174, may be formed by any technique known in the art and may be any appropriate dielectric material. The conductive trace layers, such as the first layer conductive traces 172 and the second layer conductive traces 178, and the conductive vias 176, may be fabricated by any technique known in the art, including but not limited to plating and lithography, and may be made of any appropriate conductive material, including but not limited to copper, aluminum, silver, gold, or alloys thereof. [0017] The carrier 150 may be removed, resulting in a microelectronic package 190, as shown in FIG. 9. The stacking and encapsulation of the first microelectronic device 102 and the second microelectronic device 122 results in the microelectronic package 190 being sufficiently thick to prevent warpage in the microelectronic package 190, which may result in a reduction in yield losses from solder ball bridging and/or non-contact opens, as will be understood to those skilled in the art. [0018] Another embodiment of a microelectronic package 192 is shown in FIG. 10. In this embodiment, the first microelectronic device active surface 104 may be in electrical communication with the second microelectronic device active surface 124 through the interconnects 136 extending between the first microelectronic device active surface contact land 114 and the second microelectronic device contact lands 132. The build-up layer 170 may be formed proximate the first microelectronic device back surface 106 and may be in electrical communication with the first microelectronic device through-silicon vias 112.
[0019] It is also understood that the subject matter of the present description is not necessarily limited to the specific applications illustrated in FIGs. 1-10. The subject matter may be applied to other stacked device applications. Furthermore, the subject matter may also be used in any appropriate application outside of the microelectronic device fabrication field. Furthermore, the subject matter of the present description may be a part of a larger bumpless build-up package, it may include multiple stacked microelectronic dice, it may be formed at a wafer level, or any number of appropriate variations, as will be understood by those skilled in the art.

[0020] The detailed description has described various embodiments of the devices and/or processes through the use of illustrations, block diagrams, flowcharts, and/or examples. Insofar as such illustrations, block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within each illustration, block diagram, flowchart, and/or example can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.

[0021] The described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is understood that such illustrations are merely exemplary, and that many alternate structures can be implemented to achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Thus, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of structures or intermediate components.
Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components, and/or wirelessly interactable and/or wirelessly interacting components, and/or logically interacting and/or logically interactable components.

[0022] It will be understood by those skilled in the art that terms used herein, and especially in the appended claims, are generally intended as "open" terms. In general, the terms "including" or "includes" should be interpreted as "including but not limited to" or "includes but is not limited to", respectively. Additionally, the term "having" should be interpreted as "having at least".

[0023] The use of plural and/or singular terms within the detailed description can be translated from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or the application.

[0024] It will be further understood by those skilled in the art that if an indication of the number of elements is used in a claim, the intent for the claim to be so limited will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. Additionally, if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean "at least" the recited number.
[0025] The use of the terms "an embodiment," "one embodiment," "some embodiments," "another embodiment," or "other embodiments" in the specification may mean that a particular feature, structure, or characteristic described in connection with one or more embodiments may be included in at least some embodiments, but not necessarily in all embodiments. The various uses of the terms "an embodiment," "one embodiment," "another embodiment," or "other embodiments" in the detailed description are not necessarily all referring to the same embodiments.

[0026] While certain exemplary techniques have been described and shown herein using various methods and systems, it should be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter or the spirit thereof. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein. Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter also may include all implementations falling within the scope of the appended claims, and equivalents thereof.
The invention relates to on-die ECC with an error counter and internal address generation, and to a memory subsystem for realizing it. The memory subsystem enables management of error correction information. A memory device internally performs error detection for a range of memory locations and increments an internal count for each error detected. The memory device includes ECC logic to generate an error result indicating a difference between the internal count and a baseline number of errors preset for the memory device. The memory device can provide the error result to an associated host of the system so as to expose only the number of errors accumulated, without exposing internal errors from prior to incorporation into a system. The memory device can be made capable of generating internal addresses to execute commands received from the memory controller. The memory device can be made capable of resetting the counter after a first pass through the memory area in which errors are counted.
1. A random access memory (RAM) device, comprising:
a memory array; and
an error checking and correction (ECC) module to perform ECC operations on multiple rows of the memory array, the ECC module including a counter to accumulate an error count, the error count incremented in response to detection of an error in any of the multiple rows, wherein the ECC module is to generate an error result as a difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented.

2. The RAM device of claim 1, wherein the ECC module is to perform the ECC operations in response to an error detection test initiated by the RAM device.

3. The RAM device of claim 1, wherein the ECC module is to perform the ECC operations on rows of the memory array in a bounded address space.

4. The RAM device of claim 1, wherein the ECC module is to perform the ECC operations on all rows of the memory array.

5. The RAM device of claim 1, wherein the ECC module is to internally generate address information for the rows of the memory array.

6. The RAM device of claim 5, wherein the ECC module is to automatically reset the accumulated error count in response to detecting an address rollover to a previously tested address.

7. The RAM device of claim 1, wherein the ECC module further comprises a register to store the error result to indicate a number of errors since deployment in a system.

8. The RAM device of claim 7, wherein the register comprises a register accessible by an associated memory controller.

9. The RAM device of claim 1, wherein the reference number of errors comprises a number of errors detected during manufacturing test of the RAM device.

10. The RAM device of claim 1, wherein the RAM device comprises a volatile dynamic random access memory (DRAM) device.

11. The RAM device of claim 1, wherein the RAM device comprises a non-volatile RAM device.

12. A system, comprising:
a memory controller; and
multiple random access memory (RAM) devices coupled in parallel, wherein the RAM devices include:
a memory array; and
an error checking and correction (ECC) module to perform ECC operations on multiple rows of the memory array, the ECC module including a counter to accumulate an error count, the error count incremented in response to detection of an error in any of the multiple rows, wherein the ECC module is to generate an error result as a difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented;
wherein the RAM devices provide internal error correction for data independently of error correction based on check bits provided by the memory controller.

13. The system of claim 12, wherein the ECC module is to perform the ECC operations in response to an error detection test initiated by the RAM device.

14. The system of claim 12, wherein the ECC module is to internally generate address information for the rows of the memory array.

15. The system of claim 14, wherein the ECC module is to automatically reset the accumulated error count in response to detecting an address rollover to a previously tested address.

16. The system of claim 12, wherein the ECC module further comprises a register to store the error result to indicate a number of errors since deployment in the system.

17. The system of claim 16, wherein the register comprises a register accessible by an associated memory controller.

18. The system of claim 12, wherein the RAM devices comprise volatile dynamic random access memory (DRAM) devices.

19. The system of claim 12, wherein the RAM devices comprise non-volatile RAM devices.

20. The system of claim 12, further comprising one or more of:
at least one processor communicatively coupled to the memory controller;
a display communicatively coupled to the at least one processor; or
a network interface communicatively coupled to the at least one processor.

21. A method for managing error correction information in a memory, comprising:
initiating, with an ECC module in a synchronous dynamic random access memory (SDRAM) device, error checking and correction (ECC) operations on multiple rows of a memory array;
accumulating an error count to be incremented in response to detecting an error in any of the multiple rows; and
generating an error result as a difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented.

22. The method of claim 21, further comprising:
internally generating address information for the multiple rows of the memory array.

23. The method of claim 22, further comprising:
automatically resetting the accumulated error count in response to detecting an address rollover to a previously tested address.

24. The method of claim 21, further comprising storing the error result in a register to indicate a number of errors since deployment in the system.
On-die ECC using error counter and internal address generation

This application is a divisional application of parent application No. 201680024940.1, titled "On-die ECC generated using error counter and internal address," filed May 27, 2016.

Related Applications

This application is a non-provisional of U.S. Provisional Patent Application No. 62/168,828, filed May 31, 2015, and claims priority to that provisional application, which is incorporated herein by reference.

Technical Field

Embodiments of the present invention generally relate to memory devices, and more specifically to memory that selectively provides internal error correction information.

Copyright Notice/Permission

Portions of the disclosure of this patent document may contain material that is subject to copyright protection. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The copyright notice applies to all data described below and in the accompanying drawings, as well as to any software described below: Copyright © 2015, Intel Corporation, All Rights Reserved.

Background

Computing devices use memory devices to store data and code for a processor to execute its operations. As memory devices decrease in size and increase in density, they encounter more errors during processing, which appears as a yield problem. Memory devices therefore suffer increased bit failures even with modern processing techniques. To mitigate bit failures, modern memory devices provide internal error correction mechanisms, such as ECC (Error Correction Code). The ECC data is generated and used inside the memory device.
Internal error correction within the memory device can be used as a supplement to whatever system-wide error correction or error mitigation the system is configured to apply in data exchanges between the memory device and the memory controller. An SBE (single-bit error) corrected by the memory device appears to the memory controller or host system as if no error occurred. Therefore, if additional errors accumulate in the memory device after manufacturing, the memory device will continue to perform ECC, and the increasing number of failures in the memory device may not be visible to the host system. Traditionally, a memory device would need to expose information related to internal error correction in order to provide information about error accumulation. However, exposing error correction information can reveal proprietary process and manufacturing information, such as internal error counts or details of the internal error correction scheme. There is currently no mechanism for revealing information related to error accumulation in a memory device without revealing information related to internal error correction.

Summary of the Invention

A synchronous dynamic random access memory (SDRAM) device according to the first aspect of the present invention includes:
a memory array; and
an error checking and correction (ECC) circuit for performing ECC operations on multiple rows of the memory array, the ECC circuit including a counter for accumulating an error count to be incremented in response to detecting an error in any of the multiple rows, wherein the ECC circuit is to generate an error result as the difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented.

A system with memory devices according to the second aspect of the present invention includes:
a memory controller; and
a plurality of synchronous dynamic random access memory (SDRAM) devices coupled in parallel, wherein the SDRAM devices include:
a memory array; and
an error checking and correction (ECC) circuit for performing ECC operations on multiple rows of the memory array, the ECC circuit including a counter for accumulating an error count to be incremented in response to detecting an error in any of the multiple rows, wherein the ECC circuit is to generate an error result as the difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented;
wherein the SDRAM devices provide internal error correction for data independently of error correction based on check bits provided by the memory controller.

A method for managing error correction information in a memory according to the third aspect of the present invention includes:
initiating, with an ECC circuit in a synchronous dynamic random access memory (SDRAM) device, error checking and correction (ECC) operations on multiple rows of a memory array;
accumulating an error count to be incremented in response to detecting an error in any of the multiple rows; and
generating an error result as the difference between the accumulated error count and a non-zero error threshold to be reached before the error result is incremented.

Description of the Drawings

The following description includes a discussion of the accompanying drawings, with illustrations given by way of example of implementations of embodiments of the present invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more "embodiments" are to be understood as describing particular features, structures, and/or characteristics included in at least one implementation of the present invention. Thus, phrases such as "in one embodiment" or "in an alternative embodiment" appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment.
However, they are not necessarily mutually exclusive.

Figure 1 is a block diagram of an embodiment of a system with a memory device capable of exposing internal error correction bits for use by an external memory controller.

Figure 2 is a block diagram of an embodiment of a system with a memory device capable of exposing internal error correction bits for use by an external memory controller.

Figure 3 is a block diagram of an embodiment of a system in which a memory device generates internal addresses for executing received commands.

Figure 4 is a flowchart of an embodiment of a process for managing internal ECC information, including generating internal addresses.

Figure 5 is a block diagram of an embodiment of a computing system in which a memory device that generates internal addresses can be implemented.

Figure 6 is a block diagram of an embodiment of a mobile device in which a memory device that generates internal addresses can be implemented.

The following is a description of certain details and implementations, including a description of the accompanying drawings, which may depict some or all of the embodiments described below, along with a discussion of other potential embodiments or implementations of the inventive concepts presented herein.

Detailed Description

As described herein, a memory subsystem implements management of error correction information. The memory device performs error correction within a range of memory locations and increments an internal count for each detected error. The memory device includes ECC logic to generate an error result indicating the difference between the internal count and a reference number of errors preset in the memory device. The memory device can provide the error result to the associated host of the system, so as to expose only the accumulated number of errors without exposing internal errors from before incorporation into the system.
In one embodiment, the memory device can generate internal addresses to execute commands received from the memory controller. In one embodiment, the memory device can reset the counter after a first pass through the storage area in which errors are counted.

In one embodiment, the memory device generates internal addresses to execute commands received from the memory controller. The memory device performs error correction to correct single-bit errors (SBEs) in the accessed data, and generates an error count indicating the number of corrected SBEs that exceeds a reference number of SBEs preset in the memory device. The memory device provides the error count to the memory controller so as to expose only the number of SBEs accumulated after manufacturing. In one embodiment, the memory device can reset the counter after a first pass through the storage area in which errors are counted.

References to memory devices can apply to different memory types and, in particular, to any memory with a bank group architecture. Memory devices generally refer to volatile memory technology. Volatile memory is memory whose state (and therefore the data stored on it) is indeterminate if power is interrupted to the device. Non-volatile memory refers to memory whose state is determinate even if power is interrupted to the device. Dynamic volatile memory requires the data stored in the device to be refreshed to maintain state. One example of dynamic volatile memory is DRAM (Dynamic Random Access Memory), or a variant such as synchronous DRAM (SDRAM). The memory subsystem described herein is compatible with a number of memory technologies, such as DDR4 (DDR version 4, initial specification published by JEDEC in September 2012), LPDDR4 (Low Power Double Data Rate (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies derived from or extending such specifications.

Current designs of DRAM devices for technologies such as WIO2 and LPDDR4 include extra bits internally to store error correction data (e.g., ECC (Error Correction Code) information). For internal ECC, the DRAM uses single error correction (SEC), which uses 8 dedicated ECC bits per 128 data bits, to internally detect and correct single-bit errors (SBEs). The external data transfer size and internal prefetch size are both 128 bits in the case of LPDDR4 and WIO2. But for internal ECC, these designs traditionally lack a way to track error accumulation in the DRAM, which leaves the device prone to accumulating errors until the number of errors overwhelms the ability of the on-die or internal ECC to correct SBEs. If too many errors accumulate, the device transmits data with uncorrected errors to the memory controller, causing a failure.

If the DRAM does not perform internal error correction, the system may be able to perform system-level error correction, but all errors within the DRAM will be visible. Exposing information about all errors would expose internal error information, which can reveal information proprietary to the memory device manufacturer and which is generally considered undesirable to share within the industry. As described herein, the system can provide a relative error count to indicate how many errors have accumulated since the memory device was shipped or deployed in the system. A relative error count as described herein can indicate how many errors have accumulated beyond a baseline number of errors.
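The 8-bits-per-128 figure noted above follows from the Hamming bound for single-error-correcting codes: r check bits must distinguish d + r + 1 outcomes (no error, or an error in any one of the d + r stored bits), so the smallest r with 2^r >= d + r + 1 suffices. A minimal sketch of that calculation (the function name is illustrative, not from the specification):

```python
def sec_check_bits(data_bits: int) -> int:
    """Minimum check bits r for a single-error-correcting (Hamming)
    code over `data_bits` data bits: the smallest r satisfying
    2**r >= data_bits + r + 1."""
    r = 0
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r

# A 128-bit prefetch (LPDDR4/WIO2) needs 8 dedicated ECC bits, as described above.
print(sec_check_bits(128))  # -> 8
```

For 128 data bits, 2^8 = 256 >= 128 + 8 + 1 = 137 while 2^7 = 128 falls short, which is why these devices dedicate exactly 8 ECC bits per 128-bit word.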
The reference number of errors for the memory device is the number of errors detected during manufacturing test. The accumulation of errors can be determined, for example, by comparing the total number of counted errors with the reference number. Note, however, that even internal errors detected during manufacturing can be proprietary information; the device manufacturer can configure this amount internally and configure the memory device to expose only the accumulated difference.

Generally, during normal operation, DDR4 DRAM devices implementing internal ECC do not signal that single-bit errors have been corrected. In one embodiment, the total number of single-bit errors on the device is permanently stored in the memory device during manufacturing. For example, the number of errors can be stored in an error threshold register. The stored error threshold count represents the total number of single-bit errors detected during manufacturing test. In one embodiment, the error threshold register is not directly readable by the user and/or host system. In one embodiment, the memory device includes an error counter that counts SBEs and a comparator that compares the result of the error counter with the error threshold register.

In one embodiment, the memory device includes an address generator to generate internal addresses for received commands. Thus, the memory device can control which locations of the memory device are accessed and when. The memory device can then manage error detection internally and count the number of errors relative to the baseline number of errors. The internally generated addresses can enable the memory device to internally reset the error accumulation counter and prevent the internal error count from growing without bound.
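The counter-plus-threshold arrangement described above can be sketched in software terms. This is an illustrative model only, with invented class and method names, not the device's actual hardware implementation:

```python
class EccErrorCounter:
    """Model of an on-die error counter with a factory-preset baseline
    (the error threshold register). Only the count accumulated beyond
    the baseline is exposed; the baseline itself is never readable."""

    def __init__(self, baseline_errors: int):
        self._baseline = baseline_errors  # written at manufacturing test
        self._count = 0                   # incremented per corrected SBE

    def record_corrected_sbe(self) -> None:
        self._count += 1

    def error_result(self) -> int:
        # Relative error count: errors accumulated since deployment.
        return max(0, self._count - self._baseline)

# A device shipped with 3 errors found at manufacturing test; one pass
# in the field corrects 5 SBEs (the 3 original hard errors plus 2 new),
# so the host-visible result is 2.
counter = EccErrorCounter(baseline_errors=3)
for _ in range(5):
    counter.record_corrected_sbe()
print(counter.error_result())  # -> 2
```

The key design point mirrored here is that the comparator output (the difference) is the only value the host can observe, hiding the manufacturing-time error count.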
Without resetting the counter after the first pass through the memory space, a user could potentially run the error test twice (with errors continuing to accumulate) and then simply divide the number of errors in half to expose internal error information related to internal error correction. By resetting the counter, each run of the error test reveals the same number of errors, even when run repeatedly.

In one embodiment, the error counter is cleared during reset. In one embodiment, the error counter is enabled by setting a mode register bit (e.g., MRx bit A[y]) and is initially cleared. Once enabled, the counter can be incremented for every read that detects a single-bit error. A single pass through the array is enforced by allowing only the DRAM to generate the addresses for reading the array. Once a single pass through all memory locations is completed, in one embodiment, the control logic reads the relative error count result and resets the counter. In one embodiment, the relative error count can be stored in a multipurpose register, which can then be read by the host (e.g., the memory controller). For example, the memory device can store the relative error count in a multipurpose register (e.g., MPR3, a page 3 register). In one embodiment, the register or other storage location holds a count representing the difference between the number of errors detected since the register was enabled and the stored error count. In one embodiment, in addition to reporting accumulated errors, the memory device can also report the address of the one or more rows containing the highest number of errors. In one embodiment, the memory device can report how many errors are contained in the row with the highest number of errors. In one embodiment, on-die or DRAM-internal counters can generate internal addresses for read error passes through the memory resource or memory device array.
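The internally generated address sequence and the counter reset on rollover can be sketched as follows. This is a hedged illustration with invented names, showing why a second pass cannot double-count errors:

```python
class ScrubAddressGenerator:
    """Generates successive row addresses for a read-error pass and
    signals rollover when the sequence wraps to the start address,
    at which point the error counter should be reset."""

    def __init__(self, num_rows: int):
        self.num_rows = num_rows
        self._next = 0

    def next_address(self):
        """Return (address, rolled_over)."""
        addr = self._next
        self._next = (self._next + 1) % self.num_rows
        return addr, self._next == 0

def run_error_passes(num_rows, passes, sbe_rows):
    """Run `passes` full passes; reset the count on each rollover so
    repeated passes report the same number, as described above.
    `sbe_rows` is a hypothetical set of rows holding a correctable SBE."""
    gen = ScrubAddressGenerator(num_rows)
    count = last_result = 0
    for _ in range(passes * num_rows):
        addr, rolled_over = gen.next_address()
        if addr in sbe_rows:
            count += 1
        if rolled_over:
            last_result, count = count, 0  # latch result, reset counter
    return last_result

# Two consecutive passes over 8 rows with SBEs in rows 2 and 5 both
# report 2 errors; without the reset, the second pass would report 4.
print(run_error_passes(8, 2, {2, 5}))  # -> 2
```

Because only the device increments the address, an external caller cannot arrange a partial or repeated pass that would leak the absolute error count.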
At the beginning of a read error pass through the array, the relative error count result register can be cleared. After one pass through the array, the relative count can be read from the DRAM. In one embodiment, if a second pass is attempted, the error result register is cleared before the start of the second pass.

Figure 1 is a block diagram of an embodiment of a system with a memory device capable of exposing internal error correction bits for use by an external memory controller. The system 100 includes elements of a memory subsystem in a computing device. The host 110 represents a host computing platform, which runs an operating system (OS) and applications. The OS and applications execute operations that result in memory accesses. The host 110 includes a processor or processing unit, which can be a single-core or multi-core processor. The system 100 can be implemented as an SoC or with discrete components. When multiple memory devices 120 are included in the system 100, each memory device can manage its internal ECC independently of the host and of the other memory devices.

The memory controller 112 represents control logic that generates memory access commands in response to operations executed by the processor(s). In one embodiment, the system 100 includes multiple memory controllers. In one embodiment, the system 100 includes one or more memory controllers per channel, where a channel is coupled to access multiple memory devices. Each channel is an independent access path to memory, so multiple different memory accesses can occur in parallel on different channels. In one embodiment, the memory controller 112 is a component of the host 110, such as logic implemented on the same die or in the same package as the host processor.
Therefore, the memory controller can be implemented as an integral part of the same die as the host processor, or coupled to the host processor in a system on a chip (SoC) configuration.

The memory devices 120 represent the memory resources of the system 100 and can be, for example, DRAM devices. The memory devices 120 each include multiple memory arrays 122. The memory array 122 represents the logic in which the memory device 120 stores data bits. The memory device 120 includes I/O logic 126, which represents the interconnection logic that enables the memory device to couple to the memory controller 112. The I/O logic 126 can include a command/address bus (commonly referred to as a C/A bus, CMD/ADDR bus, or ADD/CMD bus). The I/O logic 126 can also include a data bus and other signal lines. The I/O logic 126 can include signal lines, connectors, drivers, transceivers, termination control, and/or other logic that enables communication between the memory controller and the memory device.

In one embodiment, the memory device 120 includes ECC 124, which represents the logic and storage that implement internal error correction. Thus, ECC 124 represents the ability of the memory device 120 to generate and use internal error correction bits. In one embodiment, ECC 124 is an integral part of an internal controller in the memory device 120 (not specifically shown). Such an internal controller controls the operation of the memory device, such as the reception and processing of commands and the execution of those commands, including controlling the timing of operations that run commands and return data in response to requests from the memory controller (external to the memory device). In one embodiment, ECC 124 can be implemented fully or partially as a circuit separate from the internal controller.
In one embodiment, ECC 124 enables the memory device 120 to perform a read of each memory location in a range of addresses, to detect and correct SBEs, and to increment an error count for each SBE corrected.

In one embodiment, the memory controller 112 generates a command or request for an ECC count to determine the accumulated errors from the memory device 120. For clarity of description, consider that ECC 124 processes such requests and can generate a count in response. ECC 124 enables the memory to perform a series of operations for detecting errors in response to an error test command received from the memory controller. For example, ECC 124 can include or have access to a counter that is incremented to track errors detected in read memory locations. As described herein, the memory device 120 can generate a sequence of memory locations for a pass through the memory space, or through a range of memory address locations, to be tested for errors.

In one embodiment, ECC 124 determines the number of errors in the memory space and generates a count of corrected errors. In one embodiment, ECC 124 generates the relative error count by computing the difference between the number of detected errors and a known number of errors preset in the memory device before deployment. For example, the preset number can be a threshold or reference number generated during manufacturing test to capture the number of errors present in the device at manufacturing. In one embodiment, ECC 124 includes or has access to comparator logic that can compute the difference against the threshold.

In one embodiment, when ECC 124 receives successive commands to perform an error test, it determines when the entire memory space of the memory array has been tested and can reset the error count.
Therefore, each time the error check is performed, ECC 124 can restart the error count, and the generated difference between that count and the preset threshold or reference number of errors for the device likewise restarts each time. In one embodiment, ECC 124 can also reset the error count in response to a start condition, such as when the memory subsystem is reset.

Figure 2 is a block diagram of an embodiment of a system with a memory device capable of exposing internal error correction bits for use by an external memory controller. The system 200 is an example of an embodiment of the system 100 of Figure 1. The system 200 illustrates address generation logic 240 within the memory device 220, which generates addresses for internal operations in response to commands received from the memory controller 210. The memory device 220 and the memory controller 210 communicate via an I/O interface (not specifically shown) between the devices.

The address generation 240 can include an address counter, which is used by an internal controller (not specifically shown) within the memory device 220 to determine which address space to target for an operation, such as a read, to detect errors. Traditionally, the memory controller generates the addresses for ECC testing, and the memory device simply executes the provided commands at the addresses indicated by the memory controller. With the address generation 240 in the memory device 220, however, the memory controller can simply issue a command or request for the ECC test and allow the memory device itself to generate addresses internally. The address generation 240 can include a mechanism (e.g., a counter) to track the address under test.
Therefore, the memory device itself can manage the error correction test.

The memory device 220 includes a data storage device 222, which represents the storage space in the memory device 220 in which the memory device writes data received from the memory controller 210 and accesses stored data to be sent to the memory controller 210. In one embodiment, the memory device 220 includes ECC logic 230. ECC logic 230 represents the logic used by the memory device to calculate error correction. For example, the ECC logic 230 can enable the memory device 220 to detect and correct an SBE for data fetched from a memory location within the range of tested addresses. The ECC logic 230 can represent an implementation within the memory device that controls the exposure of error correction information from inside the memory device 220 to the external memory controller 210. The ECC logic 230 can be implemented at least partially within a processing device, for example, by the internal controller of the memory device 220. In one embodiment, the ECC logic 230 is fully or partially implemented in a circuit separate from the internal controller.

In one embodiment, the ECC control logic 230 includes or uses information stored in the memory device 220. More specifically, the ECC control logic 230 can use a threshold 232, which represents a baseline number of errors for the memory device 220. In one embodiment, the BIOS (Basic Input/Output System) or other logic can determine the baseline number of errors and write the number for permanent storage in a register or other storage location in the memory device 220. By using the threshold 232, the ECC control logic 230 can generate an error output indicating the number of accumulated errors without exposing the reference number.
For example, the error output can indicate the number of corrected SBEs that exceeds the reference number of SBEs preset to the memory device.

In one embodiment, the ECC control logic 230 includes or uses a counter 234, which is a counter indicating how many errors are present in the data storage device 222 (e.g., how many SBEs are detected and corrected). The counter 234 can be reset by the ECC control logic 230 on each pass through the data storage space to determine how many errors exist. Therefore, the counter 234 can accumulate a count for each detected error, but will not continue accumulating errors once the entire storage space has been checked. In one embodiment, checking the storage space again causes the ECC control logic 230 to determine that the address generator 240 has reached the maximum address of the storage space, and thus to reset the counter 234 in response to detecting that the address counter has rolled over. The address counter rolls over when the address generator 240 completes all addresses and returns to the start address.

In one embodiment, the ECC control logic 230 can generate an error output that includes an indication of the row with the highest number of accumulated errors. Therefore, in one embodiment, the ECC control logic 230 accumulates errors on a per-row basis and identifies the row with the highest number of errors. For example, the counter 234 can include a plurality of counters. The counter 234 can include a counter for each row, all of which can be summed to obtain the total number of detected errors. Alternatively, the counter 234 can include a global counter that accumulates all errors of all rows and a row counter that is reset at the end of each row. This single row count can be compared with the highest row count; if the count exceeds the current highest count, the new count can be stored in the storage location, and the address of the row can also be stored.
In one embodiment, if the count is the same as the highest count, the stored count is not changed, but multiple row addresses are stored. When providing error results to the memory controller 210, the ECC control logic 230 can report the total number of accumulated errors and an indication of the row with the highest error count. Reporting the row with the highest error count can include reporting the row's count and the address of one or more rows with the highest count. In one embodiment, reporting the row with the highest error count includes reporting the row address instead of the count of the number of errors.

In one embodiment, the memory device 220 includes a register 224, which represents one or more storage locations or registers where the ECC control logic 230 can store error count and/or error report data. For example, the ECC control logic 230 can record in the register 224 the total number of accumulated errors that exceeds the threshold for the memory device 220. In one embodiment, the register 224 includes a mode register, a multi-function register, or another storage location. In one embodiment, the register 224 can include or point to other storage locations that store row address information indicating the row or rows with the highest number of errors. In one embodiment, the register 224 represents a storage device within the memory device 220 that is accessible by the memory controller 210, which enables the memory controller 210 to access the report data. In one embodiment, the memory device 220 sends the report data to the memory controller 210.

In one embodiment, the memory controller 210 includes an ECC 212, which represents ECC logic for use at the system level in the system 200. It will be understood that the ECC control logic 230 is logic in each memory device coupled to the memory controller 210, while the ECC 212 represents logic in the memory controller 210 that performs ECC on the data received from each memory device.
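The per-row scheme of a global counter plus a row counter, with ties recorded as multiple row addresses, might look like the following sketch (the class and method names are hypothetical; this is one reading of the description, not a definitive implementation):

```python
class RowErrorTracker:
    """Track total errors plus the row(s) with the highest error count."""

    def __init__(self):
        self.total = 0        # global counter over all rows
        self.row_count = 0    # per-row counter, reset at the end of each row
        self.highest = 0      # highest per-row count seen so far
        self.worst_rows = []  # address(es) of the row(s) with that count

    def record_error(self):
        self.total += 1
        self.row_count += 1

    def end_of_row(self, row_addr):
        # Compare the finished row's count with the stored highest count.
        if self.row_count > self.highest:
            self.highest = self.row_count
            self.worst_rows = [row_addr]      # overwrite count and address
        elif self.row_count == self.highest and self.highest > 0:
            self.worst_rows.append(row_addr)  # tie: keep multiple addresses
        self.row_count = 0
```

As a usage example, feeding rows with 2, 0, 2, and 1 errors yields a total of 5, a highest per-row count of 2, and both tying row addresses recorded.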
Because the memory device performs the ECC operation on the data before transferring the data to the memory controller, the operation of the ECC control logic 230 can be transparent to the memory controller 210. In one embodiment, the ECC 212 is logic included in the circuit of the memory controller 210. In one embodiment, the ECC 212 is logic external to the memory controller circuit, which is accessible by the memory controller and can perform one or more ECC-related operations within the system 200. For example, in one embodiment, the ECC 212 can include an ECC circuit in a processor or SoC coupled to the memory controller 210.

The memory controller 210 can also provide ECC for the system 200 based not only on data from the memory device 220, but on data from multiple connected memory devices. Therefore, the error count from the memory device 220 as described herein not only provides information about the service life of the memory device itself, but can also serve as metadata for the operation of a system-level ECC implementation in the memory controller 210. By knowing the corrected errors at the memory device level, the memory controller can adjust its error correction operations.

Figure 3 is a block diagram of an embodiment of a system in which a memory device generates internal addresses for executing received commands. The system 300 is an embodiment of the ECC control logic according to the system 100 and/or the system 200. It will be understood that the system 300 can be an integral part of the logic of the system and can include additional logic (not specifically shown). For example, the system 300 can represent the logic specifically used to expose errors that exceed a threshold, but does not specifically illustrate the logic used to perform error detection or to receive and process commands.

In one embodiment, the ECC control logic receives a command to initiate an ECC test.
In one embodiment, the ECC control logic is part of the on-die controller in the memory device. The controller controls the operation of the memory device, including generating internal commands and/or control signals to cause the operations required to execute the commands received by the memory device. The control logic can generate a reset signal at startup and pass the signal to the row-column address generator 310 inside the memory device. Startup can be any time the device is powered up and initialized, and any time the host computer system is configured to perform an ECC test to determine the number of accumulated errors. The reset signal at startup can be a binary (true/false) signal indicating whether to reset the counter.

In one embodiment, the on-die controller can generate an increment signal. The increment signal can advance operation to the next address. In one embodiment, the on-die controller provides the increment signal as an input to the address generator 310. The address rollover detector 320 can determine when incrementing the counter causes the counter to restart at the initial address. The address rollover detector 320 can generate an output indicating rollover. In one embodiment, the output is a binary (true/false) signal indicating the rollover condition.

In one embodiment, the address rollover detector 320 provides its output to the XOR logic 330. In one embodiment, the ECC control logic also provides the startup reset signal as an input to the XOR logic 330. The XOR logic 330 can perform an XOR ("exclusive OR") operation on the two input signals, asserting its binary output when exactly one of the conditions is true (in practice, when either reset condition occurs on its own). In one embodiment, if either condition is true, the system 300 resets the error counter 340.
Therefore, consider the following conditions as an example: the counter is reset if the system is initialized, or if the address of the internal address generator rolls over to start again at the initial address.

In addition to the reset input, the error counter 340 can receive an error detection signal as an input. Error detection logic (not specifically shown) detects errors in memory locations and can generate a binary signal indicating an error. In one embodiment, the error counter 340 receives the error indication signal and increments the error count each time an error is detected. Therefore, the error counter 340 can accumulate errors, and the system 300 can reset the error count on the initial reset and address rollover conditions.

The error threshold 350 represents the threshold or reference number of errors expected for the memory device in which the system 300 is incorporated. The error threshold 350 can be set through manufacturing testing and does not change during the life of the memory device. The comparator 360 can determine the difference between the error counter 340 and the error threshold 350. The amount by which the error counter 340 exceeds the error threshold 350 is provided to the result register 370. It will be understood that by storing only the difference between the error counter 340 and the error threshold 350, the system can report only the accumulated errors without exposing information related to internal errors. In one embodiment, the result register 370 can be read by the host system to determine the number of errors accumulated during the lifetime of the memory device, excluding the number of errors that existed at the manufacture of the device.

In one embodiment, in addition to or as part of the result register 370, the system 300 also includes storage indicating the one or more rows with the highest number of errors.
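The Figure 3 datapath can be read as the following step function (a simplified sketch: signals are modeled as booleans, and the names are assumptions; note that the XOR asserts when exactly one input is true, which matches the intent here because the startup reset and an address rollover do not normally coincide):

```python
def system300_step(init_reset: bool, rollover: bool, counter: int,
                   error_detected: bool, threshold: int):
    """One evaluation step of the sketched error-counting datapath."""
    if init_reset ^ rollover:      # XOR logic 330 gates the reset
        counter = 0                # reset error counter 340
    if error_detected:
        counter += 1               # accumulate a detected error
    # Comparator 360: only the amount above the manufacturing threshold
    # reaches the result register 370.
    result = max(counter - threshold, 0)
    return counter, result
```

For instance, asserting the startup reset clears the counter, while a detected error with no reset condition increments the count and updates the result by the amount above the threshold.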
In one embodiment, the system 300 stores address information for one or more such rows and reports the addresses to the associated memory controller. In one embodiment, if a row is determined to have the highest number of errors, or to equal the highest number of errors, the address generator 310 records the address of that row.

Figure 4 is a flowchart of an embodiment of a process for managing internal ECC information, including generating internal addresses. The process 400 enables ECC logic inside the memory device to manage internal ECC information, which can be selectively exposed to the associated memory controller/host. The manufacturer manufactures the memory device (402) and performs manufacturing testing on the device (404). In one embodiment, the testing includes a test that uses the internal ECC to determine the number of errors present in the new device, which the internal ECC can correct in its configuration when the device is deployed or incorporated into a computing device. The manufacturer can store that number of errors as a threshold or reference number for the memory device (408). The manufacturer can store this number in a register or other memory or storage location within the memory device. The memory device will use the threshold number as a basis for determining how many errors accumulate over the life of the memory device. It will be understood that each memory device can have a different threshold based on the individual testing of the device.

In one embodiment, the threshold applies to the range of memory locations to be tested. In one embodiment, the range of memory locations is the entire available address space of the memory device. In one embodiment, the range of memory locations is a subset of the available address space of the memory device. In one embodiment, during operation when the memory device is incorporated into the system, the memory device receives an access command (410).
In one embodiment, such a command can include a command to perform an error test on a range of the address space (for example, the entire memory address space or a subset of it). In one embodiment, in response to the command and/or in response to initiating an error detection test, the memory device resets the address counter that generates the internal addresses, and resets the error count (412). By resetting the error count before starting the error detection test, the memory device can avoid double-counting errors if the memory test is executed multiple times.

In one embodiment, the memory device generates an internal address for the operation to be executed to carry out the requested command(s) (414). In one embodiment, the memory device determines whether a rollover of the internally generated address has occurred (416). In one embodiment, in response to an address rollover (the "yes" branch of 416), the memory device can reset the error count (418). If an address rollover has not occurred (the "no" branch of 416), the address has not previously been checked for errors during the current loop of error counting. By internally generating addresses and detecting rollover, the memory device can prevent the same error in a memory location from being counted twice, providing a more accurate count of errors. Address rollover detection can therefore occur every time the selected memory address space is tested.

In one embodiment, internal ECC logic within the memory device identifies and corrects SBEs in response to the requested command(s) (420). In one embodiment, the requested command is an error test command that triggers the memory device to pass sequentially through the identified range of memory locations, or through all memory locations. In one embodiment, the error test command is controlled by the memory device, and the memory device reads each memory location within the range of memory locations.
When reading the contents of a location, the memory device can perform ECC on the contents and perform SBE correction using known techniques.

In one embodiment, the detection of errors is tracked on a per-row basis to determine the row or rows with the highest number of errors in the memory device. This per-row error tracking can be achieved by using multiple counters to track total errors and per-row errors. In one embodiment, in addition to the total accumulated errors, the highest number of errors per row is also stored. For example, the highest number of errors can be stored, and each subsequent row tested can cause a comparison of the current row's error count with the highest stored count. In one embodiment, the address information of the one or more rows detected as having the highest number of errors is also stored. If the error count of the current row equals the highest count, the address information of the current row can be stored as well. If the error count of the current row is higher than the highest stored count, the count can be overwritten, and any stored address information can be replaced by the address information of the current row. After passing through all rows in the range of addresses, the count should reflect the highest number of errors in any single row, and can include the address information of the one or more rows to which that count applies.

The ECC logic can accumulate the total number of errors each time the test is executed. During the lifetime of a memory device, the number of errors may increase as the device ages. In one embodiment, if the last address in the range to be tested has not been reached (the "no" branch of 422), the memory device can increment the address and repeat the test for another memory location.
In one embodiment, if the last address in the range to be tested has been reached (the "yes" branch of 422), the ECC logic compares the number of SBEs corrected or detected during the test to the stored threshold, which indicates the number of errors present at the manufacture of the device (424). The ECC logic can store the difference between the currently detected errors and the threshold number (426). The stored difference can be referred to as an error result, error count, or error report.

In one embodiment, the ECC logic of the memory device stores the error result in a register accessible by the host. For example, the memory device can store the error result in a mode register, a multi-function register, or another register or storage location that can be accessed by the host's memory controller or comparable logic. A nonzero difference indicates that the memory device has developed additional errors, and the host system can be notified if the number of errors becomes too large. For example, the memory device can store the difference in a register or other memory location accessible by the host device. In one embodiment, the memory device can be configured to periodically provide this number to the host. Although information related to the errors accumulated during the lifetime of the memory device is shared, because the memory device controls address generation and the counters for error detection, it can reveal the errors accumulated since its manufacture without revealing the total error information. In one embodiment, the ECC logic of the memory device also reports the address information of the row or rows with the highest number of errors and/or the error count for those rows.

In one embodiment, for example when the loop or pass through all addresses within the range of memory locations has completed, the memory device records that the test of the range of memory locations has completed.
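Under simplifying assumptions (a flat address range, and a caller-supplied stand-in for the internal read-and-correct step), the flow of process 400 could be sketched as:

```python
def run_error_test(num_addresses: int, threshold: int, detect_and_correct) -> int:
    """Sketch of the error test flow: reset, walk the range, compare.

    `detect_and_correct(addr)` is a hypothetical stand-in for the internal
    ECC read/correct step (420); it returns the number of SBEs found at
    `addr`.  The returned value corresponds to the stored error result.
    """
    error_count = 0                          # reset before testing (412)
    for addr in range(num_addresses):        # internal address generation (414)
        error_count += detect_and_correct(addr)
    # Last address reached (422): compare to the stored threshold (424)
    # and keep only the difference (426).
    return max(error_count - threshold, 0)
```

As a usage example, a 5-location device with errors at three locations and a manufacturing baseline of 1 would report an error result of 2.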
In one embodiment, the system resets the count of detected errors at startup or initialization of the memory subsystem. In one embodiment, the count of errors to be reset is the internal count of errors that is compared with the threshold or baseline number of errors. In one embodiment, the error count is the error result, which can be reset in the register or memory location where the result is stored.

Figure 5 is a block diagram of an embodiment of a computing system in which a memory device that generates internal addresses can be implemented. System 500 represents a computing device according to any of the embodiments described herein, and can be a laptop computer, desktop computer, server, gaming or entertainment control system, scanner, copier, printer, routing or switching device, or other electronic device. The system 500 includes a processor 520, which provides processing, operation management, and execution of instructions for the system 500. The processor 520 can include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for the system 500. The processor 520 controls the overall operation of the system 500, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

The memory subsystem 530 represents the main memory of the system 500, and provides temporary storage for code to be executed by the processor 520, or for data values to be used in executing a routine. The memory subsystem 530 can include one or more memory devices, such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices.
The memory subsystem 530, among other things, stores and hosts an operating system (OS) 536 to provide a software platform for the execution of instructions in the system 500. In addition, other instructions 538 are stored and executed from the memory subsystem 530 to provide the logic and processing of the system 500. The OS 536 and the instructions 538 are executed by the processor 520. The memory subsystem 530 includes a memory device 532, in which it stores data, instructions, programs, or other items. In one embodiment, the memory subsystem includes a memory controller 534, which is a memory controller that generates commands and issues them to the memory device 532. It will be understood that the memory controller 534 can be a physical part of the processor 520.

The processor 520 and the memory subsystem 530 are coupled to the bus/bus system 510. The bus 510 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers. Therefore, the bus 510 can include, for example, one or more of a system bus, a peripheral component interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "FireWire"). The buses of the bus 510 can also correspond to interfaces in the network interface 550.

The system 500 also includes one or more input/output (I/O) interfaces 540, a network interface 550, one or more internal mass storage devices 560, and a peripheral interface 570 coupled to the bus 510. The I/O interface 540 can include one or more interface components through which a user interacts with the system 500 (for example, video, audio, and/or alphanumeric interfaces).
The network interface 550 provides the system 500 with the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. The network interface 550 can include an Ethernet adapter, wireless interconnection components, USB (Universal Serial Bus), or other wired or wireless standards-based or proprietary interfaces.

The storage device 560 can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid-state, or optical-based disks, or a combination thereof. The storage device 560 holds code or instructions and data 562 in a persistent state (i.e., the values are retained despite interruption of power to the system 500). The storage device 560 can be generically regarded as "memory", although the memory 530 is the executing or operating memory that provides instructions to the processor 520. Whereas the storage device 560 is nonvolatile, the memory 530 can include volatile memory (i.e., the value or state of the data is indeterminate if power to the system 500 is interrupted).

The peripheral interface 570 can include any hardware interface not specifically mentioned above. Peripherals generally refer to devices that connect dependently to the system 500. A dependent connection is one in which the system 500 provides the software and/or hardware platform on which an operation runs and with which the user interacts.

In one embodiment, the memory subsystem 530 includes internal ECC logic 580, which represents logic to manage the internal ECC for the memory 532 in accordance with any of the embodiments described herein. In one embodiment, the internal ECC 580 generates addresses for performing a read pass to execute an error test. The internal ECC 580 can generate a relative count indicating how many errors have accumulated since the device was manufactured.
Therefore, the error count can be exposed to show how many errors have occurred since the device was manufactured. In one embodiment, the internal ECC 580 can include logic to reset the internal counter (which provides the error accumulation information).

Figure 6 is a block diagram of an embodiment of a mobile device in which a memory device that generates internal addresses can be implemented. The device 600 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, a wearable computing device, or another mobile device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in the device 600.

The device 600 includes a processor 610, which performs the primary processing operations of the device 600. The processor 610 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing components. The processing operations performed by the processor 610 include the operation of an operating platform or operating system on which applications and/or device functions run. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting the device 600 to another device. The processing operations can also include operations related to audio I/O and/or display I/O.

In one embodiment, the device 600 includes an audio subsystem 620, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, and microphone input. Devices for such functions can be integrated into the device 600 or connected to the device 600.
In one embodiment, a user interacts with the device 600 by providing audio commands that are received and processed by the processor 610.

The display subsystem 630 represents hardware (for example, display devices) and software (for example, drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. The display subsystem 630 includes a display interface 632, which includes the particular screen or hardware device used to provide a display to a user. In one embodiment, the display interface 632 includes logic separate from the processor 610 that performs at least some processing related to the display. In one embodiment, the display subsystem 630 includes a touchscreen device that provides both output and input to a user. In one embodiment, the display subsystem 630 includes a high definition (HD) display that provides output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater, and can include formats such as full HD (for example, 1080p), retina displays, 4K (ultra high definition or UHD), or others.

The I/O controller 640 represents hardware devices and software components related to interaction with a user. The I/O controller 640 can operate to manage hardware that is part of the audio subsystem 620 and/or the display subsystem 630. In addition, the I/O controller 640 illustrates a connection point for additional devices that connect to the device 600, through which a user can interact with the system. For example, devices that can be attached to the device 600 may include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, the I/O controller 640 can interact with the audio subsystem 620 and/or the display subsystem 630. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of the device 600. In addition, audio output can be provided as an alternative or supplement to display output. In another example, if the display subsystem includes a touchscreen, the display device also serves as an input device, which can be managed at least in part by the I/O controller 640. There can also be additional buttons or switches on the device 600 to provide I/O functions managed by the I/O controller 640.

In one embodiment, the I/O controller 640 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware that can be included in the device 600. The input can be part of direct user interaction, and can also provide environmental input to the system to affect its operation (for example, filtering for noise, adjusting the display for brightness detection, or applying a flash or other features for the camera). In one embodiment, the device 600 includes power management 650 that manages battery power usage, battery charging, and features related to power-saving operation.

The memory subsystem 660 includes memory device(s) 662 for storing information in the device 600. The memory subsystem 660 can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. The memory 660 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the operation of the applications and functions of the system 600.
In one embodiment, the memory subsystem 660 includes a memory controller 664 (which can also be considered an integral part of the control of the system 600, and can potentially be considered an integral part of the processor 610). The memory controller 664 includes a scheduler that generates commands and issues them to the memory device 662.

The connectivity 670 includes hardware devices (such as wireless and/or wired connectors and communication hardware) and software components (such as drivers, protocol stacks) to enable the device 600 to communicate with external devices. External devices may be standalone devices (such as other computing devices, wireless access points, or base stations) and peripherals (such as headsets, printers, or other devices).

The connectivity 670 can include multiple different types of connectivity. To generalize, the device 600 is illustrated with cellular connectivity 672 and wireless connectivity 674. Cellular connectivity 672 generally refers to cellular network connectivity provided by wireless carriers, such as via GSM (Global System for Mobile Communications) or variations or derivatives, CDMA (Code Division Multiple Access) or variations or derivatives, TDM (Time Division Multiplexing) or variations or derivatives, LTE (Long Term Evolution, also referred to as "4G"), or other cellular service standards. Wireless connectivity 674 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as WiFi), and/or wide area networks (such as WiMax), or other wireless communication. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium.
Wired communication occurs through a solid communication medium.

Peripheral connections 680 include hardware interfaces and connectors, as well as software components (for example, drivers and protocol stacks) to make peripheral connections. It will be understood that the device 600 can both be a peripheral device ("to" 682) to other computing devices and have peripheral devices ("from" 684) connected to it. The device 600 commonly has a "docking" connector for connecting to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on the device 600. Additionally, a docking connector can allow the device 600 to connect to certain peripherals that allow the device 600 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, the device 600 can make peripheral connections 680 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), FireWire, or other types.

In one embodiment, the memory subsystem 660 includes internal ECC logic 690, which refers to logic that manages internal ECC for the memory 662 in accordance with any embodiment described herein. In one embodiment, the internal ECC 690 generates addresses for performing a read pass to perform an error test. The internal ECC 690 can generate a relative count indicating how many errors have accumulated since the device was manufactured; thus, the exposed error count reflects only errors that occurred after manufacture.
In one embodiment, the internal ECC 690 can include logic to reset an internal counter (which provides error accumulation information).

In one aspect, a method for managing error correction information in a memory includes: performing error detection within a range of memory locations in a memory device; incrementing an internal count for each detected error; generating an error result indicating a difference between the internal count and a baseline number of errors preset for the memory device, the preset being based on a number of errors detected on the memory device prior to incorporation into a system; and providing the error result to an associated host of the system, to expose only the number of errors accumulated after the memory device was incorporated into the system.

In one embodiment, performing error detection includes performing a series of operations for detecting errors in response to an error test command received from the host. In one embodiment, performing error detection on the range of memory locations includes performing error detection on the entire memory device. In one embodiment, performing error detection further includes generating addresses within the range of memory locations in the memory device. In one embodiment, incrementing the internal count for each detected error further includes incrementing the count for each single-bit error (SBE) detection and correction performed in the range of memory locations. In one embodiment, the baseline number of errors includes a number of errors detected during manufacturing testing of the memory device. In one embodiment, providing the error result further includes storing the error result in a register for access by the host. In one embodiment, providing the error result further includes indicating the row with the highest number of errors.
In one embodiment, indicating the row with the highest number of errors includes reporting the address of the row and the number of errors in the row. In one embodiment, the method further includes resetting the internal count upon completion of the range of memory locations.

In one aspect, a memory device with internal error correction includes: error detection logic within the memory device to perform internal error detection on a range of memory locations; a counter to increment an internal count for each detected error; comparator logic to generate an error result indicating a difference between the internal count and a baseline number of errors preset for the memory device, the preset being based on a number of errors detected on the memory device prior to incorporation into a system; and a register to store the error result for access by an associated host, without exposing the baseline number.

In one embodiment, the error detection logic to perform internal error detection can perform a series of operations for detecting errors in response to an error test command received from the associated host. In one embodiment, the error detection logic to perform internal error detection on the range of memory locations can perform error detection on the entire memory device. In one embodiment, the error detection logic to perform internal error detection can internally generate addresses in the memory device for the range of memory locations. In one embodiment, the counter to increment the internal count includes a counter that can increment the internal count for each single-bit error (SBE) detection and correction performed in the range of memory locations. In one embodiment, the baseline number of errors includes a number of errors detected during manufacturing testing of the memory device. In one embodiment, the counter can also reset the internal count upon completion of the range of memory locations.
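As a rough illustration of the counting scheme in the method and device aspects above, the following Python sketch models an internal scrub that increments a count per corrected error and exposes only the excess over a factory-preset baseline. All names (`InternalEccCounter`, `read_and_check`) are illustrative assumptions, not part of the described hardware.

```python
class InternalEccCounter:
    """Sketch of the described scheme: count corrected errors during an
    internal scrub and expose only the count in excess of a baseline
    preset at manufacturing test. The baseline itself is never exposed."""

    def __init__(self, baseline_errors, num_rows):
        self._baseline = baseline_errors  # preset before system integration
        self._count = 0
        self._num_rows = num_rows

    def scrub(self, read_and_check):
        """Generate addresses internally and run error detection per row.
        `read_and_check(addr)` returns the number of SBEs corrected there."""
        self._count = 0  # reset at the start of the range
        for addr in range(self._num_rows):
            self._count += read_and_check(addr)

    @property
    def error_result(self):
        """Register value visible to the host: field-accumulated errors."""
        return max(0, self._count - self._baseline)
```

For example, a device preset with a baseline of 3 errors that corrects 5 errors during a field scrub would report an error result of 2, without ever revealing the baseline itself.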
In one embodiment, the memory device further includes error detection logic to identify the row with the highest number of errors, and a storage device to store an indication of the row with the highest number of errors. In one embodiment, the indication includes the address of the row and the number of errors in the row. In one embodiment, the storage device includes a register.

In one aspect, a system includes: a memory controller coupled to a memory device, the memory controller to generate an error detection command to the coupled memory device; and the memory device coupled to the memory controller, the memory device including input/output (I/O) logic to receive commands from the memory controller, and ECC (error correction code) logic to execute ECC internally in the memory device to correct single-bit errors (SBEs) detected in data fetched from a range of memory locations, generate an error count indicating a number of corrected SBEs in excess of a baseline number of SBEs preset for the memory device, and provide the error count to the memory controller without exposing the baseline number. In one embodiment, the memory device of the system is a memory device in accordance with any embodiment of the memory device aspects set forth above.

In one aspect, a device includes components to perform operations to execute a method for managing error correction information in accordance with any embodiment of the method aspect.
In one aspect, a manufactured product includes a computer-readable storage medium having content stored thereon which, when accessed, causes operations to be performed to execute a method for managing error correction information in accordance with any embodiment of the method aspect.

In one aspect, a second method for managing error correction information in a memory includes: fetching data in a memory device in response to a read access request from an associated memory controller; performing internal error correction in the memory device to correct single-bit errors (SBEs) detected in the fetched data; generating an error count indicating a number of corrected SBEs in excess of a baseline number of SBEs preset for the memory device based on manufacturing testing; and providing the error count to the memory controller, to expose only the number of SBEs accumulated after manufacturing.

In one embodiment, the read access request is part of an error detection test routine generated by the memory controller. In one embodiment, performing internal error correction further includes internally generating addresses in the memory device for fetching the data. In one embodiment, providing the error count includes storing the error count in a register for access by the memory controller. In one embodiment, the second method further includes resetting the error count upon completion of the range of memory locations. In one embodiment, the second method further includes resetting the error count at initialization of the memory device. In one embodiment, the second method further includes: identifying the row with the highest number of SBEs; and providing an indication of the row with the highest number of SBEs.
In one embodiment, the indication includes the address of the row and the number of errors in the row.

In one aspect, a memory device with internal error correction includes: logic within the memory device to fetch data in response to a read access request from an associated memory controller and to perform internal error correction to correct single-bit errors (SBEs) detected in the fetched data; a counter to increment an error count indicating a number of corrected SBEs in excess of a baseline number of SBEs preset for the memory device based on manufacturing testing; and logic to provide the error count to the memory controller, to expose only the number of SBEs accumulated after manufacturing.

The second memory device further includes features for operation in accordance with any embodiment of the second method aspect. In one aspect, a device includes means for performing operations to execute a method for managing error correction information in accordance with any embodiment of the second method aspect.
In one aspect, a manufactured product includes a computer-readable storage medium having content stored thereon which, when accessed, causes operations to be performed to execute a method for managing error correction information in accordance with any embodiment of the second method aspect.

In one aspect, a third method for managing error correction information in a memory includes: receiving a command in a memory device from an associated memory controller; generating addresses within the memory device to execute operations of the command; performing internal error correction in the memory device to correct single-bit errors (SBEs) detected in fetched data; generating an error count indicating a number of corrected SBEs in excess of a baseline number of SBEs preset for the memory device based on manufacturing testing; and providing the error count to the memory controller, to expose only the number of SBEs accumulated after manufacturing.

In one embodiment, performing error detection includes performing a series of operations for detecting errors in response to an error test command received from the host. In one embodiment, performing error detection on the range of memory locations includes performing error detection on the entire memory device. In one embodiment, performing error detection further includes generating addresses within the range of memory locations in the memory device. In one embodiment, generating the error count includes incrementing the count for each single-bit error (SBE) detection and correction performed in the range of memory locations in excess of the baseline. In one embodiment, generating the error count further includes incrementing the count for each SBE detection and correction performed, comparing the count with the baseline, and storing only the number of SBEs in excess of the baseline.
In one embodiment, the baseline number of errors includes a number of errors detected during manufacturing testing of the memory device. In one embodiment, providing the error count further includes storing the error count in a register for access by the host. In one embodiment, the third method further includes resetting the error count upon completion of the range of memory locations. In one embodiment, the third method further includes: identifying the row with the highest number of SBEs; and providing an indication of the row with the highest number of SBEs. In one embodiment, the indication includes the address of the row and the number of errors in the row.

In one aspect, a memory device with internal error correction includes: logic within the memory device to receive a command from an associated memory controller, generate addresses within the memory device to execute operations of the command, and perform internal error correction to correct single-bit errors (SBEs) detected in fetched data; a counter to generate an error count indicating a number of corrected SBEs in excess of a baseline number of SBEs preset for the memory device based on manufacturing testing; and logic to provide the error count to the memory controller, to expose only the number of SBEs accumulated after manufacturing.

The third memory device further includes features for operation in accordance with any embodiment of the third method aspect. In one aspect, a device includes means for performing operations to execute a method for managing error correction information in accordance with any embodiment of the third method aspect.
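The per-row reporting embodiments above (identifying and indicating the row with the highest number of SBEs) can be sketched in the same illustrative style. `RowErrorTracker` and its methods are hypothetical names for the purpose of illustration, not hardware described in the text.

```python
from collections import Counter

class RowErrorTracker:
    """Sketch of the per-row reporting described above: tally corrected
    SBEs per row and report the address and error count of the row with
    the most errors."""

    def __init__(self):
        self._per_row = Counter()

    def record_sbe(self, row_address, corrected=1):
        """Record corrected single-bit errors observed at a row."""
        self._per_row[row_address] += corrected

    def worst_row(self):
        """Return (row_address, error_count) for the highest-error row,
        or None if no errors have been recorded."""
        if not self._per_row:
            return None
        addr, count = self._per_row.most_common(1)[0]
        return addr, count
```

In hardware this would plausibly be a small register pair (row address, count) updated during the scrub, rather than a full per-row table; the dictionary here simply makes the reported indication explicit.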
In one aspect, a manufactured product includes a computer-readable storage medium having content stored thereon which, when accessed, causes operations to be performed to execute a method for managing error correction information in accordance with any embodiment of the third method aspect.

The flowcharts illustrated herein provide examples of sequences of various process actions. A flowchart can indicate operations to be executed by software or firmware routines, as well as physical operations. In one embodiment, a flowchart can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as examples: the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible.

To the extent that various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content of the embodiments described herein can be provided via a manufactured product with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface.
A machine-readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare it to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

The various components described herein can be means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, and so forth.

Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.
A determination can be made of a type of memory access workload for an application. A determination can be made whether the memory access workload for the application is associated with sequential read operations. The data associated with the application can be stored at one of a cache of a first type or another cache of a second type based on the determination of whether the memory workload for the application is associated with sequential read operations. |
CLAIMS

What is claimed is:

1. A method comprising:
determining a memory access workload for an application;
determining whether the memory access workload for the application is associated with sequential read operations; and
storing, by a processing device, data associated with the application at one of a cache of a first type or another cache of a second type based on the determination of whether the memory workload for the application is associated with sequential read operations.

2. The method of claim 1, further comprising storing the data associated with the application at the cache of the first type in response to the memory access workload being associated with sequential read operations, wherein the first type is read-only.

3. The method of claim 1, further comprising determining whether the memory access workload is associated with write and read operations.

4. The method of claim 3, further comprising storing the data associated with the application at the another cache of the second type in response to the memory access workload being associated with write and read operations, wherein the second type is write-read.

5. The method of claim 1, wherein determining the memory access workload for the application comprises:
receiving a plurality of memory access requests from the application;
determining a pattern based on the plurality of memory access requests; and
determining the memory access workload for the application based on the pattern.

6. The method of claim 1, further comprising:
receiving the data associated with the application in one or more write requests to write the data to a memory component, wherein the one or more write requests have a fixed data size.

7.
The method of claim 6, further comprising:
storing the data associated with the application at one or more sectors of a cache line of the cache of the second type to accumulate the data in the cache line in response to determining that the memory access workload for the application is associated with write and read operations, wherein each of the one or more sectors have the fixed data size.

8. The method of claim 7, further comprising:
determining when a cumulative data size of the one or more sectors storing the data associated with the application satisfies a threshold condition; and
responsive to determining that the cumulative data size of the one or more sectors storing the data associated with the application satisfies the threshold condition, transmitting a request to store the data at the memory component.

9. The method of claim 1, further comprising:
receiving a command to preload the cache of the first type or the another cache of the second type with the data associated with the application; and
loading the cache of the first type or the another cache of the second type with the data associated with the application prior to a request to access the data being received from the application.

10. A system comprising:
a memory device; and
a processing device, operatively coupled with the memory device, to:
determine a memory access workload for an application;
determine whether the memory access workload for the application is associated with sequential read operations; and
store data associated with the application at one of a cache of a first type or another cache of a second type based on the determination of whether the memory workload for the application is associated with sequential read operations.

11.
The system of claim 10, wherein the processing device is further to store the data associated with the application at the cache of the first type in response to the memory access workload being associated with sequential read operations, wherein the first type is read-only.

12. The system of claim 10, wherein the processing device is further to determine whether the memory access workload is associated with write and read operations.

13. The system of claim 12, wherein the processing device is further to store the data associated with the application at the another cache of the second type in response to the memory access workload being associated with write and read operations, wherein the second type is write-read.

14. The system of claim 10, wherein to determine the memory access workload for the application, the processing device is further to:
receive a plurality of memory access requests from the application;
determine a pattern based on the plurality of memory access requests; and
determine the memory access workload for the application based on the pattern.

15. The system of claim 10, wherein the processing device is further to:
receive the data associated with the application in one or more write requests to write the data to a memory component, wherein the one or more write requests have a fixed data size.

16.
The system of claim 15, wherein the processing device is further to:
store the data associated with the application at one or more sectors of a cache line of the cache of the second type to accumulate the data in the cache line based on a determination of whether the memory workload for the application is associated with write and read operations, wherein each of the one or more sectors have the fixed data size;
determine when a cumulative data size of the one or more sectors storing the data associated with the application satisfies a threshold condition; and
responsive to determining that the cumulative data size of the one or more sectors storing the data associated with the application satisfies the threshold condition, transmit a request to store the data at the memory component.

17. A non-transitory machine-readable medium storing instructions that, when executed by a processing device, cause the processing device to:
receive a plurality of requests to access data at a memory component, wherein each of the plurality of requests specifies a fixed size of data;
store data of each of the plurality of requests into a respective sector of a plurality of sectors of a cache line of a cache to accumulate the data in the cache line, wherein each respective sector of the plurality of sectors of the cache line stores the data at the fixed size;
determine whether a cumulative data size of the plurality of sectors storing the data of the plurality of requests satisfies a threshold condition; and
responsive to determining that the cumulative data size of the plurality of sectors of the cache line storing the data satisfies the threshold condition, transmit a request to store the data at the memory component.

18. The non-transitory machine-readable medium of claim 17, wherein the fixed size of data is specified by a protocol used by a host system to interface with a memory sub-system comprising the memory component.

19. The non-transitory machine-readable medium of claim 17, wherein the threshold condition corresponds to the cumulative data size satisfying a data size parameter specified for accessing the memory component.

20. The non-transitory machine-readable medium of claim 17, wherein the processing device is further to:
receive a command to preload the cache with other data associated with an application; and
write the other data associated with the application to the cache prior to receiving the plurality of requests to access the data at the memory component.
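Claims 17-19 describe accumulating fixed-size requests into sectors of a cache line and flushing to the memory component once a cumulative-size threshold is satisfied. A minimal Python sketch of that behavior follows; the class name, callback, and the 64-byte sector / 512-byte threshold sizes are illustrative assumptions, not values taken from the claims.

```python
class SectoredCacheLine:
    """Sketch of the claimed behavior: accumulate fixed-size writes into
    sectors of a cache line, then flush the whole line to the memory
    component once the cumulative size satisfies a threshold (e.g., a
    backing store's preferred access size built from smaller host
    requests)."""

    def __init__(self, sector_size, threshold, flush):
        self.sector_size = sector_size  # fixed size of each host request
        self.threshold = threshold      # preferred backing-store size
        self.flush = flush              # callback: store accumulated data
        self.sectors = []

    def write(self, data):
        """Store one fixed-size request in the next sector; flush the
        accumulated line when the threshold condition is satisfied."""
        assert len(data) == self.sector_size, "requests have a fixed size"
        self.sectors.append(data)
        if len(self.sectors) * self.sector_size >= self.threshold:
            self.flush(b"".join(self.sectors))
            self.sectors = []
```

With a 64-byte sector size and a 512-byte threshold, eight writes accumulate into one flush, mirroring the description's example of eight 64 KB bus requests backing a single 512 KB component access.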
SEPARATE READ-ONLY CACHE AND WRITE-READ CACHE IN A MEMORY SUB-SYSTEM

TECHNICAL FIELD

[001] Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to separate read-only cache and write-read cache in a memory sub-system.

BACKGROUND

[002] A memory sub-system can be a storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD). A memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). A memory sub-system can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components.

BRIEF DESCRIPTION OF THE DRAWINGS

[003] The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

[004] FIG. 1 illustrates an example computing environment that includes a memory sub-system in accordance with some embodiments of the present disclosure.

[005] FIG. 2 illustrates an example caching component and local memory of the memory sub-system in accordance with some embodiments of the present disclosure.

[006] FIG. 3 is a flow diagram of an example method to use a separate read-only cache and write-read cache based on a determined memory access workload of an application in accordance with some embodiments of the present disclosure.

[007] FIG. 4 is a flow diagram of an example method to use sectors having fixed data sizes in a cache line to accumulate data in a cache in accordance with some embodiments of the present disclosure.

[008] FIG. 5 illustrates an example read-only cache and a write-read cache in accordance with some embodiments of the present disclosure.

[009] FIG.
6 is a flow diagram of an example method to store a read request for data that is not present in a cache in an outstanding command queue in accordance with some embodiments of the present disclosure.
[0010] FIG. 7 is a flow diagram of an example method to execute the requests stored in an outstanding command queue in accordance with some embodiments of the present disclosure.

[0011] FIG. 8 illustrates example read-only outstanding command queues, write-read outstanding command queues, a read-only content-addressable memory, and a write-read content-addressable memory in accordance with some embodiments of the present disclosure.

[0012] FIG. 9 is a flow diagram of an example method to determine a schedule to execute requests in a memory sub-system in accordance with some embodiments of the present disclosure.

[0013] FIG. 10 is a flow diagram of another example method to determine a schedule to execute requests in a memory sub-system in accordance with some embodiments of the present disclosure.

[0014] FIG. 11 illustrates an example of using a priority scheduler to determine a schedule to execute requests based on priority indicators in accordance with some embodiments of the present disclosure.

[0015] FIG. 12 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

[0016] Aspects of the present disclosure are directed to separate read-only cache and write-read cache in a memory sub-system. A memory sub-system is also hereinafter referred to as a “memory device.” An example of a memory sub-system is a storage device that is coupled to a central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD). Another example of a memory sub-system is a memory module that is coupled to the CPU via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc.
In some embodiments, the memory sub-system can be a hybrid memory/storage sub-system. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.

[0017] The memory sub-system can include multiple memory components that can store data from the host system. In some host systems, the performance of applications executing
on the host system can depend heavily on the speed at which data can be accessed in a memory sub-system. To accelerate data access, conventional memory sub-systems use spatial and temporal locality of memory access patterns to optimize performance. These memory sub-systems can use higher-performance and lower-capacity media, referred to as caches, to store data that is accessed frequently (temporal locality) or data located in a memory region that has recently been accessed (spatial locality).

[0018] Each of the memory components can be associated with a protocol that specifies the size of a management unit used by the memory component and/or the preferred sizes for requests to access data stored at the management unit. For example, a protocol for one memory component can specify that 512-kilobyte (KB) size requests be performed on the memory component. An application executing on a host system can initially request to read 512 KB of data from the memory component, but the 512 KB request is typically broken up into smaller granularity requests (e.g., eight 64 KB requests) due to a protocol of a bus used to communicate between the host system and the memory sub-system. The conventional memory sub-system can perform the smaller granularity requests to obtain the data from the memory component, which can then be stored in a cache and/or returned to the requesting application. Executing the smaller granularity requests on a memory component that is capable of handling larger granularity requests can lead to faster wear of the memory component and lower endurance, as more read operations will be performed at the memory component.

[0019] Additionally, some applications that execute on a host system can use a memory sub-system as a main memory. In such an instance, the address space generally has separate memory address regions for reading data and writing data.
In conventional memory sub-systems, a single cache that is capable of writing and reading data can be used, which may not be desirable for different memory access workloads. For example, read and write request latencies can differ, and using a single cache can decrease performance of the memory sub-system when an application is writing and reading to different address spaces.

[0020] The different types of memory access workloads can be sequential (in-order) and random (out-of-order) accesses. For example, an application can request to read original data from an address, write different data to the address, and then read the different data from the address. If the requests are not handled in the proper order, there can be data hazards, such as the memory sub-system returning the wrong data to the application (e.g., returning the original data in response to a read request for the different data before the different data has been written).
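The sequential-versus-random distinction described above can be illustrated with a toy classifier that inspects a window of recent requests and routes the workload to the read-only or the write-read cache. The heuristic below is an assumption for illustration only, not the detection logic of the disclosure.

```python
def classify_workload(requests, window=8):
    """Classify a memory access workload from recent (op, address) pairs.

    Sequential read-only streams are candidates for the read-only cache;
    any writes, or reads at non-consecutive addresses, are treated as a
    random/mixed workload and routed to the write-read cache. The
    window size and the strict address+1 check are illustrative choices.
    """
    recent = requests[-window:]
    # Any write in the window means the workload is not read-only.
    if any(op == "write" for op, _ in recent):
        return "write-read"
    # Reads at strictly consecutive addresses count as sequential.
    addrs = [addr for _, addr in recent]
    sequential = all(b - a == 1 for a, b in zip(addrs, addrs[1:]))
    return "read-only" if sequential else "write-read"
```

A real implementation would likely track the pattern incrementally per application rather than re-scanning a request list, but the routing decision it produces is the same: sequential read streams populate the read-only cache, everything else uses the write-read cache.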
[0021] Further, in some instances, applications can request to access data at different addresses. The data can be located at the same or different memory components. The latency of returning the data at different addresses from the same or different memory components can vary based on various factors, such as the speed of the memory component, the size of the data requested, and the like. A conventional memory sub-system typically waits until the data at the address of a request that was received first is returned from the memory components without considering whether data at a different address of another request is returned from the memory components faster. That is, the data at the different address can sit idle after being returned from the memory components until the data at the address of the request received first is stored in a cache. This can reduce data throughput in the memory sub-system.[0022] Aspects of the present disclosure address the above and other deficiencies by using a separate read-only cache and write-read cache in a memory sub-system. A separate read-only cache and write-read cache in a memory sub-system front-end can provide applications executing on host systems different spaces to read from and write to. For example, applications can request certain virtual addresses that are translated to logical addresses by a host operating system. The logical address can be translated to a physical address that can be maintained in different spaces to read from and write to using the separate read-only cache and write-read cache. The separate read-only and write-read caches can be located between the host system and the media components, also referred to as a "backing store," of the memory sub-system. The read-only cache can be used for sequential read requests for data in the memory components and the write-read cache can be used to handle read and write requests for data in the media components.
The separate caches can improve performance of the memory sub-system by reading/writing data faster than accessing the slower backing store for every request. Further, the separate caches improve endurance of the backing store by reducing the number of requests to the backing store.[0023] In some embodiments, a memory sub-system can detect a memory access workload, such as sequential memory access workloads or random memory access workloads. Sequential memory access workloads can refer to read requests occurring one after the other to the same or sequential addresses. The data requested in the sequential memory access workloads can be populated in the read-only cache for faster access than using the backing store every time.[0024] Random memory access workloads can refer to writes and reads occurring randomly. Certain applications can use random memory access workloads. The data
associated with the random write and read requests can be populated in the write-read cache. For example, data requested to be written to the backing store can be initially written to the write-read cache; when the data is later requested to be read, the write-read cache can return the written data without having to access the backing store.[0025] Each of the read-only cache and the write-read cache can use a respective content-addressable memory (CAM) to determine if data that is associated with requests received from the host system is present in the read-only cache and/or the write-read cache. For example, the memory sub-system can use the CAMs to determine whether tags for requested data are stored in the read-only and/or write-read caches. A data request has an address specifying the location of the requested data. The address can be broken up into portions, such as an offset that identifies a particular location within a cache line, a set that identifies the set that contains the requested data, and a tag that includes one or more bits of the address that can be saved in each cache line with its data to distinguish different addresses that can be placed in a set. The CAM corresponding to the read-only cache or the write-read cache that is to store the requested data can store the tag for the requested data to enable faster lookup than searching the cache itself when the requests are received.[0026] Further, as discussed above, the host system can provide a request for data (e.g., 512 bytes) by breaking the request into smaller granularity requests of 64 bytes based on the protocol used by a memory bus that communicatively couples the host system to the memory sub-system. In some embodiments, each of the read-only cache and the write-read cache use sectors to aggregate the smaller granularity requests to the larger granularity of the cache line (e.g., aggregating eight 64-byte requests to achieve the 512-byte cache line size).
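The tag/set/offset decomposition described in [0025] can be sketched as follows; the 512-byte line size and 64-set geometry are illustrative assumptions, not values taken from the disclosure:

```python
LINE_SIZE = 512    # bytes per cache line (assumed for illustration)
NUM_SETS = 64      # number of sets (assumed for illustration)
OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 9 bits address within a line
SET_BITS = NUM_SETS.bit_length() - 1       # 6 bits select the set

def split_address(addr):
    """Break an address into (tag, set, offset) as described in [0025]."""
    offset = addr & (LINE_SIZE - 1)
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)  # remaining high bits
    return tag, set_index, offset

tag, set_index, offset = split_address(0x123456)
print(hex(tag), set_index, hex(offset))  # 0x24 26 0x56
```

The tag is the only portion that needs to be stored in the CAM, since the set is implied by the CAM entry's position and the offset selects bytes within the line.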
The sectors can have a fixed size that is specified by a memory access protocol used by the host system and the size of a management unit of the memory component in the backing store that stores the data. For example, if the size of the management unit is 512 bytes in the memory component and the protocol specifies using 64-byte requests, then the sectors can have a fixed data size of 64 bytes and the cache line can include eight sectors to equal the 512 bytes of the management unit. In some instances, the management unit can be 128 bytes, for example, and just two sectors having a fixed data size of 64 bytes can be used. The number of sectors of the write-read cache can be larger than the number of sectors for the read-only cache because it is desirable to perform fewer writes to the backing store to improve endurance of the backing store. The memory sub-system can execute one 512-byte request to the backing store, instead of eight 64-byte requests, to reduce the number of requests that are made to the backing store having large management units in memory components, thereby improving
endurance of the memory components.[0027] In some embodiments, the read-only cache and/or the write-read cache can be preloaded with data prior to receiving memory access requests from the host system. For example, the read-only cache and/or the write-read cache can be preloaded during initialization of an application executing on the host system. A memory protocol can include semantics to enable an application to send preload instructions to the memory sub-system to preload the read-only cache and/or the write-read cache with desired data. One or more read requests can be generated by the memory sub-system to obtain the data from the backing store. As described below, outstanding command queues can be used to store the requests in the order in which they are generated and priority scheduling can be performed to determine a schedule of executing the requests. Fill operations can be generated to store the data obtained from the backing store in one or more sectors of a cache line in the read-only cache and/or the write-read cache. The applications can send the preload instructions based on the data that the applications typically use during execution or the data that the application plans to use.[0028] Further, outstanding command queues can be used to store read requests and write requests to prevent data hazards and enhance the quality of service of accessing data in the memory sub-system. The outstanding command queues can improve request traffic throughput based on different types of traffic in the memory sub-system. For example, the memory sub-system can use control logic and the outstanding command queues to provide in-order accesses for data requested at the same cache line and out-of-order accesses to data requested at different cache lines. A separate outstanding command queue can be used for the read-only cache and the write-read cache.
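As a rough illustration of the sector aggregation described in [0026], the following Python sketch (hypothetical names; the eight 64-byte-sector geometry is taken from the example above) accumulates small host writes in a sectored cache line and emits a single full-line request to the backing store:

```python
SECTOR = 64             # bytes per sector (from the protocol example above)
SECTORS_PER_LINE = 8    # 8 x 64 B = one 512 B management unit

class SectoredLine:
    """Accumulates small host writes; emits one full-line backing-store write."""
    def __init__(self):
        self.sectors = [None] * SECTORS_PER_LINE

    def write_sector(self, index, data):
        assert len(data) == SECTOR
        self.sectors[index] = data
        return all(s is not None for s in self.sectors)  # line complete?

    def flush(self):
        # One 512-byte request instead of eight 64-byte requests.
        return b"".join(self.sectors)

line = SectoredLine()
writes_to_backing = 0
for i in range(SECTORS_PER_LINE):
    if line.write_sector(i, bytes([i]) * SECTOR):
        payload = line.flush()
        writes_to_backing += 1
print(writes_to_backing, len(payload))  # 1 512
```

The backing store sees one write of a full management unit rather than eight partial writes, which is the endurance benefit the paragraph above describes.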
Each queue of the read-only outstanding command queues can correspond to a respective cache line in the read-only cache, and each queue of the write-read outstanding command queues can correspond to a respective cache line in the write-read cache. There can be fewer queues in each of the outstanding command queues than cache lines in the read-only cache and the write-read cache.[0029] In general, requests can be received from the host system. Both a read-only content-addressable memory (CAM) and a write-read CAM can be searched to determine if a matching tag associated with an address included in the request is present in the CAMs. If a matching tag is found, the data can be returned from the corresponding cache line for a read request or the data can be written to the cache line for a write request. If the matching tag is not found in either CAM, the read-only outstanding command queue and the write-read
outstanding command queue can be searched for the matching tag. If the matching tag is found in either of the outstanding command queues, then there are pending requests for the cache line assigned to the tag and the received request is stored in the queue behind the other requests for the data at the address. If the matching tag is not found in either of the outstanding command queues, a queue can be selected as the desired outstanding command queue and the tag of the request can be assigned to the selected outstanding command queue. Further, the memory sub-system can set a block bit to block the selected outstanding command queue and store the request in the selected outstanding command queue. Requests in the same queue are processed in the order in which they are received. There can be out-of-order access to the different cache lines based on when requests are received and by using the block bit to block and unblock different outstanding command queues assigned to the different cache lines, as described in further detail below.[0030] In some embodiments, to further improve performance and quality of service of the memory sub-system, a priority scheduler can be used with a priority queue to determine a schedule of when to execute requests and fill operations. As described above, the outstanding command queues can queue misses for read requests for data and misses for write requests for data in the caches. A priority scheduler can determine a schedule of when to execute the requests based on when the requests are received.
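The lookup order described above (caches first, then outstanding command queues, then allocation of a free queue, with a pend fallback when every queue is busy) can be sketched in Python; the class and field names are hypothetical, not taken from the disclosure:

```python
from collections import deque

class CacheFrontEnd:
    """Sketch of the lookup order: CAMs first, then outstanding command
    queues (OCQs); a miss everywhere allocates and blocks a free OCQ."""
    def __init__(self, num_queues=4):
        self.ro_cam = {}   # tag -> cache line data (read-only cache)
        self.wr_cam = {}   # tag -> cache line data (write-read cache)
        self.ocqs = [{"tag": None, "blocked": False, "q": deque()}
                     for _ in range(num_queues)]

    def handle(self, tag, request):
        if tag in self.ro_cam or tag in self.wr_cam:
            return "hit"                           # serve from cache
        for ocq in self.ocqs:                      # pending miss for same line?
            if ocq["tag"] == tag:
                ocq["q"].append(request)
                return "queued-behind-pending"
        for ocq in self.ocqs:                      # allocate a free OCQ
            if ocq["tag"] is None:
                ocq.update(tag=tag, blocked=True)  # block until fill returns
                ocq["q"].append(request)
                return "miss-allocated"
        return "pend"                              # all OCQs busy -> pend queue

fe = CacheFrontEnd()
print(fe.handle(0x24, "read A"))   # miss-allocated
print(fe.handle(0x24, "read B"))   # queued-behind-pending
fe.ro_cam[0x99] = b"..."
print(fe.handle(0x99, "read C"))   # hit
```

Requests for the same tag land in the same FIFO (in-order per cache line), while different tags occupy different queues, allowing out-of-order completion across cache lines.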
The priority scheduler can generate and assign priority indicators (e.g., tokens having a priority value) to the requests to maintain the order for the requests and the fill operations that are generated to store the data obtained from the backing store at cache lines of the cache.[0031] For example, for read request misses, the priority scheduler can generate a priority indicator having a higher priority value for a fill operation associated with the particular read request that can be assigned when the data associated with the particular read request is obtained from the backing store. When the requests are stored in the outstanding command queues and the schedule for execution is determined, the priority scheduler can relay the requests to be stored in the priority queue. The requests can be processed in the order in which they are stored in the priority queue to obtain data associated with the requests from the backing store or write data associated with the requests to the backing store. The data that is returned from the backing store can be stored in a fill queue with a fill operation that is assigned a priority indicator. The priority indicators can specify to perform fill operations in the fill queue first and can be used to regulate the processing of the requests through the outstanding command queues.[0032] As described further below, there are certain instances when requests for data
stored at different cache lines can be executed out of order. That is, one request to read data can be executed from the outstanding command queue but another request in the same outstanding command queue for the same data can be blocked to allow execution of yet another request in a different outstanding command queue. In such instances, the requests can be executed out of order based on the priority indicators that are assigned to the requests and the fill operations associated with the requests. Performing the requests out of order between the outstanding command queues can be done to prevent applications from having to wait on data that is obtained from the backing store. Such a technique can improve the quality of service of returning data to the host system, thereby improving performance of the memory sub-system.[0033] Advantages of the present disclosure include, but are not limited to, improved endurance of the memory components by using sectored cache lines to accumulate requests so that the number of requests performed on the memory components can be reduced. Also, using separate read-only and write-read caches can provide separate spaces for reading data from and writing data to for applications executing on the host system. The separate spaces can improve performance of accessing data for the applications by detecting the type of memory access workload used by the applications and selecting an appropriate cache to fulfill the memory accesses for the applications. Additionally, the quality of service and performance of the memory sub-system can be improved by using the outstanding command queues and priority scheduler to determine a schedule of executing the requests.[0034] FIG. 1 illustrates an example computing environment 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media, such as memory components 112A to 112N. 
The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory sub-system is a storage system. An example of a storage system is an SSD. In some embodiments, the memory sub-system 110 is a hybrid memory/storage sub-system. In general, the computing environment 100 can include a host system 120 that uses the memory sub-system 110. For example, the host system 120 can write data to the memory sub-system 110 and read data from the memory sub-system 110.[0035] The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or similar computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory sub-system 110 so that the host system 120 can read data from or write data to the
memory sub-system 110. The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory sub-system 110 is coupled with the host system 120 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120.[0036] The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120.
Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell
being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.[0037] The memory system controller 115 (hereinafter referred to as“controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG. 
1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory sub-system 110 may not include a controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).[0038] In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112A to 112N as well as convert responses associated with the memory components 112A to 112N into information for the host system 120.
[0039] The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N.[0024] The memory sub-system 110 includes a caching component 113 that can use a separate read-only cache and write-read cache in a memory sub-system. In some embodiments, the controller 115 includes at least a portion of the caching component 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, the caching component 113 is part of the host system 120, an application, or an operating system.[0025] The caching component 113 can use a separate read-only cache and write-read cache in the memory sub-system 110. The read-only cache can be used for sequential read requests for data in the memory components and the write-read cache can be used to handle read and write requests for data in the media components. The separate caches can improve performance of the memory sub-system by reading/writing data faster than accessing the slower backing store every time. Further, the separate caches improve endurance of the backing store by reducing the number of requests to the backing store by using sectors in cache lines. In some embodiments, the caching component 113 can detect a memory access workload, such as sequential memory access workloads or random memory access workloads. The data requested in the sequential memory access workloads can be populated in the read-only cache for faster access than using the backing store every time. The data associated with the random write and read requests can be populated in the write-read cache.
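One simple way to detect such workloads (a heuristic sketch for illustration, not the detection method claimed by the disclosure) is to measure how often consecutive request addresses land near one another; the line size and threshold below are assumed values:

```python
def classify_workload(addresses, line_size=512, threshold=0.75):
    """Heuristic sketch: a run of requests is 'sequential' when most
    consecutive addresses fall within one cache line of the previous one."""
    if len(addresses) < 2:
        return "sequential"
    steps = [abs(b - a) for a, b in zip(addresses, addresses[1:])]
    near = sum(1 for s in steps if s <= line_size)
    return "sequential" if near / len(steps) >= threshold else "random"

print(classify_workload([0, 512, 1024, 1536]))    # sequential
print(classify_workload([0, 40960, 512, 98304]))  # random
```

A "sequential" classification would steer data into the read-only cache, while "random" would steer reads and writes into the write-read cache.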
In some embodiments, the caching component 113 can receive preload instructions from one or more applications executing on the host system 120 and preload the read-only cache and/or the write-read cache to improve quality of service.[0026] Further, the caching component 113 can use outstanding command queues to store read requests and write requests to prevent data hazards and enhance the quality of service of accessing data in the memory sub-system. The outstanding command queues can improve request traffic throughput based on different types of traffic in memory sub-systems. The controller can use control logic and the outstanding command queues to provide in-order accesses for data requested at the same cache line and out-of-order accesses to data requested at different cache lines.
[0027] In some embodiments, to further improve performance and quality of service of the memory sub-system, the caching component 113 can use a priority scheduler with a priority queue to determine a schedule of when to execute requests and fill operations. As described above, the outstanding command queues can queue misses for read requests for data and misses for write requests for data in the caches. A priority scheduler can determine a schedule of when to execute the requests based on when the requests are received. The priority scheduler can generate and assign priority indicators (e.g., tokens having a priority value) to the requests to maintain the order for the requests and the fill operations that are generated to store the data obtained from the backing store at cache lines of the cache.[0028] FIG. 2 illustrates an example caching component 113 and local memory 119 of the memory sub-system 110 in accordance with some embodiments of the present disclosure. As depicted, the local memory 119 can include a separate read-only cache 200 and a write-read cache 202. The caching component 113 can include a read-only content-addressable memory (CAM) 204 for the read-only cache 200, a write-read CAM 206 for the write-read cache 202, read-only outstanding command queues 208, and write-read outstanding command queues 210. The read-only outstanding command queues 208 and the write-read outstanding command queues 210 can be first-in, first-out (FIFO) queues. The structure and contents of the read-only CAM 204, the write-read CAM 206, the read-only outstanding command queues 208, and the write-read outstanding command queues 210 are discussed further below. The caching component 113 also includes a priority scheduler 212 that determines a schedule for executing requests and/or fill operations using priority indicators (e.g., numerical tokens).
The caching component 113 can include a state machine that also determines the number of read requests that are needed to fill a cache line of the read-only cache 200 or write-read cache 202 with data from the backing store. The priority scheduler 212 can also include arbitration logic that determines the order in which requests and/or fill operations are to execute. The arbitration logic can specify scheduling requests and/or fill operations in the order in which the operations are received. One purpose of the arbitration logic can be to avoid keeping applications waiting once the caching component 113 has obtained the data from the backing store. As such, the priority scheduler 212 can assign a higher priority to fill operations and data. Additional functionality of the priority scheduler 212 is discussed below.[0029] The caching component 113 also includes various queues that are used for different purposes. The queues can be first-in, first-out (FIFO) queues. As such, the queues can be used to process requests, operations, and/or data in the order in which the requests,
operations, and/or data are received and stored in the various queues. The caching component 113 can include a fill queue 214, a hit queue 216, an evict queue 218, a priority queue 220, and a pend queue 222. The fill queue 214 can store data obtained from the backing store and fill operations generated for the data. The fill operations can be generated when a read request is received and the requested data is not found (cache miss) in either read-only cache 200 or write-read cache 202. The hit queue 216 can store the requests for data that is found (cache hit) in the read-only cache 200 or the write-read cache 202.[0030] The evict queue 218 can be used to evict data from the read-only cache 200 and/or the write-read cache 202 as desired. For example, when the read-only cache 200 and/or the write-read cache 202 are full (every cache line includes at least some valid data in one or more sectors), an eviction policy such as least recently used (LRU) can be used to select the cache line to evict. The data of the selected cache line can be read out of the read-only cache 200 and/or write-read cache 202 and stored in the evict queue 218. The selected cache line can then be invalidated by setting a valid bit to an invalid state. The invalid cache line can be used to store subsequent data.[0031] The priority queue 220 can store requests to execute on the backing store. The priority scheduler 212 can assign priority indicators to each request that is received and/or fill operation that is generated for the requests when the requests are received. The priority scheduler 212 can use the priority indicators to determine a schedule of executing the requests and/or fill operations. Based on the determined schedule, the priority scheduler 212 stores the request in the priority queue 220 to be executed on the backing store in the order the requests are stored in the priority queue 220.
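The priority-indicator scheme can be approximated with a small Python sketch in which fill operations always outrank new requests and arrival order breaks ties; the numeric token values and names are illustrative assumptions, not part of the disclosure:

```python
import heapq
import itertools

FILL, REQUEST = 0, 1   # lower value = higher priority: fills run first

class PriorityScheduler:
    """Sketch: fill operations (data already returned from the backing
    store) are scheduled ahead of new requests; ties keep arrival order."""
    def __init__(self):
        self.heap = []
        self.counter = itertools.count()  # preserves FIFO order within a class

    def add(self, kind, op):
        heapq.heappush(self.heap, (kind, next(self.counter), op))

    def next_op(self):
        return heapq.heappop(self.heap)[2]

sched = PriorityScheduler()
sched.add(REQUEST, "read miss X")
sched.add(FILL, "fill line for W")
sched.add(REQUEST, "write Y")
print([sched.next_op() for _ in range(3)])
# ['fill line for W', 'read miss X', 'write Y']
```

Giving fills the higher priority matches the goal stated above: data already returned from the backing store reaches the cache (and the waiting application) before new backing-store traffic is issued.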
The pend queue 222 can store requests that are received for data not found in the caches 200 and 202 when no read-only outstanding command queues 208 or write-read outstanding command queues 210 are available.[0032] The read-only cache 200 and write-read cache 202 included in the local memory 119 can provide faster access to data stored in the slower memory components of the backing store. The read-only cache 200 and write-read cache 202 can be high-performance, lower-capacity media that store data that is accessed frequently (temporal locality) by applications of the host system 120 or data located in a memory region that has recently been accessed (spatial locality). An application binary or paged software system using the memory sub-system as the address space can have separate memory address regions for reading data from and writing data to by using the read-only cache 200 and the write-read cache 202. There can be numerous cache lines in each of the read-only cache 200 and the write-read cache 202.
Each cache line can include one or more sectors that have a fixed size, as discussed further below.[0033] For a read request, the caching component 113 searches the read-only CAM 204 and write-read CAM 206 to determine if a matching tag is found. Finding a matching tag indicates that the data is stored at a cache line of the read-only cache 200 or the write-read cache 202, depending on which CAM 204 or 206 the matching tag is found in. If there is a hit, meaning that the matching tag is found in one of the CAMs 204 or 206, then the request is executed relatively quickly as compared to accessing the backing store. If there is a miss, meaning that the matching tag is not found in one of the CAMs 204 or 206, then the read-only outstanding command queues 208 and the write-read outstanding command queues 210 are searched for the matching tag. If there is a hit, and the matching tag is found in one of the outstanding command queues 208 or 210, then the request is stored in the outstanding command queue that is assigned the matching tag. If there is a miss in the outstanding command queues 208 and 210, then one of the outstanding command queues 208 or 210 can be selected and assigned the tag included in the address of the request. The outstanding command queues 208 and 210 can prevent data hazards by enabling processing of requests in the order in which the requests are received for a cache line. Further, the outstanding command queues 208 and 210 can improve quality of service and performance by enabling requests to be performed out of order for different cache lines when data is obtained faster for a request received subsequent to a first request.[0034] A read-only outstanding command queue 208 or a write-read outstanding command queue 210 can be selected based on the type of memory access workload currently used by the application or based on the type of request. For example, if the memory access workload is sequential, then a read-only outstanding command queue can be selected.
If the memory access workload is random, then a write-read outstanding command queue can be selected. If the request is to write data, then a write-read outstanding command queue can be selected. In any instance, an outstanding command queue that has a valid bit set to an invalid state can be selected and the tag of the request can be assigned to the selected outstanding command queue 208 or 210. Each queue in the outstanding command queues 208 and 210 can correspond to a single cache line in either of the caches 200 or 202 at a given time. The valid bit for the selected queue in the outstanding command queues 208 or 210 can be set to a valid state when the tag is assigned. If every outstanding command queue is being used as indicated by having a valid bit set to a valid state, then the request can be stored in the pend queue 222 until an outstanding command queue in the read-only outstanding command
queues 208 or the write-read outstanding command queues 210 becomes invalid.[0035] For a write request, the caching component 113 can search the read-only CAM 204 and invalidate the cache line if the cache line includes data for the address being requested. The caching component 113 can identify an empty, invalid cache line in the write-read cache using the write-read CAM 206. The data can be written to the selected cache line in the write-read cache 202. A dirty bit in the write-read CAM 206 can be set for the cache line to indicate that data is written to that cache line. The writing of data to the cache can be performed faster than writing the data to the slower backing store. Subsequent write requests can write data to the same or different cache lines, and the dirty bit can be set in the write-read CAM 206 for the cache line at which the subsequent write request is performed. Further, the data associated with a subsequent write request can be invalidated if found in the read-only cache 200. During operation, when the memory sub-system determines to flush either of the caches 200 or 202, the dirty cache lines can be identified and queued to the evict queue 218 to be sent to the backing store.[0036] FIG. 3 is a flow diagram of an example method 300 to use a separate read-only cache and write-read cache based on a determined memory access workload of an application in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the caching component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified.
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0037] At operation 310, the processing device determines a memory access workload for an application. The processing device can determine the memory access workload for the application by receiving a set of memory access requests from the application, determining a pattern based on the set of memory access requests, and determining the memory access workload for the application based on the pattern. For example, if the same or sequential addresses in a similar address region are being requested to be read, the processing device can determine that the memory access workload is sequential and the read-only cache should be
used to store the data associated with the request. Further, if the pattern is indicative of sequential read requests or operations being received one after the other, then the processing device can determine that the memory access workload is sequential and the read-only cache should be used to store the data associated with the request. If the pattern indicates that random read requests and write requests are being received from the application, then the processing device can determine that the memory access workload is random for the application and the write-read cache should be used to store the data associated with the request. In some embodiments, the write-read cache is used to store data associated with any write requests.[0038] At operation 320, the processing device determines whether the memory access workload for the application is associated with sequential read operations. For example, a determination can be made as to whether the memory access workload for the application is sequential or random as described above. At operation 330, the processing device stores data associated with the application at one of a cache of a first type (read-only) or another cache of a second type (write-read) based on the determination of whether the memory access workload for the application is associated with sequential read operations. The processing device stores the data associated with the application at the cache of the first type when the memory access workload is associated with sequential read operations. In some embodiments, if the processing device determines that the memory access workload is associated with write and read operations, then the processing device can store the data associated with the application at the cache of the second type.[0039] The processing device can determine if the data is present in either the read-only cache or the write-read cache by searching the respective read-only CAM and write-read CAM.
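For illustration only, the pattern-based classification and cache selection described for operations 310 through 330 can be sketched as follows. The function names and the run-length heuristic are hypothetical and are not part of the described embodiments:

```python
def classify_workload(addresses, run_length=4):
    """Classify a memory access workload as 'sequential' or 'random'.

    Treats the workload as sequential when at least `run_length`
    observed addresses advance by one fixed, non-negative stride
    (e.g., consecutive cache lines, or repeated reads of one address).
    """
    if len(addresses) < run_length:
        return "random"  # not enough history to detect a pattern
    strides = {b - a for a, b in zip(addresses, addresses[1:])}
    # A single non-negative stride across the window indicates a
    # sequential (streaming) read pattern.
    if len(strides) == 1 and strides.pop() >= 0:
        return "sequential"
    return "random"

def select_cache(workload, is_write):
    """Write requests always target the write-read cache; sequential
    read workloads target the read-only cache."""
    if is_write or workload == "random":
        return "write-read"
    return "read-only"
```

A run of reads at addresses 0, 64, 128, 192 would be classified as sequential and routed to the read-only cache, while a write request is routed to the write-read cache regardless of the detected pattern.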
If the data is present in a cache line of either cache, the read request can be executed and the data can be returned to the application. If the data is not present, the read-only outstanding command queue and the write-read outstanding command queue can be searched for the tag associated with the address of the requested data. If the matching tag is not found in the read-only outstanding command queues, the read request can be stored in a queue of the read-only outstanding command queue and executed to obtain the data associated with the request from the backing store. If the matching tag is found in a read-only outstanding command queue, then one or more requests for the cache line are stored in the outstanding command queue and the current request is stored behind the other requests in the read-only outstanding command queue. The current request will be executed after the other requests for the particular cache line based on a schedule determined by the priority scheduler. Further details
with respect to the operation of the outstanding command queues and the priority scheduler are discussed below.[0040] In some embodiments, the processing device can receive the data associated with the application in one or more requests to write the data to a memory component. The one or more write requests can have a fixed data size. The fixed data size is specified by a memory semantic of the protocol used to communicate between the host system and the memory sub-system via a bus. The processing device can store the data associated with the application at one or more sectors of a cache line of the cache of the second type to accumulate data in the cache line based on a determination of whether the memory access workload for the application is associated with write and read operations. Each of the one or more sectors has the fixed data size. The processing device can determine when a cumulative data size of the one or more sectors storing the data associated with the application satisfies a threshold condition. Responsive to determining that the cumulative data size of the one or more sectors storing the data associated with the application satisfies the threshold condition, the processing device can transmit a request to store the cumulative data at the memory component. A write request can be sent to the backing store to write the accumulated data in the cache line when each sector in the cache line includes valid data. In this way, instead of issuing eight write requests to the backing store, just one write request for the cache line can be issued to the backing store. Using this technique can improve the endurance of the memory components in the backing store by performing fewer write operations.[0041] Further, read requests can also be received from an application and the read requests can each have the fixed data size. The cache lines in the read-only cache can be broken up into one or more sectors that each have the fixed data size.
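The accumulate-and-flush behavior described above can be sketched as follows. The class, constants, and flush callback are hypothetical renderings; the 64-byte and eight-sector values follow the example given in this description:

```python
SECTOR_SIZE = 64          # fixed request size from the host protocol
SECTORS_PER_LINE = 8      # 8 x 64 B = 512 B accumulated per line

class CacheLine:
    """One write-read cache line made of fixed-size sectors."""
    def __init__(self):
        self.sectors = [None] * SECTORS_PER_LINE
        self.valid = [False] * SECTORS_PER_LINE
        self.dirty = [False] * SECTORS_PER_LINE

    def write_sector(self, index, data):
        assert len(data) == SECTOR_SIZE
        self.sectors[index] = data
        self.valid[index] = True   # sector now holds valid data
        self.dirty[index] = True   # sector must be written back

    def cumulative_size(self):
        return sum(SECTOR_SIZE for v in self.valid if v)

def maybe_flush(line, backing_store_write):
    """Issue one backing-store write once every sector holds valid data."""
    if line.cumulative_size() == SECTOR_SIZE * SECTORS_PER_LINE:
        backing_store_write(b"".join(line.sectors))
        return True
    return False
```

Eight 64-byte host writes accumulate into one line, after which `maybe_flush` issues a single 512-byte write to the backing store rather than eight separate writes.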
When data requested to be read is already present in either of the read-only cache or the write-read cache, the read request can be performed to read the data from the appropriate cache line storing the requested data. When there is a cache miss and neither the read-only cache nor the write-read cache stores the requested data, the read requests can be processed using the outstanding command queues. The priority scheduler can determine a number of read requests to perform based on the size (e.g., two 64 byte sectors) of the cache line. For example, if just one read request for 64 bytes is received, and the cache line size is 128 bytes, the priority scheduler can determine that two read requests for 64 bytes (128 bytes total) are to be performed to return the full data to store in the cache line associated with the request.[0042] In some embodiments, the processing device can receive a command or instruction from an application to preload the read-only cache or the write-read cache with
the data associated with the application. Such data can be data that is to be used by or operated on by the application. The processing device can preload the read-only cache or the write-read cache with the data associated with the application before any requests to access the data are received from the application. The instruction can be associated with the memory semantic used in the protocol to communicate between the host system and the memory sub-system. To process the preload instruction, the processing device can generate a suitable number of read requests for the data using a state machine in the priority scheduler. The processing device can store the generated read requests in the read-only outstanding command queue or the write-read outstanding command queue to be executed on the backing store. When the data associated with the read requests is obtained from the backing store, the data can be stored in one or more cache lines of the read-only cache or the write-read cache.[0043] FIG. 4 is a flow diagram of an example method 400 to use sectors having fixed data sizes in a cache line to accumulate data in a cache in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the caching component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments.
Thus, not all processes are required in every embodiment. Other process flows are possible.[0044] At operation 410, the processing device receives a set of requests to access data at a memory component. Each of the requests can specify a fixed size of data. The fixed size of data is specified by a memory access protocol used by the host system to interface with the memory sub-system including one or more memory components in the backing store. The requests can be to write data to the backing store.[0045] At operation 420, the processing device stores data of each of the requests into a respective sector of a set of sectors of a cache line of a cache to accumulate data in the cache line. Each respective sector of the set of sectors of the cache line stores cache data at the fixed size. The particular cache line that is selected can be in a write-read cache and can be selected by identifying a cache line that is invalid. In other words, a cache line that does not have any
sectors including valid bits set or dirty bits set can be selected initially to store the data of a first request. The first write request can be stored in the write-read outstanding command queue and the tag of the write request can be assigned to one of the outstanding command queues. The outstanding command queue selected can correspond to a cache line at which the data will be written. The processing device can execute the write request in the outstanding command queue to write the data to a sector of the corresponding cache line. Further, an entry in the write-read CAM can be created with the tag of the write request. Subsequent requests to write data with a matching tag that is found in the write-read CAM can be stored in the hit queue and then executed to write the data in other sectors. Whenever a sector is written to, the valid bit of the sector can be set to a state indicating valid data is stored. Further, the dirty bit of the sector can be set indicating that data is being written to that sector.[0046] At operation 430, the processing device determines when a cumulative data size of the set of sectors storing data for each of the requests satisfies a threshold condition. The threshold condition can include the cumulative data size satisfying a data size parameter specified for accessing the memory component. For example, a data size parameter for data access requests for a memory component can be set to larger granularities than the data size of the requests received from the host. In one example, the data size parameter can be 512 bytes and the data size of the sectors can be 64 bytes. The threshold condition can be satisfied when 512 bytes of data are accumulated in eight sectors in a cache line.[0047] At operation 440, responsive to determining that the cumulative data size of the set of sectors satisfies the threshold condition, the processing device transmits a request to store the accumulated data at the memory component.
The data can remain in the cache line in case the application seeks to quickly access the data. For example, the application can read the data out of the cache line of the write-read cache.[0048] In some embodiments, the processing device can receive a command or instruction to preload data in the cache (e.g., the read-only cache and/or the write-read cache) with other data associated with the application. The processing device can preload the cache with the other data associated with the application prior to receiving the plurality of requests to access the data at the memory component. The application can send the instructions to the memory sub-system if the application determines that the data is going to be used frequently by the application.[0049] FIG. 5 illustrates an example read-only cache 200 and a write-read cache 202 in accordance with some embodiments of the present disclosure. The separate read-only cache 200 and the write-read cache 202 can provide separate address spaces for applications or
paged systems to read data from and write data to, which can improve performance of the memory sub-system. The read-only cache 200 and the write-read cache 202 include numerous cache lines 500 and 504, respectively. Although just four cache lines are depicted in each of the caches 200 and 202, it should be understood that there can be many more cache lines included (e.g., hundreds, thousands, etc.). A total size of each of the caches 200 and 202 can be any suitable amount, such as 32 kilobytes.[0050] As depicted, a cache line 500 in the read-only cache 200 includes two sectors 502. Each of the sectors has a fixed size that can be equal to the data size of the requests that are sent from the host system. The data size of the requests can be specified by memory semantics of the protocol used to interface via the bus between the host system and the memory sub-system. In one example, the sectors can each be 64 bytes and a total data size of the cache line 500 can be 128 bytes. Further, a cache line 504 in the write-read cache 202 includes more sectors 506 than the read-only cache 200 because it is desirable to perform write operations on the backing store less often than read operations to improve the endurance of the memory components in the backing store. In the depicted example, the write-read cache 202 includes eight sectors that also have the fixed data size (e.g., 64 bytes). The fixed data size can also be equal to the data size of the requests received from the host system. In one example, the fixed data size of each sector of a cache line 504 in the write-read cache 202 can be 64 bytes. The write-read cache 202 can accumulate data for eight write requests until a cumulative data size for the eight sectors 506 satisfies a threshold condition. The threshold condition can be that the cumulative data size satisfies a data size parameter specified for accessing the memory component.
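The geometries described for FIG. 5 reduce to simple arithmetic. The helper below is an illustrative sketch (the function name is hypothetical), using the 64-byte sector, 128-byte read-only line, and 512-byte write-read line values from the example above:

```python
def sectors_per_line(line_size_bytes, request_size_bytes):
    """Number of fixed-size sectors in a cache line, where each sector
    matches the host protocol's fixed request size."""
    assert line_size_bytes % request_size_bytes == 0
    return line_size_bytes // request_size_bytes

# Read-only cache line 500: 128 bytes -> two 64-byte sectors 502.
# Write-read cache line 504: 512 bytes -> eight 64-byte sectors 506,
# so eight host writes can collapse into one backing-store write.
```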
The data size parameter can be a data size of a management unit of the memory component, for example 512 bytes. Responsive to determining that the cumulative data size of the set of sectors 506 of the cache line 504 storing the data of each of the requests satisfies the threshold condition, the caching component can transmit a write request to store the cumulative data at the backing store.[0051] FIG. 6 is a flow diagram of an example method 600 to store a read request for data that is not present in a cache in an outstanding command queue in accordance with some embodiments of the present disclosure. The method 600 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 600 is performed by the caching component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the
processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0052] At operation 610, the processing device receives a request to read data stored at a memory sub-system. The request to read data can be sent from an application executing on the host system. The request can include an address from which to read the data in the memory sub-system. An identifier, referred to as a “tag”, can be extracted from the address. The tag can be a subset of bits of the address that can be used to identify the location of the data at the address in the memory sub-system.[0053] At operation 620, the processing device determines whether the data is stored at a cache of the memory sub-system. In some embodiments, the processing device searches for the tag associated with the requested data in a read-only CAM and a write-read CAM. The read-only CAM and the write-read CAM can include tags corresponding to the data stored at every cache line in the respective cache. The processing device can use a comparator in each of the read-only CAM and the write-read CAM to determine whether a matching tag is found for the requested address from which to read data. Determining whether the data is stored in the cache can include determining whether a valid bit is set to a valid state for the data in the CAM in which the matching tag is found.
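A simplified sketch of tag extraction and the CAM search in operations 610 and 620 follows. The bit-field split and class names are hypothetical, and a hardware CAM would compare all entries in parallel rather than via a dictionary lookup:

```python
OFFSET_BITS = 7   # e.g., 128-byte cache line -> low 7 bits are the offset

def tag_of(address):
    """The tag is the subset of address bits above the line offset."""
    return address >> OFFSET_BITS

class CAM:
    """Toy content-addressable memory: tag -> cache line with valid bit."""
    def __init__(self):
        self.entries = {}  # tag -> {"valid": bool, "line": int}

    def lookup(self, tag):
        """Hit only if a matching tag exists and its valid bit is set."""
        entry = self.entries.get(tag)
        if entry is not None and entry["valid"]:
            return entry["line"]
        return None
```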
Responsive to determining that the data is stored at the cache of the memory sub-system, the processing device can store the request at another queue (e.g., a hit queue) used to manage execution of requests for data that is present in the cache.[0054] At operation 630, responsive to determining that the data is not stored at the cache of the memory sub-system, the processing device determines a queue of a set of queues to store the request with other read requests for the data stored at the memory sub-system. In some embodiments, the cache can refer to the read-only cache or the write-read cache. The set of queues can include the read-only outstanding command queue or the write-read outstanding command queue depending on which cache is selected to service the request. These queues can be used to store cache misses (e.g., read misses and write misses). As discussed above, the memory access workload can dictate which cache to use to service the request. If the memory access workload includes sequential read operations, then the read-only cache and the read-only outstanding command queues can be used to service the request. If the memory access workload includes random read and write operations, then the write-read cache and the write-read outstanding command queues can be used to service the request.[0055] The processing device can determine the queue by determining if any queue in the set of queues is associated with the identifier of the request. The processing device can search the read-only outstanding command queue and the write-read outstanding command queue for a queue that is assigned the identifier of the request. If there are no queues assigned the identifier, the processing device can select a queue that has a valid bit set to an invalid state and/or a block bit set to an unblocked state from the appropriate set of queues. The processing device can store the request in the queue in the invalid state, assign the tag to the queue, set the valid bit to a valid state, and set the block bit to a blocked state. If every queue in the appropriate set of queues is being used (valid bits set to a valid state and block bits set to a blocked state), then the request is stored in a pend queue until a queue becomes invalid in the appropriate set of queues.[0056] If one of the queues in the set of queues is assigned the identifier and is valid, then there are other requests that have been received for the same address that are already stored in the queue. At operation 640, the processing device stores the request at the determined queue with the other read requests for the data stored at the memory sub-system. Each queue of the set of queues corresponds to a respective cache line of the cache. The queue corresponds to the respective cache line by assigning the tag of the request to the queue and also to an entry in the appropriate CAM that corresponds to the cache line storing the data of the request in the appropriate cache.[0057] The request can be assigned a priority indicator and relayed to a priority queue by a priority scheduler when the request is stored in the queue, as discussed further below.
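The queue-selection logic of operations 630 and 640 can be sketched as follows: reuse the queue already assigned the request's tag, otherwise claim an invalid queue, otherwise fall back to the pend queue. Class and field names here are illustrative only:

```python
from collections import deque

class OutstandingQueue:
    """One outstanding command queue entry (hypothetical rendering)."""
    def __init__(self):
        self.tag = None
        self.valid = False
        self.blocked = False
        self.requests = deque()

def enqueue_miss(queues, pend_queue, tag, request):
    # 1. A miss for the same cache line is already in flight: queue
    #    behind it so requests execute in arrival order (no hazards).
    for q in queues:
        if q.valid and q.tag == tag:
            q.requests.append(request)
            return q
    # 2. Claim a free (invalid) queue; block it until the fill lands.
    for q in queues:
        if not q.valid:
            q.tag, q.valid, q.blocked = tag, True, True
            q.requests.append(request)
            return q
    # 3. Every queue is in use: park the request in the pend queue
    #    until an outstanding command queue becomes invalid.
    pend_queue.append(request)
    return None
```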
The priority scheduler can determine a number of requests to generate for the request based on the size of the cache line. For example, if the request is for 64 bytes but the cache line size is 128 bytes, then the priority scheduler can determine to generate two requests of 64 bytes to read data out of the backing store to fill the entire cache line with valid data. The priority scheduler can increment a read counter and a fill counter when the request is stored in the priority queue. The requests can be executed on the backing store to obtain the desired data and the read counter can be decremented.[0058] The data obtained from the backing store can be stored in another queue (fill queue) with a fill operation. The processing device can assign the fill operation a priority indicator and execute the fill operation to store the data to the appropriate cache line in the cache. The processing device can set the block bit for the queue storing the requests to
an unblocked state and can decrement the fill counter. A CAM entry can be generated for the cache line storing the data and the tag can be assigned to the CAM entry. The processing device can execute the requests in the queue to either read the data from the cache line or write the data to the cache line. Further, after the requests in the queue are executed, the processing device can invalidate the queue by setting the valid bit to an invalid state and unassigning the tag. The queue can then be reused for the same or another cache line based on subsequent requests that are received.[0059] In some embodiments, the processing device can receive a write request to write data to the backing store. The processing device can obtain a tag from the request and search the write-read CAM and the write-read outstanding command queues for the tag. If a matching tag is found in the write-read CAM, then the data in the request is written into the cache line corresponding to the tag. The processing device can select one or more invalid sectors in the cache line to which to write the data. When the data is written into the one or more sectors, the valid bits and dirty bits of the one or more sectors can be set by the processing device in the write-read CAM entry corresponding to the cache line including the one or more sectors.[0060] If a matching tag is not found in the write-read CAM but is found in a write-read outstanding command queue, then other requests including the tag are stored in the identified queue that is assigned the matching tag. The processing device can store the write request in the queue assigned the matching tag and the request can be processed similarly, in the order in which it is received, to write the data to one or more sectors of the corresponding cache line in the write-read cache. For example, the priority scheduler can generate a priority indicator for the write request and assign the priority indicator to the write request.
The priority scheduler can store the write request in the priority queue, and the write request can be executed when it reaches the front of the queue to write the data to the cache line. Storing the write request in the outstanding command queue can prevent data hazards from occurring by not allowing the write request to execute before other requests that were received before the write request.[0061] FIG. 7 is a flow diagram of an example method 700 to execute the requests stored in an outstanding command queue in accordance with some embodiments of the present disclosure. The method 700 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 700 is performed by the caching component 113 of FIG. 1. Although shown in a particular
sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0062] At operation 710, the processing device determines that data requested by a set of read operations has been retrieved from a memory component of a memory sub-system. The data retrieved from the memory component can be associated with a fill operation that is generated, and the data and the fill operation can be stored in a fill queue.[0063] At operation 720, the processing device executes the one or more fill operations to store the data at a cache line of a cache of the memory sub-system. A fill operation can be generated when the data is retrieved from the backing store. The fill operation can be stored in the fill queue with associated data when the fill operation is generated. The fill operations can be executed in the order that they are stored in the fill queue. Executing the fill operation can include removing the data from the fill queue and storing the data at the appropriate cache line in the cache (e.g., read-only cache or the write-read cache). The processing device can decrement a fill counter for each of the one or more fill operations executed. In response to executing the one or more fill operations to store the data at the cache line, the processing device can set a block bit associated with the determined queue to an unblocked state to enable execution of the requests stored at the determined queue.[0064] At operation 730, the processing device determines a queue of a set of queues that corresponds to the data that has been requested by the set of read operations.
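The fill-and-unblock flow of method 700 can be sketched as follows: drain the fill queue in order, store each fill's data at its cache line, then unblock and execute the reads waiting in the corresponding outstanding command queue. All names here are hypothetical:

```python
def apply_fills(fill_queue, cache, queues_by_tag):
    """Execute pending fills, then release the blocked readers."""
    results = []
    while fill_queue:
        tag, data = fill_queue.pop(0)        # fills execute in order
        cache[tag] = data                    # store data at the cache line
        q = queues_by_tag.get(tag)
        if q is None:
            continue
        q["blocked"] = False                 # block bit -> unblocked state
        # Execute queued reads in the order they were received,
        # which prevents data hazards on this cache line.
        while q["requests"]:
            q["requests"].pop(0)
            results.append(data)
        q["valid"] = False                   # invalidate for reuse
        q["tag"] = None                      # un-assign the tag
    return results
```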
Each queue of the set of queues corresponds to a respective cache line of a set of cache lines of a cache of the memory sub-system. The cache can be a read-only cache and/or a write-read cache, and the set of queues can be the read-only outstanding command queue and/or the write-read outstanding command queue. Determining that the queue corresponds to the data that was requested can include determining if the queue is assigned an identifier (e.g., a tag) associated with the data.[0065] At operation 740, in response to executing the one or more fill operations to store the data at the cache line, the processing device executes the set of read operations stored at the determined queue in an order in which the set of read operations have been received by the memory sub-system. Using an outstanding command queue for storing requests enables in-order access to a cache line corresponding to the outstanding command queue, which can prevent data hazards in the memory sub-system. The requests can be assigned priority
indicators by the priority scheduler, which can be based on the order in which the requests are received by the memory sub-system, as described further below. The read operations can read the data stored at the cache line and return the data to the application that sent the request.[0066] FIG. 8 illustrates examples of read-only outstanding command queues 208, write-read outstanding command queues 210, a read-only content-addressable memory 204, and a write-read content-addressable memory 206 in accordance with some embodiments of the present disclosure. As depicted, the read-only outstanding command queues 208 can include multiple entries and each entry can include fields for a tag, a queue counter, a queue for the requests, a read counter and valid bit, and a fill counter and valid bit.[0067] The tag field stores the tag obtained from the request received from the host system. The queue counter can track the number of requests that are stored in the entries in the queue. The queue counter (qc) can be incremented when additional requests are stored in the queue and decremented when the requests are executed and removed from the queue. The queue for the requests can have any suitable number of entries. In one example, the number of entries in the queue is equal to the number of sectors in a cache line of the read-only cache. There can be a block bit that is set for a request when the request is stored in the queue.[0068] The read counter (R) can track the number of read operations that are to be performed to obtain the requested data from the backing store. The read counter can be incremented when the number of read operations to retrieve the data from the backing store is determined, and decremented when the read operations are performed on the backing store to obtain the data. The valid bit for the read counter can indicate whether the data associated with the read is valid or invalid.
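One read-only outstanding command queue entry as described for FIG. 8 could be laid out as follows. The field names follow the text; the dataclass itself is a hypothetical rendering, not the described hardware structure:

```python
from dataclasses import dataclass, field

@dataclass
class ReadOnlyOCQEntry:
    tag: int = 0
    qc: int = 0                  # queue counter: requests currently held
    requests: list = field(default_factory=list)
    read_counter: int = 0        # R: backing-store reads outstanding
    read_valid: bool = False     # valid bit for the read counter
    fill_counter: int = 0        # F: fills pending for this cache line
    fill_valid: bool = False     # valid bit for the fill counter

    def push(self, request):
        self.requests.append(request)
        self.qc += 1             # incremented when a request is stored

    def pop(self):
        self.qc -= 1             # decremented when a request executes
        return self.requests.pop(0)
```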
The fill counter (F) can track the number of fill operations to execute to store the requested data in the cache line corresponding to the queue storing the request. The fill counter can be incremented when the fill operations are generated and decremented when the fill operations are executed. The valid bit for the fill counter can indicate whether the data associated with the fill operation is valid or invalid.[0069] The write-read outstanding command queues 210 can include multiple entries and each entry can include fields for a tag, a queue counter, a queue for the requests, an evict counter (E) and valid bit, and a write-back counter (WB) and valid bit. The tag field stores the tag obtained from the request received from the host system. The queue counter can track the number of requests that are stored in the entries in the queue. The queue counter can be incremented when additional requests are stored in the queue and decremented when the requests are executed and removed from the queue. The queue for the requests can have any suitable number of entries. In one example, the number of entries in the queue is equal to the
number of sectors in a cache line of the write-read cache. There can be a block bit that is set for a request when the request is stored in the queue.[0070] The evict counter can track the number of eviction operations that are to be performed to remove data from the write-read cache. The evict counter can be incremented when data of a cache line is selected to be evicted and decremented when the data in the cache line is evicted from the cache. The valid bit for the evict counter can indicate whether the data associated with the eviction is valid or invalid. The write-back counter can track the number of write operations to execute to write the data in a cache line corresponding to the queue to the backing store. The write-back counter can be incremented when write requests are stored in the queue and decremented when the write requests are executed. The valid bit for the write-back counter can indicate whether the data associated with the write operation is valid or invalid.[0071] The read-only CAM 204 can include multiple entries and each entry can include fields for a tag, valid bits for each sector, dirty bits for each sector, and an address. The tag field stores the tag obtained from the request. The valid bit for each sector can be set when the sector of the cache line corresponding to the entry stores valid data. The dirty bit for each sector can be set when data is being stored at the sector. The address field can store the address included in the request.[0072] The write-read CAM 206 can include multiple entries and each entry can include fields for a tag, valid bits for each sector, dirty bits for each sector, and an address. The tag field stores the tag obtained from the request. The valid bit for each sector can be set when the sector of the cache line corresponding to the entry stores valid data. The dirty bit for each sector can be set when data is being stored at the sector. 
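The entry layouts of paragraphs [0066]–[0072] can be sketched as plain data structures. The class and field names below are illustrative assumptions (the disclosure does not prescribe a software representation), and the sector count of 8 is chosen arbitrarily:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReadOnlyQueueEntry:
    """Entry in the read-only outstanding command queues 208 (assumed layout)."""
    tag: Optional[int] = None
    queue_counter: int = 0                 # qc: requests currently queued
    requests: List[dict] = field(default_factory=list)
    read_counter: int = 0                  # R: backing-store reads outstanding
    read_valid: bool = False
    fill_counter: int = 0                  # F: fill operations outstanding
    fill_valid: bool = False

@dataclass
class WriteReadQueueEntry:
    """Entry in the write-read outstanding command queues 210 (assumed layout)."""
    tag: Optional[int] = None
    queue_counter: int = 0
    requests: List[dict] = field(default_factory=list)
    evict_counter: int = 0                 # E: evictions outstanding
    evict_valid: bool = False
    writeback_counter: int = 0             # WB: write-backs outstanding
    writeback_valid: bool = False

@dataclass
class CamEntry:
    """Entry in the read-only CAM 204 or write-read CAM 206 (assumed layout)."""
    tag: Optional[int] = None
    sector_valid: List[bool] = field(default_factory=lambda: [False] * 8)
    sector_dirty: List[bool] = field(default_factory=lambda: [False] * 8)
    address: Optional[int] = None
```

Incrementing a counter when a request is queued and decrementing it when the request completes then becomes ordinary field arithmetic on these objects.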
The address field can store the address included in the request.[0073] When a request is received to access (e.g., read or write) data, a tag can be obtained from an address of data included in the request. The processing device can search the read-only outstanding command queues 208, the write-read outstanding command queues 210, the read-only CAM 204, and the write-read CAM 206 for a matching tag. If either the read-only CAM 204 or the write-read CAM 206 includes a matching tag, then there is a cache hit and the request can be stored in a hit queue to be executed. For example, for a read request cache hit, the data stored at the cache line corresponding to the entry in the CAM 204 or 206 having the matching tag can be returned to the requesting application. For a write request cache hit, the data in the request can be written to the cache line corresponding to the entry in the write-read CAM 206 having the matching tag. The dirty bits in the write-read
CAM 206 for the sectors to which the data are written can be set when the writing commences. The valid bits in the write-read CAM 206 for the sectors can be set when the data is written to the sectors.[0074] If the matching tag is not found in the read-only CAM 204 or the write-read CAM 206, but is found in the read-only outstanding command queues 208, then the request can be stored in an empty entry in the read-only outstanding command queue corresponding to the matching tag. The queue counter, the read counter, and the fill counter can be incremented when the request is stored in the read-only outstanding command queue.[0075] If the matching tag is not found in the read-only CAM 204 or the write-read CAM 206, but is found in the write-read outstanding command queues 210, then the request can be stored in an empty entry in the write-read outstanding command queue corresponding to the matching tag. The queue counter can be incremented and the write-back counter can be incremented when the request is stored in the write-read outstanding command queue.[0076] If the matching tag is not found in any of the read-only CAM 204, the write-read CAM 206, the read-only outstanding command queues 208, or the write-read outstanding command queues 210, then a queue is selected from the read-only outstanding command queues 208 or the write-read outstanding command queues 210 based on the memory access workload used by the application. If the memory access workload is using sequential read operations, then the read-only outstanding command queues 208 are selected to be used to store the request. An entry in the read-only outstanding command queues 208 that includes valid bits set to the invalid state, is not assigned a tag, and is not blocked can be selected to store the read request.
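The cascaded tag search of paragraphs [0073]–[0076] can be modeled as a short routing function. This is a simplified sketch in which each CAM and outstanding-queue family is a dict keyed by tag; the function name and the `workload` parameter are assumptions made for illustration:

```python
def route_request(request, ro_cam, wr_cam, ro_ocq, wr_ocq, hit_queue, workload):
    """Route a request using the cascaded tag lookup (simplified model).

    ro_cam / wr_cam: dicts of tag -> cache line (a match is a cache hit).
    ro_ocq / wr_ocq: dicts of tag -> list of queued requests (outstanding queues).
    workload: 'sequential' selects the read-only queues; anything else
    selects the write-read queues.
    """
    tag = request["tag"]
    if tag in ro_cam or tag in wr_cam:
        hit_queue.append(request)          # cache hit: execute from the hit queue
        return "hit"
    if tag in ro_ocq:                      # miss, but a matching queue already exists
        ro_ocq[tag].append(request)
        return "read-only-queued"
    if tag in wr_ocq:
        wr_ocq[tag].append(request)
        return "write-read-queued"
    target = ro_ocq if workload == "sequential" else wr_ocq
    target[tag] = [request]                # full miss: claim an empty queue entry
    return "new-queue"
```

A request whose tag misses everywhere thus opens a new outstanding command queue, while a second request with the same tag lands behind the first in that queue.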
The read request can be stored at a read-only outstanding command queue, the tag of the request can be stored in the tag field, a block bit can be set for the request in the queue, the queue counter can be incremented, the read counter can be incremented, the fill counter can be incremented, and/or the valid bit can be set for the read counter.[0077] If the memory access workload is using random write and read operations, then the write-read outstanding command queues 210 are selected to be used to store the request. An entry in the write-read outstanding command queues 210 that includes valid bits set to the invalid state, is not assigned a tag, and is not blocked can be selected to store the write request. The write request can be stored at a write-read outstanding command queue, the tag of the request can be stored in the tag field, a block bit can be set for the request in the queue, the queue counter can be incremented, the write-back counter can be incremented, and the valid bit can be set for the write-back counter.[0078] FIG. 9 is a flow diagram of an example method 900 to
execute requests in a memory sub-system in accordance with some embodiments of the present disclosure. The method 900 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 900 is performed by the caching component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0079] At operation 910, the processing device receives a request to read data stored at a memory sub-system. The request to read the data can be received from an application executing on the host system. The request can include an address of the memory sub-system from which to read the data.[0080] At operation 920, the processing device determines whether the data is stored at a cache of the memory sub-system. The memory sub-system can include a separate read-only cache and a write-read cache. The processing device can determine whether the data is stored at the cache by obtaining a tag from the address included in the request. The processing device can search a read-only CAM and a write-read CAM to determine whether a matching tag is included in either CAM. 
If there is not a matching tag found, then the processing device determines that the data is not stored at either the read-only cache or the write-read cache.[0081] The processing device can determine that the tag is also not included in the read-only outstanding command queue or the write-read outstanding command queue by searching both for a matching tag. As described above, the processing device can select a queue from the read-only outstanding command queues or the write-read outstanding command queues. The processing device can execute a state machine included in the priority scheduler or implemented separately to determine the number of requests needed to obtain the data based on the size of the cache line in the appropriate cache. The processing device can store the one or more requests in the selected outstanding command queue that is used to store requests to read from or write to an address associated with the data. The processing device can determine that a fill operation will be used for the read request to store the data obtained from the backing store to the cache. A priority scheduler can generate priority indicators (e.g., tokens having numerical values) for the read request and the fill operation. The processing
device can employ a policy that specifies that fill operations have priority indicators with higher priority to enable the fill operations to perform before the read requests. The priority indicators can be generated and assigned to read requests and fill operations in the order in which the read requests are received.[0082] At operation 930, responsive to determining that the data is not stored at the cache of the memory sub-system, the processing device obtains the data from a memory component of the memory sub-system. The processing device can obtain the data from the memory component by storing the read request in a priority queue and executing the read request to obtain the data from the memory component. The fill operation can be generated when the data is obtained from the memory component.[0083] At operation 940, the processing device assigns a first priority indicator (e.g., a token with a value of “1”) to the fill operation associated with the data that is obtained from the memory component. The fill operation and the data obtained from the memory component can be stored in a fill queue.[0084] At operation 950, the processing device assigns a second priority indicator (e.g., a token with a value of “2”) to the request to read the data. The first priority indicator assigned to the fill operation can have a higher priority value than the second priority indicator assigned to the request to read the data.[0085] At operation 960, the processing device schedules an order of executing the fill operation and the request to read the data based on the first priority indicator and the second priority indicator. The priority scheduler can use arbitration logic to determine the schedule. If no other requests have been received, the processing device can use the schedule to execute the fill operation to remove the data from the fill queue and store the data in a cache line corresponding to the queue where the read request is stored.
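Operations 910–960 can be condensed into a sequential sketch. The function shape, the 64-byte line size, and the dict-based cache and backing store are all assumptions; in the disclosure the fill and the read are separately scheduled operations rather than consecutive statements:

```python
from itertools import count

def cached_read(address, cache, backing_store, tokens=count(1)):
    """Simplified, sequential model of method 900 (assumed data shapes)."""
    tag = address >> 6                 # operation 920: derive the tag (64-byte lines)
    if tag in cache:
        return cache[tag]              # cache hit: return the data directly
    data = backing_store[address]      # operation 930: obtain from memory component
    fill_prio = next(tokens)           # operation 940: fill gets the earlier token
    read_prio = next(tokens)           # operation 950: read gets the next token
    assert fill_prio < read_prio       # operation 960: fill is scheduled first
    cache[tag] = data                  # fill operation: store into the cache line
    return data                        # read request: return the data

backing = {0x80: b"payload"}
cache = {}
assert cached_read(0x80, cache, backing) == b"payload"   # miss path
assert cached_read(0x80, cache, backing) == b"payload"   # hit path
```

The shared token counter stands in for the priority scheduler, so the fill for each miss always receives a smaller (earlier) token than the read it serves.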
Further, the processing device can execute the read request to read the data in the cache line and return the data to the requesting application.[0086] In some embodiments, while the request to read the data is stored in an outstanding command queue (e.g., read-only or write-read), the processing device can receive a second request to read the data stored at the memory sub-system. The processing device can determine whether an identifier (tag) associated with the second request to read the data is assigned to the outstanding command queue. Responsive to determining that the identifier associated with the second request to read the data is assigned to the outstanding command queue, the processing device can assign a third priority indicator to the second request, and store the second request in the outstanding command queue in an entry behind the initial request to
read the data.[0087] In some embodiments, the processing device can receive a third request to write other data to the address associated with the data at the memory sub-system. The processing device can determine whether an identifier associated with the third request to write the other data is assigned to the queue. Responsive to determining that the identifier associated with the third request to write the other data is assigned to the queue, the processing device can assign a fourth priority indicator to the third request and store the write request in an entry behind the second request. The processing device can determine a schedule of executing the fill operation, the request to read the data, the second request to read the data, and the third request to write the other data based on the first priority indicator, the second priority indicator, the third priority indicator, and the fourth priority indicator. The schedule can reflect an order in which the request, the second request, and the third request were received in the outstanding command queue. If no other requests are received, the schedule can be used to execute the fill operation, the request to read the data, the second request to read the data, and the third request to write the other data.[0088] In some embodiments, the processing device can receive a second request to read other data stored at the memory sub-system. The processing device can determine whether the other data is stored at the cache of the memory sub-system by searching the read-only CAM and the write-read CAM for a tag matching the tag included in the second request. If the data is not stored at the cache and the processing device determines that the matching tag is also not found in the read-only outstanding command queue or the write-read outstanding command queue, a second outstanding command queue can be selected to store the second request to read the other data.
The second outstanding command queue that stores the second request to read the other data can be different than the outstanding command queue used to store the request to read the data. Responsive to determining that the other data is not stored at the cache of the memory sub-system, the processing device can obtain the other data from the memory component of the memory sub-system.[0089] The processing device can determine that a second fill operation will be used to store the requested data obtained from the memory component at the appropriate cache. The processing device can generate priority indicators for the second fill operation and the second request to read the other data. The second fill operation can be generated and the third priority indicator can be assigned to the second fill operation. A fourth priority indicator can be generated and assigned to the second request to read the other data. The processing device can determine a schedule of executing the fill operation, the request to read the data, the second fill operation, and the second request to read the other data based at least on the first priority indicator, the second priority indicator, the third priority indicator, and the fourth priority indicator.[0090] The processing device can execute, based on the determined schedule, the request to read the data and the second request to read the other data in a different order than that in which the request to read the data and the second request to read the other data were received. For example, in some instances, even though the request to obtain the data was sent to the backing store first, the other data requested by the second request can return faster from the backing store. In such an instance, the processing device can determine to process the second fill operation for the other data first and the second request to read the other data before the fill operation and the request to read the data. The fill operation can store the data in a cache line corresponding to the outstanding command queue and the second fill operation can store the other data in a second cache line corresponding to the second outstanding command queue. The request to read the data can read the data from the cache line and return the data to an application that sent the request. The second request to read the other data can read the other data from the second cache line and return the other data to an application that sent the second request.[0091] In some embodiments, after the fill operation, read requests, and/or write requests are executed, the priority indicators can be reused and reassigned to subsequent fill operations, read requests, and/or write requests. The processing device can set a limit on the number of priority indicators generated. The limit can be any suitable number and can be dynamically configured to enable efficient request throughput.[0092] FIG.
10 is a flow diagram of another example method 1000 to determine a schedule to execute requests in a memory sub-system in accordance with some embodiments of the present disclosure. The method 1000 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1000 is performed by the caching component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows
are possible.[0093] At operation 1010, the processing device receives a set of requests to access data stored at a memory sub-system. The set of requests can be received from one or more applications executing on the host system. The requests can include an address at which to access the data. If the data included in the set of requests is not present in either the read-only cache or the write-read cache, the processing device determines which one or more outstanding command queues (e.g., read-only or write-read) at which to store the set of requests. If the tags included in the addresses of the set of requests are the same, then the same outstanding command queue can be used to store the set of requests. If the tags included in the addresses of the set of requests are different, then more than one outstanding command queue can be used to store the set of requests. For example, a separate outstanding command queue can be assigned a respective tag.[0094] At operation 1020, the processing device assigns a set of priority indicators to the set of requests. The priority indicators can be generated by the processing device and can include numerical values, for example. The priority indicators can reflect the order in which the set of requests were received by the memory sub-system. The priority indicators can be assigned to the set of requests that are stored in the one or more outstanding command queues. In some instances, when the requests are read requests, there can be fill operations generated that are also assigned priority indicators, as described above.[0095] At operation 1030, the processing device determines an order to execute the set of requests based on the set of priority indicators assigned to the set of requests to access the data. For example, the order can be sequential if the priority indicators are numerical values, such as 1, 2, 3, 4, 5, 6, etc.
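Because the indicators are monotonically increasing numerical tokens, the ordering at operation 1030 reduces to a numeric sort. A minimal sketch (the request contents below are invented for illustration):

```python
# Pending operations tagged with their priority indicators (hypothetical values).
pending = [
    {"op": "read", "tag": 234, "priority": 4},
    {"op": "fill", "tag": 123, "priority": 1},
    {"op": "read", "tag": 123, "priority": 2},
    {"op": "fill", "tag": 234, "priority": 3},
]

# Operation 1030: lower token values execute first, reflecting arrival order.
schedule = sorted(pending, key=lambda entry: entry["priority"])
assert [entry["priority"] for entry in schedule] == [1, 2, 3, 4]
assert schedule[0]["op"] == "fill"     # each fill precedes its matching read
```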
If there are read requests in the set of requests, the processing device can use a state machine to determine a number of one or more read requests for each respective request in the set of requests to read the data based on a size of the cache line in the cache. The processing device can store the one or more read requests in a priority queue based on the order. The processing device can execute the one or more requests stored in the priority queue to read the data from the one or more memory components. The processing device can store a fill operation and the data in a fill queue responsive to obtaining the data from the one or more memory components. The processing device can perform the fill operation to remove the data from the fill queue and store the data in a cache line of a cache of the memory sub-system.[0096] At operation 1040, responsive to obtaining the data from one or more memory components of the memory sub-system, the processing device executes the set of requests
based on the determined order. In some embodiments, when there are fill operations stored in the fill queue, the processing device can execute the fill operations prior to executing the read requests corresponding to the fill operations because the priority indicators assigned to fill operations can have higher priority values than the priority indicators assigned to the corresponding read requests.[0097] In some embodiments, a set of second requests to access other data stored at the memory sub-system can be received. The processing device can assign a set of second priority indicators to the set of second requests. The set of second priority indicators can have higher priority values than the set of priority indicators when the other data is obtained from the one or more memory components before the data is obtained from the one or more memory components. The processing device can determine the order to execute the set of requests and the set of second requests based on the set of priority indicators and the set of second priority indicators.[0098] FIG. 11 illustrates an example of using a priority scheduler to determine a schedule to execute requests based on priority indicators in accordance with some embodiments of the present disclosure. In the depicted example, the processing device can determine that the memory access workload for an application includes sequential read requests and that the read-only cache is to be used for read requests received from the application. The processing device can receive a first read request from the application and search the read-only CAM and the write-read CAM to determine whether the tag of the request is found. If the matching tag is found in either CAM, then the first read request can be sent to the hit queue and the first read request can be processed in the order it is received in the hit queue to return the data to the application.
If the matching tag is not found in either CAM, the processing device can determine to use the read-only outstanding command queue 208 because the application is using a sequential-read type of memory access workload.[0099] In the depicted example, the processing device obtained the tag “123” for the first read request and searched the read-only outstanding command queues 208 for a matching tag. The processing device did not find a matching tag and selected entry 1100 to store the first read request. The processing device can set a block bit associated with the read-only outstanding command queue in the request field of entry 1100 to a value indicating the read-only outstanding command queue is blocked (e.g., no requests in the read-only outstanding command queue can be executed while blocked). The processing device can increment the queue counter to “1”. The processing device assigned tag “123” to the tag field for the entry
1100. The processing device can determine that a fill operation will be generated for the data associated with the first read request that is returned from the backing store. The priority scheduler 212 can generate priority indicators for the first read request (“2”) and the fill operation (“1”). The value of the priority indicator for the fill operation corresponding to the first read request can have a higher priority to enable storing data obtained from the backing store in the cache before performing the first read request. The priority scheduler 212 can assign the priority indicator “2” to the first read request in the outstanding command queue in the requests field of the entry 1100. The processing device can also increment the read counter to “1” and the fill counter to “1”, as depicted.[00100] The processing device can receive a second read request including a tag “234” and can search the read-only outstanding command queues 208 for a matching tag. The processing device did not find a matching tag in the read-only outstanding command queues 208 and selected entry 1102 to store the second read request. The processing device can set a block bit associated with the read-only outstanding command queue in the request field of entry 1102 to a value indicating the read-only outstanding command queue is blocked (e.g., no requests in the read-only outstanding command queue can be executed while blocked). The processing device assigned the tag “234” to the tag field in the entry 1102. The priority scheduler 212 can determine that a fill operation will be generated for the data associated with the second read request that is returned from the backing store. The priority scheduler 212 can generate priority indicators for the second read request (“4”) and the fill operation (“3”).
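The token assignments in this walkthrough (fill “1” and read “2” for tag “123”, fill “3” and read “4” for tag “234”) follow from issuing consecutive tokens per read miss, with the fill drawn first. A sketch of that issuing policy (the class and method names are assumptions):

```python
from itertools import count

class TokenIssuer:
    """Issues numeric priority indicators in arrival order (illustrative model)."""

    def __init__(self):
        self._next = count(1)

    def for_read_miss(self):
        """Return (fill_token, read_token); the fill's smaller value runs first."""
        fill_token = next(self._next)
        read_token = next(self._next)
        return fill_token, read_token

    def for_queued_read(self):
        """A read joining an existing queue needs no new fill token."""
        return next(self._next)

issuer = TokenIssuer()
assert issuer.for_read_miss() == (1, 2)    # first read request, tag "123"
assert issuer.for_read_miss() == (3, 4)    # second read request, tag "234"
assert issuer.for_queued_read() == 5       # third read request shares the first fill
```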
The priority scheduler 212 can assign the priority indicator (“4”) to the second read request in the outstanding command queue in the requests field of the entry 1102.[00101] The processing device can receive a third read request including the tag “123” and can search the read-only outstanding command queues 208 for a matching tag. The processing device found the matching tag “123” in the entry 1100. As such, the processing device can store the third read request in the read-only outstanding command queue in the request field of the entry 1100. The processing device can increment the queue counter to “2”, as depicted. The priority scheduler 212 can determine that a fill operation is already going to be generated for the data associated with the first read request having the tag “123”, so another fill operation does not need to be generated and assigned a priority indicator. The priority scheduler 212 can generate a priority indicator for just the third read request (“5”) and can assign the priority indicator “5” to the third read request in the outstanding command queue in the requests field of the entry 1100.
[00102] The priority scheduler 212 can use the priority queue 220 to store the read requests and execute the read requests in the order in which the read requests are stored to obtain data from the backing store. As depicted, the first read request assigned priority indicator “2” is stored in the priority queue 220 first and the second read request assigned priority indicator “4” is stored in the priority queue 220 second because its priority indicator has a lesser priority value than that of the first read request. Further, the third read request may not be stored in the priority queue 220 because the first read request having the same tag can obtain the data from the backing store at the address corresponding to the same tag.[00103] The processing device can perform the first read request to obtain the data corresponding to the tag “123” from the backing store. After performing the first read request, the processing device can decrement the read counter to “0” in the entry 1100. A first fill operation for the data obtained from the first read request can be generated and stored in the fill queue 214 with the data obtained from the backing store. The priority scheduler 212 can assign the priority indicator “1” to the first fill operation corresponding to the first read request.[00104] The processing device can perform the second read request to obtain the data corresponding to the tag “234” from the backing store. After performing the second read request, the processing device can decrement the read counter to “0” in the entry 1102. A second fill operation for the data obtained from the second read request can be generated and stored in the fill queue 214 with the data obtained from the backing store.
The priority scheduler 212 can assign the priority indicator “3” to the second fill operation corresponding to the second read request.[00105] The priority scheduler 212 can determine a schedule for executing the read requests and the fill operations based on the priority indicators assigned to the read requests and the fill operations. The schedule can be sequential based on the numerical values. In one example, the schedule is to execute the first fill operation having priority indicator “1”, the first read request having priority indicator “2”, the second fill operation having priority indicator “3”, the second read request having priority indicator “4”, and the third read request having priority indicator “5”.[00106] The processing device can perform the first fill operation by removing the data from the fill queue 214 and storing the data to a cache line corresponding to the tag “123” in the read-only cache. The processing device can decrement the fill counter to “0” in entry 1100. The priority scheduler 212 can obtain the priority indicator “1” and reuse it for subsequent read requests and/or fill operations. The processing device can unblock the read-only outstanding command queue by setting a value of a block bit associated with the read-only outstanding command queue to a value indicating an unblocked state. The processing device can execute the first read request having the next priority indicator “2” while the outstanding command queue in the entry 1100 is unblocked to return the data from the cache line corresponding to tag “123” to the application that sent the first read request. The processing device can decrement the queue counter to “1”. The priority scheduler 212 can obtain the priority indicator “2” and reuse it for subsequent read requests and/or fill operations.[00107] The processing device can search for the read request or the fill operation having the next priority indicator (e.g., “3”) and can determine that the second fill operation is assigned the next priority indicator. The second fill operation can be assigned the next priority instead of the third read request because the second read request associated with the second fill operation was received before the third read request. The processing device can set the block bit corresponding to the read-only outstanding command queue to a value indicating a blocked state to prevent the third read request from executing.[00108] The processing device can perform the second fill operation by removing the data from the fill queue 214 and storing the data to a cache line corresponding to the tag “234” in the read-only cache. The processing device can decrement the fill counter in the entry 1102. The priority scheduler 212 can obtain the priority indicator “3” and reuse it for subsequent read requests and/or fill operations. The processing device can unblock the read-only outstanding command queue by setting a value of a block bit associated with the read-only outstanding command queue to a value indicating an unblocked state.
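Replaying the five operations of this example in token order can be done with a binary heap; the block-bit bookkeeping is omitted here, since with per-miss token pairs every fill already precedes the read it serves (a simplified model of the arbitration):

```python
import heapq

# (priority token, kind, tag) triples from the FIG. 11 walkthrough.
ops = [
    (2, "read", "123"), (4, "read", "234"), (5, "read", "123"),
    (1, "fill", "123"), (3, "fill", "234"),
]
heapq.heapify(ops)  # min-heap: the smallest token is always popped first

executed = [heapq.heappop(ops) for _ in range(5)]
assert [token for token, _, _ in executed] == [1, 2, 3, 4, 5]
assert executed[0] == (1, "fill", "123")   # the fill lands before its read
```

Popping in token order reproduces the schedule of paragraph [00105]: fill “1”, read “2”, fill “3”, read “4”, read “5”.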
The processing device can execute the second read request having the next priority indicator “4” while the outstanding command queue in the entry 1102 is unblocked to return the data at the cache line corresponding to the tag “234” to the application that sent the second read request. The priority scheduler 212 can obtain the priority indicator “4” and reuse it for subsequent read requests and/or fill operations. The queue counter of entry 1102 can be decremented to “0” after the second read request is performed.[00109] The processing device can search for the next priority indicator “5”, which is assigned to the third read request. The processing device can set the block bit associated with the outstanding command queue of entry 1100 to an unblocked state. The processing device can execute the third read request while the outstanding command queue of entry 1100 is unblocked to return data at the cache line corresponding to tag “123” to the application that sent the third request. The priority scheduler 212 can obtain the priority indicator “5” and
reuse it for subsequent read requests and/or fill operations. The queue counter of entry 1100 can be decremented to “0” after the third read request is performed.

[00110] As can be appreciated, the requests can be performed out-of-order between outstanding command queues that correspond to different cache lines. For example, the first read request having priority indicator “2” was performed in the queue of entry 1100, the second read request having priority indicator “4” was performed in the queue of entry 1102, and then the third read request having priority indicator “5” was performed in the queue of entry 1100. This can provide the benefit of improved quality of service, so applications do not have to wait on other requests to complete execution before receiving requested data if the requested data is available. Also, the requests can be performed in-order based on when the requests are received for the same cache line. As depicted, the third request was received after the first request to read data from the cache line corresponding to the same tag, and the third request is stored after the first request in the queue. Using a first-in, first-out outstanding command queue can ensure the requests are processed in the order in which they are received.

[00111] FIG. 12 illustrates an example machine of a computer system 1200 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 1200 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the caching component 113 of FIG. 1).
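The per-cache-line ordering described in paragraph [00110] — in-order within one outstanding command queue, out-of-order across queues for different cache lines — can be sketched as follows. This is a minimal illustration, not the described memory sub-system; the class and method names are hypothetical, and the block bit is modeled as a simple boolean flag.

```python
from collections import deque

class OutstandingCommandQueue:
    """FIFO queue of requests for one cache line (one tag).

    A block bit pauses the queue, e.g. while a fill operation
    for the same cache line is still pending.
    """
    def __init__(self, tag):
        self.tag = tag
        self.requests = deque()  # first-in, first-out
        self.blocked = False     # models the block bit

    def enqueue(self, request):
        self.requests.append(request)

    def execute_ready(self):
        """Drain requests in arrival order while unblocked."""
        done = []
        while self.requests and not self.blocked:
            done.append(self.requests.popleft())
        return done

# Requests for different tags can complete out of order,
# but requests for the same tag complete in arrival order.
q123 = OutstandingCommandQueue("123")
q234 = OutstandingCommandQueue("234")
q123.enqueue("read-1")
q234.enqueue("read-2")
q123.enqueue("read-3")

q234.blocked = True            # fill for tag "234" still pending
order = q123.execute_ready()   # queue for tag "123" drains first
q234.blocked = False           # fill completed; unblock the queue
order += q234.execute_ready()  # now the read for tag "234" runs
```

Here the read for tag "234" completes after both reads for tag "123", even though it arrived between them — out-of-order across cache lines, in-order within each line.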
In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

[00112] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[00113] The example computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1206 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 1218, which communicate with each other via a bus 1230.

[00114] Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1202 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1202 is configured to execute instructions 1226 for performing the operations and steps discussed herein. The computer system 1200 can further include a network interface device 1208 to communicate over the network 1220.

[00115] The data storage system 1218 can include a machine-readable storage medium 1224 (also known as a computer-readable medium) on which is stored one or more sets of instructions 1226 or software embodying any one or more of the methodologies or functions described herein. The instructions 1226 can also reside, completely or at least partially, within the main memory 1204 and/or within the processing device 1202 during execution thereof by the computer system 1200, the main memory 1204 and the processing device 1202 also constituting machine-readable storage media.
The machine-readable storage medium 1224, data storage system 1218, and/or main memory 1204 can correspond to the memory sub-system 110 of FIG. 1.

[00116] In one embodiment, the instructions 1226 include instructions to implement functionality corresponding to a caching component (e.g., the caching component 113 of FIG. 1). While the machine-readable storage medium 1224 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

[00117] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[00118] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

[00119] The present disclosure also relates to an apparatus for performing the operations herein.
This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[00120] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated
that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

[00121] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.

[00122] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
A memory may include two electrodes and phase change material having an amorphous reset state and a partially crystalized set state, coupled between the two electrodes. The phase change material in the set state may have a highly nonlinear current-voltage response in a subthreshold voltage region. The phase change material may be an alloy of indium, antimony, and tellurium. |
CLAIMS

What is claimed is:

1. A memory comprising: two electrodes; and phase change material having an amorphous reset state and a partially crystalized set state, the phase change material being coupled between the two electrodes; wherein the phase change material in the set state has a highly nonlinear current-voltage response in a subthreshold voltage region.

2. The memory of claim 1, wherein the phase change material comprises indium, germanium and tellurium.

3. The memory of claim 1, wherein the phase change material comprises indium, antimony, and tellurium.

4. The memory of claim 1, wherein the phase change material in the set state has a resistance of more than about 200 kΩ if the voltage between the two electrodes is less than about 1.5 V.

5. The memory of claim 1, wherein a current of less than about 1 μA flows through the phase change material in the set state if the voltage between the two electrodes is less than about 1.5 V.

6. The memory of claim 1, wherein a resistance of the phase change material in the set state reduces by more than an order of magnitude if a voltage across the two electrodes is increased to a threshold voltage.

7. The memory of claim 1, wherein a pulse of current of less than about 200 μA through the phase change material, for less than about 100 ns, changes the phase change material from the set state to the reset state.

8. The memory of claim 1, wherein the subthreshold voltage region comprises voltage levels between about 0 V and about 2 V.

9. The memory of claim 1, further comprising an access device, coupled between a control line and one of the two electrodes.

10. The memory of claim 9, wherein the access device is an ovonic threshold switch or a semiconductor diode.

11.
A system comprising: a processor to generate memory control commands; and at least one memory, coupled to the processor, to respond to the memory control commands, the at least one memory comprising: two electrodes; and phase change material having an amorphous reset state and a partially crystalized set state, the phase change material being coupled between the two electrodes; wherein the phase change material in the set state has a highly nonlinear current-voltage response in a subthreshold voltage region.

12. The system of claim 11, wherein the phase change material comprises indium, antimony, and tellurium.

13. The system of claim 11, wherein the phase change material in the set state has a resistance of more than about 200 kΩ if the voltage between the two electrodes is less than about 1.5 V.

14. The system of claim 11, wherein a resistance of the phase change material in the set state reduces by more than an order of magnitude if a voltage across the two electrodes is increased to a threshold voltage.

15. The system of claim 11, wherein a pulse of current of less than about 200 μA through the phase change material, for less than about 100 ns, changes the phase change material from the set state to the reset state.

16. The system of claim 11, wherein the subthreshold voltage region comprises voltage levels between about 0 V and about 2 V.

17. The system of claim 11, wherein the at least one memory further comprises an access device, coupled between a control line and one of the two electrodes.

18. The system of claim 17, wherein the access device is an ovonic threshold switch or a semiconductor diode.

19. The system of claim 11, further comprising: I/O circuitry, coupled to the processor, to communicate with an external device.

20.
A memory element comprising: two electrodes; and phase change material having an amorphous reset state and a partially crystalized set state, coupled between the two electrodes; wherein the phase change material, by atomic percentage, comprises between about 25% and about 40% indium (In); between about 1% and about 15% antimony (Sb); and between about 50% and about 70% tellurium (Te).

21. The memory element of claim 20, further comprising an access device, coupled between a control line and one of the two electrodes.

22. The memory element of claim 21, wherein the access device is an ovonic threshold switch or a semiconductor diode.
LOW POWER PHASE CHANGE MEMORY CELL

BACKGROUND

Technical Field

The present subject matter relates to semiconductor phase change memory, and more specifically, to a low power phase change memory cell to use in a phase change memory with switch (PCMS) semiconductor memory.

Background Art

Memory for computers or other electronic devices can include blocks of memory cells integrated into a larger integrated circuit or stand-alone integrated circuits. There are many different types of memory, including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), flash memory, and phase change memory. Phase change memory devices utilize materials that have different electrical properties in their crystalline and amorphous phases. Each phase change memory cell may be programmed by putting the material in the memory cell into either a crystalline phase or an amorphous phase, providing non-volatile memory that does not require power to retain its contents. Phase change memories are often programmed using heat generated by an electrical current to control the state of the phase change material. Phase change memory cells may be made from chalcogenide materials. Chalcogenide materials include at least one element from group 16 (also known as Group VIA) of the periodic table, such as sulfur (S), selenium (Se), and tellurium (Te). Chalcogenide phase change material, when heated to a temperature above its melting point and allowed to cool quickly, will remain in an amorphous glass-like state with a high electrical resistance. The chalcogenide phase change material, when heated to a temperature above its glass transition temperature Tg but below the melting point, will transform into a crystalline phase with a much lower resistance. This difference in the material properties between the amorphous and crystalline phases of chalcogenide materials may be used to create a phase change memory device.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate various embodiments. Together with the general description, the drawings serve to explain various principles. In the drawings:

FIGS. 1A and 1B show a cross-sectional diagram of an embodiment of a phase change memory element in the reset state and set state, respectively;

FIG. 2 shows a graph of current-voltage response of a phase change material useful for embodiments;

FIG. 3 shows an array of phase change memory cells including access devices and associated circuitry for various embodiments; and

FIG. 4 shows an embodiment of a system utilizing an embodiment of phase change memory.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well known methods, procedures and components have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present concepts. A number of descriptive terms and phrases are used in describing the various embodiments of this disclosure. These descriptive terms and phrases are used to convey a generally agreed upon meaning to those skilled in the art unless a different definition is given in this specification. Reference now is made in detail to the examples illustrated in the accompanying drawings and discussed below.

FIGS. 1A and 1B show a cross-sectional diagram of an embodiment of a phase change memory element in the reset state 100A and set state 100B, respectively.
The phase change memory element 100A/B may be a part of a cross-point memory array, and may be fabricated on a semiconductor substrate 101 that may include various layers, patterns, doping levels, or other materials and may include circuitry, conductors, and/or insulators. A first electrode 111 may be created on the substrate 101 and may be separated by an insulating layer 102, such as an oxide, from other conductors or circuitry in some embodiments. A layer of phase change material 120 may be deposited over the first electrode 111. The phase change material may be in a non-conductive or highly resistive amorphous reset state so that the phase change material of the memory element 121/122 is insulated from neighboring memory elements. In some embodiments the phase change material may be patterned, with little if any phase change material outside of the memory cell area, but some embodiments may have large areas of a die covered by an unpatterned layer of phase change material, taking advantage of the non-conductive state of the amorphous phase change material to insulate the memory cells or other elements from each other. A second electrode 112, which may be separated from other conductors by an insulating layer 103, may be deposited on top of the phase change material 120. In other embodiments, the layout of the memory cell may be horizontal instead of vertical, with the two electrodes on opposite sides of the phase change material of the memory cell. The phase change material 120 has an amorphous reset state which may be the state of the phase change material 120 as it is deposited on the memory device. The phase change material 120 may be essentially non-conductive in the amorphous reset state, which may be defined as having a resistance of greater than about 1,000 mega-ohms (MΩ) for typical geometries of the phase change material.
The thickness of the phase change material 120 may vary between embodiments but may be between about 30 nanometers (nm) and 100 nm in some embodiments. The area of the electrodes 111, 112 may also affect the resistance and the area of the electrodes may vary between embodiments, but some embodiments may have electrodes that are between about 10 nm and about 100 nm on a side. The section of the phase change material 121 positioned between the two electrodes 111, 112 may be in the amorphous state in FIG. 1A, so that the resistance between the two electrodes 111, 112 may be greater than about 1,000 MΩ. Traditional phase change materials may form a fully crystalized state which may have a relatively low resistance, such as below 1,000 ohms at typical geometries, and may not have a threshold voltage. The threshold voltage may be defined as a voltage at which the resistivity of the phase change material dramatically changes. The phase change materials described herein may partially crystalize and may have a threshold voltage, similar to an ovonic threshold switch (OTS), so the phase change material 120 also has a partially crystalized set state. In FIG. 1B, the memory 100B is in the set state, so the section of the phase change material 122 positioned between the two electrodes 111, 112 is in the partially crystalized state. In the partially crystalized state, the phase change material 122 may have a highly nonlinear current-voltage response at subthreshold voltage levels, as is shown in FIG. 2. The resistance of the phase change material between the two electrodes 111, 112 may be greater than about 100-200 kilo-ohms (kΩ), and more than 1 MΩ in some embodiments, at typical geometries at subthreshold voltage levels, such as less than about 1.5 V for some phase change materials 120 in the partially crystalized set state. Various material compositions may be used for the phase change material 120.
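Given the resistance windows described above (reset state greater than about 1,000 MΩ, set state on the order of hundreds of kΩ to a few MΩ), a read circuit could in principle classify the cell state by comparing a measured subthreshold resistance against a single demarcation value. The sketch below uses illustrative numbers taken from this description; the demarcation value and function name are assumptions, not a specification.

```python
RESET_MIN_OHMS = 1_000e6  # amorphous reset state: > ~1,000 megaohms
SET_MIN_OHMS = 200e3      # partially crystalized set state: > ~200 kilohms

# A demarcation resistance placed between the two windows; any value
# well above the set range and well below the reset range would work.
DEMARCATION_OHMS = 100e6

def classify_state(resistance_ohms):
    """Return 'reset' or 'set' from a subthreshold resistance reading."""
    return "reset" if resistance_ohms >= DEMARCATION_OHMS else "set"

print(classify_state(2_000e6))  # a ~2 gigaohm reading: amorphous reset
print(classify_state(1e6))      # a ~1 megaohm reading: partially crystalized set
```

The wide separation between the two windows (roughly three orders of magnitude) is what makes a single-threshold comparison robust against geometry and material variation.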
The inventors believe that a wide range of phase change materials may be suitable for the embodiments described herein. The phase change material may be a chalcogenide material and may include tellurium. In some embodiments, the phase change material may be an alloy of indium (In), germanium (Ge) and tellurium (Te), which may be referred to as an IGT alloy, although other elements may be included for some IGT alloys. In some embodiments the phase change material may be an alloy of indium (In), antimony (Sb), and tellurium (Te), which may be referred to as an IST alloy, although other elements may be included for some IST alloys. IST alloys that may be suitable for embodiments may include IST alloys with atomic percentages of between about 25% and about 40% indium (In), between about 1% and about 15% antimony (Sb), and between about 50% and about 70% tellurium (Te). The phase change material 120 may be changed from the amorphous reset state to the partially crystalized state by heating the phase change material to a specific temperature for a predefined period of time. The phase change materials described herein may use less power to reset the phase change material from the set state to the reset state due to self-heating effects caused by the relatively high resistance of the phase change material 120 in the set state. In traditional phase change materials, a programming current in excess of 1 milli-amp (mA) may be used to convert the phase change material from the set state back to the reset state due to the low resistance of traditional phase change materials in the fully crystalized set state. The phase change material 120 may use less than 200 micro-amps (μA) of current, and in some cases less than 100 μA of current, to change the phase change material of the memory in the partially crystallized set state 122 to the amorphous reset state 121.
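As a rough illustration of the current savings described above, one can compare the charge delivered by a traditional >1 mA reset pulse with that of a <200 μA pulse. The assumption of equal pulse durations (the ~100 ns figure from this description) is an illustrative simplification; actual pulse shapes and durations vary between materials and embodiments.

```python
TRADITIONAL_RESET_A = 1e-3  # ~1 mA reset current, traditional materials
LOW_POWER_RESET_A = 200e-6  # <~200 uA reset current, material described here
PULSE_S = 100e-9            # assumed ~100 ns reset pulse for both cases

# Charge delivered per reset pulse (Q = I * t)
q_traditional = TRADITIONAL_RESET_A * PULSE_S
q_low_power = LOW_POWER_RESET_A * PULSE_S

reduction = q_traditional / q_low_power
print(f"reset current and charge reduced by {reduction:.0f}x")
```

At equal pulse width the current (and delivered charge) drops by at least 5×, and since resistive heating scales with I²R, the self-heating contribution of the higher set-state resistance lets the smaller current still reach the melt temperature.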
This may be due to the much higher resistance of the phase change material in the set state 122 as compared to traditional phase change materials. In at least one embodiment, a pulse of current of less than about 200 μA through the phase change material, for less than about 100 ns, may change the phase change material from the set state to the reset state, which may be much less power than traditional phase change materials may use to perform a reset. The higher resistance of the set state in the phase change material 120 may also reduce leakage current of the phase change memory cell. This may allow lower power devices to be fabricated and/or larger memory cells to be constructed. The higher threshold voltage of the phase change material in the set state 122 as compared to traditional phase change materials may also increase blocking margin for embodiments using a phase change memory with switch (PCMS) architecture.

FIG. 2 shows a graph 200 of current-voltage response of a phase change material useful for embodiments. The graph 200 shows a voltage level on the x-axis using a linear scale and a current level on the y-axis using a logarithmic scale. Data was collected at several places in an array of phase change memory on a test chip using an exemplary IST alloy as described above, and is shown in the graph 200. The reset response curve 201 represents the amount of current that was found to flow through the phase change material at a given voltage level if the phase change material was in the amorphous reset state. The set response curve 210 represents the amount of current that was found to flow through the phase change material at a given voltage level if the phase change material was in the partially crystalized set state. The set response curve 210 of the phase change material in the partially crystalized set state is highly nonlinear.
A linear response is shown by the curve 231, which would be a straight line if it were plotted on a graph with a linear x-axis and a linear y-axis. For the purposes of this disclosure and claims, a current-voltage response may be considered to be highly nonlinear in the subthreshold region 230 if the curve departs more than about -50% or about +100% from a linear response at one or more voltage levels in the subthreshold region 230. This means that the resistance of the phase change material, which may be defined as voltage/current, is strongly dependent on voltage. The subthreshold voltage region, which may vary between phase change materials, is a range of voltages that are below an amount of voltage that may be required to activate the ovonic switch response of the phase change material in the partially crystalized set state, which may also be referred to as a threshold voltage. So a subthreshold voltage level may be any voltage in a range between about 0 volts (V) and the threshold voltage of a phase change material. In the example shown, the subthreshold region 230 may be a voltage range of about 0 V to about 2 V. So the linear response curve 231 represents a linear response from the origin to a point on the set response curve 210 at about 2 V. It can be easily seen that the set response curve 210 departs dramatically from the linear response curve. For example, at about 0.25 V, the set response curve 210 yields a current of about 6×10⁻¹⁰ amps (A), while the linear response curve 231 yields a current of about 2×10⁻⁸ A, so at that point, the set response curve departs from the linear response curve 231 by about -97%. A high resistance of the phase change material in the partially crystalized set state may be a characteristic of a suitable material. The resistance may be measured at any point in the subthreshold region 230 and the characteristic of a high resistance may vary between embodiments.
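The departure-from-linear calculation in the paragraph above can be reproduced directly. The two current values are the approximate figures read from graph 200 at about 0.25 V; the variable names are illustrative.

```python
# Currents at ~0.25 V, read from graph 200 of this description:
i_linear = 2e-8     # amps, linear reference curve 231
i_measured = 6e-10  # amps, set response curve 210

# Fractional departure of the measured current from the linear reference
departure = (i_measured - i_linear) / i_linear
print(f"departure from linear response: {departure:+.0%}")  # -97%

# "Highly nonlinear" per this disclosure: departure beyond about
# -50% or about +100% somewhere in the subthreshold region.
is_highly_nonlinear = departure < -0.5 or departure > 1.0
```

The -97% result matches the figure quoted in the text, comfortably beyond the -50% bound that defines a highly nonlinear response here.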
But in at least some embodiments, a resistance over 100 kΩ may be considered a high resistance, with some embodiments of phase change materials having a set state resistance of 1 mega-ohm (MΩ) or higher. The high resistance may be exhibited over an entire subthreshold voltage range, but in at least one embodiment, the resistance of the phase change material in the set state may be greater than about 200 kΩ at voltages of less than about 1.5 V, which is represented by the 200 kΩ curve 220 on the graph 200, which may represent the amount of current that would flow through a 200 kΩ resistor at a given voltage. So a material with a set response curve 210 that is below the 200 kΩ curve 220 may have a high resistance in the set state. In other embodiments, the resistance of the phase change material in the set state may be greater than about 1 MΩ, so less than about 1 μA of current may flow through the phase change material if the voltage across the material is about 1.5 V. If the voltage across the phase change material in the set state is increased beyond the subthreshold range to a threshold voltage, the resistance of the phase change material in the set state may quickly reduce dramatically. In various embodiments the reduction of the resistance may be more than an order of magnitude if the voltage reaches a threshold voltage. The threshold voltage may vary in embodiments, but may be in a range of about 1.5 V to about 3 volts, depending on the composition of the phase change material. Once the threshold voltage is reached and the resistance drops, the current may rise dramatically, which may cause the voltage to reduce due to limits of the voltage source and/or source resistance, which may be referred to as snapback. The resistance may stay at a low level until the voltage across the phase change material drops below a holding voltage. Once the voltage drops below the holding voltage, the resistance may rise to its former high value. FIG.
3 shows an array 300 of phase change memory cells including access devices, or phase change memory with switch (PCMS) cells 331-334, and associated circuitry 314, 315 for various embodiments. The array 300 shows four PCMS cells 331-334 with two word lines 341, 342 and two bit lines 351, 352, although most embodiments may contain a much greater number of cells and associated word lines and bit lines. PCMS cell 331 may be representative of the other PCMS cells 332-334. PCMS cell 331 may include two electrodes that may be referred to as a first electrode 311 and a second electrode 312, phase change material 320 coupled between the two electrodes 311, 312, and an access device (or switch) 325. The phase change material 320 may have a highly nonlinear IV curve at subthreshold voltage levels as shown in FIG. 2 and may have an amorphous reset state and a partially crystalized set state. The access device 325 may be any type of device suitable for integration into the array 300, including, but not limited to, an ovonic threshold switch (OTS), a transistor, a semiconductor diode, or another device that is capable of regulating the current that passes through the phase change material 320. An OTS may be made of a chalcogenide alloy that does not exhibit an amorphous to crystalline phase change and which undergoes a rapid, electric field initiated change in electrical conductivity that persists only so long as a holding voltage is present. Some embodiments may not include an access device as a part of the memory cells in the array 300. The access device 325 may be electrically coupled between the word line 341 and the first electrode 311, or the access device 325 may be electrically coupled between the second electrode 312 and the bit line 351, depending on the embodiment. Row circuitry 314 may drive the word lines 341, 342 and column circuitry 315 may be coupled to the bit lines 351, 352.
A particular combination of word line 341, 342 and bit line 351, 352 may select a particular PCMS cell to be read. For example, to select PCMS cell 331, word line 341 and bit line 351 may be used. Some embodiments may have an additional set of control lines for programming the memory cells by changing the phase change material between the amorphous reset state and the partially crystalized set state. The additional set of control lines may be parallel to the bit lines 351, 352 that couple directly to the second electrodes of a column of cells, such as the second electrode 312 of cell 331 and the second electrode of cell 333. In other embodiments, the first electrode 311 may be a heater element and may have other control lines coupled to the first electrode 311 to allow current to flow through the first electrode 311 to heat the phase change material 320. The row circuitry 314 and/or column circuitry 315 may implement several functions, depending on the embodiment. Different embodiments may implement the various functions in either the row circuitry 314 or the column circuitry 315, or may utilize both the row circuitry 314 and column circuitry 315 to implement a function. Circuitry to provide appropriate voltage and/or current to the word lines 341, 342, bit lines 351, 352, and/or other control lines may be implemented in the row circuitry 314 and/or column circuitry 315, so that the various memory cells, such as PCMS cell 331, may be written to and read. Reading may be accomplished by applying a demarcation voltage across the memory cell and determining whether or not current flows through the memory cell or by comparing the resistance of the phase change material 320 to a known resistance.
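The word-line/bit-line selection and demarcation-voltage read described above can be sketched as a small model of the 2×2 array of FIG. 3. This is a simplified illustration, not the disclosed circuit: the cell contents, resistance values, demarcation voltage, and sense-current threshold are all assumed numbers, and the nonlinear set-state IV curve is approximated by a single linear resistance.

```python
# Minimal model of the 2x2 PCMS array of FIG. 3.
# True = set (partially crystalized), False = reset (amorphous).
cells = {
    (341, 351): True,   # PCMS cell 331
    (341, 352): False,  # PCMS cell 332
    (342, 351): False,  # PCMS cell 333
    (342, 352): True,   # PCMS cell 334
}

SET_OHMS = 1e6       # representative set-state resistance (~1 megaohm)
RESET_OHMS = 1e9     # representative reset-state resistance (~1,000 megaohms)
V_DEMARCATION = 1.0  # subthreshold read voltage (illustrative)
I_SENSE = 1e-7       # sense-amplifier trip current (illustrative)

def read_cell(word_line, bit_line):
    """Apply a demarcation voltage to the selected cell and test whether
    enough current flows to trip the sense amplifier."""
    resistance = SET_OHMS if cells[(word_line, bit_line)] else RESET_OHMS
    current = V_DEMARCATION / resistance  # Ohm's-law approximation
    return current > I_SENSE  # True -> set state, False -> reset state

print(read_cell(341, 351))  # cell 331: set state, current trips the sense amp
print(read_cell(341, 352))  # cell 332: reset state, current stays below trip
```

Even in this linearized model the three-orders-of-magnitude resistance contrast makes the read unambiguous: the set cell sources ~1 μA while the reset cell sources ~1 nA at the same read voltage.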
Data may be written to the memory cells by heating the phase change material 320 to an appropriate temperature to change the material from the amorphous reset state to the partially crystalized set state or from the partially crystalized set state to the amorphous reset state. Details of the implementations may vary widely, depending on the embodiment, and should be easily understood by one of ordinary skill in the art. FIG. 4 is a block diagram of an embodiment of an electronic system 400 that includes a memory device 410 having a memory array 417 of phase change memory cells. A processor 401 is coupled to the memory device 410 with control/address lines 403 and data lines 404. In some embodiments, data and control may utilize the same lines. The processor 401 may be an external microprocessor, microcontroller, or some other type of external controlling circuitry. In some embodiments, the processor 401 may be integrated in the same package or even on the same die as the memory device 410. In some embodiments, the processor 401 may be integrated with the control circuitry 411, allowing some of the same circuitry to be used for both functions. The processor 401 may have external memory, such as random access memory (RAM) and read only memory (ROM), used for program storage and intermediate data, or it may have internal RAM or ROM. In some embodiments, the processor may use the memory device 410 for program or data storage. A program running on the processor 401 may implement many different functions including, but not limited to, an operating system, a file system, defective chunk remapping, and error management. In some embodiments an external connection 402 is provided. The external connection 402 is coupled to the processor 401 and allows the processor 401 to communicate to external devices. Additional circuitry may be used to couple the external connection 402 to the processor 401. 
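A toy model of the write operation just described (heating the material to set or reset it) might look like the following. The reset-pulse figures echo the example values given elsewhere in this disclosure (a pulse of under about 200 μA for under about 100 ns changes set to reset); the set-pulse values are invented for illustration only.

```python
# Illustrative sketch of choosing a program pulse for set vs. reset.
# Reset figures follow the example values in this disclosure; the set
# figures (100 uA, 500 ns) are assumptions made up for this sketch.

def program_pulse(target_state):
    """Return (current_uA, duration_ns) for the requested state."""
    if target_state == "reset":
        # Melt-quench: short, higher-current pulse -> amorphous reset state
        return (200, 100)
    if target_state == "set":
        # Anneal: longer, lower-current pulse -> partially crystallized set state
        return (100, 500)  # assumed values
    raise ValueError("state must be 'set' or 'reset'")

print(program_pulse("reset"))  # (200, 100)
print(program_pulse("set"))    # (100, 500)
```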
If the electronic system 400 is a storage system, the external connection 402 may be used to provide an external device with non-volatile storage. The electronic system 400 may be a solid-state drive (SSD), a USB thumb drive, a secure digital card (SD Card), or any other type of storage system. The external connection 402 may be used to connect to a computer or other intelligent device such as a cell phone or digital camera using a standard or proprietary communication protocol. Examples of computer communication protocols that the external connection may be compatible with include, but are not limited to, any version of the following protocols: Universal Serial Bus (USB), Serial Advanced Technology Attachment (SATA), Small Computer System Interconnect (SCSI), Fibre Channel, Parallel Advanced Technology Attachment (PATA), Integrated Drive Electronics (IDE), Ethernet, IEEE-1394, Secure Digital Card interface (SD Card), Compact Flash interface, Memory Stick interface, Peripheral Component Interconnect (PCI) or PCI Express. If the electronic system 400 is a computing system, such as a mobile telephone, a tablet, a notebook computer, a set-top box, or some other type of computing system, the external connection 402 may be a network connection such as, but not limited to, any version of the following protocols: Institute of Electrical and Electronic Engineers (IEEE) 802.3, IEEE 802.11, Data Over Cable Service Interface Specification (DOCSIS), digital television standards such as Digital Video Broadcasting (DVB)-Terrestrial, DVB-Cable, and Advanced Television Systems Committee (ATSC), and mobile telephone communication protocols such as Global System for Mobile Communication (GSM), protocols based on code division multiple access (CDMA) such as CDMA2000, and Long Term Evolution (LTE). The memory device 410 may include an array 417 of phase change memory cells. The memory cells may be fabricated using low power phase change material as described above. 
Address lines and control lines 403 may be received and decoded by control circuitry 411, I/O circuitry 413 and address circuitry 412, which may provide control to the memory array 417. I/O circuitry 413 may couple to the data lines 404, allowing data to be received from and sent to the processor 401. Data read from the memory array 417 may be temporarily stored in read buffers 419. Data to be written to the memory array 417 may be temporarily stored in write buffers 418 before being transferred to the memory array 417. The system illustrated in FIG. 4 has been simplified to facilitate a basic understanding of the features of the memory. Many different embodiments are possible, including using a single processor 401 to control a plurality of memory devices 410 to provide for more storage space. Additional functions, such as a video graphics controller driving a display, and other devices for human oriented I/O may be included in some embodiments. Unless otherwise indicated, all numbers expressing quantities of elements, optical characteristic properties, and so forth used in the specification and claims are to be understood as being modified in all instances by the term "about." The recitation of numerical ranges by endpoints includes all numbers subsumed within that range (e.g., 1 to 5 includes 1, 2.78, 3.33, and 5). Numbers should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. As used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the content clearly dictates otherwise. Furthermore, as used in this specification and the appended claims, the term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise. As used herein, the term "coupled" includes direct and indirect connections. 
Moreover, where first and second devices are coupled, intervening devices including active devices may be located therebetween. Examples of various embodiments are described in the following paragraphs: An example of a memory may include two electrodes, and phase change material having an amorphous reset state and a partially crystalized set state, the phase change material being coupled between the two electrodes. The phase change material in the set state has a highly nonlinear current-voltage response in a subthreshold voltage region. In some examples of the memory, the phase change material may include indium, germanium and tellurium. In some examples of the memory, the phase change material may include indium, antimony, and tellurium. In some examples of the memory, the phase change material in the set state has a resistance of more than about 200 kΩ if the voltage between the two electrodes is less than about 1.5 V. In some examples of the memory, a current of less than about 1 μA flows through the phase change material in the set state if the voltage between the two electrodes is less than about 1.5 V. In some examples of the memory, a resistance of the phase change material in the set state reduces by more than an order of magnitude if a voltage across the two electrodes is increased to a threshold voltage. In some examples of the memory, a pulse of current of less than about 200 μA through the phase change material, for less than about 100 ns, changes the phase change material from the set state to the reset state. In some examples of the memory, the subthreshold voltage region may include voltage levels between about 0 V and about 2 V. Some example memories may include an access device, coupled between a control line and one of the two electrodes. In some examples of the memory, the access device is an ovonic threshold switch or a semiconductor diode. Any combination of the examples of this paragraph may be used in embodiments. 
An example memory element may include two electrodes, and phase change material having an amorphous reset state and a partially crystalized set state, coupled between the two electrodes. The phase change material, by atomic percentage, may include between about 25% and about 40% indium (In), between about 1% and about 15% antimony (Sb), and between about 50% and about 70% tellurium (Te). Some example memory elements may also include an access device, coupled between a control line and one of the two electrodes. In some example memory devices, the access device is an ovonic threshold switch or a semiconductor diode. An example system may include a processor to generate memory control commands, and at least one memory, coupled to the processor, to respond to the memory control commands. Some example systems may also include I/O circuitry, coupled to the processor, to communicate with an external device. Any combination of the examples of this paragraph and the previous two paragraphs may be used in embodiments. The description of the various embodiments provided above is illustrative in nature and is not intended to limit this disclosure, its application, or uses. Thus, different variations beyond those described herein are intended to be within the scope of embodiments. Such variations are not to be regarded as a departure from the intended scope of this disclosure. As such, the breadth and scope of the present disclosure should not be limited by the above-described exemplary embodiments, but should be defined only in accordance with the following claims and equivalents thereof. |
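As a quick sanity check on the example composition windows above, a short sketch can verify that a candidate In-Sb-Te composition falls inside the stated atomic-percentage ranges and totals 100%. The helper name and tolerance are assumptions made for this example.

```python
# Illustrative check of an In-Sb-Te composition against the example
# atomic-percentage windows given above (In 25-40%, Sb 1-15%, Te 50-70%).
RANGES = {"In": (25, 40), "Sb": (1, 15), "Te": (50, 70)}

def composition_ok(pct):
    """True if each element is within its window and the total is 100%."""
    in_range = all(lo <= pct[e] <= hi for e, (lo, hi) in RANGES.items())
    return in_range and abs(sum(pct.values()) - 100) < 1e-9

print(composition_ok({"In": 35, "Sb": 5, "Te": 60}))   # True
print(composition_ok({"In": 20, "Sb": 10, "Te": 70}))  # False (In below 25%)
```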
Semiconductor devices having group III-V material active regions and graded gate dielectrics and methods of fabricating such devices are described. In an example, a semiconductor device includes a group III-V material channel region disposed above a substrate. A gate stack is disposed on the group III-V material channel region. The gate stack includes a graded high-k gate dielectric layer disposed directly between the III-V material channel region and a gate electrode. The graded high-k gate dielectric layer has a lower dielectric constant proximate the III-V material channel region and has a higher dielectric constant proximate the gate electrode. Source/drain regions are disposed on either side of the gate stack. |
1. An integrated circuit structure, comprising: a nanowire channel structure comprising indium, gallium and arsenic; a gate dielectric on and surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

2. The integrated circuit structure of claim 1, further comprising: a source contact adjacent a first side of the gate electrode; and a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

3. The integrated circuit structure of claim 2, further comprising: a first dielectric spacer between the source contact and the first side of the gate electrode; and a second dielectric spacer between the drain contact and the second side of the gate electrode.

4. An integrated circuit structure, comprising: a fin-FET channel structure comprising indium, gallium and arsenic; a gate dielectric on a top and sidewalls of the fin-FET channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the fin-FET channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the fin-FET channel structure and distal from the gate electrode.

5. The integrated circuit structure of claim 4, further comprising: a source contact adjacent a first side of the gate electrode; and a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

6. The integrated circuit structure of claim 5, further comprising: a first dielectric spacer between the source contact and the first side of the gate electrode; and a second dielectric spacer between the drain contact and the second side of the gate electrode.

7. An integrated circuit structure, comprising: a nanowire channel structure comprising indium, gallium and arsenic; a gate dielectric on and only partially surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

8. The integrated circuit structure of claim 7, further comprising: a source contact adjacent a first side of the gate electrode; and a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

9. The integrated circuit structure of claim 8, further comprising: a first dielectric spacer between the source contact and the first side of the gate electrode; and a second dielectric spacer between the drain contact and the second side of the gate electrode.

10. An integrated circuit structure, comprising: a nanowire channel structure comprising indium, gallium and arsenic; a gate dielectric on and completely surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

11. The integrated circuit structure of claim 10, further comprising: a source contact adjacent a first side of the gate electrode; and a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

12. The integrated circuit structure of claim 11, further comprising: a first dielectric spacer between the source contact and the first side of the gate electrode; and a second dielectric spacer between the drain contact and the second side of the gate electrode.

13. A method of fabricating an integrated circuit structure, the method comprising: forming a nanowire channel structure comprising indium, gallium and arsenic; forming a gate dielectric on and surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and forming a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

14. The method of claim 13, further comprising: forming a source contact adjacent a first side of the gate electrode; and forming a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

15. The method of claim 14, further comprising: forming a first dielectric spacer between the source contact and the first side of the gate electrode; and forming a second dielectric spacer between the drain contact and the second side of the gate electrode.

16. A method of fabricating an integrated circuit structure, the method comprising: forming a fin-FET channel structure comprising indium, gallium and arsenic; forming a gate dielectric on a top and sidewalls of the fin-FET channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and forming a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the fin-FET channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the fin-FET channel structure and distal from the gate electrode.

17. The method of claim 16, further comprising: forming a source contact adjacent a first side of the gate electrode; and forming a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

18. The method of claim 17, further comprising: forming a first dielectric spacer between the source contact and the first side of the gate electrode; and forming a second dielectric spacer between the drain contact and the second side of the gate electrode.

19. A method of fabricating an integrated circuit structure, the method comprising: forming a nanowire channel structure comprising indium, gallium and arsenic; forming a gate dielectric on and only partially surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and forming a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

20. The method of claim 19, further comprising: forming a source contact adjacent a first side of the gate electrode; and forming a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

21. The method of claim 20, further comprising: forming a first dielectric spacer between the source contact and the first side of the gate electrode; and forming a second dielectric spacer between the drain contact and the second side of the gate electrode.

22. A method of fabricating an integrated circuit structure, the method comprising: forming a nanowire channel structure comprising indium, gallium and arsenic; forming a gate dielectric on and completely surrounding the nanowire channel structure, the gate dielectric comprising hafnium, aluminum and oxygen; and forming a gate electrode on the gate dielectric, the gate electrode comprising a metal, wherein the gate dielectric has a greatest concentration of hafnium proximate the gate electrode and distal from the nanowire channel structure, and wherein the gate dielectric has a greatest concentration of aluminum proximate the nanowire channel structure and distal from the gate electrode.

23. The method of claim 22, further comprising: forming a source contact adjacent a first side of the gate electrode; and forming a drain contact adjacent a second side of the gate electrode opposite the first side of the gate electrode.

24. The method of claim 23, further comprising: forming a first dielectric spacer between the source contact and the first side of the gate electrode; and forming a second dielectric spacer between the drain contact and the second side of the gate electrode. |
TECHNICAL FIELD
Embodiments of the invention are in the field of semiconductor devices and, in particular, non-planar semiconductor devices having group III-V material active regions and graded gate dielectrics.
BACKGROUND
For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant. Semiconductor devices formed in epitaxially grown semiconductor hetero-structures, such as in group III-V material systems, offer exceptionally high carrier mobility in the transistor channels due to low effective mass along with reduced impurity scattering. Such devices provide high drive current performance and appear promising for future low power, high speed logic applications. However, significant improvements are still needed in the area of group III-V material-based devices. Additionally, in the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. Many different techniques have been attempted to reduce junction leakage of such transistors. 
However, significant improvements are still needed in the area of junction leakage suppression.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1A illustrates a cross-sectional view of a portion of a gate all-around non-planar semiconductor device having a group III-V material active region and a cladding layer.
Figure 1B illustrates a cross-sectional view of a portion of a gate all-around non-planar semiconductor device having a group III-V material active region and a graded high-k gate dielectric layer directly thereon, in accordance with an embodiment of the present invention.
Figure 2A is a plot of dielectric constant as a function of % Al incorporation for a TaAlOx dielectric layer, in accordance with an embodiment of the present invention.
Figure 2B is a plot of C/A (in F/cm2) as a function of Vg (in Volts) for a TaSiOx dielectric layer.
Figure 2C is a plot of C/A (in F/cm2) as a function of Vg (in Volts) for a TaAlOx dielectric layer, in accordance with an embodiment of the present invention.
Figures 3A-3E illustrate cross-sectional views representing various operations in a method of fabricating a non-planar semiconductor device having a group III-V material active region with a graded gate dielectric, in accordance with an embodiment of the present invention.
Figure 4 illustrates an angled view of a non-planar semiconductor device having a group III-V material active region with a graded gate dielectric, in accordance with an embodiment of the present invention.
Figure 5A illustrates a three-dimensional cross-sectional view of a nanowire-based semiconductor structure having a graded gate dielectric, in accordance with an embodiment of the present invention.
Figure 5B illustrates a cross-sectional channel view of the nanowire-based semiconductor structure of Figure 5A, as taken along the a-a' axis, in accordance with an embodiment of the present invention.
Figure 5C illustrates a cross-sectional spacer view of the nanowire-based semiconductor structure of Figure 5A, as 
taken along the b-b' axis, in accordance with an embodiment of the present invention.
Figure 6 illustrates a computing device in accordance with one implementation of the invention.
DESCRIPTION OF THE EMBODIMENTS
Semiconductor devices having group III-V material active regions and graded gate dielectrics and methods of fabricating such devices are described. In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present invention. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale. One or more embodiments described herein are directed to semiconductor devices, such as non-planar semiconductor devices, having group III-V material active regions with graded gate dielectrics. In particular, graded oxide/passivation features for group III-V material non-planar transistors are described. Embodiments may cover approaches for fabricating devices having one or more of a graded gate oxide, a III-V material channel, high-k gate dielectrics, high mobility channel regions, low off-state leakage, and oxide grading for high µeff, and may be applicable to transistors (such as metal oxide semiconductor field effect transistors (MOSFETs)) based on non-silicon channel configurations. 
In an embodiment, various approaches are provided for achieving dielectric constant grading for high quality oxide on high mobility channels. To provide general context for one or more embodiments described herein, past architectures for related devices may include or invoke a leakage path in a III-V material based transistor. The leakage path may be below the gate electrode and through a larger band-gap bottom barrier, since the larger band-gap material is in contact with a homogeneous high-k gate dielectric and may not be compatible with such a dielectric. Such contact with the high-k gate dielectric may result in a large density of interface traps and allow for a conduction path outside of the gate control of the device, thereby degrading the off-state leakage of the III-V transistor. Such issues may be enhanced in non-planar transistor structures. To provide a more specific context for one or more embodiments described herein, fabrication of a gate dielectric directly on a channel region, and especially in a thin body, gate all around architecture with novel channel materials such as III-V and Ge, is challenging yet integral to achieving high performance, well controlled transistors. When the channel mobility is high and the dielectric constant of the oxide is large, there can be considerable mobility degradation due to phonon scattering between the dielectric and the channel region. The degradation may be worse the higher the mobility of the channel region and the higher the dielectric constant of the oxide. However, both are needed for continued scaling and performance enhancement. Accordingly, one or more embodiments described herein target new dielectric combinations that grade the dielectric constant from low near the channel interface to very high distal from the channel interface, thereby achieving thin EOT and high effective dielectric constant. 
Nonetheless, the dielectric constant is maintained at a minimum near the channel, where the interaction is strongest, improving both the overall mobility and oxide quality, and maintaining good channel control for high performance ultra scaled transistors. In accordance with an embodiment of the present invention, then, a dielectric material layer is graded such that the dielectric constant is low near a channel region and high near a metal gate to achieve higher mobility channels in high mobility material systems without sacrificing gate control or charge. In one such example, improved oxide-III-V channel characteristics are demonstrated as being beyond the best state of the art. In one embodiment, overall oxide thickness (charge) is maintained, but continuous dielectric constant grading is achieved by the introduction of a ternary oxide (e.g., TaAlOx as an example) where grading the levels of Ta and Al results in a dielectric constant that is low at the channel-oxide interface and high at the metal-oxide interface. In an embodiment, the resultant transistor has improved Dit at the channel interface, and improved mobility because of the dielectric grading. The ternary oxide also can enable the freedom of engineering the dielectric constant in the gate region. As an example of a conventional approach, Figure 1A illustrates a cross-sectional view of a portion of a gate all-around non-planar semiconductor device 100 having a group III-V material active region and a cladding layer thereon. Referring to Figure 1A, an InGaAs channel 106 has an InP cladding layer 107 disposed thereon. A homogeneous TaSiOx gate dielectric 122 and metal gate 124 make up the gate stack disposed on the InGaAs channel 106/InP cladding layer 107 pairing. 
For the example shown in Figure 1A, experimental details of which are described below, challenges remain for such a gate-all-around device in that (1) Dit is still higher than for Si-HfO2, and (2) there is a 30-60% mobility loss in cases where TaSiOx is formed directly on InGaAs, i.e., an encumbering cladding layer is needed. By contrast to Figure 1A, as an example of a cladding-free device, Figure 1B illustrates a cross-sectional view of a portion of a gate all-around non-planar semiconductor device 200 having a group III-V material active region and a graded high-k gate dielectric layer directly thereon, in accordance with an embodiment of the present invention. In particular, semiconductor device 200 includes a III-V material channel region 206 (the exemplary embodiment here is gate-all-around InGaAs) surrounded by a gate stack composed of a TaAlOx gate dielectric 220 and a metal gate electrode 224. In one embodiment, then, a new dielectric (TaAlOx) is situated directly between the gate and channel of device 200. In an embodiment, the Al and Ta ratios are graded within the TaAlOx gate dielectric layer 220 in order to provide a lower dielectric constant of approximately 8 (e.g., Al rich) at the channel interface which is graded to a higher dielectric constant (approximately 21, and even as high as 30) at the metal gate interface by increasing the Ta content. Embodiments may also or instead include graded materials of various combinations of dielectrics (e.g., LaAlOx, TiAlOx, HfAlOx, ZrAlOx, etc.). In an embodiment, advantages of such an arrangement include, but are not limited to, (1) lower K and better interface properties with high Al% to provide better mobility, and (2) a dielectric constant that is readily gradable, e.g., from 8 to 21, to enable thin EOT and high mobility without use of an intervening InP cladding layer. 
In another embodiment, the above described aluminum component is substituted with Si, which is graded throughout the film, e.g., such as a graded layer of TiSiOx. One or more embodiments described herein, then, enable direct dielectric growth on channel wire material without the need for a cladding layer. This allows the fabrication of smaller dimensions, e.g., thin wires. In an embodiment, by grading the dielectric layer, a gradual transition of film composition is achieved that is smooth, with dielectric changes occurring in a non-stepwise fashion. In an embodiment, dielectric constant increments of approximately 2 can be made in the graded dielectric layer approximately every 2-3 Angstroms of deposited material. Referring again to Figure 1B, then, in an embodiment, the graded high-k gate dielectric layer 220 is composed of MAlOx having a greater concentration of aluminum proximate the III-V material channel region and a lesser concentration of aluminum proximate the gate electrode. M is a metal such as, but not limited to, Ta, Zr, Hf, Gd, La, or Ti. In one embodiment, M is Ta, the lower dielectric constant is approximately 8, and the higher dielectric constant is approximately 21. In one embodiment, the graded high-k gate dielectric layer has a thickness approximately in the range of 2-3.5 nanometers. In one embodiment, the III-V material channel region is composed of InGaAs, the graded high-k gate dielectric layer is composed of TaAlOx, and the gate electrode is a metal gate electrode. In an embodiment, the TaAlOx is formed by atomic layer deposition (ALD), where the Al is delivered by trimethylaluminum (TMA) or Et2MeAl, and the Ta is delivered by TaCl5 or Ta ethoxide. In one embodiment, the formation of TaAlOx is effectively viewed as inserting Al atoms into some O sites in Ta2O5. 
In an embodiment, aluminum is required in the graded dielectric, but the Ta may be substituted with Zr, Hf, Gd, La, or Ti. Figure 2A is a plot 150 of dielectric constant as a function of % Al incorporation for a TaAlOx dielectric layer, in accordance with an embodiment of the present invention. Referring to plot 150, the dielectric constant has been graded approximately from 8 to 20 by increasing the Ta content in the oxide. A lower dielectric constant at the interface decreases the optical phonon scattering caused by strong oxide bonds (high K oxides) and results in improved mobility in the channel. Figure 2B is a plot 160 of C/A (in F/cm2) as a function of Vg (in Volts) for a TaSiOx dielectric layer. By contrast, Figure 2C is a plot 170 of C/A (in F/cm2) as a function of Vg (in Volts) for a TaAlOx dielectric layer, in accordance with an embodiment of the present invention. Referring to plots 160 and 170 as taken together, in addition to the mobility enhancement achieved without need to resort to a dual layer oxide, the oxide quality of the TaAlOx-III-V interface is improved over that of the state of the art TaSiOx. The C-V curves show reduced frequency dispersion for the TaAlOx dielectric as compared with TaSiOx. The improved oxide characteristics can provide improved mobility and channel control. In one aspect, methods of fabricating a group III-V material-based semiconductor structure with a graded high-k gate dielectric layer are provided. For example, Figures 3A-3E illustrate cross-sectional views representing various operations in a method of fabricating a non-planar semiconductor device having a group III-V material active region with a graded gate dielectric, in accordance with an embodiment of the present invention. It is to be understood that like feature designations of Figures 3A-3E may be as described in association with Figure 1B. Referring to Figure 3A, a bottom barrier layer 328 is formed above a substrate 302. 
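To illustrate why grading can deliver both thin EOT and a tolerable interface, the graded film can be approximated as a stack of thin sublayers acting as capacitors in series. The sketch below assumes a linear grade from k ≈ 8 at the channel to k ≈ 21 at the gate across a 3 nm film in roughly 2.5 Angstrom steps (consistent with the 2-3.5 nm thickness and the ~2 dielectric constant increments per 2-3 Angstroms described above); the linear profile itself is an assumption made for this example, not a statement of the actual deposition recipe.

```python
# Illustrative series-capacitor estimate of effective k and EOT for a
# linearly graded 3 nm TaAlOx film (k from ~8 at the channel to ~21 at
# the gate). The linear profile and sublayer size are assumptions.
K_SIO2 = 3.9          # reference dielectric constant of SiO2
t_total_nm = 3.0      # assumed total film thickness
step_nm = 0.25        # ~2.5 Angstrom sublayers
n = int(t_total_nm / step_nm)

# Dielectric constant sampled at each sublayer midpoint
ks = [8 + (21 - 8) * (i + 0.5) / n for i in range(n)]

# Capacitors in series: EOT is additive over sublayers, t_i * (3.9 / k_i)
eot_nm = sum(K_SIO2 * step_nm / k for k in ks)
k_eff = K_SIO2 * t_total_nm / eot_nm   # harmonic-mean effective constant

print(round(k_eff, 1), round(eot_nm, 2))  # -> 13.5 0.87
```

Note that the effective constant is the harmonic mean of the profile (about 13.5 here), which sits below the arithmetic midpoint of 8 and 21: the low-k region near the channel dominates the series stack, which is exactly the trade the grading scheme accepts in exchange for a better channel interface.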
A III-V material layer is then formed on bottom barrier layer 328 and patterned to form three-dimensional material body 206 with channel region 308. Alternatively, the III-V material layer may be formed after or during the trench formation described below in association with Figure 3C.

Referring to Figure 3B, a hetero-structure 390, which may include a top barrier layer 326 and source and drain material region 310, is formed above the three-dimensional material body 206 (or above the III-V material layer, if not yet patterned).

Referring to Figure 3C, a trench 312 is formed in hetero-structure 390 and partially into bottom barrier layer 328, exposing channel region 308. In an embodiment, trench 312 is formed by a dry or wet etch process.

Referring to Figure 3D, a graded dielectric layer 220 is formed in trench 312 and surrounding channel region 308. Then, referring to Figure 3E, a gate electrode 224 is formed on the graded dielectric layer 220.

Thus, Figure 3E illustrates a cross-sectional view of a non-planar semiconductor device 300 having a group III-V material active region with a graded gate dielectric layer, in accordance with an embodiment of the present invention. Referring again to Figure 3E, the semiconductor device 300 includes a hetero-structure 304 disposed above the substrate 302. The hetero-structure 304 includes the three-dimensional group III-V material body 206 having the channel region 308. The source and drain material region 310 is disposed above the three-dimensional group III-V material body 206. A trench is disposed in the source and drain material region 310, separating a source region 314 from a drain region 316, and exposing at least a portion of the channel region 308. A gate stack 318 is disposed in the trench and on the exposed portion of the channel region 308. The gate stack 318 includes the graded dielectric layer 220 and the gate electrode 224.
Although depicted as T-shaped, gate electrode 224 may instead have the T-portions trimmed in order to reduce capacitance effects. It is to be appreciated that the gate stack 318 includes a portion below the channel region 308, as is depicted in Figure 3E.

Referring again to Figure 3E, in an embodiment, the hetero-structure 304 further includes a top barrier layer 326 (shown by the dashed lines in Figure 3E) disposed between the source and drain material region 310 and the three-dimensional group III-V material body 206. The trench is also disposed in the top barrier layer 326. In an embodiment, the hetero-structure 304 further includes the bottom barrier layer 328 disposed between the substrate 302 and the three-dimensional group III-V material body 206. In one such embodiment, the trench is also partially disposed in the bottom barrier layer 328, completely exposing the channel region 308. In that embodiment, the gate stack 318 completely surrounds the channel region 308, as indicated in Figure 3E.

Substrate 302 may be composed of a material suitable for semiconductor device fabrication. In one embodiment, substrate 302 is a bulk substrate composed of a single crystal of a material which may include, but is not limited to, silicon, germanium, silicon-germanium or a III-V compound semiconductor material. In another embodiment, substrate 302 includes a bulk layer with a top epitaxial layer. In a specific embodiment, the bulk layer is composed of a single crystal of a material which may include, but is not limited to, silicon, germanium, silicon-germanium, a III-V compound semiconductor material or quartz, while the top epitaxial layer is composed of a single crystal layer which may include, but is not limited to, silicon, germanium, silicon-germanium or a III-V compound semiconductor material. In another embodiment, substrate 302 includes a top epitaxial layer on a middle insulator layer which is above a lower bulk layer.
The top epitaxial layer is composed of a single crystal layer which may include, but is not limited to, silicon (e.g., to form a silicon-on-insulator (SOI) semiconductor substrate), germanium, silicon-germanium or a III-V compound semiconductor material. The insulator layer is composed of a material which may include, but is not limited to, silicon dioxide, silicon nitride or silicon oxy-nitride. The lower bulk layer is composed of a single crystal which may include, but is not limited to, silicon, germanium, silicon-germanium, a III-V compound semiconductor material or quartz. Substrate 302 may further include dopant impurity atoms.

Hetero-structure 304 includes a stack of one or more crystalline semiconductor layers, such as a compositional buffer layer (not shown) with the bottom barrier layer 328 disposed thereon. The compositional buffer layer may be composed of a crystalline material suitable to provide a specific lattice structure onto which a bottom barrier layer may be formed with negligible dislocations. For example, in accordance with an embodiment of the present invention, the compositional buffer layer is used to change, by a gradient of lattice constants, the exposed growth surface of semiconductor hetero-structure 304 from the lattice structure of substrate 302 to one that is more compatible for epitaxial growth of high quality, low defect layers thereon. In one embodiment, the compositional buffer layer acts to provide a more suitable lattice constant for epitaxial growth instead of an incompatible lattice constant of substrate 302. In an embodiment, substrate 302 is composed of single-crystal silicon and the compositional buffer layer grades to a bottom barrier layer composed of a layer of InAlAs having a thickness of approximately 1 micron.
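The role of the compositional buffer can be illustrated with a rough lattice-mismatch estimate. The sketch below uses Vegard's law (linear interpolation between binary lattice constants), with standard literature values for InAs, AlAs, and silicon; the linear rule and these numerical values are conventional textbook approximations, not part of the disclosure.

```python
# Rough lattice-mismatch estimate motivating the compositional buffer.
# Vegard's law (linear interpolation) is an approximation; the lattice
# constants below are standard literature values, not from the text.

A_INAS = 6.0583  # Angstrom, InAs
A_ALAS = 5.6611  # Angstrom, AlAs
A_SI = 5.4310    # Angstrom, silicon substrate

def vegard_inalas(x_in):
    """Relaxed lattice constant of In(x)Al(1-x)As by Vegard's law."""
    return x_in * A_INAS + (1.0 - x_in) * A_ALAS

def mismatch_pct(a_layer, a_sub):
    """Lattice mismatch of a layer relative to a substrate, in percent."""
    return 100.0 * (a_layer - a_sub) / a_sub

a_barrier = vegard_inalas(0.65)  # In0.65Al0.35As bottom barrier
print(f"a(In0.65Al0.35As) ~= {a_barrier:.3f} Angstrom, "
      f"mismatch to Si ~= {mismatch_pct(a_barrier, A_SI):.1f}%")
```

The several-percent mismatch between such a barrier and a silicon substrate is the kind of incompatibility the graded buffer is intended to bridge.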
In an alternative embodiment, the compositional buffer layer is omitted because the lattice constant of substrate 302 is suitable for the growth of a bottom barrier layer 328 for a quantum-well semiconductor device.

The bottom barrier layer 328 may be composed of a material suitable to confine a wave-function in a quantum-well formed thereon. In accordance with an embodiment of the present invention, the bottom barrier layer 328 has a lattice constant suitably matched to the top lattice constant of the compositional buffer layer, e.g., the lattice constants are similar enough that dislocation formation in the bottom barrier layer 328 is negligible. In one embodiment, the bottom barrier layer 328 is composed of a layer of approximately In0.65Al0.35As having a thickness of approximately 10 nanometers. In a specific embodiment, the bottom barrier layer 328 composed of the layer of approximately In0.65Al0.35As is used for quantum confinement in an N-type semiconductor device. In another embodiment, the bottom barrier layer 328 is composed of a layer of approximately In0.65Al0.35Sb having a thickness of approximately 10 nanometers. In a specific embodiment, the bottom barrier layer 328 composed of the layer of approximately In0.65Al0.35Sb is used for quantum confinement in a P-type semiconductor device.

The three-dimensional group III-V material body 206 may be composed of a material suitable to propagate a wave-function with low resistance. In accordance with an embodiment of the present invention, three-dimensional group III-V material body 206 has a lattice constant suitably matched to the lattice constant of the bottom barrier layer 328 of hetero-structure 304, e.g., the lattice constants are similar enough that dislocation formation in three-dimensional group III-V material body 206 is negligible. In an embodiment, three-dimensional group III-V material body 206 is composed of groups III (e.g., boron, aluminum, gallium or indium) and V (e.g.,
nitrogen, phosphorus, arsenic or antimony) elements. In one embodiment, three-dimensional group III-V material body 206 is composed of InAs, InSb, or InGaAs. The three-dimensional group III-V material body 206 may have a thickness suitable to propagate a substantial portion of a wave-function, e.g., suitable to inhibit a significant portion of the wave-function from entering the bottom barrier layer 328 of hetero-structure 304 or a top barrier layer (e.g., barrier layer 326) formed on three-dimensional group III-V material body 206. In an embodiment, three-dimensional group III-V material body 206 has a thickness (height) approximately in the range of 50 - 100 Angstroms. The width (dimension taken into the page as shown) may have approximately the same dimension, providing a three-dimensional wire-type feature.

Top barrier layer 326 may be composed of a material suitable to confine a wave-function in a III-V material body/channel region formed thereunder. In accordance with an embodiment of the present invention, top barrier layer 326 has a lattice constant suitably matched to the lattice constant of channel region 308, e.g., the lattice constants are similar enough that dislocation formation in top barrier layer 326 is negligible. In one embodiment, top barrier layer 326 is composed of a layer of material such as, but not limited to, N-type InGaAs. Source and drain material region 310 may be a doped group III-V material region, such as a more heavily doped structure formed from the same or similar material as top barrier layer 326. In other embodiments, the composition of source and drain material region 310, aside from doping differences, differs from the material of top barrier layer 326.

Semiconductor device 200 or 300 may be a semiconductor device incorporating a gate, a channel region and a pair of source/drain regions. In an embodiment, semiconductor device 200 or 300 is one such as, but not limited to, a MOS-FET or a Microelectromechanical System (MEMS).
In one embodiment, semiconductor device 200 or 300 is a planar or three-dimensional MOS-FET and is an isolated device or is one device in a plurality of nested devices. As will be appreciated for a typical integrated circuit, both N- and P-channel transistors may be fabricated on a single substrate to form a CMOS integrated circuit. Furthermore, additional interconnect wiring may be fabricated in order to integrate such devices into an integrated circuit.

The above described devices can be viewed as trench-based devices, where a gate wraps a channel region within a trench of a stack of III-V material layers. However, other devices may include protruding III-V channel regions, such as in tri-gate or FIN-FET based MOS-FETs. For example, Figure 4 illustrates an angled view of a non-planar semiconductor device having a group III-V material active region with a graded gate dielectric, in accordance with an embodiment of the present invention.

Referring to Figure 4, a semiconductor device 400 includes a hetero-structure 404 disposed above a substrate 302. The hetero-structure 404 includes a bottom barrier layer 328. A three-dimensional group III-V material body 206 with a channel region 308 is disposed above the bottom barrier layer 328. A gate stack 318 is disposed to surround at least a portion of the channel region 308. In an embodiment, not viewable from the perspective of Figure 4, the gate stack completely surrounds the channel region 308. The gate stack 318 includes a gate electrode 224 and a graded gate dielectric layer 220. The gate stack may further include dielectric spacers 460.

Source and drain regions 314/316 may be formed in or on portions of the three-dimensional group III-V material body 206 not surrounded by gate stack 318. Furthermore, a top barrier layer may be included in those regions as well. Also, isolation regions 470 may be included.
Although depicted in Figure 4 as being somewhat aligned with the bottom of the bottom barrier layer 328, it is to be understood that the depth of the isolation regions 470 may vary. Also, although depicted in Figure 4 as being somewhat aligned with the top of the bottom barrier layer 328, it is to be understood that the height of the isolation regions 470 may vary. It is also to be understood that like feature designations of Figure 4 may be as described in association with Figures 1B and 3A-3E.

In another aspect, Figure 5A illustrates a three-dimensional cross-sectional view of a group III-V material nanowire-based semiconductor structure having a graded gate dielectric, in accordance with an embodiment of the present invention. Figure 5B illustrates a cross-sectional channel view of the group III-V material nanowire-based semiconductor structure of Figure 5A, as taken along the a-a' axis. Figure 5C illustrates a cross-sectional spacer view of the group III-V material nanowire-based semiconductor structure of Figure 5A, as taken along the b-b' axis.

Referring to Figure 5A, a semiconductor device 500 includes one or more vertically stacked group III-V material nanowires (550 set) disposed above a substrate 302. Embodiments herein are targeted at both single wire devices and multiple wire devices. As an example, a three-nanowire device having nanowires 550A, 550B and 550C is shown for illustrative purposes. For convenience of description, nanowire 550A is used as an example where description is focused on only one of the nanowires. It is to be understood that where attributes of one nanowire are described, embodiments based on a plurality of nanowires may have the same attributes for each of the nanowires.

At least the first nanowire 550A includes a group III-V material channel region 308. The group III-V material channel region 308 has a length (L).
Referring to Figure 5B, the group III-V material channel region 308 also has a perimeter orthogonal to the length (L). Referring to both Figures 5A and 5B, a gate electrode stack 318 surrounds the entire perimeter of each of the channel regions of each nanowire 550, including group III-V material channel region 308. The gate electrode stack 318 includes a gate electrode along with a graded gate dielectric layer disposed between the channel regions and the gate electrode (not individually shown). The group III-V material channel region 308 and the channel regions of the additional nanowires 550B and 550C are discrete in that they are completely surrounded by the gate electrode stack 318 without any intervening material such as underlying substrate material or overlying channel fabrication materials. Accordingly, in embodiments having a plurality of nanowires 550, the channel regions of the nanowires are also discrete relative to one another, as depicted in Figure 5B.

Referring to Figures 5A-5C, a bottom barrier layer 328 is disposed above substrate 302. The bottom barrier layer 328 is further disposed below the one or more nanowires 550. In an embodiment, the group III-V material channel region 308 is completely surrounded by gate electrode stack 318, as depicted in Figure 5B.

Referring again to Figure 5A, each of the nanowires 550 also includes source and drain regions 314 and 316 disposed in or on the nanowire on either side of the channel regions, including on either side of group III-V material channel region 308. In an embodiment, the source and drain regions 314/316 are embedded source and drain regions, e.g., at least a portion of the nanowires is removed and replaced with a source/drain material region. However, in another embodiment, the source and drain regions 314/316 are composed of, or at least include, portions of the one or more nanowires 550.

A pair of contacts 570 is disposed over the source/drain regions 314/316.
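Because the gate wraps the full perimeter of every wire in the stack, the effective electrostatic gate width scales with the number of wires times the wrapped perimeter of each. The 50-100 Angstrom body cross-section comes from the dimensions given earlier; the square cross-section and the three-wire count below are illustrative assumptions consistent with Figures 5A-5B, not specified design values.

```python
# Hedged sketch: effective gate width of a gate-all-around nanowire
# stack, counting the full wrapped perimeter of each wire. The square
# cross-section and wire count are illustrative assumptions.

def effective_width_nm(n_wires, height_nm, width_nm):
    """Total electrostatic width: n wires times the wrapped perimeter."""
    perimeter = 2.0 * (height_nm + width_nm)
    return n_wires * perimeter

# Three stacked wires (e.g., 550A-550C), 7 nm x 7 nm each
w_eff = effective_width_nm(3, 7.0, 7.0)
print(f"effective gate width ~= {w_eff:.0f} nm")
```

This is one reason vertically stacked wires are attractive: drive width multiplies with the wire count while the planar footprint stays that of a single wire.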
In an embodiment, the semiconductor device 500 further includes a pair of spacers 540. The spacers 540 are disposed between the gate electrode stack 318 and the pair of contacts 570. As described above, the channel regions and the source/drain regions are, in at least several embodiments, made to be discrete. However, not all regions of the nanowires 550 need be, or even can be made to be, discrete. For example, referring to Figure 5C, nanowires 550A-550C are not discrete at the location under spacers 540. In one embodiment, the stack of nanowires 550A-550C has intervening semiconductor material 580 there between. In one embodiment, the bottom nanowire 550A is still in contact with a portion of the bottom barrier layer 328, which is otherwise recessed for gate stack 318 formation ( Figure 5B ). Thus, in an embodiment, a portion of the plurality of vertically stacked nanowires 550 under one or both of the spacers 540 is non-discrete.

It is to be understood that like feature designations of Figures 5A-5C may be as described in association with Figures 1B, 3A-3E and 4. Also, although the device 500 described above is for a single device, a CMOS architecture may also be formed to include both NMOS and PMOS nanowire-based devices disposed on or above the same substrate. In an embodiment, the nanowires 550 may be sized as wires or ribbons, and may have squared-off or rounded corners.

Advantages of one or more embodiments described above may include one or more of (1) a lower dielectric constant and better interface properties with high Al % for improved mobility at the channel region, (2) a dielectric constant readily gradable from 8 to 21 to enable thin EOT and high mobility without the use of a cladding layer such as InP, and (3) enabling an extension of Moore's Law or increasing the performance of CMOS transistors.
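The "thin EOT" advantage can be made concrete with the usual series-capacitor estimate, treating infinitesimal slabs of the graded film as capacitors in series: EOT = k_SiO2 * integral(dz / k(z)), with k_SiO2 of about 3.9. The endpoint dielectric constants (8 and 21) and the thickness come from the embodiments above; the linear k(z) profile is an assumed grading, so this is a sketch rather than the disclosed design method.

```python
import math

# Hedged sketch: equivalent oxide thickness (EOT) of a linearly graded
# dielectric via the series-capacitance integral
#   EOT = k_SiO2 * integral(dz / k(z)).
# Endpoint k values (8, 21) and 3 nm thickness follow the embodiments
# above; the linear k(z) profile is an assumption.

K_SIO2 = 3.9

def eot_graded_nm(thickness_nm, k_low, k_high):
    """EOT when k(z) varies linearly from k_low to k_high."""
    if k_low == k_high:
        return K_SIO2 * thickness_nm / k_low
    # closed form of the integral for a linear k(z) profile
    return K_SIO2 * thickness_nm * math.log(k_high / k_low) / (k_high - k_low)

def eot_uniform_nm(thickness_nm, k):
    """EOT of a uniform dielectric of constant k."""
    return K_SIO2 * thickness_nm / k

graded = eot_graded_nm(3.0, 8.0, 21.0)
uniform = eot_uniform_nm(3.0, 21.0)
print(f"graded EOT ~= {graded:.2f} nm, uniform high-k EOT ~= {uniform:.2f} nm")
```

The graded stack pays a modest EOT penalty relative to a uniform high-k film of the same thickness, in exchange for the lower-k, lower-scattering region placed only at the channel interface.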
One benefit may include achieving high-mobility, highly scaled transistors and continuing Moore's Law and transistor improvements for high performance, low power microprocessors.

Embodiments described above involving a single layer graded high-k gate dielectric may be distinguished from dual dielectric layer arrangements where two distinct dielectric films are fabricated, typically with a step in dielectric constant at the interface of the dual layers. Present embodiments may provide improved solutions, improved oxide qualities, and the ability to grade the dielectric constant to achieve desired charge and mobility enhancement. In accordance with embodiments described herein, a graded dielectric layer such as described above has been demonstrated on experimental capacitors to have improved interface qualities versus TaSiOx. On the same capacitors, the oxide has been shown to have a gradable dielectric constant by varying the Ta and Al content in the ternary oxide. Mobility has been demonstrated independently to improve with grading such that oxide thickness is maintained while at the same time lowering the dielectric constant only at the channel-oxide interface that is most dominant for scattering.

Thus, one or more embodiments described herein are targeted at III-V material active region arrangements integrated with graded gate dielectrics. Although described above with respect to benefits for non-planar and gate-all-around devices, benefits may also be achieved for planar devices without gate wraparound features. Thus, such arrangements may be included to form III-V material-based transistors such as planar devices, fin or tri-gate based devices, and gate-all-around devices, including nanowire-based devices. Embodiments described herein may be effective for junction isolation in metal-oxide-semiconductor field effect transistors (MOSFETs).
It is to be understood that formation of materials such as the III-V material layers described herein may be performed by techniques such as, but not limited to, chemical vapor deposition (CVD) or molecular beam epitaxy (MBE), or other like processes.

Figure 6 illustrates a computing device 600 in accordance with one implementation of the invention. The computing device 600 houses a board 602. The board 602 may include a number of components, including but not limited to a processor 604 and at least one communication chip 606. The processor 604 is physically and electrically coupled to the board 602. In some implementations, the at least one communication chip 606 is also physically and electrically coupled to the board 602. In further implementations, the communication chip 606 is part of the processor 604.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to the board 602. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 606 enables wireless communications for the transfer of data to and from the computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium.
The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 600 may include a plurality of communication chips 606. For instance, a first communication chip 606 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 606 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 604 of the computing device 600 includes an integrated circuit die packaged within the processor 604. In some implementations of the invention, the integrated circuit die of the processor includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 606 also includes an integrated circuit die packaged within the communication chip 606.
In accordance with another implementation of the invention, the integrated circuit die of the communication chip includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention.

In further implementations, another component housed within the computing device 600 may contain an integrated circuit die that includes one or more devices, such as MOS-FET transistors built in accordance with implementations of the invention.

In various implementations, the computing device 600 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 600 may be any other electronic device that processes data.

Thus, embodiments of the present invention include non-planar semiconductor devices having group III-V material active regions and graded gate dielectrics and methods of fabricating such devices.

In an embodiment, a semiconductor device includes a group III-V material channel region disposed above a substrate. A gate stack is disposed on the group III-V material channel region. The gate stack includes a graded high-k gate dielectric layer disposed directly between the III-V material channel region and a gate electrode. The graded high-k gate dielectric layer has a lower dielectric constant proximate the III-V material channel region and has a higher dielectric constant proximate the gate electrode. Source/drain regions are disposed on either side of the gate stack.

In one embodiment, the graded high-k gate dielectric layer is composed of MAlOx having a greater concentration of aluminum proximate the III-V material channel region and a lesser concentration of aluminum proximate the gate electrode.
M is a metal such as, but not limited to, Ta, Zr, Hf, Gd, La, or Ti.

In one embodiment, M is Ta, the lower dielectric constant is approximately 8, and the higher dielectric constant is approximately 21.

In one embodiment, the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.

In one embodiment, the III-V material channel region is composed of InGaAs, the graded high-k gate dielectric layer is composed of TaAlOx, and the gate electrode is a metal gate electrode.

In an embodiment, a semiconductor device includes a hetero-structure disposed above a substrate and having a three-dimensional group III-V material body with a channel region. A source and drain material region is disposed above the three-dimensional group III-V material body. A trench is disposed in the source and drain material region separating a source region from a drain region, and exposing at least a portion of the channel region. A gate stack is disposed in the trench and on the exposed portion of the channel region. The gate stack includes a graded high-k gate dielectric layer conformal with the trench and the channel region, and a gate electrode disposed on the graded high-k gate dielectric layer.

In one embodiment, the graded high-k gate dielectric layer has a lower dielectric constant proximate the channel region and has a higher dielectric constant proximate the gate electrode.

In one embodiment, the graded high-k gate dielectric layer is composed of MAlOx having a greater concentration of aluminum proximate the channel region and a lesser concentration of aluminum proximate the gate electrode.
M is a metal such as, but not limited to, Ta, Zr, Hf, Gd, La, or Ti.

In one embodiment, M is Ta, the lower dielectric constant is approximately 8, and the higher dielectric constant is approximately 21.

In one embodiment, the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.

In one embodiment, the material channel region is composed of InGaAs, the graded high-k gate dielectric layer is composed of TaAlOx, and the gate electrode is a metal gate electrode.

In one embodiment, the hetero-structure further includes a top barrier layer disposed between the source and drain material region and the three-dimensional group III-V material body. The trench is also disposed in the top barrier layer.

In one embodiment, the hetero-structure further includes a bottom barrier layer disposed between the substrate and the three-dimensional group III-V material body.

In one embodiment, the trench is also partially disposed in the bottom barrier layer, completely exposing the channel region, and the gate stack completely surrounds the channel region.

In an embodiment, a semiconductor device includes a vertical arrangement of a plurality of group III-V material nanowires disposed above a substrate. A gate stack is disposed on and completely surrounds the channel region of each of the group III-V material nanowires. The gate stack includes a graded high-k gate dielectric layer disposed on each of the channel regions. A gate electrode is disposed on the graded high-k gate dielectric layer.
Source and drain regions surround portions of each of the group III-V material nanowires, on either side of the gate stack.

In one embodiment, the graded high-k gate dielectric layer has a lower dielectric constant proximate each of the channel regions and has a higher dielectric constant proximate the gate electrode.

In one embodiment, the graded high-k gate dielectric layer is composed of MAlOx having a greater concentration of aluminum proximate the channel regions and a lesser concentration of aluminum proximate the gate electrode. M is a metal such as, but not limited to, Ta, Zr, Hf, Gd, La, or Ti.

In one embodiment, M is Ta, the lower dielectric constant is approximately 8, and the higher dielectric constant is approximately 21.

In one embodiment, the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.

In one embodiment, the channel regions are composed of InGaAs, the graded high-k gate dielectric layer is composed of TaAlOx, and the gate electrode is a metal gate electrode.

In one embodiment, the semiconductor structure further includes a top barrier layer disposed between the source and drain regions and each of the group III-V material nanowires.

In one embodiment, the semiconductor structure further includes a bottom barrier layer disposed between the substrate and the bottom-most group III-V material nanowire. A bottom portion of the gate stack is disposed on the bottom barrier layer.

Embodiments of the invention further include the following:

1.
A semiconductor device, comprising:

a group III-V material channel region disposed above a substrate;

a gate stack disposed on the group III-V material channel region, the gate stack comprising a graded high-k gate dielectric layer disposed directly between the III-V material channel region and a gate electrode, wherein the graded high-k gate dielectric layer has a lower dielectric constant proximate the III-V material channel region and has a higher dielectric constant proximate the gate electrode; and

source/drain regions disposed on either side of the gate stack.

2. The semiconductor device of claim 1, wherein the graded high-k gate dielectric layer comprises MAlOx having a greater concentration of aluminum proximate the III-V material channel region and a lesser concentration of aluminum proximate the gate electrode, where M is selected from the group consisting of Ta, Zr, Hf, Gd, La, and Ti.

3. The semiconductor device of claim 2, wherein M is Ta, and wherein the lower dielectric constant is approximately 8 and the higher dielectric constant is approximately 21.

4. The semiconductor structure of claim 1, wherein the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.

5. The semiconductor device of claim 1, wherein the III-V material channel region comprises InGaAs, the graded high-k gate dielectric layer comprises TaAlOx, and the gate electrode is a metal gate electrode.

6.
A semiconductor device, comprising:

a hetero-structure disposed above a substrate and comprising a three-dimensional group III-V material body with a channel region;

a source and drain material region disposed above the three-dimensional group III-V material body;

a trench disposed in the source and drain material region separating a source region from a drain region, and exposing at least a portion of the channel region; and

a gate stack disposed in the trench and on the exposed portion of the channel region, the gate stack comprising:

a graded high-k gate dielectric layer conformal with the trench and the channel region; and

a gate electrode disposed on the graded high-k gate dielectric layer.

7. The semiconductor device of claim 6, wherein the graded high-k gate dielectric layer has a lower dielectric constant proximate the channel region and has a higher dielectric constant proximate the gate electrode.

8. The semiconductor device of claim 7, wherein the graded high-k gate dielectric layer comprises MAlOx having a greater concentration of aluminum proximate the channel region and a lesser concentration of aluminum proximate the gate electrode, where M is selected from the group consisting of Ta, Zr, Hf, Gd, La, and Ti.

9. The semiconductor device of claim 8, wherein M is Ta, and wherein the lower dielectric constant is approximately 8 and the higher dielectric constant is approximately 21.

10. The semiconductor structure of claim 7, wherein the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.

11. The semiconductor device of claim 7, wherein the material channel region comprises InGaAs, the graded high-k gate dielectric layer comprises TaAlOx, and the gate electrode is a metal gate electrode.

12.
The semiconductor structure of claim 6, the hetero-structure further comprising: a top barrier layer disposed between the source and drain material region and the three-dimensional group III-V material body, wherein the trench is also disposed in the top barrier layer.13. The semiconductor structure of claim 6, the hetero-structure further comprising: a bottom barrier layer disposed between the substrate and the three-dimensional group III-V material body.14. The semiconductor structure of claim 13, wherein the trench is also partially disposed in the bottom barrier layer, completely exposing the channel region, and wherein the gate stack completely surrounds the channel region.15. A semiconductor device, comprising:a vertical arrangement of a plurality of group III-V material nanowires disposed above a substrate;a gate stack disposed on and completely surrounding a channel region of each of the group III-V material nanowires, the gate stack comprising:a graded high-k gate dielectric layer disposed on each of the channel regions; anda gate electrode disposed on the graded high-k gate dielectric layer; and source and drain regions surrounding portions of each of the group III-V material nanowires, on either side of the gate stack.16. The semiconductor device of claim 15, wherein the graded high-k gate dielectric layer has a lower dielectric constant proximate each of the channel regions and has a higher dielectric constant proximate the gate electrode.17. The semiconductor device of claim 16, wherein the graded high-k gate dielectric layer comprises MAlOx having a greater concentration of aluminum proximate the channel regions and a lesser concentration of aluminum proximate the gate electrode, where M is selected from the group consisting of Ta, Zr, Hf, Gd, La, and Ti.18. The semiconductor device of claim 17, wherein M is Ta, and wherein the lower dielectric constant is approximately 8 and the higher dielectric constant is approximately 21.19. 
The semiconductor structure of claim 16, wherein the graded high-k gate dielectric layer has a thickness approximately in the range of 2 - 3.5 nanometers.20. The semiconductor device of claim 16, wherein the channel regions comprise InGaAs, the graded high-k gate dielectric layer comprises TaAlOx, and the gate electrode is a metal gate electrode.21. The semiconductor structure of claim 15, further comprising:a top barrier layer disposed between the source and drain regions and each of the group III-V material nanowires.22. The semiconductor structure of claim 15, further comprising:a bottom barrier layer disposed between the substrate and the bottom-most group III-V material nanowire, wherein a bottom portion of the gate stack is disposed on the bottom barrier layer.
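The claims above describe a dielectric constant graded from approximately 8 near the channel to approximately 21 near the gate electrode. As an illustration only (not part of the claims), the effective dielectric constant of such a graded layer can be estimated by treating it as many thin sub-layers in series; the linear grading profile used here is an assumption.

```python
# Illustrative sketch: effective dielectric constant of a graded high-k layer,
# modeled as many thin sub-layers in series. The endpoint values k = 8 and
# k = 21 come from the claims; the linear grading profile is assumed.
def effective_k(k_channel=8.0, k_gate=21.0, slices=1000):
    # Capacitors in series add reciprocally, so with equal slice thicknesses
    # the effective k is the harmonic mean of the graded profile.
    inv_sum = sum(1.0 / (k_channel + (k_gate - k_channel) * (i + 0.5) / slices)
                  for i in range(slices))
    return slices / inv_sum

k_eff = effective_k()  # lies between 8 and 21, weighted toward the lower k
```

Because series capacitance is dominated by the low-k side, the effective value for a linear 8-to-21 grade (about 13.5) sits below the arithmetic mean of 14.5.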
A standard cell IC may include a plurality of pMOS transistors each including a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate. Each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors may be coupled to a first voltage source. The standard cell IC may also include a plurality of nMOS transistors each including an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate. Each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors may be coupled to a second voltage source lower than the first voltage source.
1. A standard cell integrated circuit (IC), comprising:a plurality of p-type metal oxide semiconductor (MOS) (pMOS) transistors, each pMOS transistor of the plurality of pMOS transistors having a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate, each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors being coupled to a first voltage source, each pMOS transistor gate of the plurality of pMOS transistors being formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects, each of the pMOS gate interconnects extending in a first direction and being coupled to the first voltage source, wherein one or more of the pMOS gate interconnects are connected to a metal layer one (M1) interconnect extending along the one or more pMOS gate interconnects for coupling to the first voltage source; anda plurality of n-type MOS (nMOS) transistors, each nMOS transistor of the plurality of nMOS transistors having an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate, each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors being coupled to a second voltage source lower than the first voltage source, each nMOS transistor gate of the plurality of nMOS transistors being formed by an nMOS gate interconnect of a plurality of nMOS gate interconnects, each of the nMOS gate interconnects extending in the first direction and being coupled to the second voltage source, wherein one or more of the nMOS gate interconnects are connected to a metal layer one (M1) interconnect extending along the one or more nMOS gate interconnects for coupling to the second voltage source.2. 
The standard cell IC of claim 1, further comprising:a first contact interconnect extending in a second direction orthogonal to the first direction and coupling the pMOS gate interconnects together, the first contact interconnect being coupled to the first voltage source; anda second contact interconnect extending in the second direction and coupling the nMOS gate interconnects together, the second contact interconnect being coupled to the second voltage source.3. The standard cell IC of claim 2, wherein each pMOS gate interconnect of the plurality of pMOS gate interconnects is separated from, and collinear in the first direction with, one nMOS gate interconnect of the plurality of nMOS gate interconnects.4. The standard cell IC of claim 2, wherein the standard cell IC has n grids with a pitch p between the grids, the standard cell IC has a width of n*p, the grids extend in the first direction, the plurality of pMOS transistors includes n-3 transistors, and the plurality of nMOS transistors includes n-3 transistors, the standard cell IC further comprising:a first dummy gate interconnect adjacent to a first side of the standard cell IC and extending across the standard cell IC in the first direction, the first dummy gate interconnect being floating, the first side extending along the first direction; anda second dummy gate interconnect adjacent to a second side of the standard cell IC and extending across the standard cell IC in the first direction, the second dummy gate interconnect being floating, the second side extending along the first direction.5. The standard cell IC of claim 4, wherein the first contact interconnect and the second contact interconnect extend in the second direction between the first dummy gate interconnect and the second dummy gate interconnect.6. 
The standard cell IC of claim 1, wherein the standard cell IC has n grids with a pitch p between the grids, the standard cell IC has a width of n*p, the grids extend in the first direction, and wherein the plurality of pMOS transistors includes n-1 transistors and the plurality of nMOS transistors includes n-1 transistors.7. A method of operating a standard cell integrated circuit (IC), comprising:operating a plurality of p-type metal oxide semiconductor (MOS) (pMOS) transistors, each pMOS transistor of the plurality of pMOS transistors having a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate, each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors being coupled to a first voltage source, each pMOS transistor gate of the plurality of pMOS transistors being formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects, each of the pMOS gate interconnects extending in a first direction and being coupled to the first voltage source, wherein one or more of the pMOS gate interconnects are connected to a metal layer one (M1) interconnect extending along the one or more pMOS gate interconnects for coupling to the first voltage source; andoperating a plurality of n-type MOS (nMOS) transistors, each nMOS transistor of the plurality of nMOS transistors having an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate, each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors being coupled to a second voltage source lower than the first voltage source, each nMOS transistor gate of the plurality of nMOS transistors being formed by an nMOS gate interconnect of a plurality of nMOS gate interconnects, each of the nMOS gate interconnects extending in the first direction and being coupled to the second voltage source, wherein one or more of the nMOS gate interconnects are connected to a metal layer one (M1) interconnect extending along the one or more nMOS gate interconnects for coupling to the second voltage source.8. The method of claim 7, wherein the standard cell IC further comprises:a first contact interconnect extending in a second direction orthogonal to the first direction and coupling the pMOS gate interconnects together, the first contact interconnect being coupled to the first voltage source; anda second contact interconnect extending in the second direction and coupling the nMOS gate interconnects together, the second contact interconnect being coupled to the second voltage source.9. The method of claim 8, wherein each pMOS gate interconnect of the plurality of pMOS gate interconnects is separated from, and collinear in the first direction with, one nMOS gate interconnect of the plurality of nMOS gate interconnects.10. The method of claim 8, wherein the standard cell IC has n grids with a pitch p between the grids, the standard cell IC has a width of n*p, the grids extend in the first direction, the plurality of pMOS transistors includes n-3 transistors, and the plurality of nMOS transistors includes n-3 transistors, the standard cell IC further comprising:a first dummy gate interconnect adjacent to a first side of the standard cell IC and extending across the standard cell IC in the first direction, the first dummy gate interconnect being floating, the first side extending along the first direction; anda second dummy gate interconnect adjacent to a second side of the standard cell IC and extending across the standard cell IC in the first direction, the second dummy gate interconnect being floating, the second side extending along the first direction.11. 
The method of claim 10, wherein the first contact interconnect and the second contact interconnect extend in the second direction between the first dummy gate interconnect and the second dummy gate interconnect.12. The method of claim 7, wherein the standard cell IC has n grids with a pitch p between the grids, the standard cell IC has a width of n*p, the grids extend in the first direction, and wherein the plurality of pMOS transistors includes n-1 transistors and the plurality of nMOS transistors includes n-1 transistors.
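The dimensional bookkeeping that recurs in the claims above (width n*p; n-1 or n-3 transistors of each type) can be illustrated with a short sketch. This is only a paraphrase of the claim arithmetic, not an implementation of the claimed circuit; the example values are taken from the figures described in the detailed description.

```python
# Illustrative sketch of the claim arithmetic (not the claimed circuit itself).
# A standard cell spanning n grids at pitch p has width n*p. Per claims 6 and
# 12 the cell holds n-1 pMOS and n-1 nMOS transistors; per claims 4 and 10,
# whose variant leaves a pair of gate interconnects floating, it holds n-3
# of each type.
def cell_width(n, pitch):
    return n * pitch

def transistors_per_type(n, floating_pair=False):
    return n - 3 if floating_pair else n - 1
```

For n = 7 grids, the fully tied-off variant yields 6 transistors of each type; for n = 17 with a floating pair, 14 of each.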
Standard Cell Architecture for Reduced Leakage Current and Increased Decoupling CapacitanceCross References to Related ApplicationsThis application claims the benefit of U.S. Patent Application No. 15/209,650, filed July 13, 2016, entitled "A STANDARD CELL ARCHITECTURE FOR REDUCED LEAKAGE CURRENT AND IMPROVED DECOUPLING CAPACITANCE," which is expressly incorporated herein by reference in its entirety.Technical FieldThe present disclosure relates generally to a standard cell architecture and, more particularly, to a fill cell metal oxide semiconductor (MOS) integrated circuit (IC) standard cell architecture for reducing leakage current and increasing decoupling capacitance.BackgroundThe standard cells of an IC implement digital logic. Application-specific ICs (ASICs), such as system-on-chip (SoC) devices, can contain thousands to millions of standard cells. A typical MOS IC device includes a stack of sequentially formed layers. Each layer may be stacked or overlaid on a previous layer and patterned to form shapes that define transistors (e.g., field effect transistors (FETs) and/or fin FETs (FinFETs)) and connect the transistors into circuits.As MOS IC devices are manufactured at smaller sizes, manufacturers find it more difficult to integrate larger numbers of standard cell devices on a single chip. If every cell in a MOS IC device were used for a logic function requiring inter-cell routing (e.g., 100% utilization), there might not be enough room for the inter-cell routing between standard cells. To reduce utilization in MOS IC devices, engineering change order (ECO) cells, decoupling capacitor cells, and filler cells may be used. A utilization of about 70%-80% may provide enough space to allow the required inter-cell routing between standard cells. 
Typically, most of the 20%-30% non-utilization can be obtained by using filler cells, since filler cells have less current leakage than decoupling capacitor cells while still providing some decoupling capacitance. Filler cells (e.g., rather than empty cells without any transistor patterns) may be necessary when forming power rails and/or n-doped wells continuously across the MOS IC device. IC simulators can be inaccurate in estimating the leakage of fill cells. There is a need for a fill cell that improves leakage current estimation in IC simulators. Additionally, there is an ongoing need to reduce the leakage current of fill cells without significantly reducing their decoupling capacitance.SummaryIn an aspect of the present disclosure, a standard cell IC may include a plurality of p-type MOS (pMOS) transistors. Each pMOS transistor of the plurality of pMOS transistors may have a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate. Each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors may be coupled to a first voltage source. Each pMOS transistor gate of the plurality of pMOS transistors may be formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects. Each of the pMOS gate interconnects may extend in a first direction and may be coupled to the first voltage source. The standard cell IC may also include a plurality of n-type MOS (nMOS) transistors. Each nMOS transistor of the plurality of nMOS transistors may have an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate. Each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors may be coupled to a second voltage source that is lower than the first voltage source. Each nMOS transistor gate of the plurality of nMOS transistors may be formed by an nMOS gate interconnect of a plurality of nMOS gate interconnects. 
Each of the nMOS gate interconnects may extend in the first direction and may be coupled to the second voltage source.In another aspect of the present disclosure, a method of operating a standard cell IC may include flowing a first current through a plurality of pMOS transistors. Each pMOS transistor of the plurality of pMOS transistors may have a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate. Each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors may be coupled to a first voltage source. Each pMOS transistor gate of the plurality of pMOS transistors may be formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects. Each of the pMOS gate interconnects may extend in a first direction and may be coupled to the first voltage source. The method may also include flowing a second current through a plurality of nMOS transistors. Each nMOS transistor of the plurality of nMOS transistors may have an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate. Each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors may be coupled to a second voltage source that is lower than the first voltage source. Each nMOS transistor gate of the plurality of nMOS transistors may be formed by an nMOS gate interconnect of a plurality of nMOS gate interconnects. Each of the nMOS gate interconnects may extend in the first direction and may be coupled to the second voltage source.Brief Description of the DrawingsFIG. 1A is a diagram illustrating a plan view of an example fill cell.FIG. 1B is an example schematic diagram of the fill cell of FIG. 1A.FIG. 
2 is a diagram illustrating a plan view of an exemplary fill cell having a standard cell architecture with reduced leakage current.FIG. 3 is a diagram illustrating a plan view of an exemplary fill cell having a standard cell architecture with reduced leakage current.FIG. 4 is a diagram illustrating a plan view of an exemplary fill cell having a standard cell architecture with reduced leakage current.FIG. 5 is an exemplary schematic diagram of the fill cells of FIGS. 2-4.FIG. 6 is a diagram of a MOS IC device including standard cells and fill cells.FIG. 7 is a flowchart of an exemplary method.Detailed DescriptionThe detailed description set forth below in connection with the accompanying drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details to provide a thorough understanding of various concepts. It will be apparent, however, to one skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts. Devices and methods are described in the following detailed description and may be illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, elements, and the like.As MOS IC devices are manufactured at smaller sizes, manufacturers find it more difficult to integrate larger numbers of standard cell devices on a single chip. If every cell in a MOS IC device were used for a logic function requiring inter-cell routing (e.g., 100% utilization), there might not be enough room for the inter-cell routing between standard cells. To reduce utilization in MOS IC devices, engineering change order (ECO) cells, decoupling capacitor cells, and filler cells may be used. 
A utilization of about 70%-80% may provide enough space to allow the required inter-cell routing between standard cells. To achieve 20%-30% non-utilization, ECO cells can be placed as needed. Additionally, decoupling capacitor cells can be placed to obtain the necessary decoupling capacitance. The remaining locations can then be filled with filler cells. Typically, most of the 20%-30% non-utilization can be obtained by using filler cells, since filler cells have less current leakage than decoupling capacitor cells while still providing some decoupling capacitance. Filler cells (e.g., rather than empty cells without any transistor patterns) may be necessary when power rails and/or n-doped wells are formed continuously across the MOS IC device.FIG. 1A is an example diagram showing a plan view of an example standard cell architecture of a fill cell 100. FIG. 1B is a schematic diagram 150 showing the source/drain/gate connections of the pMOS transistor (152a) and the nMOS transistor (152b) of the fill cell 100 shown in FIG. 1A. The fill cell 100 may be formed on a substrate 104 (e.g., a silicon substrate). It should be understood that the example diagram of FIG. 1A is a representation of various masks that may be used to fabricate the features of the fill cell 100. For example, each mask may correspond to various features (e.g., interconnects, vias, etc.) to be configured in a particular layer of fill cell 100. Therefore, for ease of illustration and understanding of the present disclosure, the example diagram of FIG. 1A shows multiple layers of the fill cell 100 simultaneously, in an overlapping manner.In the example configuration of FIG. 1A, a dummy gate interconnect 144a is formed over the cell boundary 106a, and another dummy gate interconnect 144b is formed over the cell boundary 106b. In addition, a floating gate interconnect 132 is formed between the dummy gate interconnects 144a, 144b. 
For example, each of the dummy gate interconnects 144a, 144b and the floating gate interconnects 132 may be formed along the boundary of each of n grids (n=7 in FIG. 1A). In the example shown in FIG. 1A, each of the n grids has a width of pitch p, and thus the fill cell 100 has a width of approximately n*p. Fill cell 100 includes n (e.g., 7) gate interconnects, including n-1 (e.g., 6) floating gate interconnects 132, half of dummy gate interconnect 144a, and half of dummy gate interconnect 144b.Still referring to FIG. 1A, the dummy gate interconnects 144a, 144b and/or the floating gate interconnects 132 may be disposed in the POLY layer. In some processing technologies, the dummy gate interconnects 144a, 144b and/or the floating gate interconnects 132 may be formed of metal. However, in other processing technologies, the dummy gate interconnects 144a, 144b and/or the floating gate interconnects 132 may be entirely polysilicon, or may be polysilicon with a metal top layer. As shown in the upper right corner of FIG. 1A, the dummy gate interconnects 144a, 144b and/or the floating gate interconnects 132 extend in a first direction.Furthermore, in order to configure the gate interconnects in the fill cell 100 as floating gate interconnects 132, the floating gate interconnects 132 are not coupled to a voltage source. Additionally, each of the pMOS source/drain regions 112a is coupled to the same voltage source 134a (e.g., Vdd), and each of the nMOS source/drain regions 112b is coupled to the same voltage source 134b (e.g., Vss). For example, the pMOS source/drain regions 112a each include a diffusion region 120a, a metal diffusion contact A (CA) interconnect 122a, and a via V0 124a connecting the pMOS source/drain region 112a to the voltage source 134a. 
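The grid and gate-track accounting described above for fill cell 100 (n-1 full interior gate interconnects plus half of each boundary dummy gate interconnect) can be sketched as a quick check; n = 7 is the value from FIG. 1A.

```python
# Sketch of the gate-track accounting for a fill cell spanning n grids:
# n-1 full interior gate interconnects plus half of each of the two boundary
# dummy gate interconnects (shared with neighboring cells) total n tracks.
def gate_tracks(n):
    interior = n - 1         # full gate interconnects inside the cell
    shared_halves = 2 * 0.5  # half of each boundary dummy gate interconnect
    return interior + shared_halves
```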
Additionally, the nMOS source/drain regions 112b each include a diffusion region 120b, a CA interconnect 122b, and a via V0 124b connecting the nMOS source/drain region 112b to the voltage source 134b.The potential of a floating gate interconnect 132 may be between Vdd and Vss. In particular, the gate interconnect 132 may float at a voltage of approximately (Vdd−Vss)/2+Vss. When Vss is grounded, the gate interconnect 132 may float at a voltage of approximately Vdd/2. A floating gate interconnect 132 at a voltage of approximately Vdd/2 (assuming Vss is grounded) creates current leakage between each transistor gate and each transistor body/bulk (e.g., substrate 104). During simulation of an IC, the simulator may assume that a transistor gate is at a voltage of Vdd or Vss, and therefore cannot estimate this current leakage well. In the exemplary fill cells discussed below with reference to FIGS. 2-4, the gate interconnects 232, 332, 432 may be coupled to either Vdd or Vss, and thus a simulator that assumes the gate voltage is Vdd or Vss can be more accurate in estimating the leakage current.FIG. 2 is an exemplary diagram showing a plan view of a standard cell architecture of a fill cell 200. For example, the fill cell 200 may be formed on a substrate 204 (e.g., a silicon substrate). It should be understood that the example diagram of FIG. 2 is a representation of various masks that may be used to fabricate the features of fill cell 200. For example, each mask may correspond to various features (e.g., interconnects, vias, etc.) to be configured in a particular layer of fill cell 200. Therefore, for ease of illustration and understanding of the present disclosure, the example diagram of FIG. 2 shows multiple layers of the fill cell 200 simultaneously, in an overlapping manner.In the exemplary configuration of FIG. 
2, the fill cell 200 includes a dummy gate interconnect 244a formed over the first cell boundary 206a and another dummy gate interconnect 244b formed over the second cell boundary 206b. For example, the first half of each of the dummy gate interconnects 244a, 244b may be located in the fill cell 200, and the second half of each of the dummy gate interconnects 244a, 244b may be located in the standard cells adjacent to the first cell boundary 206a and the second cell boundary 206b.In addition, the fill cell 200 includes a pMOS transistor 218a and an nMOS transistor 218b. The pMOS transistor 218a and the nMOS transistor 218b may be formed by gate interconnects 232 (only one is labeled, but six gate interconnects 232 are shown) and source/drain regions on either side of each gate interconnect 232. For example, the portion of a gate interconnect 232 adjacent to the source/drain regions of the pMOS transistor 218a forms the gate of the pMOS transistor 218a. Similarly, the portion of a gate interconnect 232 adjacent to the source/drain regions of the nMOS transistor 218b forms the gate of the nMOS transistor 218b.Each of the dummy gate interconnects 244a, 244b and the gate interconnects 232 may be formed along the boundary of each of the n grids (n=7 in FIG. 2). In the example shown in FIG. 2, each of the n grids has a width of pitch p, and thus the fill cell 200 has a width of approximately n*p. Fill cell 200 includes n (e.g., 7) gate interconnects, including n-1 (e.g., 6) gate interconnects 232, half of dummy gate interconnect 244a, and half of dummy gate interconnect 244b.Still referring to FIG. 2, the dummy gate interconnects 244a, 244b and/or the gate interconnects 232 may be disposed in the POLY layer. In some processing technologies, the dummy gate interconnects 244a, 244b and/or the gate interconnects 232 may be formed of metal. 
However, in other processing technologies, the dummy gate interconnects 244a, 244b and/or the gate interconnects 232 may be entirely polysilicon, or may be polysilicon with a metal top layer. As shown in the upper right corner of FIG. 2, the dummy gate interconnects 244a, 244b and the gate interconnects 232 extend in a first direction.In addition, each of the source/drain regions of pMOS transistor 218a is coupled to the same voltage source 236a (e.g., Vdd). For example, the source/drain regions of pMOS transistor 218a each include a diffusion region 220a, a CA interconnect 222a, and a via V0 224a connecting the source/drain region of pMOS transistor 218a to the voltage source 236a.In addition, each of the source/drain regions of nMOS transistor 218b is coupled to the same voltage source 236b (e.g., Vss). For example, the source/drain regions of nMOS transistor 218b each include a diffusion region 220b, a CA interconnect 222b, and a via V0 224b connecting the source/drain region of nMOS transistor 218b to the voltage source 236b.The gate interconnects 232 may be cut by a physical cut 246. The physical cut 246 may cut through the gate interconnects 232 and the portions of the dummy gate interconnects 244a, 244b located in the fill cell 200. In other words, the physical cut 246 may not extend into the standard cells adjacent to the first cell boundary 206a and the second cell boundary 206b.Additionally, to place the pMOS transistor 218a in an off state, a particular gate interconnect 232 of the pMOS transistor 218a may be connected to the voltage source 236a. For example, a metal POLY contact B (CB) interconnect 240a and a via V0 242a may be formed on a particular gate interconnect 232 and connected to a metal layer one (M1) interconnect 238a extending along the gate interconnect 232.Furthermore, to place the nMOS transistor 218b in an off state, a particular gate interconnect 232 of the nMOS transistor 218b may be connected to the voltage source 236b. 
For example, a CB interconnect 240b and a via V0 242b may be formed on a particular gate interconnect 232 and connected to an M1 interconnect 238b extending along the gate interconnect 232. Each of the CB interconnects 240a, 240b may extend in a second direction, as shown in the upper right corner of FIG. 2.Because the gate interconnects 232 are coupled to Vdd or Vss, simulation by an IC simulator can be more accurate in leakage current estimation than for the fill cell 100 of FIG. 1A. Furthermore, the fill cell 200 may have reduced leakage current, with a negligible reduction in decoupling capacitance, compared to the fill cell 100 shown in FIG. 1A.FIG. 3 is an exemplary diagram showing a plan view of a standard cell architecture of a fill cell 300. For example, the fill cell 300 may be formed on a substrate 304 (e.g., a silicon substrate). It should be understood that the example diagram of FIG. 3 is a representation of various masks that may be used to fabricate the features of fill cell 300. For example, each mask may correspond to various features (e.g., interconnects, vias, etc.) to be configured in a particular layer of fill cell 300. Therefore, for ease of illustration and understanding of the present disclosure, the example diagram of FIG. 3 shows multiple layers of the fill cell 300 simultaneously, in an overlapping manner.In the exemplary configuration of FIG. 3, the fill cell 300 includes a dummy gate interconnect 344a formed over a first cell boundary 306a and another dummy gate interconnect 344b formed over a second cell boundary 306b. For example, the first half of each of the dummy gate interconnects 344a, 344b may be located in fill cell 300, and the second half of each of the dummy gate interconnects 344a, 344b may be located in the standard cells adjacent to the first cell boundary 306a and the second cell boundary 306b.In addition, the fill cell 300 includes a pMOS transistor 318a and an nMOS transistor 318b. 
The pMOS transistor 318a and the nMOS transistor 318b may consist of gate interconnects 332 and source/drain regions formed on either side of each gate interconnect 332. For example, the portion of a gate interconnect 332 adjacent to the source/drain regions of the pMOS transistor 318a forms the gate of the pMOS transistor 318a. Similarly, the portion of a gate interconnect 332 adjacent to the source/drain regions of the nMOS transistor 318b forms the gate of the nMOS transistor 318b.Each of the dummy gate interconnects 344a, 344b and the gate interconnects 332 may be formed along the boundary of each of the n grids (n=17 in FIG. 3). In the example shown in FIG. 3, each of the n grids has a width of pitch p, and thus the fill cell 300 has a width of approximately n*p. Fill cell 300 includes n (e.g., 17) gate interconnects, including n-1 (e.g., 16) gate interconnects 332, half of dummy gate interconnect 344a, and half of dummy gate interconnect 344b.Still referring to FIG. 3, the dummy gate interconnects 344a, 344b and/or the gate interconnects 332 may be disposed in the POLY layer. In some processing technologies, the dummy gate interconnects 344a, 344b and/or the gate interconnects 332 may be formed of metal. However, in other processing technologies, the dummy gate interconnects 344a, 344b and/or the gate interconnects 332 may be entirely polysilicon, or may be polysilicon with a metal top layer. As shown in the upper right corner of FIG. 3, the dummy gate interconnects 344a, 344b and the gate interconnects 332 extend in a first direction.In addition, each of the source/drain regions of pMOS transistor 318a is coupled to the same voltage source 336a (e.g., Vdd). For example, the source/drain regions of pMOS transistor 318a each include a diffusion region 320a, a CA interconnect 322a, and a via V0 324a connecting the source/drain region of pMOS transistor 318a to the voltage source 336a.Additionally, each of the source/drain regions of nMOS transistor 318b is coupled to the same voltage source 336b (e.g., Vss). 
For example, the source/drain regions of the nMOS transistor 318b each include a diffusion region 320b, a CA interconnect 322b, and a via V0 324b connecting the source/drain region of the nMOS transistor 318b to the voltage source 336b.

The gate interconnects 332 may include a physical cut 346. The physical cut 346 may include a physical cut of the gate interconnects 332 and of the entirety of each of the dummy gate interconnects 344a, 344b. In other words, the physical cut 346 may extend into the standard cells adjacent to the first cell boundary 306a and the second cell boundary 306b.

Additionally, to place the pMOS transistor 318a in an off state, a particular gate interconnect 332 may be connected to the voltage source 336a. For example, a CB interconnect 340a and a via V0 342a may be formed on the gate interconnect 332 and connected to an M1-layer interconnect 338a extending along the particular gate interconnect 332. Furthermore, to place the nMOS transistor 318b in an off state, a particular gate interconnect 332 may be connected to the voltage source 336b. For example, a CB interconnect 340b and a via V0 342b may be formed on the gate interconnect 332 and connected to an M1-layer interconnect 338b extending along the gate interconnect 332. Each of the CB interconnects 340a, 340b may extend in a second direction as shown in the upper right corner of FIG. 3.

Because the gate interconnect 332 is coupled to Vdd or Vss, simulation by an IC simulator can be more accurate in leakage current estimation than for the fill cell 100 of FIG. 1. Furthermore, the fill cell 300 may have reduced leakage current and negligible reduction in decoupling capacitance compared to the fill cell 100 shown in FIG. 1.

FIG. 4 is an exemplary diagram showing a plan view of a standard cell architecture of a fill cell 400. For example, the fill cell 400 may be formed on a substrate 404 (e.g., a silicon substrate). It should be understood that the example diagram of FIG.
4 is a representation of various masks that may be used to fabricate the features of the fill cell 400. For example, each mask may correspond to various features (e.g., interconnects, vias, etc.) to be configured in a particular layer of the fill cell 400. Therefore, for ease of illustration and understanding of the present disclosure, the example diagram of FIG. 4 shows multiple layers of the fill cell 400 simultaneously in an overlapping manner.

In the exemplary configuration of FIG. 4, the fill cell 400 includes a dummy gate interconnect 444a formed over a first cell boundary 406a and another dummy gate interconnect 444b formed over a second cell boundary 406b. For example, a first half of each of the dummy gate interconnects 444a, 444b may be located in the fill cell 400, and a second half of each of the dummy gate interconnects 444a, 444b may be located in standard cells adjacent to the first cell boundary 406a and the second cell boundary 406b. Additionally, to configure particular gate interconnects in the fill cell 400 as floating gate interconnects 448a, 448b, the floating gate interconnects 448a, 448b are not coupled to a voltage source.

Furthermore, the fill cell 400 includes a pMOS transistor 418a and an nMOS transistor 418b. The pMOS transistor 418a and the nMOS transistor 418b may consist of a gate interconnect 432 and source/drain regions on either side of the gate interconnect 432. For example, the portion of the gate interconnect 432 adjacent to the source/drain regions located in the pMOS transistor 418a forms the gate of the pMOS transistor 418a. Similarly, the portion of the gate interconnect 432 adjacent to the source/drain regions located in the nMOS transistor 418b forms the gate of the nMOS transistor 418b.

Each of the dummy gate interconnects 444a, 444b, the floating gate interconnects 448a, 448b, and the gate interconnects 432 may form a grid boundary along one of n grids (n=17 in FIG. 4). In the example shown in FIG.
4, each of the n grids has a width of pitch p, and thus the fill cell 400 has a width of approximately n*p. The fill cell 400 includes n (e.g., 17) gate interconnects: n-3 (e.g., 14) gate interconnects 432, n-15 (e.g., 2) floating gate interconnects 448a, 448b, half of the dummy gate interconnect 444a, and half of the dummy gate interconnect 444b.

Still referring to FIG. 4, the dummy gate interconnects 444a, 444b, the floating gate interconnects 448a, 448b, and/or the gate interconnects 432 may be disposed in the POLY layer. In some processing technologies, the dummy gate interconnects 444a, 444b, the floating gate interconnects 448a, 448b, and/or the gate interconnects 432 may be formed of metal. However, in other processing technologies, the dummy gate interconnects 444a, 444b, the floating gate interconnects 448a, 448b, and/or the gate interconnects 432 may be entirely polysilicon, or may be polysilicon with a metal top layer. As shown in the upper right corner of FIG. 4, the dummy gate interconnects 444a, 444b, the floating gate interconnects 448a, 448b, and/or the gate interconnects 432 extend in a first direction.

In addition, each of the source/drain regions of the pMOS transistor 418a is coupled to the same voltage source 436a (e.g., Vdd). For example, the source/drain regions of the pMOS transistor 418a each include a diffusion region 420a, a CA interconnect 422a, and a via V0 424a connecting the source/drain region of the pMOS transistor 418a to the voltage source 436a. Additionally, each of the source/drain regions of the nMOS transistor 418b is coupled to the same voltage source 436b (e.g., Vss). For example, the source/drain regions of the nMOS transistor 418b each include a diffusion region 420b, a CA interconnect 422b, and a via V0 424b connecting the source/drain region of the nMOS transistor 418b to the voltage source 436b. The gate interconnects 432 may include a physical cut 446.
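The gate-interconnect accounting for the fill cells described above can be sanity-checked with a short sketch. This is an illustrative model only; the function names are not from the disclosure, and the pitch value used is symbolic.

```python
# Each fill cell spans n grids of pitch p, with half of a dummy gate
# interconnect on each cell boundary (the two halves together count as
# one full gate interconnect).

def fill_cell_width(n, p):
    # The cell width is approximately n * p.
    return n * p

def tied_gate_count(n, floating):
    # Gates tied to Vdd/Vss = total grids, minus the floating gates,
    # minus the one full gate contributed by the two dummy halves.
    return n - 1 - floating

# Fill cells 200/300: n = 17 grids, no floating gates -> 16 tied gates.
assert tied_gate_count(17, floating=0) == 16
# Fill cell 400: n = 17 grids, 2 floating gates -> 14 tied gates.
assert tied_gate_count(17, floating=2) == 14
# The pieces always sum back to n (two dummy halves = 1 full gate).
assert tied_gate_count(17, 2) + 2 + 2 * 0.5 == 17
assert fill_cell_width(17, p=1.0) == 17.0
```

The same accounting reproduces both layouts: n-1 tied gates for fill cells 200 and 300, and n-3 tied gates plus two floating gates for fill cell 400.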
Additionally, to place the pMOS transistor 418a in an off state, a particular gate interconnect 432 may be connected to the voltage source 436a. For example, a CB interconnect 440a and a via V0 442a may be formed on a particular gate interconnect 432 and connected to an M1-layer interconnect 438a extending along the particular gate interconnect 432. Additionally, to place the nMOS transistor 418b in an off state, a particular gate interconnect 432 may be connected to the voltage source 436b. For example, a CB interconnect 440b and a via V0 442b may be formed on a particular gate interconnect 432 and connected to an M1-layer interconnect 438b extending along the particular gate interconnect 432. Each of the CB interconnects 440a, 440b may extend in a second direction as shown in the upper right corner of FIG. 4.

Because the gate interconnect 432 is coupled to Vdd or Vss, simulation by an IC simulator can be more accurate in leakage current estimation than for the fill cell 100 of FIG. 1. Furthermore, the fill cell 400 may have reduced leakage current and negligible reduction in decoupling capacitance compared to the fill cell 100 shown in FIG. 1.

FIG. 5 is a schematic diagram 500 showing the source/drain/gate connections for the pMOS transistors (502a) 218a, 318a, 418a and the nMOS transistors (502b) 218b, 318b, 418b. As shown in the diagram, the source, drain, and gate of the pMOS transistor 502a are all connected to Vdd. In addition, the source, drain, and gate of the nMOS transistor 502b are all connected to Vss.

FIG. 6 is a diagram 600 illustrating a MOS IC device including a plurality of standard cells 602 and a plurality of fill cells 604. The fill cells 604 may be used where cell utilization is reduced, to improve inter-cell routing, and/or to provide additional electrical isolation between the standard cells 602. The fill cells 604 may also provide some decoupling capacitance within the MOS IC device 600.

In one aspect of the disclosure, a standard cell IC includes a plurality of pMOS transistors (218a, 318a, 418a).
In one aspect, each pMOS transistor (218a, 318a, 418a) of the plurality of pMOS transistors includes a pMOS transistor drain (located on one side of each of the gate interconnects 232, 332, 432 in the pMOS transistors 218a, 318a, 418a), a pMOS transistor source (located on the other side of each of the gate interconnects 232, 332, 432 in the pMOS transistors 218a, 318a, 418a), and a pMOS transistor gate (the portion of the gate interconnects 232, 332, 432 located in the pMOS transistors 218a, 318a, 418a). In another aspect, each pMOS transistor drain of the plurality of pMOS transistors (218a, 318a, 418a) (located on one side of each of the gate interconnects 232, 332, 432 in the pMOS transistors 218a, 318a, 418a) and each pMOS transistor source (located on the other side of each of the gate interconnects 232, 332, 432 in the pMOS transistors 218a, 318a, 418a) are coupled to a first voltage source (236a, 336a, 436a). In another aspect, each pMOS transistor gate (the portion of the gate interconnects 232, 332, 432 located in the pMOS transistors 218a, 318a, 418a) of the plurality of pMOS transistors (218a, 318a, 418a) may be formed by a pMOS gate interconnect (232, 332, 432) of a plurality of pMOS gate interconnects (232, 332, 432). In yet another aspect, each of the pMOS gate interconnects (232, 332, 432) extends in a first direction and is coupled to the first voltage source (236a, 336a, 436a).

In another aspect, the standard cell IC includes a plurality of nMOS transistors (218b, 318b, 418b). In another aspect, each nMOS transistor (218b, 318b, 418b) of the plurality of nMOS transistors includes an nMOS transistor drain (located on one side of each of the gate interconnects 232, 332, 432 in the nMOS transistors 218b, 318b, 418b), an nMOS transistor source (located on the other side of each of the gate interconnects 232, 332, 432 in the nMOS transistors 218b, 318b, 418b), and an nMOS transistor gate (the portion of the gate interconnects 232, 332, 432 located in the nMOS transistors 218b, 318b, 418b).
In addition, each nMOS transistor drain of the plurality of nMOS transistors (218b, 318b, 418b) (located on one side of each of the gate interconnects 232, 332, 432 in the nMOS transistors 218b, 318b, 418b) and each nMOS transistor source (located on the other side of each of the gate interconnects 232, 332, 432 in the nMOS transistors 218b, 318b, 418b) are coupled to a second voltage source (236b, 336b, 436b) lower than the first voltage source (236a, 336a, 436a). In addition, each nMOS transistor gate (the portion of the gate interconnects 232, 332, 432 located in the nMOS transistors 218b, 318b, 418b) of the plurality of nMOS transistors (218b, 318b, 418b) is formed by an nMOS gate interconnect (232, 332, 432) of a plurality of nMOS gate interconnects (232, 332, 432). Additionally, each of the nMOS gate interconnects (232, 332, 432) extends in the first direction and is coupled to the second voltage source (236b, 336b, 436b).

In another aspect, the standard cell IC further includes a first contact interconnect (240a, 340a, 440a) extending in a second direction orthogonal to the first direction and coupling the pMOS gate interconnects (232, 332, 432) together. The first contact interconnect (240a, 340a, 440a) is coupled to the first voltage source (236a, 336a, 436a). Additionally, a second contact interconnect (240b, 340b, 440b) extends in the second direction and couples the nMOS gate interconnects (232, 332, 432) together.
In another aspect, the second contact interconnect (240b, 340b, 440b) is coupled to the second voltage source (236b, 336b, 436b).

In another aspect, each of the pMOS gate interconnects (the portions of the gate interconnects 232, 332, 432 located in the pMOS transistors 218a, 318a, 418a) and a corresponding one of the nMOS gate interconnects (the portions of the gate interconnects 232, 332, 432 located in the nMOS transistors 218b, 318b, 418b) are separated in the first direction and collinear.

Furthermore, the standard cell IC has n grids with a pitch p between the grids and a width of about n*p. In an aspect, the grids extend in the first direction, the plurality of pMOS transistors (218a, 318a) includes n-1 transistors, and the plurality of nMOS transistors (218b, 318b) includes n-1 transistors. In another aspect, the plurality of pMOS transistors (418a) includes n-3 transistors, and the plurality of nMOS transistors (418b) includes n-3 transistors. Additionally, the standard cell IC includes a first dummy gate interconnect (448a) adjacent to a first side (406a) of the standard cell IC and extending across the standard cell IC in the first direction. Additionally, the first dummy gate interconnect (448a) is floating. Additionally, the standard cell IC includes a second dummy gate interconnect (448b) adjacent to a second side (406b) of the standard cell IC and extending across the standard cell IC in the first direction. In addition, the second dummy gate interconnect (448b) is floating. In yet another aspect, the first contact interconnect (440a) and the second contact interconnect (440b) extend in the second direction between the first dummy gate interconnect (448a) and the second dummy gate interconnect (448b).

FIG. 7 is a flowchart 700 of an exemplary method. The exemplary method is a method of operating a standard cell IC. At 702, a plurality of pMOS transistors are operated.
In one aspect, each pMOS transistor of the plurality of pMOS transistors has a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate. In another aspect, each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors is coupled to a first voltage source. In another aspect, each pMOS transistor gate of the plurality of pMOS transistors is formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects. In yet another aspect, each of the pMOS gate interconnects extends in a first direction and is coupled to the first voltage source.

At 704, a plurality of nMOS transistors are operated. In one aspect, each nMOS transistor of the plurality of nMOS transistors has an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate. In another aspect, each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors is coupled to a second voltage source that is lower than the first voltage source. In another aspect, each nMOS transistor gate of the plurality of nMOS transistors is formed by an nMOS gate interconnect of a plurality of nMOS gate interconnects. In yet another aspect, each of the nMOS gate interconnects extends in the first direction and is coupled to the second voltage source.

The standard cell IC further includes first means for operating a plurality of pMOS transistors. In one aspect, each pMOS transistor of the plurality of pMOS transistors has a pMOS transistor drain, a pMOS transistor source, and a pMOS transistor gate. In another aspect, each pMOS transistor drain and pMOS transistor source of the plurality of pMOS transistors is coupled to a first voltage source. In another aspect, each pMOS transistor gate of the plurality of pMOS transistors is formed by a pMOS gate interconnect of a plurality of pMOS gate interconnects. In yet another aspect, each of the pMOS gate interconnects extends in the first direction and is coupled to the first voltage source.
The first means for operating the plurality of pMOS transistors includes the plurality of pMOS transistors, wherein each pMOS transistor drain and pMOS transistor source is coupled to the first voltage source. In addition, each pMOS transistor gate is formed by a pMOS gate interconnect of the plurality of pMOS gate interconnects. Furthermore, each of the pMOS gate interconnects extends in the first direction and is coupled to the first voltage source.

The standard cell IC further includes second means for operating a plurality of nMOS transistors. In one aspect, each nMOS transistor of the plurality of nMOS transistors has an nMOS transistor drain, an nMOS transistor source, and an nMOS transistor gate. In another aspect, each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors is coupled to a second voltage source that is lower than the first voltage source. In yet another aspect, each nMOS transistor gate of the plurality of nMOS transistors is formed by an nMOS gate interconnect of the plurality of nMOS gate interconnects. Furthermore, each of the nMOS gate interconnects extends in the first direction and is coupled to the second voltage source. The second means for operating the plurality of nMOS transistors includes the plurality of nMOS transistors, wherein each nMOS transistor drain and nMOS transistor source of the plurality of nMOS transistors is coupled to the second voltage source lower than the first voltage source. In addition, each nMOS transistor gate is formed by an nMOS gate interconnect of the plurality of nMOS gate interconnects. Furthermore, each of the nMOS gate interconnects extends in the first direction and is coupled to the second voltage source.

During a simulation of an IC including the fill cell 100 shown in FIG. 1A, and represented by the schematic diagram 150 of FIG.
1B, the simulator may assume that the transistor gates are at Vdd or Vss, and therefore cannot estimate the leakage current well. For example, the simulator may estimate the leakage current of the fill cell 100 as 1.56×10^-12 A. The present disclosure provides a solution to this problem by connecting the gate interconnects of the fill cells 200, 300, 400 shown in FIGS. 2-4, and represented by the schematic diagram 500 of FIG. 5, to Vdd or Vss. By connecting the gate interconnects of the fill cells 200, 300, 400 to Vdd or Vss, simulation by an IC simulator may provide a more accurate estimate of the leakage current compared to the fill cell 100 of FIG. 1. For example, the simulator may estimate the leakage current of the fill cells 200, 300, 400 as 1.91×10^-13 A.

It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Also, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Accordingly, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C", "at least one of A, B, and C", and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, B, or C. Specifically, combinations such as "at least one of A, B, or C", "at least one of A, B, and C", and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combination may contain one or more members of A, B, or C. All structural and functional equivalents to the elements described in the various aspects of this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Furthermore, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is expressly recited in the claims. A claim element should not be construed as means-plus-function unless the element is explicitly recited using the phrase "means for".
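For a rough sense of scale, the two simulator leakage estimates quoted in the disclosure differ by about a factor of eight. A quick check of that arithmetic (values are taken directly from the text; variable names are illustrative):

```python
# Leakage estimates quoted in the disclosure for the simulated fill cells.
floating_gate_leakage = 1.56e-12  # A, fill cell 100 (gates left floating)
tied_gate_leakage = 1.91e-13      # A, fill cells 200/300/400 (gates tied)

reduction = floating_gate_leakage / tied_gate_leakage
# The tied-gate fill cells are estimated to leak roughly 8x less.
assert 8.1 < reduction < 8.2
```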
A synchronous flash memory includes an array of non-volatile memory cells. The memory device has a package configuration that is compatible with an SDRAM. In one embodiment, the synchronous memory device comprises an array of memory cells arranged in rows and columns. A clock connection is provided to receive an externally provided clock signal. The memory does not require a precharge time period during a time period between the first and second externally provided active commands. |
What is claimed is: 1. A method of operating a synchronous flash memory device, the method comprises: receiving a first active read command and a memory array first row address on a first clock signal transition; initiating a first memory read operation in response to the first active read command; receiving a read command and a memory array column address on a second clock transition that is a first predetermined number of clock transitions following the first clock signal transition; providing data on a data connection that was read from the memory array at the first row address, the data is provided on a third clock transition that is a second predetermined number of clock transitions following the second clock signal transition; receiving a second active read command and a memory array second row address on the third clock signal transition, and initiating a second memory read in response to the second active read command. 2. The method of claim 1 wherein the first predetermined number of clock transitions is one rising edge clock transition. 3. The method of claim 1 wherein the second predetermined number of clock transitions is either one, two, three or four rising edge clock transitions. 4. 
A method of reading data from a synchronous flash memory device coupled to a memory controller, the method comprising: transmitting a first active read command from the memory controller to the synchronous flash memory device; transmitting a first row address from the memory controller to the synchronous flash memory device in synchronization with the first active read command; initiating a first memory read operation in response to the first active read command and the first row address; transmitting a first read command from the memory controller to the synchronous flash memory device following the first active read command; transmitting a first column address from the memory controller to the synchronous flash memory device in synchronization with the first read command; transmitting data read from the synchronous flash memory device to the memory controller in synchronization with a clock transition; and transmitting a second active read command from the memory controller to the synchronous flash memory device in synchronization with the clock transition. |
TECHNICAL FIELD OF THE INVENTION The present invention relates generally to non-volatile memory devices and in particular the present invention relates to a synchronous non-volatile flash memory. BACKGROUND OF THE INVENTION Memory devices are typically provided as internal storage areas in the computer. The term memory identifies data storage that comes in the form of integrated circuit chips. There are several different types of memory. One type is RAM (random-access memory). This is typically used as main memory in a computer environment. RAM refers to read and write memory; that is, you can both write data into RAM and read data from RAM. This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost. Computers almost always contain a small amount of read-only memory (ROM) that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to. An EEPROM (electrically erasable programmable read-only memory) is a special type of non-volatile ROM that can be erased by exposing it to an electrical charge. Like other types of ROM, EEPROM is traditionally not as fast as RAM. EEPROMs comprise a large number of memory cells having electrically isolated gates (floating gates). Data is stored in the memory cells in the form of charge on the floating gates. Charge is transported to or removed from the floating gates by programming and erase operations, respectively. Yet another type of non-volatile memory is a Flash memory. A Flash memory is a type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time. Many modern PCs have their BIOS stored on a flash memory chip so that it can easily be updated if necessary. Such a BIOS is sometimes called a flash BIOS.
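The program/erase asymmetry just described, where cells are programmed individually by charging the floating gate but restored only by erasing a whole block, can be sketched with a toy model. This is an illustrative sketch only; the class name and block size are assumptions, not part of the specification.

```python
class FlashBlock:
    """Toy model of a flash block: programming can only clear bits
    (1 -> 0); restoring bits to 1 requires erasing the whole block."""

    def __init__(self, size=4096):
        self.cells = bytearray([0xFF] * size)  # erased state: all 1s

    def program(self, addr, value):
        # Programming adds charge to selected cells: bits may go
        # from 1 to 0, but never from 0 back to 1.
        self.cells[addr] &= value

    def erase(self):
        # A block erase removes charge from every cell at once.
        for i in range(len(self.cells)):
            self.cells[i] = 0xFF

blk = FlashBlock()
blk.program(0, 0xA5)
assert blk.cells[0] == 0xA5
blk.program(0, 0x0F)         # can only clear more bits...
assert blk.cells[0] == 0x05  # 0xA5 & 0x0F == 0x05
blk.erase()                  # ...until a block erase resets them
assert blk.cells[0] == 0xFF
```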
Flash memory is also popular in modems because it enables the modem manufacturer to support new protocols as they become standardized. A typical Flash memory comprises a memory array that includes a large number of memory cells arranged in row and column fashion. Each of the memory cells includes a floating gate field-effect transistor capable of holding a charge. The cells are usually grouped into blocks. Each of the cells within a block can be electrically programmed on a random basis by charging the floating gate. The charge can be removed from the floating gates by a block erase operation. The data in a cell is determined by the presence or absence of the charge in the floating gate. A synchronous DRAM (SDRAM) is a type of DRAM that can run at much higher clock speeds than conventional DRAM memory. SDRAM synchronizes itself with a CPU's bus and is capable of running at 100 MHz, about three times faster than conventional FPM (Fast Page Mode) RAM, and about twice as fast as EDO (Extended Data Output) DRAM and BEDO (Burst Extended Data Output) DRAM. SDRAMs can be accessed quickly, but are volatile. Many computer systems are designed to operate using SDRAM, but would benefit from non-volatile memory. For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for a non-volatile memory device that can operate in a manner similar to SDRAM operation. SUMMARY OF THE INVENTION The above-mentioned problems with memory devices and other problems are addressed by the present invention and will be understood by reading and studying the following specification. In one embodiment, the present invention provides a non-volatile synchronous flash memory that is compatible with existing SDRAM package pin assignments.
It will be apparent from reading the detailed description that system designers with knowledge in SDRAM applications could easily implement the present invention to improve system operation. In one embodiment, a synchronous memory device comprises an array of memory cells arranged in rows and columns, a clock connection to receive an externally provided clock signal, and control circuitry to perform a first read operation of a first row of the array in response to a first externally provided active command, and perform a second read operation on a second row of the array in response to a second externally provided active command. The control circuitry does not require a precharge time period during a time period between the first and second externally provided active commands. In another embodiment, a synchronous memory device comprises an array of memory cells arranged in rows and columns, a clock connection to receive an externally provided clock signal, and control circuitry to perform a first read operation on a first row of the array to output data from the first row on an external connection during a clock signal transition. The control circuitry is adapted to receive a read active command on the clock signal transition to initiate a read operation of a second row of the array. A method of operating a synchronous flash memory device is also provided. The method comprises receiving a first active read command and a memory array first row address on a first clock signal transition, initiating a first memory read operation in response to the first active read command, and receiving a read command and a memory array column address on a second clock transition that is a first predetermined number of clock transitions following the first clock signal transition. The method further comprises providing data on a data connection that was read from the memory array at the first row address. 
The data is provided on a third clock transition that is a second predetermined number of clock transitions following the second clock signal transition. A second active read command and a memory array second row address are received on the third clock signal transition, and a second memory read is initiated in response to the second active read command. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1A is a block diagram of a synchronous flash memory of the present invention; FIG. 1B is an integrated circuit pin interconnect diagram of one embodiment of the present invention; FIG. 1C is an integrated circuit interconnect bump grid array diagram of one embodiment of the present invention; FIG. 2 illustrates a mode register of one embodiment of the present invention; FIG. 3 illustrates read operations having a CAS latency of one, two and three clock cycles; FIG. 4 illustrates activating a specific row in a bank of the memory of one embodiment of the present invention; FIG. 5 illustrates timing between an active command and a read or write command; FIG. 6 illustrates a read command; FIG. 7 illustrates timing for consecutive read bursts of one embodiment of the present invention; FIG. 8 illustrates random read accesses within a page of one embodiment of the present invention; FIG. 9 illustrates a read operation followed by a write operation; FIG. 10 illustrates read burst operations that are terminated using a burst terminate command according to one embodiment of the present invention; FIG. 11 illustrates a write command; FIG. 12 illustrates a write followed by a read operation; FIG. 13 illustrates a power-down operation of one embodiment of the present invention; FIG. 14 illustrates a clock suspend operation during a burst read; FIG. 15 illustrates a memory address map of one embodiment of the memory having two boot sectors; FIG. 16 is a flow chart of a self-timed write sequence according to one embodiment of the present invention; FIG.
17 is a flow chart of a complete write status-check sequence according to one embodiment of the present invention; FIG. 18 is a flow chart of a self-timed block erase sequence according to one embodiment of the present invention; FIG. 19 is a flow chart of a complete block erase status-check sequence according to one embodiment of the present invention; FIG. 20 is a flow chart of a block protect sequence according to one embodiment of the present invention; FIG. 21 is a flow chart of a complete block status-check sequence according to one embodiment of the present invention; FIG. 22 is a flow chart of a device protect sequence according to one embodiment of the present invention; FIG. 23 is a flow chart of a block unprotect sequence according to one embodiment of the present invention; FIG. 24 illustrates the timing of an initialize and load mode register operation; FIG. 25 illustrates the timing of a clock suspend mode operation; FIG. 26 illustrates the timing of a burst read operation; FIG. 27 illustrates the timing of alternating bank read accesses; FIG. 28 illustrates the timing of a full-page burst read operation; FIG. 29 illustrates the timing of a burst read operation using a data mask signal; FIG. 30 illustrates the timing of a write operation followed by a read to a different bank; and FIG. 31 illustrates the timing of a write operation followed by a read to the same bank. DETAILED DESCRIPTION OF THE INVENTION In the following detailed description of present embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the inventions may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical and electrical changes may be made without departing from the spirit and scope of the present invention. 
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the claims. The following detailed description is divided into two major sections. The first section is an Interface Functional Description that details compatibility with an SDRAM memory. The second major section is a Functional Description that specifies flash architecture functional commands. Interface Functional Description Referring to FIG. 1A, a block diagram of one embodiment of the present invention is described. The memory device 100 includes an array of non-volatile flash memory cells 102. The array is arranged in a plurality of addressable banks. In one embodiment, the memory contains four memory banks 104, 106, 108 and 110. Each memory bank contains addressable sectors of memory cells. The data stored in the memory can be accessed using externally provided location addresses received by address register 112. The addresses are decoded using row address multiplexer circuitry 114. The addresses are also decoded using bank control logic 116 and row address latch and decode circuitry 118. To access an appropriate column of the memory, column address counter and latch circuitry 120 couples the received addresses to column decode circuitry 122. Circuit 124 provides input/output gating, data mask logic, read data latch circuitry and write driver circuitry. Data is input through data input registers 126 and output through data output registers 128. Command execution logic 130 is provided to control the basic operations of the memory device. A state machine 132 is also provided to control specific operations performed on the memory arrays and cells. A status register 134 and an identification register 136 can also be provided to output data. FIG. 1B illustrates an interconnect pin assignment of one embodiment of the present invention. The memory package 150 has 54 interconnect pins. 
The pin configuration is substantially similar to available SDRAM packages. Two interconnects specific to the present invention are RP# 152 and Vccp 154. Although the present invention may share interconnect labels that appear the same as an SDRAM's, the functions of the signals provided on the interconnects are described herein and should not be equated to an SDRAM's unless set forth herein. FIG. 1C illustrates one embodiment of a memory package 160 that has bump connections instead of the pin connections of FIG. 1B. The present invention, therefore, is not limited to a specific package configuration. Prior to describing the operational features of the memory device, a more detailed description of the interconnect pins and their respective signals is provided. The input clock connection is used to provide a clock signal (CLK). The clock signal can be driven by a system clock, and all synchronous flash memory input signals are sampled on the positive edge of CLK. CLK also increments an internal burst counter and controls the output registers. The input clock enable (CKE) connection is used to activate (HIGH state) and deactivate (LOW state) the CLK signal input. Deactivating the clock input provides POWER-DOWN and STANDBY operation (where all memory banks are idle), ACTIVE POWER-DOWN (a memory row is ACTIVE in any bank) or CLOCK SUSPEND operation (burst/access in progress). CKE is synchronous except after the device enters power-down modes, where CKE becomes asynchronous until after exiting the same mode. The input buffers, including CLK, are disabled during power-down modes to provide low standby power. CKE may be tied HIGH in systems where power-down modes (other than RP# deep power-down) are not required. The chip select (CS#) input connection provides a signal to enable (registered LOW) and disable (registered HIGH) a command decoder provided in the command execution logic. All commands are masked when CS# is registered HIGH.
Further, CS# provides for external bank selection on systems with multiple banks; CS# can be considered part of the command code, but may not be necessary. The command input connections RAS#, CAS#, and WE# (along with CS#) define a command that is to be executed by the memory, as described in detail below. The input/output mask (DQM) connections are used to provide input mask signals for write accesses and an output enable signal for read accesses. Input data is masked when DQM is sampled HIGH during a WRITE cycle. The output buffers are placed in a high impedance (High-Z) state (after a two-clock latency) when DQM is sampled HIGH during a READ cycle. DQML corresponds to data connections DQ0-DQ7 and DQMH corresponds to data connections DQ8-DQ15. DQML and DQMH are considered to be the same state when referenced as DQM. Address inputs 133 are primarily used to provide address signals. In the illustrated embodiment the memory has 12 lines (A0-A11). Other signals can be provided on the address connections, as described below. The address inputs are sampled during an ACTIVE command (row-address A0-A11) and a READ/WRITE command (column-address A0-A7) to select one location in a respective memory bank. The address inputs are also used to provide an operating code (OpCode) during a LOAD COMMAND REGISTER operation, explained below. Address lines A0-A11 are also used to input mode settings during a LOAD MODE REGISTER operation. An input reset/power-down (RP#) connection 140 is used for reset and power-down operations. Upon initial device power-up, a 100 µs delay after RP# has transitioned from LOW to HIGH is required in one embodiment for internal device initialization, prior to issuing an executable command. The RP# signal clears the status register, sets the internal state machine (ISM) 132 to an array read mode, and places the device in a deep power-down mode when LOW.
During power down, all input connections, including CS# 142, are "Don't Care" and all outputs are placed in a High-Z state. When the RP# signal is equal to a VHH voltage (5V), all protection modes are ignored during WRITE and ERASE. The RP# signal also allows a device protect bit to be set to 1 (protected) and allows block protect bits of a 16-bit register, at locations 0 and 15, to be set to 0 (unprotected) when brought to VHH. The protect bits are described in more detail below. RP# is held HIGH during all other modes of operation. Bank address input connections BA0 and BA1 define to which bank an ACTIVE, READ, WRITE, or BLOCK PROTECT command is being applied. The DQ0-DQ15 connections 143 are data bus connections used for bi-directional data communication. Referring to FIG. 1B, a VCCQ connection is used to provide isolated power to the DQ connections to improve noise immunity. In one embodiment, VCCQ = VCC or 1.8 V ± 0.15 V. The VSSQ connection is used to provide isolated ground to the DQs for improved noise immunity. The VCC connection provides a power supply, such as 3 V. A ground connection is provided through the Vss connection. Another optional voltage is provided on the VCCP connection 144. The VCCP connection can be tied externally to VCC, and sources current during device initialization, WRITE and ERASE operations. That is, writing or erasing to the memory device can be performed using a VCCP voltage, while all other operations can be performed with a VCC voltage. The Vccp connection is coupled to a high voltage switch/pump circuit 145. The following sections provide a more detailed description of the operation of the synchronous flash memory. One embodiment of the present invention is a nonvolatile, electrically sector-erasable (Flash), programmable read-only memory containing 67,108,864 bits organized as 4,194,304 words by 16 bits. Other array densities are contemplated, and the present invention is not limited to the example density.
Each memory bank is organized into four independently erasable blocks (16 total). To ensure that critical firmware is protected from accidental erasure or overwrite, the memory can include sixteen 256K-word hardware and software lockable blocks. The memory's four-bank architecture supports true concurrent operations. A read access to any bank can occur simultaneously with a background WRITE or ERASE operation to any other bank. The synchronous flash memory has a synchronous interface (all signals are registered on the positive edge of the clock signal, CLK). Read accesses to the memory can be burst oriented. That is, memory accesses start at a selected location and continue for a programmed number of locations in a programmed sequence. Read accesses begin with the registration of an ACTIVE command, followed by a READ command. The address bits registered coincident with the ACTIVE command are used to select the bank and row to be accessed. The address bits registered coincident with the READ command are used to select the starting column location and bank for the burst access. The synchronous flash memory provides for programmable read burst lengths of 1, 2, 4 or 8 locations, or the full page, with a burst terminate option. Further, the synchronous flash memory uses an internal pipelined architecture to achieve high-speed operation. The synchronous flash memory can operate in low-power memory systems, such as systems operating on three volts. A deep power-down mode is provided, along with a power-saving standby mode. All inputs and outputs are low voltage transistor-transistor logic (LVTTL) compatible. The synchronous flash memory offers substantial advances in Flash operating performance, including the ability to synchronously burst data at a high data rate with automatic column address generation and the capability to randomly change column addresses on each clock cycle during a burst access. 
In general, the synchronous flash memory is configured similarly to a multi-bank DRAM that operates at low voltage and includes a synchronous interface. Each of the banks is organized into rows and columns. Prior to normal operation, the synchronous flash memory is initialized. The following sections provide detailed information covering device initialization, register definition, command descriptions and device operation. The synchronous flash is powered up and initialized in a predefined manner. After power is applied to VCC, VCCQ and VCCP (simultaneously), and the clock signal is stable, RP# 140 is brought from a LOW state to a HIGH state. A delay, such as a 100 µs delay, is needed after RP# transitions HIGH in order to complete internal device initialization. After the delay time has passed, the memory is placed in an array read mode and is ready for Mode Register programming or an executable command. After initial programming of a non-volatile mode register 147 (NVMode Register), the contents are automatically loaded into a volatile Mode Register 148 during the initialization. The device will power up in a programmed state and will not require reloading of the non-volatile mode register 147 prior to issuing operational commands. This is explained in greater detail below. The Mode Register 148 is used to define the specific mode of operation of the synchronous flash memory. This definition includes the selection of a burst length, a burst type, a CAS latency, and an operating mode, as shown in FIG. 2. The Mode Register is programmed via a LOAD MODE REGISTER command and retains stored information until it is reprogrammed. The contents of the Mode Register may be copied into the NVMode Register 147. The NVMode Register settings automatically load the Mode Register 148 during initialization. Details on ERASE NVMODE REGISTER and WRITE NVMODE REGISTER command sequences are provided below.
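The Mode Register layout of FIG. 2 (detailed in the next paragraph: M0-M2 burst length, M3 burst type, M4-M6 CAS latency, M7-M8 operating mode, M9 write-burst control) can be sketched as a field decoder. The field positions follow the text; the numeric burst-length encoding in `BURST_LENGTHS` is an assumed, SDRAM-style mapping not stated here.

```python
# Sketch of a Mode Register field decoder. Field positions follow the
# text (M0-M2, M3, M4-M6, M7-M8, M9); the burst-length bit encoding
# itself is an assumption (SDRAM-style), not taken from the text.

BURST_LENGTHS = {0: 1, 1: 2, 2: 4, 3: 8, 7: "full page"}  # assumed encoding

def decode_mode_register(mr: int) -> dict:
    """Split a mode-register value into the fields named in the text."""
    return {
        "burst_length": BURST_LENGTHS.get(mr & 0x7),                  # M0-M2
        "burst_type": "interleaved" if mr & 0x8 else "sequential",    # M3
        "cas_latency": (mr >> 4) & 0x7,                               # M4-M6
        "operating_mode": (mr >> 7) & 0x3,                            # M7-M8
        "write_burst": "single location" if mr & 0x200 else "burst",  # M9
    }
```

With M9 set to one and the normal operating mode (M7-M8 = 0), `decode_mode_register((1 << 9) | (2 << 4) | 0b011)` reports a burst length of 8, sequential type, and CAS latency 2.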
Those skilled in the art will recognize that an SDRAM requires that a mode register be externally loaded during each initialization operation. The present invention allows a default mode to be stored in the NV mode register 147. The contents of the NV mode register are then copied into a volatile mode register 148 for access during memory operations. Mode Register bits M0-M2 specify a burst length, M3 specifies a burst type (sequential or interleaved), M4-M6 specify a CAS latency, M7 and M8 specify an operating mode, M9 is set to one, and M10 and M11 are reserved in this embodiment. Because WRITE bursts are not currently implemented, M9 is set to a logic one and write accesses are single location (non-burst) accesses. The Mode Register must be loaded when all banks are idle, and the controller must wait the specified time before initiating a subsequent operation. Read accesses to the synchronous flash memory can be burst oriented, with the burst length being programmable, as shown in Table 1. The burst length determines the maximum number of column locations that can be automatically accessed for a given READ command. Burst lengths of 1, 2, 4, or 8 locations are available for both the sequential and interleaved burst types, and a full-page burst is available for the sequential type. The full-page burst can be used in conjunction with the BURST TERMINATE command to generate arbitrary burst lengths; that is, a burst can be selectively terminated to provide custom-length bursts. When a READ command is issued, a block of columns equal to the burst length is effectively selected. All accesses for that burst take place within this block, meaning that the burst will wrap within the block if a boundary is reached. The block is uniquely selected by A1-A7 when the burst length is set to two, by A2-A7 when the burst length is set to four, and by A3-A7 when the burst length is set to eight.
The remaining (least significant) address bit(s) are used to select the starting location within the block. Full-page bursts wrap within the page if the boundary is reached. Accesses within a given burst may be programmed to be either sequential or interleaved; this is referred to as the burst type and is selected via bit M3. The ordering of accesses within a burst is determined by the burst length, the burst type and the starting column address, as shown in Table 1.

TABLE 1 - BURST DEFINITION

                                        Order of Accesses Within a Burst
 Burst   Starting Column          Type =                Type =
 Length  Address                  Sequential            Interleaved
 2       A0
         0                        0-1                   0-1
         1                        1-0                   1-0
 4       A1 A0
         0  0                     0-1-2-3               0-1-2-3
         0  1                     1-2-3-0               1-0-3-2
         1  0                     2-3-0-1               2-3-0-1
         1  1                     3-0-1-2               3-2-1-0
 8       A2 A1 A0
         0  0  0                  0-1-2-3-4-5-6-7       0-1-2-3-4-5-6-7
         0  0  1                  1-2-3-4-5-6-7-0       1-0-3-2-5-4-7-6
         0  1  0                  2-3-4-5-6-7-0-1       2-3-0-1-6-7-4-5
         0  1  1                  3-4-5-6-7-0-1-2       3-2-1-0-7-6-5-4
         1  0  0                  4-5-6-7-0-1-2-3       4-5-6-7-0-1-2-3
         1  0  1                  5-6-7-0-1-2-3-4       5-4-7-6-1-0-3-2
         1  1  0                  6-7-0-1-2-3-4-5       6-7-4-5-2-3-0-1
         1  1  1                  7-0-1-2-3-4-5-6       7-6-5-4-3-2-1-0
 Full    n = A0-A7                Cn, Cn+1, Cn+2,       Not supported
 Page    (location 0-255)         Cn+3, Cn+4 . . .
 (256)                            Cn-1, Cn . . .

Column Address Strobe (CAS) latency is a delay, in clock cycles, between the registration of a READ command and the availability of the first piece of output data on the DQ connections. The latency can be set to one, two or three clock cycles. For example, if a READ command is registered at clock edge n, and the latency is m clocks, the data will be available by clock edge n+m. The DQ connections will start driving data as a result of the clock edge one cycle earlier (n+m-1) and, provided that the relevant access times are met, the data will be valid by clock edge n+m.
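The burst ordering of Table 1 reduces to a simple rule: a minimal sketch, assuming the block base is the starting column rounded down to a multiple of the burst length (as the block-selection description implies).

```python
def burst_order(start_col: int, length: int, interleaved: bool = False):
    """Column sequence for one burst per Table 1: accesses wrap within
    the block containing start_col (block size equals burst length).
    Sequential bursts increment modulo the block size; interleaved
    bursts XOR the starting offset with the access index."""
    base = start_col - (start_col % length)  # block base address
    offset = start_col % length
    if interleaved:
        return [base + (offset ^ i) for i in range(length)]
    return [base + ((offset + i) % length) for i in range(length)]
```

For a burst of eight starting at offset 5, this yields 5-6-7-0-1-2-3-4 (sequential) and 5-4-7-6-1-0-3-2 (interleaved), matching the Table 1 rows for starting address 101.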
For example, assuming that the clock cycle time is such that all relevant access times are met, if a READ command is registered at T0, and the latency is programmed to two clocks, the DQs will start driving after T1 and the data will be valid by T2, as shown in FIG. 3. FIG. 3 illustrates example operating frequencies at which different clock latency settings can be used. The normal operating mode is selected by setting M7 and M8 to zero, and the programmed burst length applies to READ bursts. The following truth tables provide more detail on the operation commands of an embodiment of the memory of the present invention. An explanation of the commands follows Truth Table 2.

TRUTH TABLE 1 - Interface Commands and DQM Operation

 NAME (FUNCTION)                               CS#  RAS#  CAS#  WE#  DQM  ADDR      DQs
 COMMAND INHIBIT (NOP)                         H    X     X     X    X    X         X
 NO OPERATION (NOP)                            L    H     H     H    X    X         X
 ACTIVE (select bank and activate row)         L    L     H     H    X    Bank/Row  X
 READ (select bank and column, start
   READ burst)                                 L    H     L     H    X    Bank/Col  X
 WRITE (select bank and column, start WRITE)   L    H     L     L    X    Bank/Col  Valid
 BURST TERMINATE                               L    H     H     L    X    X         Active
 ACTIVE TERMINATE                              L    L     H     L    X    X         X
 LOAD COMMAND REGISTER                         L    L     L     H    X    Com Code  X
 LOAD MODE REGISTER                            L    L     L     L    X    Op Code   X
 Write Enable/Output Enable                    --   --    --    --   L    --        Active
 Write Inhibit/Output High-Z                   --   --    --    --   H    --        High-Z

TRUTH TABLE 2 - Flash Memory Command Sequences
(each cycle lists CMD, ADDR, BANK, DQ, RP#)

 Operation           1st CYCLE            2nd CYCLE             3rd CYCLE
 READ DEVICE         LCR 90H Bank X H     ACTIVE Row Bank X H   READ CA Bank X H
   Config.
 READ Status         LCR 70H X X H        ACTIVE X X X H        READ X X X H
   Register
 CLEAR Status        LCR 50H X X H        --                    --
   Register
 ERASE SETUP/        LCR 20H Bank X H     ACTIVE Row Bank X H   WRITE X Bank D0H H/VHH
   Confirm
 WRITE SETUP/        LCR 40H Bank X H     ACTIVE Row Bank X H   WRITE Col Bank DIN H/VHH
   WRITE
 Protect BLOCK/      LCR 60H Bank X H     ACTIVE Row Bank X H   WRITE X Bank 01H H/VHH
   Confirm
 Protect DEVICE/     LCR 60H Bank X H     ACTIVE X Bank X H     WRITE X Bank F1H VHH
   Confirm
 Unprotect BLOCKS/   LCR 60H Bank X H     ACTIVE X Bank X H     WRITE X Bank D0H H/VHH
   Confirm
 ERASE NVmode        LCR 30H Bank X H     ACTIVE X Bank X H     WRITE X Bank C0H H
   Register
 WRITE NVmode        LCR A0H Bank X H     ACTIVE X Bank X H     WRITE X Bank X H
   Register

The COMMAND INHIBIT function prevents new commands from being executed by the synchronous flash memory, regardless of whether the CLK signal is enabled. The synchronous flash memory is effectively deselected, but operations already in progress are not affected. The NO OPERATION (NOP) command is used to perform a NOP to the synchronous flash memory that is selected (CS# is LOW). This prevents unwanted commands from being registered during idle or wait states, and operations already in progress are not affected. The LOAD MODE REGISTER command is used to program the Mode Register; the mode register data is loaded via inputs A0-A11. The LOAD MODE REGISTER command can only be issued when all array banks are idle, and a subsequent executable command cannot be issued until a predetermined time delay (MRD) is met. The data in the NVMode Register 147 is automatically loaded into the Mode Register 148 upon power-up initialization and is the default data unless dynamically changed with the LOAD MODE REGISTER command. An ACTIVE command is used to open (or activate) a row in a particular array bank for a subsequent access. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A11 selects the row. This row remains active for accesses until the next ACTIVE command, power-down or RESET. The READ command is used to initiate a burst read access to an active row. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A7 selects the starting column location.
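The CS#/RAS#/CAS#/WE# decode of Truth Table 1 can be sketched as a lookup (True = HIGH); DQM, the address inputs, and the LCR/LMR code registered on the address lines are omitted for brevity.

```python
# Sketch of the command decode from Truth Table 1. Signal arguments
# are the registered levels of the control pins, True = HIGH.

def decode_command(cs_n, ras_n, cas_n, we_n):
    if cs_n:  # CS# HIGH masks all commands
        return "COMMAND INHIBIT (NOP)"
    table = {
        (True,  True,  True):  "NO OPERATION (NOP)",
        (False, True,  True):  "ACTIVE",
        (True,  False, True):  "READ",
        (True,  False, False): "WRITE",
        (True,  True,  False): "BURST TERMINATE",
        (False, True,  False): "ACTIVE TERMINATE",
        (False, False, True):  "LOAD COMMAND REGISTER",
        (False, False, False): "LOAD MODE REGISTER",
    }
    return table[(ras_n, cas_n, we_n)]
```

For example, CS#, RAS# LOW with CAS#, WE# HIGH decodes as ACTIVE, matching the table row that selects a bank and activates a row.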
Read data appears on the DQs subject to the logic level on the data mask (DQM) input that was present two clocks earlier. If a given DQM signal was registered HIGH, the corresponding DQs will be High-Z (high impedance) two clocks later; if the DQM signal was registered LOW, the DQs will provide valid data. Thus, the DQM input can be used to mask output data during a read operation. A WRITE command is used to initiate a single-location write access on an active row. A WRITE command must be preceded by a WRITE SETUP command. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A7 selects a column location. Input data appearing on the DQs is written to the memory array, subject to the DQM input logic level appearing coincident with the data. If a given DQM signal is registered LOW, the corresponding data will be written to memory; if the DQM signal is registered HIGH, the corresponding data inputs will be ignored, and a WRITE will not be executed to that word/column location. A WRITE command with DQM HIGH is considered a NOP. An ACTIVE TERMINATE command is not required for synchronous flash memories, but can be provided to terminate a read in a manner similar to the SDRAM PRECHARGE command. The ACTIVE TERMINATE command can be issued to terminate a BURST READ in progress, and may or may not be bank specific. A BURST TERMINATE command is used to truncate either fixed-length or full-page bursts. The most recently registered READ command prior to the BURST TERMINATE command will be truncated. BURST TERMINATE is not bank specific. The Load Command Register operation is used to initiate flash memory control commands to the Command Execution Logic (CEL) 130. The CEL receives and interprets commands to the device. These commands control the operation of the Internal State Machine 132 and the read path (i.e., memory array 102, ID Register 136 or Status Register 134).
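The per-byte write masking described above (DQML covering DQ0-DQ7, DQMH covering DQ8-DQ15, HIGH meaning "do not write") can be sketched as a word merge. This is a simplified model: it ignores flash program semantics such as the usual restriction that programming can only clear bits.

```python
def apply_write_mask(old_word: int, new_word: int, dqml: bool, dqmh: bool) -> int:
    """Merge a 16-bit write with the DQM byte masks: a HIGH (True)
    mask bit means the corresponding byte is NOT written."""
    result = old_word
    if not dqml:  # low byte (DQ0-DQ7) is written
        result = (result & 0xFF00) | (new_word & 0x00FF)
    if not dqmh:  # high byte (DQ8-DQ15) is written
        result = (result & 0x00FF) | (new_word & 0xFF00)
    return result
```

With both masks HIGH the stored word is untouched, mirroring the rule that a WRITE with DQM HIGH is a NOP for that location.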
Before any READ or WRITE commands can be issued to a bank within the synchronous flash memory, a row in that bank must be "opened." This is accomplished via the ACTIVE command (defined by CS#, WE#, RAS#, CAS#), which selects both the bank and the row to be activated, see FIG. 4. After opening a row (issuing an ACTIVE command), a READ or WRITE command may be issued to that row, subject to a time period (tRCD) specification. tRCD (MIN) should be divided by the clock period and rounded up to the next whole number to determine the earliest clock edge after the ACTIVE command on which a READ or WRITE command can be entered. For example, a tRCD specification of 30 ns with a 90 MHz clock (11.11 ns period) results in 2.7 clocks, which is rounded to 3. This is reflected in FIG. 5, which covers any case where 2 < tRCD (MIN)/tCK ≤ 3. (The same procedure is used to convert other specification limits from time units to clock cycles.) A subsequent ACTIVE command to a different row in the same bank can be issued without having to close a previous active row, provided that the minimum time interval between successive ACTIVE commands to the same bank, defined by tRC, is met. A subsequent ACTIVE command to another bank can be issued while the first bank is being accessed, which results in a reduction of total row access overhead. The minimum time interval between successive ACTIVE commands to different banks is defined by a time period tRRD. READ bursts are initiated with a READ command (defined by CS#, WE#, RAS#, CAS#), as shown in FIG. 6. The starting column and bank addresses are provided with the READ command. During READ bursts, the valid data-out element from the starting column address will be available following the CAS latency after the READ command. Each subsequent data-out element will be valid by the next positive clock edge. Upon completion of a burst, assuming no other commands have been initiated, the DQs will go to a High-Z state.
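The conversion rule above (divide the specification by the clock period and round up) applies to tRCD, tRC, tRRD and the other timing limits alike, and can be sketched in a few lines.

```python
import math

def cycles_for(spec_ns: float, clk_mhz: float) -> int:
    """Convert a timing specification to whole clock cycles:
    divide by the clock period and round up to the next integer."""
    t_ck_ns = 1000.0 / clk_mhz
    return math.ceil(spec_ns / t_ck_ns)
```

`cycles_for(30, 90)` reproduces the worked example: 30 ns at a 90 MHz clock (11.11 ns period) is 2.7 clocks, rounded up to 3.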
A full page burst will continue until terminated. (At the end of the page, it will wrap to column 0 and continue.) Data from any READ burst may be truncated with a subsequent READ command, and data from a fixed-length READ burst may be immediately followed by data from a subsequent READ command. In either case, a continuous flow of data can be maintained. The first data element from the new burst follows either the last element of a completed burst, or the last desired data element of a longer burst that is being truncated. The new READ command should be issued x cycles before the clock edge at which the last desired data element is valid, where x equals the CAS latency minus one. This is shown in FIG. 7 for CAS latencies of one, two and three; data element n+3 is either the last of a burst of four, or the last desired of a longer burst. The synchronous flash memory uses a pipelined architecture and therefore does not require the 2n rule associated with a prefetch architecture. A READ command can be initiated on any clock cycle following a previous READ command. Full-speed, random read accesses within a page can be performed as shown in FIG. 8, or each subsequent READ may be performed to a different bank. Data from any READ burst may be truncated with a subsequent WRITE command (WRITE commands must be preceded by WRITE SETUP), and data from a fixed-length READ burst may be immediately followed by data from a subsequent WRITE command (subject to bus turnaround limitations). The WRITE may be initiated on the clock edge immediately following the last (or last desired) data element from the READ burst, provided that I/O contention can be avoided. In a given system design, there may be the possibility that the device driving the input data would go Low-Z before the synchronous flash memory DQs go High-Z. In this case, at least a single-cycle delay should occur between the last read data and the WRITE command.
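The truncation rule above (issue the new command x = CAS latency - 1 cycles early) reduces to a one-line calculation; clock edges are numbered T0, T1, ... as in the figures.

```python
def read_issue_edge(last_data_edge: int, cas_latency: int) -> int:
    """Clock edge on which to register the truncating READ (or
    terminate command) so its data immediately follows the last
    desired element, which is valid at last_data_edge."""
    return last_data_edge - (cas_latency - 1)
```

For a CAS latency of three with the last desired element valid at T4, the new command is registered at T2; at a latency of one, it is registered on the same edge the last element is valid.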
The DQM input is used to avoid I/O contention as shown in FIG. 9. The DQM signal must be asserted (HIGH) at least two clocks prior to the WRITE command (DQM latency is two clocks for output buffers) to suppress data-out from the READ. Once the WRITE command is registered, the DQs will go High-Z (or remain High-Z) regardless of the state of the DQM signal. The DQM signal must be de-asserted prior to the WRITE command (DQM latency is zero clocks for input buffers) to ensure that the written data is not masked. FIG. 9 shows the case where the clock frequency allows for bus contention to be avoided without adding a NOP cycle. A fixed-length or full-page READ burst can be truncated with either ACTIVE TERMINATE (may or may not be bank specific) or BURST TERMINATE (not bank specific) commands. The ACTIVE TERMINATE or BURST TERMINATE command should be issued x cycles before the clock edge at which the last desired data element is valid, where x equals the CAS latency minus one. This is shown in FIG. 10 for each possible CAS latency; data element n+3 is the last desired data element of a burst of four or the last desired of a longer burst. A single-location WRITE is initiated with a WRITE command (defined by CS#, WE#, RAS#, CAS#) as shown in FIG. 11. The starting column and bank addresses are provided with the WRITE command. Once a WRITE command is registered, a READ command can be executed as defined by Truth Tables 4 and 5. An example is shown in FIG. 12. During a WRITE, the valid data-in is registered coincident with the WRITE command. Unlike SDRAM, synchronous flash does not require a PRECHARGE command to deactivate the open row in a particular bank or the open rows in all banks. The ACTIVE TERMINATE command is similar to the BURST TERMINATE command; however, ACTIVE TERMINATE may or may not be bank specific. Asserting input A10 HIGH during an ACTIVE TERMINATE command will terminate a BURST READ in any bank. 
When A10 is low during an ACTIVE TERMINATE command, BA0 and BA1 will determine which bank will undergo a terminate operation. ACTIVE TERMINATE is considered a NOP for banks not addressed by A10, BA0, BA1. Power-down occurs if clock enable, CKE is registered LOW coincident with a NOP or COMMAND INHIBIT, when no accesses are in progress. Entering power-down deactivates the input and output buffers (excluding CKE) after internal state machine operations (including WRITE operations) are completed, for power savings while in standby. The power-down state is exited by registering a NOP or COMMAND INHIBIT and CKE HIGH at the desired clock edge (meeting tCKS). See FIG. 13 for an example power-down operation. A clock suspend mode occurs when a column access/burst is in progress and CKE is registered LOW. In the clock suspend mode, an internal clock is deactivated, "freezing" the synchronous logic. For each positive clock edge on which CKE is sampled LOW, the next internal positive clock edge is suspended. Any command or data present on the input pins at the time of a suspended internal clock edge are ignored, any data present on the DQ pins will remain driven, and burst counters are not incremented, as long as the clock is suspended (see example in FIG. 14). Clock suspend mode is exited by registering CKE HIGH; the internal clock and related operation will resume on the subsequent positive clock edge. The burst read/single write mode is a default mode in one embodiment. All WRITE commands result in the access of a single column location (burst of one), while READ commands access columns according to the programmed burst length and sequence. 
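The CKE behavior described above (and tabulated in Truth Table 3 below) can be sketched as a transition function. The state and action names here are illustrative strings, not part of the specification, and True = HIGH.

```python
def cke_action(cke_prev: bool, cke_now: bool, state: str, command: str) -> str:
    """Sketch of the CKE rules: LOW-to-LOW maintains a low-power mode,
    LOW-to-HIGH exits it, HIGH-to-LOW enters POWER-DOWN (all banks
    idle, NOP/COMMAND INHIBIT) or CLOCK SUSPEND (access in progress)."""
    if not cke_prev and not cke_now:
        return "maintain " + state            # stay in POWER-DOWN / CLOCK SUSPEND
    if not cke_prev and cke_now:
        return "exit " + state                # leave the low-power mode
    if cke_prev and not cke_now:
        if state == "all banks idle" and command in ("NOP", "COMMAND INHIBIT"):
            return "enter POWER-DOWN"
        return "enter CLOCK SUSPEND"          # column access/burst in progress
    return "normal operation"                 # CKE held HIGH
```

Registering CKE LOW during a burst thus freezes the access ("enter CLOCK SUSPEND"), while the same transition with all banks idle and a NOP enters power-down.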
The following Truth Table 3 illustrates memory operation using the CKE signal.

TRUTH TABLE 3 - CKE

 CKEn-1  CKEn  CURRENT STATE       COMMANDn                ACTIONn
 L       L     POWER-DOWN          X                       Maintain POWER-DOWN
               CLOCK SUSPEND       X                       Maintain CLOCK SUSPEND
 L       H     POWER-DOWN          COMMAND INHIBIT or NOP  Exit POWER-DOWN
               CLOCK SUSPEND       X                       Exit CLOCK SUSPEND
 H       L     All Banks Idle      COMMAND INHIBIT or NOP  POWER-DOWN Entry
               Reading or Writing  VALID                   CLOCK SUSPEND Entry
 H       H     See Truth Table 4

TRUTH TABLE 5 - Current State Bank n - Command to Bank m

 CURRENT STATE     CS#  RAS#  CAS#  WE#  COMMAND/ACTION
 Any               H    X     X     X    COMMAND INHIBIT (NOP/continue previous operation)
                   L    H     H     H    NO OPERATION (NOP/continue previous operation)
 Idle              X    X     X     X    Any command otherwise allowed to bank m
 Row Activating,   L    L     H     H    ACTIVE (select and activate row)
 Active, or        L    H     L     H    READ (select column and start READ burst)
 Active Terminate  L    H     L     L    WRITE (select column and start WRITE)
                   L    L     H     L    ACTIVE TERMINATE
                   L    L     L     H    LOAD COMMAND REGISTER
 READ              L    L     H     H    ACTIVE (select and activate row)
                   L    H     L     H    READ (select column and start new READ burst)
                   L    H     L     L    WRITE (select column and start WRITE)
                   L    L     H     L    ACTIVE TERMINATE
                   L    L     L     H    LOAD COMMAND REGISTER
 WRITE             L    L     H     H    ACTIVE (select and activate row)
                   L    H     L     H    READ (select column and start READ burst)
                   L    L     H     L    ACTIVE TERMINATE
                   L    H     H     L    BURST TERMINATE
                   L    L     L     H    LOAD COMMAND REGISTER

Functional Description

The synchronous flash memory incorporates a number
of features to make it ideally suited for code storage and execute-in-place applications on an SDRAM bus. The memory array is segmented into individual erase blocks. Each block may be erased without affecting data stored in other blocks. These memory blocks are read, written and erased by issuing commands to the command execution logic 130 (CEL). The CEL controls the operation of the Internal State Machine 132 (ISM), which completely controls all ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, WRITE, BLOCK ERASE, BLOCK PROTECT, DEVICE PROTECT, UNPROTECT ALL BLOCKS and VERIFY operations. The ISM 132 protects each memory location from over-erasure and optimizes each memory location for maximum data retention. In addition, the ISM greatly simplifies the control necessary for writing the device in-system or in an external programmer. The synchronous flash memory is organized into 16 independently erasable memory blocks that allow portions of the memory to be erased without affecting the rest of the memory data. Any block may be hardware-protected against inadvertent erasure or writes. A protected block requires that the RP# pin be driven to VHH (a relatively high voltage) before being modified. The 256K-word blocks at locations 0 and 15 can have additional hardware protection. Once a PROTECT BLOCK command has been executed to these blocks, an UNPROTECT ALL BLOCKS command will unlock all blocks except the blocks at locations 0 and 15, unless the RP# pin is at VHH. This provides additional security for critical code during in-system firmware updates, should an unintentional power disruption or system reset occur. Power-up initialization, ERASE, WRITE and PROTECT timings are simplified by using an ISM to control all programming algorithms in the memory array. The ISM ensures protection against over-erasure and optimizes write margin to each cell. 
During WRITE operations, the ISM automatically increments and monitors WRITE attempts, verifies write margin on each memory cell and updates the ISM Status Register. When a BLOCK ERASE operation is performed, the ISM automatically overwrites the entire addressed block (eliminating over-erasure), increments and monitors ERASE attempts and sets bits in the ISM Status Register. The 8-bit ISM Status Register 134 allows an external processor 200 to monitor the status of the ISM during WRITE, ERASE and PROTECT operations. One bit of the 8-bit Status Register (SR7) is set and cleared entirely by the ISM. This bit indicates whether the ISM is busy with an ERASE, WRITE or PROTECT task. Additional error information is set in three other bits (SR3, SR4 and SR5): write and protect block error, erase and unprotect all blocks error, and device protection error. Status register bits SR0, SR1 and SR2 provide details on the ISM operation underway. The user can monitor whether a device-level or bank-level ISM operation (including which bank is under ISM control) is underway. The error bits (SR3-SR5) must be cleared by the host system. The Status Register is described in further detail below with reference to Table 2. The CEL 130 receives and interprets commands to the device. These commands control the operation of the ISM and the read path (i.e., memory array, device configuration or Status Register). Commands may be issued to the CEL while the ISM is active. To allow for maximum power conservation, the synchronous flash features a very low current, deep power-down mode. To enter this mode, the RP# pin 140 (reset/power-down) is taken to VSS ± 0.2V. To prevent an inadvertent RESET, RP# must be held at Vss for 100 ns prior to the device entering the reset mode. With RP# held at Vss, the device will enter the deep power-down mode.
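As an illustration, the Status Register bit assignments just described (SR7 ready/busy, SR3-SR5 error flags, SR0-SR2 operation details) can be captured as C bit masks with small decode helpers. This is a hedged sketch: only the bit positions come from the text, while the macro and function names are invented here.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions taken from the Status Register description; the names are
 * illustrative, not part of the device specification. */
#define SR7_ISM_READY      (1u << 7)  /* 1 = ready, 0 = ISM busy            */
#define SR5_ERASE_ERROR    (1u << 5)  /* erase / unprotect-all-blocks error */
#define SR4_WRITE_ERROR    (1u << 4)  /* write / protect-block error        */
#define SR3_DEVICE_PROTECT (1u << 3)  /* device protection error            */
#define SR0_DEVICE_LEVEL   (1u << 0)  /* 1 = device-level ISM operation     */

static bool ism_ready(uint8_t sr)
{
    return (sr & SR7_ISM_READY) != 0;
}

/* Any of the three host-cleared error bits set? */
static bool ism_has_error(uint8_t sr)
{
    return (sr & (SR5_ERASE_ERROR | SR4_WRITE_ERROR | SR3_DEVICE_PROTECT)) != 0;
}

/* Bank under ISM control, decoded from SR2:SR1 (meaningful when SR0 = 0). */
static unsigned ism_bank(uint8_t sr)
{
    return (sr >> 1) & 0x3u;
}
```

A host polling loop would obtain the register contents with a READ STATUS REGISTER (70H) cycle and, once the error bits have been decoded, clear them with CLEAR STATUS REGISTER (50H).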
After the device enters the deep power-down mode, a transition from LOW to HIGH on RP# will result in a device power-up initialize sequence as outlined herein. Transitioning RP# from LOW to HIGH after entering the reset mode but prior to entering deep power-down mode requires a 1 µs delay prior to issuing an executable command. When the device enters the deep power-down mode, all buffers excluding the RP# buffer are disabled and the current draw is low, for example, a maximum of 50 µA at 3.3V VCC. The input to RP# must remain at Vss during deep power-down. Entering the RESET mode clears the Status Register 134 and sets the ISM 132 to the array read mode. The synchronous flash memory array architecture is designed to allow sectors to be erased without disturbing the rest of the array. The array is divided into 16 addressable "blocks" that are independently erasable. By erasing blocks rather than the entire array, the total device endurance is enhanced, as is system flexibility. Only the ERASE and BLOCK PROTECT functions are block oriented. The 16 addressable blocks are equally divided into four banks 104, 106, 108 and 110 of four blocks each. The four banks have simultaneous read-while-write functionality. An ISM WRITE or ERASE operation to any bank can occur simultaneously to a READ operation to any other bank. The Status Register 134 may be polled to determine which bank is under ISM operation. The synchronous flash memory has a single background operation ISM to control power-up initialization, ERASE, WRITE, and PROTECT operations. Only one ISM operation can occur at any time; however, certain other commands, including READ operations, can be performed while the ISM operation is taking place. An operational command controlled by the ISM is defined as either a bank-level operation or a device-level operation. WRITE and ERASE are bank-level ISM operations.
After an ISM bank operation has been initiated, a READ to any location in the bank may output invalid data, whereas a READ to any other bank will read the array. A READ STATUS REGISTER command will output the contents of the Status Register 134. The ISM status bit will indicate when the ISM operation is complete (SR7=1). When the ISM operation is complete, the bank will automatically enter the array read mode. ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, BLOCK PROTECT, DEVICE PROTECT, and UNPROTECT ALL BLOCKS are device-level ISM operations. Once an ISM device-level operation has been initiated, a READ to any bank will output the contents of the array. A READ STATUS REGISTER command may be issued to determine completion of the ISM operation. When SR7=1, the ISM operation will be complete and a subsequent ISM operation may be initiated. Any block may be protected from unintentional ERASE or WRITE with a hardware circuit that requires the RP# pin be driven to VHH before a WRITE or ERASE is commenced, as explained below. Any block may be hardware-protected to provide extra security for the most sensitive portions of the firmware. During a WRITE or ERASE of a hardware protected block, the RP# pin must be held at VHH until the WRITE or ERASE is completed. Any WRITE or ERASE attempt on a protected block without RP#=VHH will be prevented and will result in a write or erase error. The blocks at locations 0 and 15 can have additional hardware protection to prevent an inadvertent WRITE or ERASE operation. In this embodiment, these blocks cannot be software-unlocked through an UNPROTECT ALL BLOCKS command unless RP#=VHH. The protection status of any block may be checked by reading its block protect bit with a READ STATUS REGISTER command. Further, to protect a block, a three-cycle command sequence must be issued with the block address. The synchronous flash memory can feature three different types of READs. 
Depending on the mode, a READ operation will produce data from the memory array, status register, or one of the device configuration registers. A READ to the device configuration register or the Status Register must be preceded by an LCR-ACTIVE cycle, and the burst length of data out will be defined by the mode register settings. A subsequent READ or a READ not preceded by an LCR-ACTIVE cycle will read the array. However, several differences exist and are described in the following section. A READ command to any bank outputs the contents of the memory array. While a WRITE or ERASE ISM operation is taking place, a READ to any location in the bank under ISM control may output invalid data. Upon exiting a RESET operation, the device will automatically enter the array read mode. Performing a READ of the Status Register 134 requires the same input sequencing as when reading the array, except that an LCR READ STATUS REGISTER (70H) cycle must precede the ACTIVE READ cycles. The burst length of the Status Register data-out is defined by the Mode Register 148. The Status Register contents are updated and latched on the next positive clock edge subject to CAS latencies. The device will automatically enter the array read mode for subsequent READs. Reading any of the Device Configuration Registers 136 requires the same input sequencing as when reading the Status Register except that specific addresses must be issued. WE# must be HIGH, and DQM and CS# must be LOW. To read the manufacturer compatibility ID, addresses must be at 000000H, and to read the device ID, addresses must be at 000001H. Any of the block protect bits is read at the third address location within each erase block (xx0002H), while the device protect bit is read from location 000003H. The DQ pins are used to input data to the array. The address pins are used either to specify an address location or to input a command to the CEL during the LOAD COMMAND REGISTER cycle.
A command input issues an 8-bit command to the CEL to control the operation mode of the device. A WRITE is used to input data to the memory array. The following section describes both types of inputs. To perform a command input, DQM must be LOW, and CS# and WE# must be LOW. Address pins or DQ pins are used to input commands. Address pins not used for input commands are "Don't Care" and must be held stable. The 8-bit command is input on DQ0-DQ7 or A0-A7 and is latched on the positive clock edge. A WRITE to the memory array sets the desired bits to logic 0s but cannot change a given bit to a logic 1 from a logic 0. Setting any bits to a logic 1 requires that the entire block be erased. To perform a WRITE, DQM must be LOW, CS# and WE# must be LOW, and VCCP must be tied to VCC. Writing to a protected block also requires that the RP# pin be brought to VHH. A0-A11 provide the address to be written, while the data to be written to the array is input on the DQ pins. The data and addresses are latched on the rising edge of the clock. A WRITE must be preceded by a WRITE SETUP command. To simplify the writing of the memory blocks, the synchronous flash incorporates an ISM that controls all internal algorithms for the WRITE and ERASE cycles. An 8-bit command set is used to control the device. See Truth Tables 1 and 2 for a list of the valid commands. The 8-bit ISM Status Register 134 (see Table 2) is polled to check for ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, WRITE, ERASE, BLOCK PROTECT, DEVICE PROTECT or UNPROTECT ALL BLOCKS completion or any related errors. Completion of an ISM operation can be monitored by issuing a READ STATUS REGISTER (70H) command. The contents of the Status Register will be output to DQ0-DQ7 and updated on the next positive clock edge (subject to CAS latencies) for a fixed burst length as defined by the mode register settings. The ISM operation will be complete when SR7=1.
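The completion polling described above (issue READ STATUS REGISTER and wait for SR7=1) can be sketched as a small C helper. The callback type, the helper names and the simulation double are assumptions for illustration; on the real device a status read is a synchronous bus cycle subject to CAS latency.

```c
#include <assert.h>
#include <stdint.h>

#define SR7_READY (1u << 7)

/* Caller supplies a routine that performs a READ STATUS REGISTER (70H)
 * cycle and returns the 8-bit Status Register contents from DQ0-DQ7. */
typedef uint8_t (*read_status_fn)(void *ctx);

/* Poll until SR7 = 1 or the attempt budget runs out; returns the last
 * Status Register value read. */
static uint8_t ism_wait_ready(read_status_fn read_status, void *ctx,
                              unsigned max_polls)
{
    uint8_t sr = 0;
    while (max_polls--) {
        sr = read_status(ctx);
        if (sr & SR7_READY)
            break;
    }
    return sr;
}

/* Test double: reports busy (SR7 = 0) for a fixed number of polls,
 * then ready (SR7 = 1). */
struct sim_status { unsigned busy_polls; };

static uint8_t sim_read_status(void *ctx)
{
    struct sim_status *s = ctx;
    return (s->busy_polls && s->busy_polls--) ? 0x00 : 0x80;
}

/* Demonstration under simulation. */
static uint8_t demo_poll(unsigned busy_polls, unsigned budget)
{
    struct sim_status s = { busy_polls };
    return ism_wait_ready(sim_read_status, &s, budget);
}
```

The callback indirection keeps the polling logic independent of the bus-access details, which vary by controller.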
All of the defined bits are set by the ISM, but only the ISM status bit is reset by the ISM. The erase/unprotect block, write/protect block and device protect status bits must be cleared using a CLEAR STATUS REGISTER (50H) command. This allows the user to choose when to poll and clear the Status Register. For example, a host system may perform multiple WRITE operations before checking the Status Register instead of checking after each individual WRITE. Asserting the RP# signal or powering down the device will also clear the Status Register.

TABLE 2: STATUS REGISTER

STATUS BIT#  STATUS REGISTER BIT       DESCRIPTION
SR7          ISM STATUS                The ISMS bit displays the active status of the
             1 = Ready                 state machine when performing WRITE or BLOCK
             0 = Busy                  ERASE. The controlling logic polls this bit to
                                       determine when the erase and write status bits
                                       are valid.
SR6          RESERVED                  Reserved for future use.
SR5          ERASE/UNPROTECT BLOCK     ES is set to 1 after the maximum number of
             STATUS                    ERASE cycles is executed by the ISM without a
             1 = BLOCK ERASE or        successful verify. This bit is also set to 1
             BLOCK UNPROTECT error     if a BLOCK UNPROTECT operation is unsuccessful.
             0 = Successful BLOCK      ES is only cleared by a CLEAR STATUS REGISTER
             ERASE or UNPROTECT        command or by a RESET.
SR4          WRITE/PROTECT BLOCK       WS is set to 1 after the maximum number of
             STATUS                    WRITE cycles is executed by the ISM without a
             1 = WRITE or BLOCK        successful verify. This bit is also set to 1
             PROTECT error             if a BLOCK or DEVICE PROTECT operation is
             0 = Successful WRITE or   unsuccessful. WS is only cleared by a CLEAR
             BLOCK PROTECT             STATUS REGISTER command or by a RESET.
SR2          BANKA1 ISM STATUS         When SR0 = 0, the bank under ISM control can
SR1          BANKA0 ISM STATUS         be decoded from BA0, BA1: [0,0] Bank0;
                                       [0,1] Bank1; [1,0] Bank2; [1,1] Bank3.
SR3          DEVICE PROTECT STATUS     DPS is set to 1 if an invalid WRITE, ERASE,
             1 = Device protected,     PROTECT BLOCK, PROTECT DEVICE or UNPROTECT
             invalid operation         ALL BLOCKS is attempted. After one of these
             attempted                 commands is issued, the condition of RP#, the
             0 = Device unprotected    block protect bit and the device protect bit
             or RP# condition met      are compared to determine if the desired
                                       operation is allowed. Must be cleared by CLEAR
                                       STATUS REGISTER or by a RESET.
SR0          DEVICE/BANK ISM STATUS    DBS is set to 1 if the ISM operation is a
             1 = Device-level ISM      device-level operation. A valid READ to any
             operation                 bank of the array can immediately follow the
             0 = Bank-level ISM        registration of a device-level ISM WRITE
             operation                 operation. When DBS is set to 0, the ISM
                                       operation is a bank-level operation. A READ
                                       to the bank under ISM control may result in
                                       invalid data. SR1 and SR2 can be decoded to
                                       determine which bank is under ISM control.

The device ID, manufacturer compatibility ID, device protection status and block protect status can all be read by issuing a READ DEVICE CONFIGURATION (90H) command. To read the desired register, a specific address must be asserted.
See Table 3 for more details on the various device configuration registers 136.

TABLE 3: DEVICE CONFIGURATION

DEVICE CONFIGURATION        ADDRESS  DATA     CONDITION
Manufacturer Compatibility  000000H  2CH      Manufacturer compatibility read
Device ID                   000001H  D3H      Device ID read
Block Protect Bit           xx0002H  DQ0 = 1  Block protected
                            xx0002H  DQ0 = 0  Block unprotected
Device Protect Bit          000003H  DQ0 = 1  Block protect modification prevented
                            000003H  DQ0 = 0  Block protect modification enabled

Commands can be issued to bring the device into different operational modes. Each mode has specific operations that can be performed while in that mode. Several modes require a sequence of commands to be written before they are reached. The following section describes the properties of each mode, and Truth Tables 1 and 2 list all command sequences required to perform the desired operation. Read-while-write functionality allows a background write or erase operation to be performed on any bank while simultaneously reading any other bank. For a write operation, the LCR-ACTIVE-WRITE command sequences in Truth Table 2 must be completed on consecutive clock cycles. However, to simplify a synchronous flash controller operation, an unlimited number of NOPs or COMMAND INHIBITs can be issued throughout the command sequence. For additional protection, these command sequences must have the same bank address for the three cycles. If the bank address changes during the LCR-ACTIVE-WRITE command sequence, or if the command sequences are not consecutive (other than NOPs and COMMAND INHIBITs, which are permitted), the write and erase status bits (SR4 and SR5) will be set and the operation prohibited. Upon power-up and prior to issuing any operational commands to the device, the synchronous flash is initialized. After power is applied to VCC, VCCQ and VCCP (simultaneously), and the clock is stable, RP# is transitioned from LOW to HIGH.
A delay (in one embodiment a 100 µs delay) is required after RP# transitions HIGH in order to complete internal device initialization. The device is in the array read mode at the completion of device initialization, and an executable command can be issued to the device. To read the device ID, manufacturer compatibility ID, device protect bit and each of the block protect bits, a READ DEVICE CONFIGURATION (90H) command is issued. While in this mode, specific addresses are issued to read the desired information. The manufacturer compatibility ID is read at 000000H; the device ID is read at 000001H. The manufacturer compatibility ID and device ID are output on DQ0-DQ7. The device protect bit is read at 000003H, and each of the block protect bits is read at the third address location within each block (xx0002H). The device and block protect bits are output on DQ0. Three consecutive commands on consecutive clock edges are needed to input data to the array (NOPs and COMMAND INHIBITs are permitted between cycles). In the first cycle, a LOAD COMMAND REGISTER command is given with WRITE SETUP (40H) on A0-A7, and the bank address is issued on BA0, BA1. The next command is ACTIVE, which activates the row address and confirms the bank address. The third cycle is WRITE, during which the starting column, the bank address, and data are issued. The ISM status bit will be set on the following clock edge (subject to CAS latencies). While the ISM executes the WRITE, the ISM status bit (SR7) will be at 0. A READ operation to the bank under ISM control may produce invalid data. When the ISM status bit (SR7) is set to a logic 1, the WRITE has been completed, and the bank will be in the array read mode and ready for an executable command. Writing to hardware-protected blocks also requires that the RP# pin be set to VHH prior to the third cycle (WRITE), and RP# must be held at VHH until the ISM WRITE operation is complete.
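The three consecutive cycles just described (LOAD COMMAND REGISTER with WRITE SETUP 40H, then ACTIVE with the row, then WRITE with the starting column and data) can be sketched as abstract bus cycles in C. The struct, the builder function and the field names are illustrative assumptions; a real controller would additionally drive CS#, WE#, DQM and the clock per the timing rules.

```c
#include <assert.h>
#include <stdint.h>

enum cycle_kind { CYCLE_LCR, CYCLE_ACTIVE, CYCLE_WRITE };

struct bus_cycle {
    enum cycle_kind kind;
    uint8_t  lcr_code;  /* command on A0-A7 during the LCR cycle   */
    uint8_t  bank;      /* bank address on BA0, BA1                */
    uint32_t addr;      /* row (ACTIVE) or starting column (WRITE) */
    uint16_t data;      /* data on the DQ pins for the WRITE cycle */
};

#define WRITE_SETUP 0x40

/* Build the three cycles. The same bank address is used in every cycle,
 * since a changing bank address sets SR4/SR5 and aborts the operation. */
static void build_write_sequence(struct bus_cycle out[3], uint8_t bank,
                                 uint32_t row, uint32_t col, uint16_t data)
{
    out[0] = (struct bus_cycle){ CYCLE_LCR,    WRITE_SETUP, bank, 0,   0    };
    out[1] = (struct bus_cycle){ CYCLE_ACTIVE, 0,           bank, row, 0    };
    out[2] = (struct bus_cycle){ CYCLE_WRITE,  0,           bank, col, data };
}

/* Sanity check used below: one bank throughout, WRITE SETUP first. */
static int write_sequence_ok(uint8_t bank)
{
    struct bus_cycle c[3];
    build_write_sequence(c, bank, 0x123, 0x45, 0xBEEF);
    return c[0].kind == CYCLE_LCR && c[0].lcr_code == WRITE_SETUP
        && c[1].kind == CYCLE_ACTIVE && c[2].kind == CYCLE_WRITE
        && c[0].bank == bank && c[1].bank == bank && c[2].bank == bank;
}
```

Describing the sequence as data rather than immediate bus writes makes the single-bank consistency rule easy to check before any cycle is issued.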
The write and erase status bits (SR4 and SR5) will be set if the LCR-ACTIVE-WRITE command sequence is not completed on consecutive cycles or the bank address changes for any of the three cycles. After the ISM has initiated the WRITE, it cannot be aborted except by a RESET or by powering down the part. Doing either during a WRITE may corrupt the data being written. Executing an ERASE sequence will set all bits within a block to logic 1. The command sequence necessary to execute an ERASE is similar to that of a WRITE. To provide added security against accidental block erasure, three consecutive command sequences on consecutive clock edges are required to initiate an ERASE of a block. In the first cycle, LOAD COMMAND REGISTER is given with ERASE SETUP (20H) on A0-A7, and the bank address of the block to be erased is issued on BA0, BA1. The next command is ACTIVE, where A10, A11, BA0, BA1 provide the address of the block to be erased. The third cycle is WRITE, during which ERASE CONFIRM (D0H) is given on DQ0-DQ7 and the bank address is reissued. The ISM status bit will be set on the following clock edge (subject to CAS latencies). After ERASE CONFIRM (D0H) is issued, the ISM will start the ERASE of the addressed block. Any READ operation to the bank where the addressed block resides may output invalid data. When the ERASE operation is complete, the bank will be in the array read mode and ready for an executable command. Erasing hardware-protected blocks also requires that the RP# pin be set to VHH prior to the third cycle (WRITE), and RP# must be held at VHH until the ERASE is completed (SR7=1). If the LCR-ACTIVE-WRITE command sequence is not completed on consecutive cycles (NOPs and COMMAND INHIBITs are permitted between cycles) or the bank address changes for one or more of the command cycles, the write and erase status bits (SR4 and SR5) will be set and the operation is prohibited.
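The command codes that appear throughout this description can be collected into one place. The numeric values are as given in the text; the macro names, the name strings and the lookup helper are illustrative additions.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Opcode values per the description; names are illustrative. */
#define OP_BLOCK_PROTECT_CONFIRM 0x01
#define OP_ERASE_SETUP           0x20
#define OP_WRITE_SETUP           0x40
#define OP_CLEAR_STATUS_REG      0x50
#define OP_PROTECT_SETUP         0x60
#define OP_READ_STATUS_REG       0x70
#define OP_READ_DEVICE_CONFIG    0x90
#define OP_ERASE_CONFIRM         0xD0  /* also UNPROTECT ALL BLOCKS CONFIRM */
#define OP_DEVICE_PROTECT        0xF1

static const char *opcode_name(uint8_t op)
{
    switch (op) {
    case OP_BLOCK_PROTECT_CONFIRM: return "BLOCK PROTECT CONFIRM";
    case OP_ERASE_SETUP:           return "ERASE SETUP";
    case OP_WRITE_SETUP:           return "WRITE SETUP";
    case OP_CLEAR_STATUS_REG:      return "CLEAR STATUS REGISTER";
    case OP_PROTECT_SETUP:         return "PROTECT SETUP";
    case OP_READ_STATUS_REG:       return "READ STATUS REGISTER";
    case OP_READ_DEVICE_CONFIG:    return "READ DEVICE CONFIGURATION";
    case OP_ERASE_CONFIRM:         return "ERASE CONFIRM";
    case OP_DEVICE_PROTECT:        return "DEVICE PROTECT";
    default:                       return "UNKNOWN";
    }
}
```

A decoder like this is mainly useful in a bus trace or simulator, where logging the symbolic command name is clearer than the raw hex value.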
The contents of the Mode Register 148 may be copied into the NVMode Register 147 with a WRITE NVMODE REGISTER command. Prior to writing to the NVMode Register, an ERASE NVMODE REGISTER command sequence must be completed to set all bits in the NVMode Register to logic 1. The command sequence necessary to execute an ERASE NVMODE REGISTER and WRITE NVMODE REGISTER is similar to that of a WRITE. See Truth Table 2 for more information on the LCR-ACTIVE-WRITE commands necessary to complete ERASE NVMODE REGISTER and WRITE NVMODE REGISTER. After the WRITE cycle of the ERASE NVMODE REGISTER or WRITE NVMODE REGISTER command sequence has been registered, a READ command may be issued to the array. A new WRITE operation will not be permitted until the current ISM operation is complete and SR7=1. Executing a BLOCK PROTECT sequence enables the first level of software/hardware protection for a given block. The memory includes a 16-bit register that has one bit corresponding to each of the 16 protectable blocks. The memory also has a register to provide a device bit used to protect the entire device from write and erase operations. The command sequence necessary to execute a BLOCK PROTECT is similar to that of a WRITE. To provide added security against accidental block protection, three consecutive command cycles are required to initiate a BLOCK PROTECT. In the first cycle, a LOAD COMMAND REGISTER is issued with a PROTECT SETUP (60H) command on A0-A7, and the bank address of the block to be protected is issued on BA0, BA1. The next command is ACTIVE, which activates a row in the block to be protected and confirms the bank address. The third cycle is WRITE, during which BLOCK PROTECT CONFIRM (01H) is issued on DQ0-DQ7, and the bank address is reissued. The ISM status bit will be set on the following clock edge (subject to CAS latencies). The ISM will then begin the PROTECT operation.
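The 16-bit block protect register just mentioned, together with the special treatment of blocks 0 and 15 described earlier, can be modeled in a few lines of C. This is a behavioral sketch only: in the device the bits live in non-volatile storage and are changed through the three-cycle command sequences, and the function names here are invented.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BOOT_BLOCK_MASK ((uint16_t)((1u << 15) | 1u))  /* blocks 0 and 15 */

/* PROTECT BLOCK: set the bit for one of the 16 blocks. */
static uint16_t protect_block(uint16_t bits, unsigned block)
{
    return (uint16_t)(bits | (1u << (block & 0xF)));
}

/* UNPROTECT ALL BLOCKS: clears every protect bit, except that blocks 0
 * and 15 stay protected unless RP# is at VHH. */
static uint16_t unprotect_all(uint16_t bits, bool rp_at_vhh)
{
    return rp_at_vhh ? 0 : (uint16_t)(bits & BOOT_BLOCK_MASK);
}
```

The model makes the asymmetry explicit: fourteen blocks are always cleared by the command, while the two boot blocks survive unless the elevated RP# condition is met.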
If the LCR-ACTIVE-WRITE is not completed on consecutive cycles (NOPs and COMMAND INHIBITs are permitted between cycles) or the bank address changes, the write and erase status bits (SR4 and SR5) will be set and the operation is prohibited. When the ISM status bit (SR7) is set to a logic 1, the PROTECT has been completed, and the bank will be in the array read mode and ready for an executable command. Once a block protect bit has been set to a 1 (protected), it can only be reset to a 0 by the UNPROTECT ALL BLOCKS command. The UNPROTECT ALL BLOCKS command sequence is similar to the BLOCK PROTECT command; however, in the third cycle, a WRITE is issued with an UNPROTECT ALL BLOCKS CONFIRM (D0H) command and addresses are "Don't Care." For additional information, refer to Truth Table 2. The blocks at locations 0 and 15 have additional security. Once the block protect bits at locations 0 and 15 have been set to a 1 (protected), each bit can only be reset to a 0 if RP# is brought to VHH prior to the third cycle of the UNPROTECT operation, and held at VHH until the operation is complete (SR7=1). Further, if the device protect bit is set, RP# must be brought to VHH prior to the third cycle and held at VHH until the BLOCK PROTECT or UNPROTECT ALL BLOCKS operation is complete. To check a block's protect status, a READ DEVICE CONFIGURATION (90H) command may be issued. Executing a DEVICE PROTECT sequence sets the device protect bit to a 1 and prevents modification of the block protect bits. The command sequence necessary to execute a DEVICE PROTECT is similar to that of a WRITE. Three consecutive command cycles are required to initiate a DEVICE PROTECT sequence. In the first cycle, LOAD COMMAND REGISTER is issued with a PROTECT SETUP (60H) on A0-A7, and a bank address is issued on BA0, BA1. The bank address is "Don't Care" but the same bank address must be used for all three cycles. The next command is ACTIVE.
The third cycle is WRITE, during which a DEVICE PROTECT (F1H) command is issued on DQ0-DQ7, and RP# is brought to VHH. The ISM status bit will be set on the following clock edge (subject to CAS latencies). An executable command can be issued to the device. RP# must be held at VHH until the WRITE is completed (SR7=1). A new WRITE operation will not be permitted until the current ISM operation is complete. Once the device protect bit is set, it cannot be reset to a 0. With the device protect bit set to a 1, BLOCK PROTECT or BLOCK UNPROTECT is prevented unless RP# is at VHH during either operation. The device protect bit does not affect WRITE or ERASE operations. Refer to Table 4 for more information on block and device protect operations.

TABLE 4: PROTECT OPERATIONS TRUTH TABLE

FUNCTION              RP#       CS#  DQM  WE#  Address  VccP  DQ0-DQ7
DEVICE UNPROTECTED
PROTECT SETUP         H         L    H    L    60H      X     X
PROTECT BLOCK         H         L    H    L    BA       H     01H
PROTECT DEVICE        VHH       L    H    L    X        X     F1H
UNPROTECT ALL BLOCKS  H/VHH     L    H    L    X        H     D0H
DEVICE PROTECTED
PROTECT SETUP         H or VHH  L    H    L    60H      X     X
PROTECT BLOCK         VHH       L    H    L    BA       H     01H
UNPROTECT ALL BLOCKS  VHH       L    H    L    X        H     D0H

After the ISM status bit (SR7) has been set, the device/bank (SR0), device protect (SR3), bankA0 (SR1), bankA1 (SR2), write/protect block (SR4) and erase/unprotect (SR5) status bits may be checked. If one or a combination of the SR3, SR4 and SR5 status bits has been set, an error has occurred during operation. The ISM cannot reset the SR3, SR4 or SR5 bits. To clear these bits, a CLEAR STATUS REGISTER (50H) command must be given.
Table 5 lists the combinations of errors.

TABLE 5: STATUS REGISTER ERROR DECODE

SR5  SR4  SR3  ERROR DESCRIPTION
0    0    0    No errors
0    1    0    WRITE, BLOCK PROTECT or DEVICE PROTECT error
0    1    1    Invalid BLOCK PROTECT or DEVICE PROTECT, RP# not valid (VHH)
1    0    0    ERASE or ALL BLOCK UNPROTECT error
1    0    1    Invalid ALL BLOCK UNPROTECT, RP# not valid (VHH)
1    1    0    Command sequencing error

The synchronous flash memory is designed and fabricated to meet advanced code and data storage requirements. To ensure this level of reliability, VCCP must be tied to VCC during WRITE or ERASE cycles. Operation outside these limits may reduce the number of WRITE and ERASE cycles that can be performed on the device. Each block is designed and processed for a minimum endurance of 100,000 WRITE/ERASE cycles. The synchronous flash memory offers several power-saving features that may be utilized in the array read mode to conserve power. A deep power-down mode is enabled by bringing RP# to VSS ± 0.2V. Current draw (ICC) in this mode is low, such as a maximum of 50 µA. When CS# is HIGH, the device will enter the active standby mode. In this mode the current is also low, such as a maximum ICC current of 30 mA. If CS# is brought HIGH during a write, erase, or protect operation, the ISM will continue the WRITE operation, and the device will consume active Iccp current until the operation is completed. Referring to FIG. 16, a flow chart of a self-timed write sequence according to one embodiment of the present invention is described. The sequence includes loading the command register (code 40H), receiving an active command and a row address, and receiving a write command and a column address. The sequence then provides for status register polling to determine if the write is complete. The polling monitors status register bit 7 (SR7) to determine if it is set to a 1.
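The error combinations of Table 5 can be folded into a small decode helper. The function, its name and the packing of the three bits into a switch index are illustrative; the strings paraphrase the table rows.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Map the SR5/SR4/SR3 combinations of Table 5 to short descriptions. */
static const char *sr_error_decode(uint8_t sr)
{
    unsigned code = (((sr >> 5) & 1u) << 2)   /* SR5 */
                  | (((sr >> 4) & 1u) << 1)   /* SR4 */
                  |  ((sr >> 3) & 1u);        /* SR3 */
    switch (code) {
    case 0: return "No errors";
    case 2: return "WRITE, BLOCK PROTECT or DEVICE PROTECT error";
    case 3: return "Invalid BLOCK PROTECT or DEVICE PROTECT, RP# not valid (VHH)";
    case 4: return "ERASE or ALL BLOCK UNPROTECT error";
    case 5: return "Invalid ALL BLOCK UNPROTECT, RP# not valid (VHH)";
    case 6: return "Command sequencing error";
    default: return "Undefined combination";
    }
}
```

Since the ISM cannot clear SR3-SR5 itself, a host would typically decode the error, log it, and then issue CLEAR STATUS REGISTER (50H) before the next operation.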
An optional status check can be included. When the write is completed, the array is placed in the array read mode. Referring to FIG. 17, a flow chart of a complete write status-check sequence according to one embodiment of the present invention is provided. The sequence looks at status register bit 4 (SR4) to determine if it is set to a 0. If SR4 is a 1, there was an error in the write operation. The sequence also looks at status register bit 3 (SR3) to determine if it is set to a 0. If SR3 is a 1, there was an invalid write error during the write operation. Referring to FIG. 18, a flow chart of a self-timed block erase sequence according to one embodiment of the present invention is provided. The sequence includes loading the command register (code 20H), and receiving an active command and a row address. The memory then determines if the block is protected. If it is not protected, the memory performs a write operation (D0H) to the block and monitors the status register for completion. An optional status check can be performed, and the memory is placed in an array read mode. If the block is protected, the erase is not allowed unless the RP# signal is at an elevated voltage (VHH). FIG. 19 illustrates a flow chart of a complete block erase status-check sequence according to one embodiment of the present invention. The sequence monitors the status register to determine if a command sequence error occurred (SR4 or SR5=1). If SR3 is set to a 1, an invalid erase or unprotect error occurred. Finally, a block erase or unprotect error occurred if SR5 is set to a 1. FIG. 20 is a flow chart of a block protect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if the block is protected. If it is not protected, the memory performs a write operation (01H) to the block and monitors the status register for completion.
An optional status check can be performed, and the memory is placed in an array read mode. If the block is protected, the operation is not allowed unless the RP# signal is at an elevated voltage (VHH). Referring to FIG. 21, a flow chart of a complete block status-check sequence according to one embodiment of the present invention is provided. The sequence monitors status register bits 3, 4 and 5 to determine if errors were detected. FIG. 22 is a flow chart of a device protect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if RP# is at VHH. The memory performs a write operation (F1H) and monitors the status register for completion. An optional status check can be performed, and the memory is placed in an array read mode. FIG. 23 is a flow chart of a block unprotect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if the memory device is protected. If it is not protected, the memory determines if the boot locations (blocks 0 and 15) are protected. If none of the blocks are protected, the memory performs a write operation (D0H) to the block and monitors the status register for completion. An optional status check can be performed, and the memory is placed in an array read mode. If the device is protected, the unprotect is not allowed unless the RP# signal is at an elevated voltage (VHH). Likewise, if the boot locations are protected, the memory determines if all blocks should be unprotected. FIG. 24 illustrates the timing of an initialize and load mode register operation. The mode register is programmed by providing a load mode register command and providing the operation code (opcode) on the address lines. The opcode is loaded into the mode register.
As explained above, the contents of the non-volatile mode register are automatically loaded into the mode register upon power-up, and the load mode register operation may not be needed. FIG. 25 illustrates the timing of a clock suspend mode operation, and FIG. 26 illustrates the timing of another burst read operation. FIG. 27 illustrates the timing of alternating bank read accesses. Here, ACTIVE commands are needed to change bank addresses. A full page burst read operation is illustrated in FIG. 28. Note that the full page burst does not self-terminate, but requires a terminate command. FIG. 29 illustrates the timing of a read operation using a data mask signal. The DQM signal is used to mask the data output so that Dout m+1 is not provided on the DQ connections. Referring to FIG. 30, the timing of a write operation followed by a read to a different bank is illustrated. In this operation, a write is performed to bank a and a subsequent read is performed to bank b. The same row is accessed in each bank. Referring to FIG. 31, the timing of a write operation followed by a read to the same bank is illustrated. In this operation, a write is performed to bank a and a subsequent read is performed to bank a. A different row is accessed for the read operation, and the memory must wait for the prior write operation to be completed. This is different from the read of FIG. 30, where the read was not delayed by the write operation. Elimination of Precharge Operation An SDRAM and any other type of DRAM use a precharge cycle after reading memory elements. When a DRAM cell is read, the read operation destroys the data stored in the memory cell. This happens because the charge on the capacitor of the cell is shared with a bit line and changes the bit line voltage to slightly higher or lower than a predetermined level, as known to those in the art. That incremental change is then amplified to a data value of 0 or 1 for the memory.
Precharge operations write back the data to the memory cell after it has been sensed. The precharge operation is also used to restore the voltage of the bit lines to a Vcc/2 level and prepare them for the next access cycle. Flash memory cells do not require a precharge cycle to write back the data that was read. That is, Flash is non-volatile, and much higher voltages are required to disturb a memory cell. Further, the present invention uses a first part of a subsequent cycle to shut off the data latches coupled to the bit lines and used for storing the data, and to precharge the bit lines. While this may add a bit more time relative to SDRAM to prepare for a read, the present Flash memory reads data within the timing allowed for SDRAM. This invention improves the throughput of a system using the Flash relative to a system using an SDRAM. In SDRAM, the total cycle to read a burst of data is the total clock cycles required to go from Active to Read, the CAS latency, and the Precharge time before a new row can be opened. Elimination of the precharge cycle removes the third delay component and hence improves the throughput of the interface. The time saved by eliminating the precharge cycles can be used to enter commands to other banks and hence improve bank read concurrency. The present invention, therefore, eliminates a memory Precharge/Refresh operation by incorporating a bit line precharge into the Active command time period. As such, the external time delay associated with SDRAM is eliminated from the current synchronous Flash memory. CONCLUSION A synchronous flash memory has been described that includes an array of non-volatile memory cells. The memory device has a package configuration that is compatible with an SDRAM. In one embodiment, the synchronous memory device comprises an array of memory cells arranged in rows and columns. A clock connection is provided to receive an externally provided clock signal.
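The throughput argument above can be made concrete with a toy calculation. The cycle counts used below are illustrative assumptions, not datasheet numbers; the point is only that dropping the precharge term shortens the row cycle.

```c
#include <assert.h>

/* Total row-cycle overhead per the text's breakdown: Active-to-READ delay
 * plus CAS latency plus precharge. For the synchronous flash described
 * here, the precharge is folded into the ACTIVE period, so that term
 * contributes no extra clocks. */
static unsigned row_overhead(unsigned active_to_read, unsigned cas_latency,
                             unsigned precharge)
{
    return active_to_read + cas_latency + precharge;
}
```

With assumed values of 3 clocks for each term, an SDRAM row cycle carries 9 clocks of overhead while the flash carries 6, and the saved clocks are available for issuing commands to other banks.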
The memory does not require a precharge time period during a time period between the first and second externally provided active commands. |
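As a rough sketch of the timing arithmetic described above, the following compares the externally visible burst-read cycle count with and without a separate precharge period. The specific cycle counts (tRCD, CAS latency, burst length, tRP) are illustrative assumptions, not values taken from the described device.

```python
# Illustrative comparison of burst-read overhead with and without a
# separate precharge period. All cycle counts below are assumptions
# chosen for illustration; real values depend on the device and clock.

def burst_read_cycles(trcd: int, cas_latency: int, burst_length: int,
                      precharge: int) -> int:
    """Total clock cycles from the Active command until a new row can open.

    trcd         -- cycles from Active to Read (tRCD)
    cas_latency  -- cycles from Read to first data out
    burst_length -- cycles of data output
    precharge    -- cycles before a new row can be opened (0 if none)
    """
    return trcd + cas_latency + burst_length + precharge

# Hypothetical SDRAM-like timing: tRCD=3, CAS latency=3, burst of 8, tRP=3.
sdram = burst_read_cycles(3, 3, 8, precharge=3)

# The described flash folds bit-line precharge into the Active command
# period, so no separate precharge delay is seen externally.
flash = burst_read_cycles(3, 3, 8, precharge=0)

print(sdram, flash)
```

Under these assumed numbers, removing the external precharge period shortens each burst-read cycle, which is the throughput gain the description attributes to incorporating bit-line precharge into the Active command time.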
In described examples of a method (800) of testing a semiconductor wafer including a scribe line and multiple dies, the method (800) includes implementing a first landing pad on the scribe line (802), and implementing a first interconnect on the scribe line and between the first landing pad and a first cluster of the plurality of dies (804), thereby coupling the first landing pad to the first cluster of dies. The method (800) further includes performing the testing of the first cluster of dies using automated test equipment (ATE) coupled to a probe tip by contacting the first landing pad with the probe tip and applying an ATE resource to the first cluster of dies (806). |
CLAIMS
What is claimed is:
1. A method of testing a semiconductor wafer comprising a scribe line and a plurality of dies, the method comprising: implementing a first landing pad on the scribe line; implementing a first interconnect on the scribe line and between the first landing pad and a first cluster of the plurality of dies, thereby coupling the first landing pad to the first cluster of dies; and performing the testing of the first cluster of dies using automated test equipment (ATE) coupled to a probe tip by: contacting the first landing pad with the probe tip; and applying an ATE resource to the first cluster of dies.
2. The method of claim 1 further comprising implementing one or more die-to-die links on the scribe line.
3. The method of claim 2 further comprising using a first one of the plurality of dies to perform a test on a second one of the plurality of dies using the die-to-die link implemented on the scribe line.
4. The method of claim 1 further comprising: implementing a second landing pad on the scribe line; implementing a second interconnect on the scribe line and between the second landing pad and a second cluster of the plurality of dies, thereby coupling the second landing pad to the second cluster of dies; and performing concurrent testing of the first and second clusters by: contacting each of the first and second landing pads with probe tips coupled to the ATE; and applying the ATE resource to the first and second clusters of dies.
5. The method of claim 4 further comprising: implementing additional landing pads and interconnects on the scribe line such that each of the plurality of dies of the wafer is coupled to one of the landing pads; and testing all of the plurality of dies of the wafer concurrently.
6. The method of claim 1 wherein performing the testing further comprises observing a response from the first cluster of dies as a result of the application of the ATE resource.
7.
The method of claim 1 wherein a result of the testing indicates a presence of a bad die in the first cluster, the method further comprising isolating the bad die from the other dies in the cluster.
8. The method of claim 7 wherein the bad die is isolated from the other dies in the cluster by operating a programmable switch.
9. The method of claim 7 further comprising discarding the first cluster as a result of the presence of the bad die in the first cluster.
10. A system for testing a plurality of semiconductor dies, the system comprising: a semiconductor wafer comprising: a scribe line between at least some of the plurality of semiconductor dies; a first landing pad on the scribe line; and a first interconnect on the scribe line and between the first landing pad and a first cluster of the plurality of dies, the first interconnect configured to couple the first landing pad to a first cluster of the dies; and a probe tip coupled to automated test equipment (ATE) and configured to: contact the first landing pad; and apply an ATE resource to the first cluster of dies to test the first cluster of dies.
11. The system of claim 10 wherein the scribe line comprises one or more die-to-die links.
12. The system of claim 11 wherein a first one of the plurality of dies is configured to perform a test on a second one of the plurality of dies using the die-to-die link implemented on the scribe line.
13. The system of claim 10 further comprising: a second landing pad on the scribe line; a second interconnect on the scribe line and between the second landing pad and a second cluster of the plurality of dies, the second interconnect configured to couple the second landing pad to the second cluster of dies; and a second probe tip coupled to the ATE configured to contact the second landing pad and apply the ATE resource to the second cluster of dies to concurrently test the first and second clusters of dies.
14.
The system of claim 13 further comprising: additional landing pads and interconnects on the scribe line such that each of the plurality of dies of the wafer is coupled to one of the landing pads; and a plurality of probe tips coupled to the ATE, each of the probe tips configured to contact one of the landing pads and apply the ATE resource to each of the plurality of dies to concurrently test all of the plurality of dies of the wafer.
15. The system of claim 10 wherein the probe tip is further configured to receive a response from the first cluster of dies as a result of the application of the ATE resource.
16. The system of claim 10 wherein a result of the test indicates a presence of a bad die in the first cluster, the probe tip configured to isolate the bad die from the other dies in the cluster.
17. The system of claim 16 wherein the semiconductor wafer further comprises one or more switches that, when caused to open, isolate the bad die from the other dies in the first cluster.
18. A method of testing first and second devices under test using automated test equipment (ATE), the method comprising: performing concurrent testing of the first and second devices under test by: contacting each of the first and second devices under test with probe tips coupled to the ATE; and applying a first ATE resource to the first device under test while applying a second ATE resource to the second device under test; wherein the ATE lacks a sufficient amount of one of the resources to apply that resource to the first and second devices under test concurrently.
19. The method of claim 18 further comprising generating a pin-muxing preamble for one of the devices under test to configure that device under test for the ATE resource that is to be applied to that device under test.
20. The method of claim 18 wherein the devices under test each comprise one or more dies on a wafer.
21. The method of claim 18 wherein the devices under test each comprise one or more packaged parts.
22.
The method of claim 18 wherein the devices under test each comprise a cluster of dies on a wafer, the cluster of dies coupled to a landing pad implemented in a scribe line of the wafer.
23. A system for testing first and second devices under test, the system comprising: a plurality of probe tips coupled to automated test equipment (ATE) and configured to: contact each of the first and second devices under test; and apply a first ATE resource to the first device under test while applying a second ATE resource to the second device under test; wherein the ATE lacks a sufficient amount of one of the resources to apply that resource to the first and second devices under test concurrently.
24. The system of claim 23 wherein the probe tip is configured to apply a pin-muxing preamble to one of the devices under test to configure that device under test for the ATE resource that is to be applied to that device under test.
25. The system of claim 23 wherein the devices under test each comprise one or more dies on a wafer.
26. The system of claim 23 wherein the devices under test each comprise a packaged part.
27. The system of claim 23 wherein the devices under test each comprise a cluster of dies on a wafer, the cluster of dies coupled to a landing pad implemented in a scribe line of the wafer.
SYSTEMS AND METHODS OF TESTING MULTIPLE DIES
BACKGROUND
[0001] Automated test equipment (ATE) includes multiple resources (e.g., analog resources and digital resources for test and measurement) that are applied to a device under test (DUT), such as a die or dies on a semiconductor wafer. The resources are applied through an interface including one or more probe heads, where each probe head includes multiple probe tips to provide an electrical contact to landing pads on the DUT.
[0002] Conventional multi-site testing throughput is limited because the total set of one type of ATE resource may be limited to N, where M resources of that type are required to test a die, resulting in the maximum number of dies that may be tested in parallel during each touch-down of the wafer probe being N/M. Further, N/M is an ideal maximum multi-site capability. In practice, the probe card that controls routing to the various probe heads and tips may further constrain the routing density and reduce the number of possible connections between ATE resources and multiple dies, thereby reducing the attainable multi-site factor.
[0003] In addition to the limitations imposed on the multi-site factor by physical constraints, such as available ATE resources and the design of the probe card, heads, and tips, conventional ATE testing is performed by mapping resources from the ATE onto individual dies on a wafer (or individual packaged parts in final test, where the dies/packaged parts are similarly referred to as DUTs), where all DUTs are tested identically.
[0004] Accordingly, a given test executes on all N/M DUTs where, as above, N/M is the multi-site factor (assuming no additional probe card routing constraints). However, the ATE does not necessarily include an equal number of each type of resource. As a result, the multi-site factor is determined by the resource that is least available from the ATE (i.e., the maximum count of the most constrained resource).
Examples of such resource limitations include the number of analog channels, number of data logging channels, number of high speed interface channels, and number of clock channels. Although higher parallelism is available for ATE resources greater in number, overall test throughput is impeded by the ATE resources that are lower in number, which results in a longer testing time.
SUMMARY
[0005] In described examples of a method of testing a semiconductor wafer including a scribe line and multiple dies, the method includes implementing a first landing pad on the scribe line, and implementing a first interconnect on the scribe line and between the first landing pad and a first cluster of the plurality of dies, thereby coupling the first landing pad to the first cluster of dies. The method further includes performing the testing of the first cluster of dies using automated test equipment (ATE) coupled to a probe tip by contacting the first landing pad with the probe tip and applying an ATE resource to the first cluster of dies.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 shows a schematic of die clusters on a wafer in accordance with example embodiments.
[0007] FIGS. 2a-2b show a schematic of landing pads implemented in a scribe line on a wafer in accordance with example embodiments.
[0008] FIG. 3 shows an example probe head configuration in accordance with example embodiments.
[0009] FIGS. 4a-4c show example switch configurations for connecting to or isolating dies on a wafer in accordance with example embodiments.
[0010] FIG. 5 shows example test flows in accordance with example embodiments.
[0011] FIG. 6 shows conventional test applications and example multi-content test applications in accordance with example embodiments.
[0012] FIG. 7 shows example probe head configurations for carrying out multi-content test applications in accordance with example embodiments.
[0013] FIG. 8 shows a flow chart of a method in accordance with example embodiments.
[0014] FIG.
9 shows a flow chart of another method in accordance with example embodiments.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0015] In this description, the term "couple" or "couples" means either an indirect or direct wired or wireless connection. For example, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
[0016] To address the above problems, example embodiments are directed to systems and methods for testing multiple dies on a semiconductor wafer. For example, a scribe line refers to the space between dies on a wafer where a saw can safely cut the wafer without damaging the dies or circuits implemented on the wafer. Conventionally, the scribe line is a non-functional spacing that merely serves to ensure that a saw (e.g., a mechanical saw, a laser-based saw, or other known device for separating dies on a wafer) is able to effectively cut between the dies or circuits.
[0017] However, in accordance with example embodiments, a landing pad and an interconnect coupled to the landing pad are implemented in a scribe line of a wafer. The interconnect couples the landing pad to a cluster of dies on the wafer. A tip of a probe head contacts the landing pad during testing to provide an electrical connection between the probe head and the cluster of dies. Subsequently, the cluster of dies is tested using automated test equipment (ATE) that includes multiple resources as described above, which are applied to the cluster of dies (referred to collectively as a device under test (DUT)) via the landing pads. In this way, the scribe line is used to create landing pads and interconnects that allow a single probe tip to fan out to, or electrically contact, a cluster of dies rather than a single die.
This results in an increase in the attainable multi-site factor, depending on how many dies an interconnect couples a single landing pad to.
[0018] For example, a multi-site factor is conventionally given by N/M as described above (where N is the number of a particular available ATE resource and M is the number of resources of that type required to test a die). However, example embodiments increase the multi-site factor by a factor of L, where L is the number of dies in the cluster contacted by the scribe line-implemented interconnect and the landing pad coupled thereto. So, in a case where a landing pad is implemented in the scribe line and coupled to a die cluster of size 4 through an interconnect also implemented in the scribe line, the multi-site factor is boosted to 4*N/M, which results in an increase in test throughput and a corresponding decrease in the time required to test all the dies on a wafer of a given size.
[0019] Example embodiments should not necessarily be limited to implementing only a single landing pad and interconnect on a scribe line of a wafer. Instead, multiple landing pads and interconnects may be implemented on scribe lines between dies, such as by employing tunneling to provide scribe-to-die connectivity across different layers. In these examples, multiple probe tips may each contact a different landing pad in the scribe line during a single touch-down, improving the fan out of ATE resources across the wafer. In fact, in some examples, all of the dies contained on a single wafer may be coupled to landing pads accessible by the probe head in a single touch-down, which may permit a test to be concurrently performed on all the dies of a wafer.
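The multi-site arithmetic described above can be sketched in a few lines. The function name and the sample numbers are illustrative only; N, M, and L carry the meanings given in the text.

```python
# Sketch of the multi-site factor arithmetic: N is the available count of
# the most constrained ATE resource type, M is the count of that resource
# needed per die, and L is the number of dies per cluster reached through
# a scribe-line landing pad. Names and numbers are illustrative only.

def multi_site_factor(n_resources: int, m_per_die: int,
                      cluster_size: int = 1) -> int:
    """Dies testable in one touch-down, ignoring probe-card routing limits."""
    return cluster_size * (n_resources // m_per_die)

# Conventional probing: one pad per die, so the factor is N/M.
assert multi_site_factor(8, 2) == 4

# With a scribe-line pad fanned out to a cluster of 4 dies (as in the
# FIG. 2b example), the same resources reach L times as many dies.
assert multi_site_factor(8, 2, cluster_size=4) == 16
```

This mirrors the text's example: a cluster of size 4 boosts the factor from N/M to 4*N/M.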
The number of dies of a wafer that are testable during a single touch-down may vary in practice, such as based on restrictions of available ATE resources, the location of probe heads mounted on the probe card, and the size of die clusters.
[0020] In addition to interconnects between a landing pad and a cluster or plurality of dies, the scribe line may also include one or more die-to-die connections. These die-to-die connections allow one die to test another die. In these examples, dies may be classified as a master (i.e., the die applying the test) or a slave (i.e., the die being tested) and as functional (i.e., enabling testing using functions implemented on the die) or sacrificial (i.e., enabling testing using test-only functions). Sacrificial dies can additionally provide landing pads, improved routing between dies, and other embedded design for testability (DFT) elements to aid in test application and response measurement (e.g., a voltage regulator for providing a reference voltage, current measurement using current mirrors and resistors, and built-in self-test (BIST) controllers). Further, in some examples, DFT structures such as measurement units (e.g., resistor dividers, low-cost analog-to-digital converters (ADCs), and flash BIST controllers) may be implemented in the scribe line, enabling measurements on individual dies to be performed locally instead of being made using ATE resources.
[0021] In some example embodiments, one or more switches may be implemented on signal and/or power connections, either in a scribe line or within a die itself.
These switches permit selective connectivity between components (e.g., landing pads, DFT structures, and the dies themselves), which may assist in the isolation of a die or dies identified as being faulty during an earlier testing procedure.
[0022] Accordingly, example embodiments allow for wafer probing with an improved multi-site factor, within the restrictions of available ATE resources, by enabling probe heads mounted on the probe card to cover or "fan out" to an increased number of dies or die clusters. In some examples, in fact, all dies on a wafer may be tested with a single touch-down event, due to implementing interconnects and landing pads within the scribe line.
[0023] Referring to FIG. 1, schematic examples of contacting dies on a wafer are shown. In the first example 100, a prober 102 (generally referring to the combination of one or more probe tips on a probe head) is driven by an ATE 104. The prober 102 is an electromechanical assembly that transmits electrical signals from the ATE 104 to an electrical contact on the wafer. In the example 100, the mechanical prober 102, driven by the ATE 104, is configured to test a grouping of nine dies 106 in a single touch-down event. Achieving a high degree of parallelism in testing dies 106 is limited by the fact that the ATE 104 has a finite number of electrical resources (or pins).
[0024] To illustrate this limit, in the second example 150, four separate probers 152 are each driven by a dedicated ATE 154, and thus a grouping of 36 dies may be tested in a single touch-down event. However, this requires four times the resources of the first example 100 (or four ATEs 154) to implement, which is not ideal. Alternatively, one ATE 154 may drive four probers 152, although ATE 154 resources may be constrained and thus not all 36 dies may be tested at once. Further, it may be desirable to reduce the number of probers 152 as well.
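The cost of limited parallelism in the examples above can be expressed as a touch-down count. The wafer die count below is a hypothetical number chosen for illustration; the per-touch-down capacities mirror examples 100 and 150.

```python
import math

# Rough count of touch-down events needed to probe a wafer, assuming each
# touch-down tests a fixed number of dies. The 9- and 36-die capacities
# mirror the illustrative examples 100 and 150 above; the wafer die count
# is a hypothetical assumption, not a device specification.

def touchdowns_needed(total_dies: int, dies_per_touchdown: int) -> int:
    """Touch-down events to cover the whole wafer (last one may be partial)."""
    return math.ceil(total_dies / dies_per_touchdown)

wafer_dies = 900  # hypothetical wafer

single_prober = touchdowns_needed(wafer_dies, 9)    # example 100
four_probers = touchdowns_needed(wafer_dies, 36)    # example 150

print(single_prober, four_probers)
```

Quadrupling the per-touch-down capacity cuts the touch-down count by four, but only by quadrupling the prober/ATE resources, which motivates the scribe-line fan-out approach instead.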
The probers 102, 152 may include multiple probe heads, each probe head in turn including multiple probe tips. However, the examples 100, 150 are for illustrative purposes to demonstrate the limitations when attempting to expand the parallelism of testing in conventional scenarios.
[0025] Conventionally, landing pads are located on each of the various dies of a wafer (i.e., the landing pads are the pads of the particular die). To address this limitation, FIG. 2a illustrates a semiconductor wafer 200 in accordance with example embodiments, in which landing pads 202 and interconnects 204 are positioned in one or more scribe lines 206 between dies 208 of the wafer 200. The view 200a depicts landing pads 202 coupled to test routes or interconnects 204 in portions of the scribe line area 206. The view 200b depicts the remaining free scribe area 210, along with the landing pads 202 for reference. The view 200c depicts the merging of the landing pads 202, the interconnects 204, and the free scribe area 210.
[0026] FIG. 2b illustrates the landing pads 202 and the interconnects 204 in the scribe lines 206 of FIG. 2a in greater detail. For example, multiple landing pads 202a-d are shown in the scribe line 206. In this example discussion, it may be assumed that each pad 202a-d corresponds to a different ATE resource needed to test a die 208. Each landing pad 202a-d is coupled to a corresponding interconnect 204a-d, respectively, which provides routing to the dies 208. As shown, the dies 208 also include pads 220, which would conventionally be used to make electrical contact with the dies 208.
[0027] However, as described above, contacting the pads 220 limits the amount of parallelism in testing dies 208 that may be achieved. By contrast, example embodiments improve the parallelism in testing the dies 208 by utilizing (in the shown example of FIG. 2b) one pad 202a-d to contact four different dies 208.
Thus, in one example where a prober 102 contains only four pins, each of a different, required ATE resource type, the example shown in FIG. 2b allows four dies 208 to be tested in a single touch-down event. Conventional wafers and testing systems and methods would require four separate touch-down events, one to test each of the dies 208 shown.
[0028] FIG. 2b is an example, and the scope of example embodiments is not limited to a 2x2 tile arrangement (nor a 7x7 tile arrangement as shown in the expanded view 250). Instead, FIG. 2b illustrates the improvements to parallelism in testing the dies 208 enabled by example embodiments in which landing pads 202 and interconnects 204 are positioned in the scribe line 206 to "fan out" to a number of dies 208 in the area. The number of landing pads 202 and interconnects 204 able to be positioned in the scribe line 206 may vary based on the width of the scribe line 206 and the width of the interconnects 204 (e.g., signal lines may be narrower, while interconnects 204 carrying a higher current may be thicker). Also, tunneling may be employed to enable scribe-to-die connectivity across different layers of the scribe line 206, as shown by the four interconnects 204a-d in FIG. 2b.
[0029] FIG. 3 illustrates a distributed probe head configuration relative to a single probe head configuration. For example, FIG. 3 shows two example probe heads 300, 350 in accordance with example embodiments. The probe head 300 contains a single prober 302 and pogo connectors 303 (or other spring-loaded devices to allow for compliance when contacting a wafer). The prober 302 includes a plurality of pins 305 that provide electrical connectivity between ATE resources 304 and a pad on the wafer when in contact. The probe head 350 includes similar pogo connectors 353 and several probers 352, which can thus land on a wafer in multiple locations.
In some examples, depending on the probers 302, 352, the probe head 300, 350 configuration, how the wafer is laid out, how much current is required to test each die on the wafer, and how many pads are required to test each die, the entire wafer may be tested in a single touch-down event because the probers 302, 352 and associated pins (e.g., pins 305) may be "fanned out" across multiple dies as described above.
[0030] FIG. 4a illustrates an example switch configuration or topology for connecting to and/or isolating individual (or clusters of) dies. FIG. 4a shows a scribe line 206 between two dies 208, similar to those described above with respect to FIG. 2a. Further, the scribe line contains landing pads 202, depicted as rectangular elements. Also, the dies 208 themselves contain landing pads 402, which may be used before or after the scribe line 206 has been cut. As shown, the landing pads 202 in the scribe line 206 may couple to one or more dies 208 as described above. The scribe line 206 also includes internal connections 404 that enable connectivity between the scribe 206 and the die 208, but are not landing pads 202. The internal connection 404 may comprise a logic element, such as a resistor divider or design for testability (DFT) element, which may be used to drive one or more of the surrounding dies 208. Alternatively, the internal connection 404 may be simpler and merely present a metal-to-metal connection between different layers of the scribe line 206 (e.g., between different interconnects, described above with respect to FIG. 2a).
[0031] FIG. 4b demonstrates that pad landings may occur on either the landing pad 202 on the scribe line 206 or a landing pad 402 on the die itself. For example, the landing pad 202 may be used during a wafer test, whereas (for example) the landing pad 402 may be used after the pad of the die 208 is bonded to a wire to be placed into a packaged part.
Further, a switch 410 may allow for isolation of this particular die 208 from the landing pad 202 in the event that the particular die 208 fails a test, but the remaining dies 208 in its cluster are still to be tested.
[0032] FIG. 4c demonstrates a landing pad 402 only on a die, which is coupled to an internal connection 404 by a switch 412. Inclusion of the landing pad 402 on the die is beneficial so that the die may be tested even after the scribe line is sawed through. As described above, the internal connection 404 may comprise a logic element or may simply provide a connection to another layer of the scribe line 206, such as to couple to another landing pad or die. Thus, even in the example of FIG. 4c, although a landing pad 202 itself is not implemented in the scribe line 206, testing of the die 208 and surrounding dies may be improved through their access to a common logic element 404 in the scribe line 206, which would not otherwise exist. For example, either of the switches 410, 412 may be operated so that, after a die 208 is identified as bad or faulty, that die may be isolated from power and input signals or, in the case of switch 412, isolated from access to the logic resource or connection 404.
[0033] FIG. 5 illustrates various example wafer probe test flows in accordance with example embodiments. At the outset, the "phases" corresponding to columns of the table do not necessarily relate to the time taken to perform the phase. Instead, the phases are intended to show the possible steps that may be taken during a particular test flow.
[0034] With respect to all test flows, a phase 0 scribe characterization test is performed, which is an electrical test that occurs before actual testing of the individual dies on the wafer. Accordingly, upon manufacture of the wafer, a so-called wafer health flow is performed to ensure that the wafer as a whole meets basic requirements. Subsequent to this phase 0 step, the various test flows may differ.
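The switch-based isolation of FIGS. 4b-4c can be modeled as a small bookkeeping structure: a cluster shares one scribe-line landing pad, each die sits behind a programmable switch, and opening a switch removes a failed die while leaving the rest of the cluster testable. The class and method names below are illustrative only, not part of the described embodiments.

```python
# Minimal sketch of switch-based die isolation (FIGS. 4b-4c). A cluster of
# dies shares a scribe-line landing pad; each die is connected through a
# programmable switch (e.g., switch 410). Opening a die's switch isolates
# it from power and input signals. All names here are hypothetical.

class Cluster:
    def __init__(self, die_ids):
        # switch closed (True) means the die is connected to the pad
        self.switches = {die: True for die in die_ids}

    def isolate(self, die):
        """Open the switch for a die identified as bad during testing."""
        self.switches[die] = False

    def connected_dies(self):
        """Dies still reachable from the shared landing pad."""
        return [d for d, closed in self.switches.items() if closed]

cluster = Cluster(["d0", "d1", "d2", "d3"])
cluster.isolate("d2")  # d2 failed an earlier test
print(cluster.connected_dies())
```

After isolating the bad die, a subsequent touch-down on the same landing pad exercises only the remaining connected dies, matching the behavior claimed for the programmable switches.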
[0035] For example, in the conventional test flow, the first phase includes probing of the dies on the wafer in a conventional manner, such as contacting landing pads on the dies with the probe head and applying ATE resources to those contacted dies. This process may be repeated a number of times until all the dies on the wafer have been suitably tested. Then, the second phase comprises a final test, which may include a test of a die integrated as a packaged part.
[0036] However, in accordance with example embodiments, test flows 1 and 2 begin with a test of the scribe itself. For example, because example embodiments utilize the scribe for connectivity (a landing pad fanned out to multiple dies by various interconnects and/or other internal connections such as logic elements), the scribe itself is tested before beginning testing of any of the associated cluster of dies. Both of test flows 1 and 2 then proceed to test both the scribe and dies, such as using the fanned out scribe-implemented landing pads described above. The fanning out of the scribe-implemented landing pads permits a higher multi-site factor than would otherwise be achievable within the context of a single touch-down event.
[0037] Test flow 1 continues in phase 3 to test the dies themselves (again, using the fanned out scribe-implemented landing pads in some examples), while test flow 2 includes an optional step of testing a die-to-die connection; test flow 2 also then proceeds to test the dies themselves. After the dies have been tested, both test flows 1 and 2 may proceed to testing dies in the context of being a packaged part.
[0038] The above-described examples improve upon the attainable multi-site factor by fanning out a single landing pad to a plurality of dies, increasing the number of dies reachable in a single touch-down event. In some cases, the method used for testing dies includes mapping ATE resources onto individual dies or DUTs on the wafer or DUTs as packaged parts in a final test.
However, it may be that all DUTs are tested identically; that is, the same given test executes on all the DUTs being tested at a given time based on the multi-site factor of the particular testing system, which may waste some resources. For example, resources required exclusively for "Test B" are unutilized while "Test A" is performed. As a result, the overall system multi-site factor is determined by the ATE resource that is least available (i.e., the maximum count of the most constrained ATE resource). Examples of ATE resources include analog channels, digital channels, data logging channels, high-speed interface channels, and clock channels. Commonly, an equivalent number of each resource is not available, which leads to the multi-site factor constraint described above.
[0039] Thus, although higher parallelism is available for ATE resources that are greater in number, the overall test throughput (which requires multiple resources to be applied) may still be somewhat impeded by the constrained number of ATE resources. Accordingly, for constrained ATE resources, the test time increases as tests across multiple DUTs are performed serially. However, in developing the example embodiments, it was determined that a typical maximum ATE resource utilization is approximately 70%. Accordingly, across different applications of tests in a test schedule that utilize a varying set of ATE resources for different tests in the schedule, only approximately 70% of the ATE resources are used on average across the time duration required to run all the tests in the test schedule.
[0040] To address these issues, some example embodiments may test multiple DUTs (e.g., dies on a wafer or packaged parts) using available ATE resources (or resources from similar test processing equipment) with different tests being applied to different DUTs at the same time.
For example, different DUTs are tested using different tests (or different ATE resources) at the same time, which results in a summing of the ATE resources that are able to be applied at one time. Due to physical constraints (e.g., the arrangement of probe heads, probe tips, and landing pads), the number of different ATE resources applied to DUTs at the same time may not be the entire sum of the numbers of those ATE resources. For example, if 70 resources of type A and 30 resources of type B are available, physical constraints involved in contacting the wafer or DUTs may result in all 70 resources of type A being applied to certain DUTs, while only 20 resources of type B are applied to other DUTs. Regardless, more resources are applied during a single touch-down event than in conventional testing using a single set of resources to apply one test per touch-down event.
[0041] For example, multiple DUTs are tested concurrently, although different tests may be applied to different ones of the DUTs at the same time. In the context of testing dies on a wafer, ATE resources are mapped onto different pads on the wafer using various probe head configurations mounted on a probe card. Similarly, in the context of testing packaged parts, ATE resources are mapped onto different packaged device pins on an ATE load board using various routing and relay configurations. The ATE hardware may be configured to allow dynamic allocation of resources connected to various ATE channels to different DUT pads or pins. The particular allocation of ATE resources to the ATE channels is controlled by an internal ATE test program. Thus, example embodiments improve ATE resource utilization by scheduling tests concurrently that leverage otherwise-unused ATE resources. As a result of concurrent testing, overall test throughput is increased while idle ATE resources for a given touch-down event are reduced.
The disclosed examples may apply to both ATE-based testing (in which probe cards are used to test dies on a wafer) and board-based testing, in which a load board is used to test packaged dies.[0042] In some example embodiments, a plurality of dies or DUTs are grouped together to form a "cluster," where each DUT is part of only one cluster. DUTs within a cluster are tested concurrently, and may be tested with different test content (or different applied ATE resources). However, all clusters on the wafer may be identical and tested concurrently. Thus, a cluster may be viewed as a DUT itself, composed of sub-modules each of which is also a DUT (e.g., an individual die). As a result, the ATE interfaces with the clusters, which make up the wafer, all of the clusters being identical and tested with identical content.[0043] The described examples overcome conventional bottlenecks in test time, test throughput, attainable multi-site factor (and thus test concurrency), and resource utilization in various scenarios. For example, where constraints exist because each DUT offers only a limited number of pins for testing, example embodiments may be leveraged in at least one of two ways. First, the number of DUT pins available for test may be increased. Accordingly, the test-mode pin-muxing is relaxed, with different pins contacted for different tests. However, this may complicate the probe head/load board relay designs in order to allocate different ATE resources to different pins of the DUT. Second, the same set of DUT pins may be utilized for the application of different tests, where each test may require application of a dedicated preamble to internally assign the pins to the relevant module inside the DUT being tested (i.e., apply a pin-muxing preamble). In both cases, the ATE throughput increases because different ATE resources are used to test different DUTs with different test content.[0044] Referring again to FIG.
6, for simplicity, each DUT may require two tests, referred to as test A and test B. For test A to be performed on a DUT, two ATE resources of type A are required. Similarly, for test B to be performed on a DUT, two ATE resources of type B are required. The example ATE utilized in FIG. 6 is capable of supplying four resources of type A and two resources of type B. Again, for simplicity, each ATE resource connects to one DUT pin, so 4 pins of resource type A are available from the ATE, and 2 pins of resource type B are available from the ATE.[0045] The example 602 illustrates a conventional test flow, in which only one DUT is capable of being tested with resource B at a time, because only two resources of type B are available and test B requires two resources of type B. Thus, DUT1 testing is complete in 2 cycles, DUT2 testing is complete in 2 cycles, and DUT3 testing is complete in 2 cycles.[0046] The example 604 illustrates another conventional test flow in which test A is applied to two dies simultaneously (i.e., DUT1 and DUT2 in cycle 1 and DUT3 and DUT4 in cycle 4). This is enabled by the fact that four ATE resources of type A are available, but test A only requires two resources of type A. However, because the number of resources of type B is constrained, the application of test B to the DUTs must occur serially.[0047] In accordance with example embodiments, examples 606 and 608 improve upon the conventional test flows 602 and 604 by applying dissimilar test content to different DUTs concurrently. In the example implementation 606 during cycle 1, test A is applied to DUT1 and test B is applied to DUT2. Continuing on, during cycle 2, test A is applied to DUT3 and test B is applied to DUT1, while in cycle 3, test A is applied to DUT2 and test B is applied to DUT3. Thus, in the first three cycles, tests A and B have been applied to three DUTs. Cycles 4-6 are similarly used to apply tests A and B to DUTs 4-6.
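The two flows can be checked mechanically. The sketch below encodes the resource budget stated in the text (four type-A and two type-B resources, each test application consuming two resources of its type) and verifies that both the serial flow 602 and the interleaved flow 606 respect it, while 606 finishes three DUTs in half the cycles:

```python
# Resource budget and per-application cost, per the FIG. 6 description.
AVAIL = {"A": 4, "B": 2}
NEED = {"A": 2, "B": 2}  # resources one application of each test consumes

# Each cycle is a list of (dut, test) applications performed concurrently.
serial_602 = [[(1, "A")], [(1, "B")], [(2, "A")], [(2, "B")], [(3, "A")], [(3, "B")]]
interleaved_606 = [[(1, "A"), (2, "B")], [(3, "A"), (1, "B")], [(2, "A"), (3, "B")]]

def valid(schedule):
    """No cycle may demand more of a resource type than the ATE supplies."""
    for cycle in schedule:
        used = {}
        for _, test in cycle:
            used[test] = used.get(test, 0) + NEED[test]
        if any(used.get(r, 0) > AVAIL[r] for r in AVAIL):
            return False
    return True

assert valid(serial_602) and valid(interleaved_606)
print(len(serial_602), len(interleaved_606))  # 6 cycles vs. 3 cycles for three DUTs
```

Applying test B to two DUTs in the same cycle would demand four type-B resources and fail the check, which is exactly the constraint that forces serialization in flows 602 and 604.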
In the example 606, the same test is never concurrently applied to multiple DUTs (not even test A, the only test capable of concurrent application to more than one DUT). Importantly, however, different types of tests (i.e., test A and test B) are concurrently applied to different DUTs in the same cycle, resulting in the improvement of testing efficiency over conventional examples 602 and 604.[0048] Example 608 presents similar benefits relative to example 606. However, the test flow illustrated by the example implementation 608 demonstrates additional benefits that may be achieved by example embodiments of multi-content testing. For example, because four resources of type A are available and applying test A only requires two of them, during cycle 1, test A is applied to both DUT1 and DUT3 while test B is applied to DUT2. Then, during cycle 2, test A is applied to DUT2 (completing DUT2's test process) and test B is applied to DUT1 (completing DUT1's test process). Finally, during cycle 3, test B is applied to DUT3. Cycles 4-6 are similarly utilized for DUTs 4-6. Thus, as was the case with example 606 described above, in the first three cycles, tests A and B have been applied to three DUTs.[0049] However, as illustrated in example 608, resource A is idle for effectively three cycles' worth of time (i.e., over the three cycles required to test three DUTs). Certain real-world examples may take advantage of this, such as where a DUT requires an additional test A' that also utilizes resources of type A, or where test A requires additional time to complete relative to test B (e.g., double the time of test B in example 608).[0050] FIG. 7 shows various configurations of probe heads, having varying numbers of probe tips, that may be used to perform the multi-content test in accordance with example embodiments. The example configuration 706a shows one way that the test of example 606 above may be performed using a probe head having 12 probe tips.
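The idle-time observation for example 608 reduces to simple arithmetic. Per the text, the ATE supplies four type-A resources and each application of test A consumes two; the per-cycle usage below follows the three-cycle flow of example 608 (two applications of test A in cycle 1, one in cycle 2, none in cycle 3):

```python
# Idle type-A capacity across the first three cycles of example 608.
AVAIL_A = 4          # type-A resources the ATE supplies
usage_a = [4, 2, 0]  # type-A resources in use in cycles 1, 2, 3

idle_resource_cycles = sum(AVAIL_A - u for u in usage_a)
# Expressed in units of one test-A application (2 resources per cycle):
idle_in_test_a_cycles = idle_resource_cycles / 2
print(idle_in_test_a_cycles)  # 3.0 -- "three cycles' worth" of idle type-A capacity
```

This is the slack the text suggests exploiting, for example with an additional test A' or with a longer-running test A.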
For example, each DUT has four probe tips dedicated to that die: two to deliver resources of type A and two to deliver resources of type B.[0051] The configuration 706b represents some examples in which the size of the probe head can be reduced, in this case to a probe head having six probe tips. Each pair of probe tips is dedicated to a DUT, and thus resources of type A and type B are mapped to the same pins of that particular DUT. In this example 706b, a pin-muxing preamble is applied (notated by a double boundary between cycles) to inform the DUT that the ATE is about to switch from application of resources of type A to application of resources of type B (or vice versa in the case of DUT2). This improves the efficiency because in 706a, eight pins are unused every cycle, whereas in 706b, only two pins are unused every cycle.[0052] The configuration 708 represents some examples in which the probe head may be somewhat reduced in size, but rather than utilizing a pin-muxing preamble as in the example 706b, the probe head itself may be moved between cycles to contact different pins of the DUTs. This reduction in the number of probe pins is achieved by assigning probe pins that would be idle during the testing of one die to the testing of a neighboring die. This is feasible when the topology of the pins within a die has a pattern similar to the topology of the pins across two neighboring dies.[0053] Referring again to FIG. 8, a method 800 is shown in accordance with example embodiments. The method 800 begins in block 802 with implementing a first landing pad on a scribe line of a semiconductor wafer including a plurality of dies. The method 800 continues in block 804 with implementing a first interconnect on the scribe line, between the landing pad and a cluster of the dies on the wafer.
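The fan-out benefit of the scribe-line interconnect in blocks 802-804 amounts to simple multiplication: one probe tip that lands on a scribe-line pad reaches every die wired to that pad. The sketch below assumes, purely for illustration, one tip per cluster and uniform cluster sizes; the numbers are not taken from the disclosure:

```python
def dies_per_touchdown(tips_in_contact, dies_per_cluster):
    """Dies reached in one touchdown when each contacted tip drives a
    scribe-line landing pad whose interconnect fans out to a full cluster."""
    return tips_in_contact * dies_per_cluster

# With per-die pads, 8 tips reach 8 dies; with 4-die clusters they reach 32.
print(dies_per_touchdown(8, 4))  # prints 32
```

This is the sense in which the landing-pad/interconnect arrangement improves the attainable multi-site factor for a given touchdown event.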
As described above, by implementing such an interconnect between a scribe-based landing pad and a cluster of dies, a single probe tip may be used to apply an ATE resource to multiple dies, because the interconnect fans out from the landing pad to the cluster of dies. This improves the attainable multi-site factor for a given touchdown event. Thus, the method 800 also continues in block 806 with testing the cluster of dies using the ATE by contacting the landing pad with the probe tip and applying an ATE resource through the probe tip to the cluster of dies.[0054] In some cases, the method 800 may continue in block 808 with implementing one or more die-to-die links on the scribe line, which enables one die to test another die via the die-to-die link. For example, a receiver circuit of one die can be used to test a transmitter circuit of a neighboring die, and vice versa. In addition to enabling a wider variety of testing flows, this may also help reduce the dependency on the ATE for transmitter-receiver testing.[0055] In some examples, additional landing pads and interconnects may be implemented on the scribe line to enable testing of additional clusters with other probe tips coupled to the ATE in a single touchdown event. In some other examples, the landing pads, interconnects, and probe tips may be configured in a way that enables all of the dies on the wafer to be tested concurrently in a single touchdown event.[0056] During testing of the one or more clusters, the ATE may monitor a response from the dies resulting from the application of the ATE resource. Further, the testing may indicate the presence of a bad die in one or more of the clusters, which may be isolated (e.g., for further testing or discarding). In some cases, the bad die may be isolated by operating a programmable switch to decouple that die from the rest of the cluster.
However, in other cases, such as where such switches are unavailable or where efficiency or timing constraints require, the cluster containing the bad die may be discarded and testing is performed on other clusters of dies on the wafer.[0057] FIG. 9 shows another method 900 in accordance with example embodiments. The method 900 begins in block 902 with contacting first and second devices under test with probe tips coupled to ATE. Examples of devices under test may include dies on a wafer, clusters of dies on a wafer, and packaged parts. Further, where the device under test comprises a cluster of dies on a wafer, the contact may be made via a landing pad implemented on a scribe line and a coupling interconnect, as described above. In some cases, the devices under test may each comprise a cluster of packaged parts that is coupled to pins on the probe head using routing on an ATE hardware board. The method 900 continues in block 904 with applying a first ATE resource to the first device under test while applying a second ATE resource to the second device under test. As described above, in some cases the ATE lacks a sufficient amount of one of the resources to apply that resource to the first and second devices under test concurrently. Thus, by controlling the ATE such that different resources are applied to different devices under test concurrently, increased test throughput is enabled by utilizing resources that conventionally are wasted. [0058] In some cases, the method 900 further continues in block 906 with generating a pin-muxing preamble for one of the devices under test to configure that device under test for the ATE resource that is to be applied to it.
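The preamble generation of block 906 can be sketched as a small bookkeeping routine: the tester remembers which resource type each device's shared pins are currently mapped to, and emits a pin-muxing preamble only when the next application needs a different mapping. The function name and tuple formats here are illustrative assumptions, not an ATE programming API:

```python
def apply_tests(schedule):
    """Return the operation stream for a list of (dut, test) applications,
    inserting a pin-muxing preamble whenever a DUT's shared pins must be
    re-mapped to a different resource type (cf. block 906 and 706b)."""
    current = {}  # dut -> resource type its pins are currently muxed to
    ops = []
    for dut, test in schedule:
        if current.get(dut) != test:
            ops.append(("preamble", dut, test))  # re-mux the DUT's pins internally
            current[dut] = test
        ops.append(("apply", dut, test))
    return ops

ops = apply_tests([(1, "A"), (1, "B"), (2, "B")])
```

Running consecutive applications of the same test on the same DUT would emit no extra preambles, which is the efficiency point of tracking the current mapping.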
In this way, multiple types of resources may be mapped to the same pin of a device under test, while allowing for configuration of that pin before the application of a different resource type.[0059] In the foregoing discussion, various examples utilize particular probe head designs, probe tip counts, and pin constraints on DUTs.[0060] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
A symmetric varactor structure may include a first varactor component. The first varactor component may include a gate operating as a second plate, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of an area modulating capacitor. In addition, doped regions may surround the body of the first varactor component. The first varactor component may be supported on a backside by an isolation layer. The symmetric varactor structure may also include a second varactor component electrically coupled to the backside of the first varactor component through a backside conductive layer.
1. A symmetrical varactor structure, comprising: a first varactor assembly having a gate operating as a second plate of an area adjustment capacitor, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of the area adjustment capacitor, a plurality of doped regions surrounding the body, the first varactor assembly being supported on a backside by an isolation layer; and a second varactor assembly electrically coupled to the backside of the first varactor assembly through a backside conductive layer. 2. The symmetrical varactor structure of claim 1, further comprising: a signal port coupled to the gate; and a plurality of control ports, each control port being coupled to one of the plurality of doped regions, wherein the signal port is isolated from the plurality of control ports. 3. The symmetrical varactor structure of claim 1, wherein a plate area of the first plate is adjusted based on a bias voltage received from a control port to control the area adjustment capacitor. 4. The symmetrical varactor structure of claim 1, wherein the isolation layer comprises a buried oxide layer. 5. The symmetrical varactor structure of claim 1, wherein the first varactor assembly and the second varactor assembly are integrated in an integrated circuit. 6. The symmetrical varactor structure of claim 5, wherein the integrated circuit comprises a power amplifier (PA), an oscillator, an RF (radio frequency) tuner, an RF transceiver, a multiplexer, and/or an RF circuit die. 7. The symmetrical varactor structure of claim 1, wherein the first varactor assembly and the second varactor assembly are integrated into an RF (radio frequency) switch. 8. The symmetrical varactor structure of claim 1, wherein the first varactor assembly and the second varactor assembly are supported by a substrate, the substrate comprising glass, quartz, or silicon. 9. The symmetrical varactor structure of claim 1, wherein the symmetrical varactor structure is incorporated into at least one of the following: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed-location data unit, and a computer. 10. A method of manufacturing a symmetrical varactor structure, comprising: fabricating a first varactor assembly on an isolation layer, adjacent to a second varactor assembly of the symmetrical varactor structure; thinning the symmetrical varactor structure to expose a body of the first varactor assembly and a body of the second varactor assembly; and depositing and patterning a conductive layer to couple the body of the first varactor assembly with the body of the second varactor assembly. 11. The method of claim 10, wherein depositing and patterning the conductive layer further comprises: depositing and patterning a redistribution layer as the conductive layer to couple the body of the first varactor assembly with the body of the second varactor assembly; and depositing and patterning a passivation layer on the redistribution layer. 12. The method of claim 11, further comprising bonding a substrate to the passivation layer. 13. The method of claim 10, further comprising incorporating the symmetrical varactor structure into at least one of a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed-location data unit, and a computer. 14. A symmetrical varactor structure, comprising: a first varactor assembly having a gate operating as a second plate of an area adjustment capacitor, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of the area adjustment capacitor, a plurality of doped regions surrounding the body, the first varactor assembly being supported on a backside by an isolation layer; a second varactor assembly; and means for electrically coupling the second varactor assembly to the backside of the first varactor assembly. 15. The symmetrical varactor structure of claim 14, further comprising: a signal port coupled to the gate; and a plurality of control ports, each control port being coupled to one of the plurality of doped regions, wherein the signal port is isolated from the plurality of control ports. 16. The symmetrical varactor structure of claim 14, wherein a plate area of the first plate is adjusted based on a bias voltage received from a control port to control the area adjustment capacitor. 17. The symmetrical varactor structure of claim 14, wherein the isolation layer comprises a buried oxide layer. 18. The symmetrical varactor structure of claim 14, wherein the first varactor assembly and the second varactor assembly are integrated in an integrated circuit. 19. The symmetrical varactor structure of claim 18, wherein the integrated circuit comprises a power amplifier (PA), an oscillator, an RF (radio frequency) tuner, an RF transceiver, a multiplexer, and/or an RF circuit die. 20. The symmetrical varactor structure of claim 14, wherein the first varactor assembly and the second varactor assembly are integrated into an RF (radio frequency) switch. 21. The symmetrical varactor structure of claim 14, wherein the first varactor assembly and the second varactor assembly are supported by a substrate, the substrate comprising glass, quartz, or silicon. 22. The symmetrical varactor structure of claim 14,
wherein the symmetrical varactor structure is incorporated into at least one of the following: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed-location data unit, and a computer. 23. A method of manufacturing a symmetrical varactor structure, comprising: a step for fabricating a first varactor assembly on an isolation layer, adjacent to a second varactor assembly of the symmetrical varactor structure; a step for thinning the symmetrical varactor structure to expose a body of the first varactor assembly and a body of the second varactor assembly; and a step for depositing and patterning a conductive layer to couple the body of the first varactor assembly with the body of the second varactor assembly. 24. The method of claim 23, wherein the step of depositing and patterning the conductive layer further comprises: a step for depositing and patterning a redistribution layer as the conductive layer to couple the body of the first varactor assembly with the body of the second varactor assembly; and a step for depositing and patterning a passivation layer on the redistribution layer. 25. The method of claim 24, further comprising the step of bonding a substrate to the passivation layer. 26. The method of claim 23, further comprising the step of incorporating the symmetrical varactor structure into at least one of a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed-location data unit, and a computer.
Backside coupled symmetrical varactor structure

BACKGROUND

Field

Aspects of the present disclosure relate to semiconductor devices, and more particularly to backside coupled symmetrical varactor structures.

Background

Processes for semiconductor manufacturing of integrated circuits (ICs) may include front-end-of-line (FEOL), middle-of-line (MOL), and back-end-of-line (BEOL) processes. Front-end-of-line processes may include wafer preparation, isolation, well formation, gate patterning, spacers, extension and source/drain implants, silicide formation, and dual stress liner formation. The middle-of-line process may include gate contact formation. Middle-of-line layers may include, but are not limited to, middle-of-line contacts, vias, or other layers in close proximity to a semiconductor device transistor or other similar active device. The back-end-of-line process may include a series of wafer processing steps for interconnecting the semiconductor devices created during the front-end-of-line and middle-of-line processes. The successful manufacture of modern semiconductor chip products involves the interaction between the materials and the processes used.

Due to cost and power considerations, mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) have migrated to deep submicron process nodes. The complexity of designing a mobile RF transceiver is further complicated by adding circuit functions for supporting communication enhancements. Further design challenges for mobile RF transceivers include analog/RF performance considerations, including mismatch, noise, and other performance considerations. The design of these mobile RF transceivers includes the use of voltage-controlled capacitors and/or tunable capacitors (e.g., varactors) to provide, for example, a voltage-controlled oscillator. Varactors can also be referred to as variable capacitance diodes.

Overview

The symmetrical varactor structure may include a first varactor assembly.
The first varactor assembly may include a gate operating as a second plate of an area adjustment capacitor, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of the area adjustment capacitor. In addition, doped regions may surround the body of the first varactor assembly. The first varactor assembly may be supported on a backside by an isolation layer. The symmetrical varactor structure may also include a second varactor assembly that is electrically coupled to the backside of the first varactor assembly through a backside conductive layer.

A method of manufacturing a symmetrical varactor structure includes fabricating a first varactor assembly on an isolation layer, adjacent to a second varactor assembly of the symmetrical varactor structure. The method also includes thinning the symmetrical varactor structure to expose a body of the first varactor assembly and a body of the second varactor assembly. The method further includes depositing and patterning a conductive layer to couple the body of the first varactor assembly with the body of the second varactor assembly.

The symmetrical varactor structure may include a first varactor assembly. The first varactor assembly may include a gate operating as a second plate of an area adjustment capacitor, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of the area adjustment capacitor. In addition, doped regions may surround the body of the first varactor assembly. The first varactor assembly may be supported on a backside by an isolation layer. The symmetrical varactor structure may also include a second varactor assembly.
The symmetrical varactor structure may further include means for electrically coupling the second varactor assembly to a backside of the first varactor assembly.

The foregoing has broadly outlined the features and technical advantages of the present disclosure so that the following detailed description can be better understood. Additional features and advantages of the present disclosure will be described below. Those skilled in the art should appreciate that the present disclosure can be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also recognize that such equivalent constructions do not depart from the teachings of this disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the disclosure.

Brief description of the drawings

For a more complete understanding of this disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.

FIG. 1 illustrates a perspective view of a semiconductor wafer in one aspect of the present disclosure.

FIG. 2 illustrates a cross-sectional view of a die according to an aspect of the present disclosure.

FIG. 3 illustrates a varactor in accordance with an aspect of the present disclosure.

FIG. 4 illustrates a symmetrical varactor structure in accordance with various aspects of the present disclosure.

FIG.
5 is a process flow diagram illustrating a method for manufacturing a symmetrical varactor structure in accordance with an aspect of the present disclosure.

FIG. 6 is a block diagram illustrating an exemplary wireless communication system in which a configuration of the present disclosure may be advantageously employed.

FIG. 7 is a block diagram illustrating a design workstation used for the circuit, layout, and logic design of a semiconductor device in accordance with one configuration.

Detailed description

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. This detailed description includes specific details in order to provide a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form to avoid obscuring such concepts. As used herein, the term "and/or" is intended to mean an "inclusive or," and the term "or" is intended to mean an "exclusive or."

Due to cost and power considerations, mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) have migrated to deep submicron process nodes. The complexity of the design of a mobile RF transceiver is further complicated by adding circuit functions that support communication enhancements such as carrier aggregation. Further design challenges for mobile RF transceivers include analog/RF performance considerations, including mismatch, noise, and other performance considerations. The design of these mobile RF transceivers includes the use of voltage-controlled capacitors and/or tunable capacitors (e.g., varactors) to provide, for example, a voltage-controlled oscillator.
Varactors can also be referred to as variable capacitance diodes. A varactor is an example of an electrical device for storing energy (e.g., charge) in an electric field between closely spaced capacitor plates based on a capacitance value. This capacitance value provides a measure of the amount of charge stored by the capacitor at a particular voltage. In addition to their charge storage capability, capacitors are also useful as electronic filters because they enable differentiation between high-frequency and low-frequency signals. In a conventional varactor, the plate width is adjusted to change the electric field formed between the capacitor plates. A varactor thus provides an electrically controllable capacitance that can be used to tune a circuit. Although using varactors is advantageous in many applications (for example, due to small size and reduced cost), varactors generally exhibit lower quality (Q) factors and nonlinearity because varactors are asymmetric devices.

Linearity is an important factor in mobile RF chip design. Linearity can refer to circuit behavior in which the output signal is proportional to the input signal. In a linear device, the amplitude ratio of the output to the input signal should be the same regardless of the strength of the input signal. As mentioned, varactors are examples of asymmetric devices. For example, conventional standard complementary metal oxide semiconductor (CMOS) varactors cannot implement a fully symmetric varactor. This lack of symmetry makes standard CMOS varactors generate second- and third-order harmonics that cause signal leakage when used in RF systems. In particular, the use of an asymmetric device in an RF system results in nonlinearity from the device that inhibits the tunability of the RF system.

Various aspects of the present disclosure provide techniques for manufacturing a backside coupled symmetrical varactor.
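For reference, the capacitance of an area-modulating device follows standard parallel-plate electrostatics, so it scales linearly with the effective plate area that the bias voltage modulates. The oxide thickness, plate area, and permittivity below are illustrative assumptions, not values from the disclosure:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, thickness_m, eps_r):
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Halving the effective plate area (as a bias voltage depletes part of the
# body) halves the capacitance; illustrative 1 um^2 gate, 2 nm SiO2 oxide.
c_full = plate_capacitance(1e-12, 2e-9, 3.9)
c_half = plate_capacitance(0.5e-12, 2e-9, 3.9)
assert abs(c_half - c_full / 2) < 1e-21
```

The linear dependence of C on A is what allows a bias-controlled body area to tune the capacitance smoothly, in contrast to junction varactors whose C-V relation is inherently nonlinear.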
Processes for semiconductor fabrication of backside coupled symmetrical varactors may include a front-end-of-line (FEOL) process, a middle-of-line (MOL) process, and a back-end-of-line (BEOL) process. It will be understood that the term "layer" includes a film and should not be read to indicate a vertical or lateral thickness unless otherwise stated. As described herein, the term "substrate" may refer to a substrate of a diced wafer or may refer to a substrate of a wafer that has not yet been diced. Similarly, the terms chip and die may be used interchangeably unless such an interchange would be unreasonable.

Aspects of the present disclosure describe a symmetrical varactor structure. In one arrangement, a first varactor assembly includes a gate operating as a second plate of a plate-area adjustment capacitor, a gate oxide layer operating as a dielectric layer, and a body operating as a first plate of the plate-area adjustment capacitor. In addition, doped regions surround the body, and the first varactor assembly is supported on a backside by an isolation layer. In this aspect of the present disclosure, the varactor includes an area adjustment capacitor in which the plate area provided by the body of the varactor is adjusted based on a bias voltage received from a control port to control the plate-area adjustment capacitor. In addition, a second varactor assembly is electrically coupled to the backside of the first varactor assembly through a backside conductive layer, which cancels the second harmonic caused by a single varactor assembly.

In this arrangement, the second varactor assembly may be a replica (e.g., twin) varactor in which the body of the second varactor assembly is coupled to the body of the first varactor assembly to provide a symmetrical varactor.
By separating the signal and control ports of the first and second varactor assemblies, the backside connection enables a symmetrical varactor that cancels any second-order harmonics in the RF system. In addition, the plate-area adjustment capability of the first and second varactor assemblies provides improved capacitor linearity. Separate control and signal ports also provide greater control of signal isolation and linearity. Furthermore, the increased thickness of the backside conductive layer provides the high Q factor of the backside coupled symmetrical varactor. Despite the area penalty due to the first and second varactor assemblies, the symmetrical varactor structure can exhibit Q factor improvements.

FIG. 1 illustrates a perspective view of a semiconductor wafer in one aspect of the present disclosure. Wafer 100 may be a semiconductor wafer, or may be a substrate material having one or more layers of semiconductor material on the surface of the wafer 100. When the wafer 100 is a semiconductor material, it can be grown from a seed crystal using a Czochralski process, in which the seed crystal is immersed in a bath of molten semiconductor material and slowly rotated and withdrawn from the melt. The molten material then crystallizes on the seed crystal in the orientation of the crystal.

The wafer 100 may be a compound material, such as gallium arsenide (GaAs) or gallium nitride (GaN), a ternary material such as indium gallium arsenide (InGaAs), a quaternary material, or any material that can serve as a substrate for other semiconductor materials. Although many materials may be crystalline in nature, polycrystalline or amorphous materials may also be used for the wafer 100.

The wafer 100, or layers coupled to the wafer 100, may be provided with materials that make the wafer 100 more conductive. By way of example and not limitation, a silicon wafer may have phosphorus or boron added to the wafer 100 to allow charge to flow in the wafer 100.
These additives are referred to as dopants and provide additional charge carriers (electrons or holes) within the wafer 100 or portions of the wafer 100. By selecting the regions that receive the additional charge carriers, the types of charge carriers that are provided, and the amount (density) of additional charge carriers in the wafer 100, different types of electronic devices can be formed in or on the wafer 100.

The wafer 100 has an orientation 102 that indicates the crystalline orientation of the wafer 100. The orientation 102 may be a flat edge of the wafer 100 as shown in FIG. 1, or may be a notch or other indicia to illustrate the orientation of the wafer 100. The orientation 102 may indicate the Miller indices of the planes of the crystal lattice in the wafer 100.

Once the wafer 100 has been processed as desired, the wafer 100 is singulated along dicing lines 104. The dicing lines 104 indicate where the wafer 100 is to be separated or divided into multiple pieces. The dicing lines 104 may define the outlines of the various integrated circuits that have been fabricated on the wafer 100.

Once the dicing lines 104 are defined, the wafer 100 may be sawn or otherwise divided into multiple pieces to form dies 106. Each die 106 may be an integrated circuit with many devices, or may be a single electronic device. The physical size of the die 106 (which may also be referred to as a chip or a semiconductor chip) depends, at least in part, on the ability to separate the wafer 100 into specific sizes and on the number of individual devices that the die 106 is designed to contain.

Once the wafer 100 has been divided into one or more dies 106, the die 106 may be mounted into a package to allow access to the devices and/or integrated circuits fabricated on the die 106. The package may include a single in-line package, a dual in-line package, a motherboard package, a flip-chip package, an indium dot/bump package, or another type of package that provides access to the die 106.
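As a rough numerical aside (the wafer diameter, die size, and saw kerf below are illustrative assumptions, not values from this disclosure), the number of whole dies obtainable from a circular wafer can be estimated from the die size and the width consumed by the saw along each dicing line using a common first-order gross-die-per-wafer formula:

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm, saw_kerf_mm=0.1):
    """Estimate the number of whole dies obtainable from a circular wafer.

    A first-order approximation: the ideal area ratio, corrected by an
    edge-loss term proportional to the die diagonal. The kerf is the
    width consumed by the saw along each dicing line.
    """
    d = wafer_diameter_mm
    dw, dh = die_w_mm + saw_kerf_mm, die_h_mm + saw_kerf_mm
    die_area = dw * dh
    return int(d * math.pi * (d / (4 * die_area) - 1 / math.sqrt(2 * die_area)))

print(gross_die_per_wafer(300, 5, 5))  # 300 mm wafer, 5 mm x 5 mm dies
```

Shrinking the die or the kerf raises the count roughly with the inverse of the die area, which is why the dicing-line layout matters to cost.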
The die 106 can also be accessed directly through wire bonds, probes, or other connections without mounting the die 106 into a separate package.

FIG. 2 illustrates a cross-sectional view of a die 106 in accordance with an aspect of the present disclosure. In the die 106, there may be a substrate 200, which may be a semiconductor material and/or may act as a mechanical support for electronic devices. The substrate 200 may be a doped semiconductor substrate having either electron (designated N-channel) or hole (designated P-channel) charge carriers present throughout the substrate 200. Subsequent doping of the substrate 200 with charge-carrier ions/atoms may change the charge-carrying capability of the substrate 200.

Within the substrate 200 (eg, a semiconductor substrate), there may be wells 202 and 204. The wells may be the source and/or drain of a field effect transistor (FET), or the wells 202 and/or 204 may be fin structures of a fin-structured FET (FinFET). Depending on the structure and other characteristics of the wells 202 and/or 204 and the surrounding structure of the substrate 200, the wells 202 and/or 204 may also be other devices (eg, a resistor, a capacitor, a diode, or other electronic devices).

The semiconductor substrate may also have a well 206 and a well 208. The well 208 may be wholly within the well 206, and, in some cases, may form a bipolar junction transistor (BJT). The well 206 may also be used as an isolation well to isolate the well 208 from electric and/or magnetic fields within the die 106.

Layers (eg, 210-214) may be added to the die 106. The layer 210 may be, for example, an oxide or insulating layer that may isolate the wells (eg, 202-208) from each other or from other devices on the die 106. In such cases, the layer 210 may be silicon dioxide (SiO2), a polymer, a dielectric, or another electrically insulating layer.
Layer 210 may also be an interconnect layer, in which case the layer 210 may include a conductive material such as copper, tungsten, aluminum, an alloy, or other conductive or metallic materials.

The layer 212 may likewise be a dielectric or a conductive layer, depending on the desired device characteristics and/or the materials of the various layers (eg, 210 and 214). The layer 214 may be an encapsulating layer that protects the layers (eg, 210 and 212), as well as the wells 202-208 and the substrate 200, from external forces. By way of example and not limitation, the layer 214 may be a layer that protects the die 106 from mechanical damage, or the layer 214 may be a layer of material that protects the die 106 from electromagnetic or radiation damage.

Electronic devices designed on the die 106 may include many features or structural components. For example, the die 106 may be exposed to any number of methods to impart dopants into the substrate 200, the wells 202-208, and, if desired, the layers (eg, 210-214). By way of example and not limitation, the die 106 may be exposed to ion implantation, to deposition of dopant atoms that are driven into the crystal lattice by a diffusion process, to chemical vapor deposition, to epitaxial growth, or to other methods. Through selective growth, material selection, and removal of portions of the layers (eg, 210-214), and through selective removal, material selection, and dopant concentration within the substrate 200 and the wells 202-208, many different structures and electronic devices can be formed within the scope of the present disclosure.

Further, the substrate 200, the wells 202-208, and the layers (eg, 210-214) may be selectively removed or added through various processes. Chemical wet etching, chemical mechanical planarization (CMP), plasma etching, photoresist masking, damascene processes, and other methods may create the structures and devices of the present disclosure.

FIG.
3 illustrates a complementary metal oxide semiconductor (CMOS) varactor 300 in accordance with one aspect of the present disclosure. Typically, the CMOS varactor 300 includes a gate operated as a second plate 314 of a metal-insulator-metal (MIM) capacitor 310, a gate oxide layer operated as a dielectric layer 313, and a body operated as a first plate 312 of the MIM capacitor 310. In addition, a first doped region 316 and a second doped region 318 surround the first plate 312 (eg, the body) to adjust the distance between the first plate 312 and the second plate 314 of the MIM capacitor 310 to provide a variable capacitance. The CMOS varactor 300 is supported on the backside by an isolation layer 304 (eg, a buried oxide layer) on a substrate 302 (eg, a silicon (Si) handle substrate). In the CMOS varactor 300, the electric field formed between the capacitor plates is changed by adjusting the distance between the first plate 312 and the second plate 314.

As shown in FIG. 3, the first plate 312 is adjusted according to the input node 315 and the output nodes 317 and 319 to change the electric field formed between the first plate 312 and the second plate 314. The capacitance of the MIM capacitor 310 is generally controlled by the thickness of the dielectric layer 313. In the CMOS varactor 300, however, the capacitance is adjusted according to inversion and depletion between the input node 315 and the output nodes 317 and 319, which effectively operate as a diode. Unfortunately, this variable-capacitance diode, which operates by changing the distance between the first plate 312 and the second plate 314, is non-linear.

In addition, the CMOS varactor 300 also presents a parasitic diode 320 between the substrate 302 and the isolation layer 304. The parasitic diode 320 is due to a bonding process used to couple the substrate 302 and the isolation layer 304.
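The diode-like, non-linear tuning behavior of the CMOS varactor 300 can be sketched with the textbook abrupt-junction depletion-capacitance model (the zero-bias capacitance and built-in potential below are assumed illustrative values, not parameters from this disclosure):

```python
def depletion_capacitance(v_reverse, c_j0=1e-12, phi=0.7):
    """Abrupt-junction depletion capacitance: C(V) = Cj0 / sqrt(1 + V/phi).

    The square-root dependence on the reverse bias V is what makes a
    diode-based varactor's tuning curve non-linear.
    """
    return c_j0 / (1.0 + v_reverse / phi) ** 0.5

# Equal 1 V bias steps do not produce equal capacitance steps --
# this unequal response is the non-linearity that distorts an RF signal.
c0, c1, c2 = (depletion_capacitance(v) for v in (0.0, 1.0, 2.0))
print(c0 - c1, c1 - c2)
```

The first step is more than twice the second, so a sinusoidal voltage riding on the bias sees a different capacitance on each half-cycle.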
As a result, the parasitic diode 320 spans the entire wafer, which affects all of the devices carried by the wafer. The diode behavior presented by the MIM capacitor 310 and the parasitic diode 320 prevent the CMOS varactor 300 from achieving symmetry. That is, any input signal received by the CMOS varactor 300 is distorted, regardless of signal strength.

Unfortunately, the asymmetry of the CMOS varactor 300 generates second-order, third-order, and fourth-order harmonics that cause signal leakage when used in an RF system. In particular, the use of an asymmetrical device in an RF system introduces non-linearity, which degrades the tunability of the RF system. For example, when the CMOS varactor 300 is used in an RF transceiver that supports carrier aggregation, spurious harmonics may overlap with the multiple transmit and receive channel bands used for carrier aggregation. That is, a second harmonic may overlap with a second frequency band, and a third harmonic may overlap with a third frequency band used for carrier aggregation.

FIG. 4 illustrates a symmetrical varactor structure 400 in accordance with aspects of the present disclosure. In this aspect of the present disclosure, a first varactor assembly 410 is disposed adjacent to a second varactor assembly 420 in a twin configuration. The symmetry provided by the symmetrical varactor structure 400 in the twin configuration eliminates second-order harmonics. The symmetrical varactor structure 400 provides a symmetrical varactor by coupling the first varactor assembly 410 to the second varactor assembly 420 through a backside conductive layer 430.

A thinning and backside conductive interconnect process may form the backside conductive layer 430. The backside conductive layer 430 electrically couples the first varactor assembly 410 to the second varactor assembly 420 to provide a symmetrical varactor with reduced cost and increased Q factor.
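The even-order cancellation attributed to the twin configuration can be illustrated numerically: a symmetric structure makes the capacitance an even function of the applied voltage, so the stored charge is an odd function and the resulting current carries only odd harmonics. The polynomial C(V) coefficients below are arbitrary illustrative values, not device parameters from this disclosure:

```python
import numpy as np

def harmonic_magnitudes(c_of_v, n_harm=4, amp=0.5, n=4096):
    """Drive a voltage-dependent capacitor with one sine cycle and measure
    the harmonic content of the current i = dq/dt, where q(v) = integral of C dv."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    v = amp * np.sin(2 * np.pi * t)
    # charge via cumulative integration of C along the voltage waveform
    q = np.concatenate(([0.0], np.cumsum(c_of_v(v[:-1]) * np.diff(v))))
    i = np.gradient(q, t)
    spec = np.abs(np.fft.rfft(i)) / n
    return spec[1:n_harm + 1]  # [fundamental, 2nd, 3rd, 4th]

asym = harmonic_magnitudes(lambda v: 1.0 + 0.4 * v)      # single varactor: odd term in C(v)
sym = harmonic_magnitudes(lambda v: 1.0 + 0.4 * v * v)   # symmetric pair: C(-v) = C(v)
print(asym[1] / asym[0], sym[1] / sym[0])  # 2nd/1st harmonic ratio
```

The asymmetric model shows a clear second harmonic, while the even (symmetric) model leaves only numerical noise at the second-harmonic bin.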
For example, a conductive interconnect layer (eg, 20-micron-thick copper (Cu)) provides a thin-film-based backside conductive interconnect (eg, a copper trace) with reduced resistance. Lateral signal loss is reduced by connecting the body (B) of the first varactor assembly 410 to the body B of the second varactor assembly 420. Reducing the lateral signal loss in the active region of the symmetrical varactor structure is important because lateral signal loss degrades the Q factor.

Representatively, the first varactor assembly 410 includes a gate (G) operated as a second plate 414 of a plate-area conditioning capacitor, a gate oxide layer (Gox) operated as a dielectric layer 413, and a body (B) operated as a first plate 412 of the plate-area conditioning capacitor. The first varactor assembly 410 also includes a first doped region 416 and a second doped region 418 surrounding the first plate 412 (eg, body B) of the first varactor assembly 410. In this arrangement, the first varactor assembly 410 is supported on the backside by an isolation layer 406. The isolation layer 406 may be a buried oxide (BOX) layer.

In this aspect of the present disclosure, the second varactor assembly 420 is electrically coupled to the backside of the first varactor assembly 410 through the backside conductive layer 430. The second varactor assembly 420 includes a gate (G) operated as a second plate 424 of a plate-area conditioning capacitor, a gate oxide layer (Gox) operated as a dielectric layer 423, and a body (B) operated as a first plate 422 of the plate-area conditioning capacitor. The second varactor assembly 420 also includes a first doped region 426 and a second doped region 428 surrounding the first plate 422 (eg, body B) of the second varactor assembly 420.
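As a back-of-the-envelope check on why a thick backside trace raises the Q factor (the trace geometry, capacitance, and frequency below are assumed for illustration and are not values from this disclosure), the series resistance R = ρL/(W·t) and the capacitor quality factor Q = 1/(ωRC) scale as:

```python
import math

RHO_CU = 1.68e-8  # ohm*m, resistivity of copper

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """DC resistance of a rectangular interconnect: R = rho * L / (W * t)."""
    return rho * length_m / (width_m * thickness_m)

def varactor_q(freq_hz, series_r_ohm, cap_f):
    """Quality factor of a capacitor with series resistance: Q = 1/(w*R*C)."""
    return 1.0 / (2 * math.pi * freq_hz * series_r_ohm * cap_f)

# Assumed geometry: 100 um long, 10 um wide backside trace; 1 pF varactor at 2 GHz
r_thin = trace_resistance(100e-6, 10e-6, 2e-6)    # 2 um metal
r_thick = trace_resistance(100e-6, 10e-6, 20e-6)  # 20 um backside copper
print(varactor_q(2e9, r_thin, 1e-12), varactor_q(2e9, r_thick, 1e-12))
```

Thickening the trace from 2 µm to 20 µm cuts R by 10x and, with C and ω fixed, raises Q by the same factor.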
In this arrangement, the second varactor assembly 420 is also supported on the backside by the isolation layer 406.

The backside coupled varactor also includes a first signal port 440 coupled to a gate contact 415 of the gate G of the first varactor assembly 410. In addition, a first control port 450 is coupled to a first diffusion contact 417 of the first doped region 416 and to a second diffusion contact 419 of the second doped region 418. In this configuration, the first signal port 440 is isolated from the first control port 450. The backside coupled varactor further includes a second signal port 442 coupled to a gate contact 425 of the gate G of the second varactor assembly 420. In addition, a second control port 452 is coupled to a first diffusion contact 427 of the first doped region 426 and to a second diffusion contact 429 of the second doped region 428. In this configuration, the second signal port 442 is also isolated from the second control port 452. The input signal to the first signal port 440 and/or the second signal port 442 may be an RF signal. In addition, the control signal to the first control port 450 and/or the second control port 452 may be a DC control signal.

In this configuration, the second doped region 418 of the first varactor assembly 410 is separated from the first doped region 426 of the second varactor assembly 420 by a shallow trench isolation (STI) region 408. In addition, the backside conductive layer 430 is covered by a passivation layer 404 bonded to a substrate 402. In this arrangement, the first varactor assembly 410 and the second varactor assembly 420 are supported by the substrate 402, which may include glass, quartz, silicon, a polymer, or another similar insulator material. In one aspect of the present disclosure, bonding the substrate 402 to the passivation layer 404 eliminates the parasitic diode 320 associated with the CMOS varactor 300 shown in FIG.
3.

In operation, the plate area provided by the first plate 412 of the first varactor assembly 410 is adjusted based on the bias voltage received from the first control port 450. Similarly, the plate area provided by the first plate 422 of the second varactor assembly 420 is adjusted based on the bias voltage received from the second control port 452. For example, the effective size of the body B of the first varactor assembly 410 is adjusted according to the bias voltages applied to the first doped region 416 and the second doped region 418, so that the signal has little influence on the body B.

In this arrangement, the body B may be fabricated as a partially depleted floating body. This arrangement provides a variable capacitance while maintaining the width of the dielectric layer 413 and the dielectric layer 423. That is, in contrast to adjusting the distance between the first plate and the second plate in the CMOS varactor 300 of FIG. 3, the distance between the first plate 412 and the second plate 414, and between the first plate 422 and the second plate 424, is maintained. The plate-area adjustment provided by the first plate 412 and the first plate 422 reduces the signal path loss from the diffusion regions (eg, 416, 418, 426, 428) and the contacts (eg, 417, 419, 427, 429). By avoiding signal path losses, the symmetrical varactor structure 400 provides both symmetry and linearity to achieve high-performance RF tunable devices.

The symmetrical varactor structure 400, including the first varactor assembly 410 and the second varactor assembly 420, may be integrated in a circuit for implementing high-performance RF tunable devices. The circuit may include, but is not limited to, a power amplifier (PA), an oscillator (eg, a voltage controlled oscillator (VCO)), an RF tuner, an RF transceiver, a multiplexer, an RF circuit die, or another similar RF communication circuit (such as an RF switch).
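The parallel-plate relation C = k·ε₀·A/d makes the contrast concrete: with the oxide thickness d held fixed, the capacitance varies linearly with the effective plate area A presented by the body, rather than non-linearly with the plate spacing. The oxide thickness and area values below are illustrative assumptions, not device parameters from this disclosure:

```python
EPS0 = 8.854e-12  # F/m, vacuum permittivity
K_SIO2 = 3.9      # relative permittivity of an SiO2 gate oxide

def plate_capacitance(area_m2, gap_m, k=K_SIO2):
    """Parallel-plate capacitance C = k * eps0 * A / d."""
    return k * EPS0 * area_m2 / gap_m

# Area tuning: the gap (oxide thickness) stays fixed while the effective
# plate area presented by the body is modulated by the control bias.
d = 5e-9  # 5 nm gate oxide, assumed
for area in (1e-12, 2e-12, 4e-12):  # effective plate area in m^2
    print(plate_capacitance(area, d))
```

Doubling the area doubles the capacitance, so area tuning yields the linear characteristic that gap tuning (the square-root-law diode of FIG. 3) cannot.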
Symmetric varactor structure 400 can exhibit linearity and a significant Q-factor improvement when integrated into a mobile RF transceiver.

Although shown in the arrangement of FIG. 4, it should be appreciated that the symmetrical varactor structure 400 is not limited to this arrangement. In addition, in contrast to shrinking process nodes, the symmetrical varactor structure 400 can be fabricated at larger process nodes. For example, the symmetrical varactor structure 400 can be fabricated using a one hundred eighty (180) nanometer process node. As a result, the area penalty caused by the first varactor assembly 410 and the second varactor assembly 420 of the symmetrical varactor structure 400 is negligible and is therefore an acceptable trade-off for the improved linearity.

FIG. 5 is a flow diagram illustrating a method 500 of fabricating a symmetrical varactor structure in accordance with aspects of the present disclosure. At block 502, a first varactor assembly is fabricated adjacent to a second varactor assembly of the backside coupled varactor structure. For example, as shown in FIG. 4, the first varactor assembly 410 is arranged adjacent to the second varactor assembly 420 in a twin configuration. In this arrangement, the first varactor assembly 410 and the second varactor assembly 420 have the same configuration.

Referring again to FIG. 5, at block 504, the varactor structure is thinned to expose the body of the first varactor assembly and the body of the second varactor assembly. For example, as shown in FIG. 4, the backside of the symmetrical varactor structure 400 is thinned to expose the body B of the first varactor assembly 410. Thinning the backside of the symmetrical varactor structure 400 also exposes the body B of the second varactor assembly 420.
Once exposed, the body B of the first varactor assembly 410 and the body B of the second varactor assembly 420 may be electrically coupled by using a symmetrical backside contact structure.

At block 506, a conductive layer is deposited and patterned to electrically couple the body of the first varactor assembly and the body of the second varactor assembly. As shown in FIG. 4, the backside conductive layer 430 electrically couples the first varactor assembly 410 to the second varactor assembly 420. In one configuration, the backside conductive layer 430 is made using a redistribution layer. For example, a redistribution layer (RDL) may be deposited and patterned into the backside conductive layer 430 to couple the body B of the first varactor assembly 410 with the body B of the second varactor assembly 420. A passivation layer 404 may be deposited and patterned on the backside conductive layer 430. The symmetrical varactor structure 400 is completed by bonding the substrate 402 to the passivation layer 404. In one aspect of the present disclosure, bonding the substrate 402 to the passivation layer 404 eliminates the parasitic diode 320 associated with the CMOS varactor 300 shown in FIG. 3.

The backside conductive layer 430 provides a symmetrical backside contact structure that enables a symmetrical varactor with reduced cost and increased Q factor. For example, using a conductive interconnect layer (eg, 20-micron-thick copper (Cu)) as the backside conductive layer 430 provides a thin-film-based backside conductive interconnect (eg, a copper trace) with reduced resistance. Lateral signal loss is reduced by connecting the body (B) of the first varactor assembly 410 to the body B of the second varactor assembly 420. Reducing the lateral signal loss in the active region of the symmetrical varactor structure 400 is important because lateral signal loss degrades the Q factor.

In one configuration, a symmetrical varactor structure is described.
The symmetrical varactor structure includes means for electrically coupling the second varactor assembly to the backside of the first varactor assembly. In one aspect of the present disclosure, the electrical coupling means is the backside conductive layer 430 of FIG. 4, configured to perform the functions recited by the electrical coupling means. In another aspect, the aforementioned means may be a device or any layer configured to perform the functions recited by the aforementioned means.

Various aspects of the present disclosure describe a backside coupled symmetrical varactor. In one arrangement, the first varactor (also referred to as a first varactor assembly) includes a gate operated as a first plate of a plate-area conditioning capacitor, a gate oxide layer operated as a dielectric layer, and a body operated as a second plate of the plate-area conditioning capacitor. In addition, doped regions surround the body, and the first varactor is supported on the backside by an isolation layer. In one aspect of the present disclosure, the varactor includes a plate-area conditioning capacitor in which the plate area provided by the body of the varactor is adjusted based on a bias voltage received from a control port to control the plate-area conditioning capacitor. In addition, a second varactor (also referred to as a second varactor assembly) is electrically coupled to the backside of the first varactor via a backside conductive layer.

In such an arrangement, the second varactor may be a replica (eg, twin) varactor, where the body of the second varactor is coupled to the body of the first varactor to provide a symmetrical varactor. By separating the signal and control ports of the first and second varactors, the backside connection implements a symmetrical varactor that eliminates second-order harmonics in an RF system.
In addition, the plate-area conditioning capability of the first and second varactors provides improved capacitor linearity. In addition, the separate control and signal ports achieve improved signal isolation and linearity. Furthermore, the high Q factor of the backside coupled symmetrical varactor is provided by the increased thickness of the backside conductive layer. Symmetric varactor structures can be fabricated using a one hundred eighty (180) nanometer process node. As a result, the area penalty caused by the first varactor assembly and the second varactor assembly of the symmetrical varactor structure 400 is negligible and is therefore an acceptable trade-off for the improved linearity.

FIG. 6 is a block diagram illustrating an exemplary wireless communication system 600 in which aspects of the present disclosure may be advantageously employed. For illustrative purposes, FIG. 6 shows three remote units 620, 630, and 650, and two base stations 640. It will be appreciated that wireless communication systems may have many more remote units and base stations. Remote units 620, 630, and 650 include IC devices 625A, 625C, and 625B that include the disclosed symmetrical varactor structure. It will be appreciated that other devices may also include the disclosed symmetrical varactor structure, such as base stations, switching equipment, and network equipment. FIG. 6 shows forward link signals 680 from the base stations 640 to the remote units 620, 630, and 650, and reverse link signals 690 from the remote units 620, 630, and 650 to the base stations 640.

In FIG. 6, remote unit 620 is shown as a mobile phone, remote unit 630 is shown as a portable computer, and remote unit 650 is shown as a fixed-location remote unit in a wireless local loop system.
For example, remote units 620, 630, and 650 may be mobile phones, handheld personal communication system (PCS) units, portable data units (such as personal digital assistants (PDAs)), GPS-enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed-location data units (such as meter reading equipment), or communication devices that store or retrieve data or computer instructions, or combinations thereof. Although FIG. 6 illustrates remote units according to aspects of the present disclosure, the present disclosure is not limited to the exemplary units illustrated. Aspects of the present disclosure may be suitably employed in many devices that include the disclosed symmetrical varactor structure.

FIG. 7 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the symmetrical varactor structure. The design workstation 700 includes a hard disk 702 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 700 also includes a display 704 to facilitate the design of a circuit 706 or a semiconductor component 708, such as a symmetrical varactor structure. A storage medium 710 is provided for tangibly storing the design of the circuit 706 or the semiconductor component 708. The design of the circuit 706 or the semiconductor component 708 may be stored on the storage medium 710 in a file format such as GDSII or GERBER. The storage medium 710 may be a CD-ROM, a DVD, a hard disk, flash memory, or another appropriate device. In addition, the design workstation 700 includes a drive 712 for accepting input from or writing output to the storage medium 710.

Data recorded on the storage medium 710 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography.
The data may further include logic verification data, such as timing diagrams or net circuits, associated with logic simulations. Providing data on the storage medium 710 facilitates the design of the circuit 706 or the semiconductor component 708 by decreasing the number of processes for designing semiconductor wafers.

For a firmware and/or software implementation, the methodologies may be implemented with modules (eg, procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software code may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to long-term, short-term, volatile, non-volatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.

If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures.
Any such medium must also be accessible by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

In addition to storage on a computer-readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.

Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below," are used with respect to a substrate or an electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to the sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, eg, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. By way of example, "at least one of a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b, and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for."
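For illustration only, the seven combinations covered by "at least one of a, b, or c" can be enumerated mechanically. The Python sketch below (the function and variable names are hypothetical, not part of the disclosure) generates every non-empty combination of a list of items:

```python
from itertools import combinations

def at_least_one_of(items):
    # Enumerate every non-empty combination of `items`, i.e. the
    # coverage of the claim phrase "at least one of a, b, or c"
    # (individual members included).
    result = []
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            result.append(set(combo))
    return result

covered = at_least_one_of(["a", "b", "c"])
```

Run over ["a", "b", "c"], this yields exactly the seven combinations listed above: a; b; c; a and b; a and c; b and c; and a, b, and c.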
Techniques for providing a semiconductor memory device are disclosed. In one particular exemplary embodiment, the techniques may be realized as a semiconductor memory device including a plurality of memory cells arranged in an array of rows and columns. Each memory cell includes a first region, a second region, and a body region capacitively coupled to at least one word line and disposed between the first region and the second region. Each memory cell also includes a third region, wherein the third region may be doped differently than the first region, the second region, and the body region.
1. A memory cell comprising: a first region coupled to a source line; a second region coupled to a bit line; a body region capacitively coupled to at least one word line and disposed between the first region and the second region; and a third region coupled to a carrier injection line; wherein the first region, the second region, and the body region have a common first doping polarity.

2. The memory cell of claim 1, wherein the third region has a second doping polarity that is different from the first doping polarity.

3. The memory cell of claim 1, wherein the first region is coupled to a first poly plug and the second region is coupled to a second poly plug.

4. The memory cell of claim 1, wherein the first region, the second region, the body region, and the third region are arranged in a contiguous horizontal configuration on a substrate.

5. The memory cell of claim 1, wherein the first region, the second region, and the body region are arranged in a contiguous vertical configuration on a substrate.

6. The memory cell of claim 4 or 5, wherein the first region, the second region, and the body region are doped with donor impurities, and wherein the third region is doped with acceptor impurities.

7. The memory cell of claim 4 or 5, wherein the first region, the second region, and the body region are doped with acceptor impurities, and wherein the third region is doped with donor impurities.

8. The memory cell of claim 4, wherein the first region, the second region, and the body region are undoped regions.

9. The memory cell of claim 4, wherein the body region is coupled to a first doped region and the third region is coupled to a second doped region.

10. The memory cell of claim 9, wherein the second doped region is doped with acceptor impurities having a concentration higher than the doped third region.

11. The memory cell of claim 5, wherein the first region, the second region, and the body region are doped with donor impurities, wherein the third region is doped with acceptor impurities, and wherein the third region is made of a P-well region.

12. The memory cell of claim 5, wherein the first region, the second region, and the body region are doped with acceptor impurities, wherein the third region is doped with donor impurities, and wherein the third region is made of an N-well region.

13. The memory cell of claim 5, wherein the source line and the bit line are arranged on opposite sides of the memory cell.

14. The memory cell of claim 1, wherein the first region, the second region, and the body region have different doping concentrations.

15. A method for biasing a semiconductor memory device, comprising the steps of: applying a first voltage potential to a first region of a first memory cell in an array of memory cells via a respective source line of the array; applying a second voltage potential to a second region of the first memory cell via a respective bit line of the array; applying a third voltage potential to a body region of the first memory cell via at least one respective word line of the array that is capacitively coupled to the body region; and applying a fourth voltage potential to a third region of the first memory cell via a respective carrier injection line of the array; wherein the first region, the second region, and the body region have a common first doping polarity.
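As a non-authoritative illustration of the biasing method recited above, the sketch below models one biasing step as a record of the four voltage potentials applied via the source line (EN), bit line (CN), word line (WL), and carrier injection line (EP); the class, function, and example voltage values are hypothetical, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BiasStep:
    # One biasing step from the method claim: four voltage potentials
    # applied to a selected memory cell via its array lines.
    source_line_v: float      # first potential -> first region, via source line (EN)
    bit_line_v: float         # second potential -> second region, via bit line (CN)
    word_line_v: float        # third potential -> body region, via word line (WL)
    injection_line_v: float   # fourth potential -> third region, via carrier injection line (EP)

def bias_cell(step: BiasStep) -> dict:
    # Return the per-line potentials that the selection/control and
    # data write and sense circuitry would drive for this step.
    return {
        "EN": step.source_line_v,
        "CN": step.bit_line_v,
        "WL": step.word_line_v,
        "EP": step.injection_line_v,
    }

example = bias_cell(BiasStep(0.0, 1.2, -0.5, 0.8))
```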
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims priority to U.S. Provisional Patent Application No. 61/313,986, filed March 15, 2010, which is hereby incorporated by reference herein in its entirety.

FIELD OF THE DISCLOSURE

The present disclosure relates generally to semiconductor memory devices and, more particularly, to techniques for providing a junction-less semiconductor memory device.

BACKGROUND OF THE DISCLOSURE

The semiconductor industry has experienced technological advances that have permitted increases in density and/or complexity of semiconductor memory devices. Also, the technological advances have allowed decreases in power consumption and package sizes of various types of semiconductor memory devices. There is a continuing trend to employ and/or fabricate advanced semiconductor memory devices using techniques, materials, and devices that improve performance, reduce leakage current, and enhance overall scaling. Silicon-on-insulator (SOI) and bulk substrates are examples of materials that may be used to fabricate such semiconductor memory devices. Such semiconductor memory devices may include, for example, partially depleted (PD) devices, fully depleted (FD) devices, multiple gate devices (e.g., double, triple gate, or surrounding gate), and Fin-FET devices.

A semiconductor memory device may include a memory cell having a memory transistor with an electrically floating body region wherein electrical charge may be stored. When excess majority electrical charge carriers are stored in the electrically floating body region, the memory cell may store a logic high (e.g., binary "1" data state). When the electrically floating body region is depleted of majority electrical charge carriers, the memory cell may store a logic low (e.g., binary "0" data state). Also, a semiconductor memory device may be fabricated on silicon-on-insulator (SOI) substrates or bulk substrates (e.g., enabling body isolation).
For example, a semiconductor memory device may be fabricated as a three-dimensional (3-D) device (e.g., a multiple gate device, a Fin-FET device, and a vertical pillar device).

In one conventional technique, the memory cell of the semiconductor memory device may be manufactured by an implantation process. During a conventional implantation process, defect structures may be produced in a silicon lattice of various regions of the memory cell of the semiconductor memory device. The defect structures formed during the implantation process may decrease retention time of majority charge carriers stored in the memory cell of the semiconductor memory device. Also, during a conventional implantation process, various regions of the memory cell may be doped with undesired doping concentrations. The undesired doping concentrations may thus produce undesired electrical properties for the memory cell of the semiconductor memory device. Further, the conventional implantation process may face lateral and vertical scaling challenges.

In view of the foregoing, it may be understood that there may be significant problems and shortcomings associated with conventional techniques for providing a semiconductor memory device.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals.
These drawings should not be construed as limiting the present disclosure, but are intended to be exemplary only.

Figure 1 shows a block diagram of a semiconductor memory device including a memory cell array, data write and sense circuitry, and memory cell selection and control circuitry in accordance with an embodiment of the present disclosure.

Figure 2 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an embodiment of the present disclosure.

Figure 3 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an alternate embodiment of the present disclosure.

Figure 4 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an embodiment of the present disclosure.

Figure 5 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an alternate embodiment of the present disclosure.

Figure 6 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an embodiment of the present disclosure.

Figure 7 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure.

Figure 8 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure.

Figure 9 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure.

Figure 10 shows control signal voltage waveforms for performing a write operation on a memory cell shown in Figure 2 in accordance with an embodiment of the present disclosure.

Figure 11 shows control signal voltage waveforms for performing a read operation on a memory cell shown in Figure 2 in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Referring to Figure 1,
there is shown a block diagram of a semiconductor memory device 10 comprising a memory cell array 20, data write and sense circuitry 36, and memory cell selection and control circuitry 38 in accordance with an embodiment of the present disclosure. The memory cell array 20 may comprise a plurality of memory cells 12 each coupled to the memory cell selection and control circuitry 38 via a word line (WL) 28 and a carrier injection line (EP) 34, and to the data write and sense circuitry 36 via a bit line (CN) 30 and a source line (EN) 32. It may be appreciated that the bit line (CN) 30 and the source line (EN) 32 are designations used to distinguish between two signal lines and they may be used interchangeably.

The data write and sense circuitry 36 may read data from and may write data to selected memory cells 12. In an exemplary embodiment, the data write and sense circuitry 36 may include a plurality of data sense amplifier circuits. Each data sense amplifier circuit may receive at least one bit line (CN) 30 and a current or voltage reference signal. For example, each data sense amplifier circuit may be a cross-coupled type sense amplifier to sense a data state stored in a memory cell 12. The data write and sense circuitry 36 may include at least one multiplexer that may couple a data sense amplifier circuit to at least one bit line (CN) 30. In an exemplary embodiment, the multiplexer may couple a plurality of bit lines (CN) 30 to a data sense amplifier circuit.

Each data sense amplifier circuit may employ voltage and/or current sensing circuitry and/or techniques. In an exemplary embodiment, each data sense amplifier circuit may employ current sensing circuitry and/or techniques. For example, a current sense amplifier may compare current from a selected memory cell 12 to a reference current (e.g., the current of one or more reference cells).
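This current comparison can be sketched minimally as follows, assuming the convention that a cell current above the reference current reads as logic high; the threshold convention and function name are assumptions, not stated in the disclosure:

```python
def sense_data_state(cell_current: float, reference_current: float) -> int:
    # Compare the current drawn from a selected memory cell against a
    # reference current (e.g., from one or more reference cells) and
    # map the result to a data state: 1 = logic high, 0 = logic low.
    return 1 if cell_current > reference_current else 0
```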
From that comparison, it may be determined whether the selected memory cell 12 stores a logic high (e.g., binary "1" data state) or a logic low (e.g., binary "0" data state). It may be appreciated by one having ordinary skill in the art that various types or forms of the data write and sense circuitry 36 (including one or more sense amplifiers, using voltage or current sensing techniques, to sense a data state stored in a memory cell 12) may be employed to read data stored in the memory cells 12.The memory cell selection and control circuitry 38 may select and/or enable one or more predetermined memory cells 12 to facilitate reading data therefrom by applying control signals on one or more word lines (WL) 28 and/or carrier injection lines (EP) 34. The memory cell selection and control circuitry 38 may generate such control signals from address signals, for example, row address signals. Moreover, the memory cell selection and control circuitry 38 may include a word line decoder and/or driver. For example, the memory cell selection and control circuitry 38 may include one or more different control/selection techniques (and circuitry thereof) to select and/or enable one or more predetermined memory cells 12. Notably, all such control/selection techniques, and circuitry thereof, whether now known or later developed, are intended to fall within the scope of the present disclosure.In an exemplary embodiment, the semiconductor memory device 10 may implement a two step write operation whereby all the memory cells 12 in a row of memory cells 12 may be written to a predetermined data state by first executing a "clear" or a logic low (e.g., binary "0" data state) write operation, whereby all of the memory cells 12 in the row of memory cells 12 are written to logic low (e.g., binary "0" data state). Thereafter, selected memory cells 12 in the row of memory cells 12 may be selectively written to the predetermined data state (e.g., a logic high (binary "1" data state)). 
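The two-step write operation described above (a row-wide "clear" to logic low followed by selectively writing logic high) can be sketched as follows; the function and its arguments are illustrative only and do not appear in the disclosure:

```python
def two_step_row_write(row_width: int, target_bits: list) -> list:
    # Step 1 ("clear"): write logic low (binary "0") to every memory
    # cell in the row.
    row = [0] * row_width
    # Step 2: selectively write logic high (binary "1") to the cells
    # whose target data state is "1".
    for i, bit in enumerate(target_bits):
        if bit == 1:
            row[i] = 1
    return row
```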
The semiconductor memory device 10 may also implement a one step write operation whereby selected memory cells 12 in a row of memory cells 12 may be selectively written to either a logic high (e.g., binary "1" data state) or a logic low (e.g., binary "0" data state) without first implementing a "clear" operation. The semiconductor memory device 10 may employ any of the exemplary writing, preparation, holding, refresh, and/or reading techniques described herein.The memory cells 12 may comprise N-type, P-type and/or both types of transistors. Circuitry that is peripheral to the memory cell array 20 (for example, sense amplifiers or comparators, row and column address decoders, as well as line drivers (not illustrated herein)) may also include P-type and/or N-type transistors. Regardless of whether P-type or N-type transistors are employed in memory cells 12 in the memory cell array 20, suitable voltage potentials (for example, positive or negative voltage potentials) for reading from the memory cells 12 will be described further herein.Referring to Figure 2 , there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an embodiment of the present disclosure. The memory cell 12 may comprise a first N- region 120, a second N- region 122, a third N- region 124, and/or a P- region 126. The first N-region 120, the second N- region 122, the third N- region 124, and/or the P- region 126 may be disposed in sequential contiguous relationship within a planar configuration that may extend horizontally or parallel to a plane defined by an oxide region 128 and/or a P- substrate 130. In an exemplary embodiment, the second N- region 122 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges that may be spaced apart from and capacitively coupled to the word line (WL) 28.The first N- region 120 of the memory cell 12 may be coupled to the source line (EN) 32 via a first N+ poly plug 232. 
The first N+ poly plug 232 may be directly coupled to the first N- region 120 of the memory cell 12. The second N- region 122 of the memory cell 12 may be coupled to the word line (WL) 28 via a gate region 228. The gate region 228 may be capacitively coupled to the second N- region 122 of the memory cell 12. The third N- region 124 of the memory cell 12 may be coupled to a bit line (CN) 30 via a second N+ poly plug 230. The second N+ poly plug 230 may be directly coupled to the third N- region 124 of the memory cell 12. The P- region 126 of the memory cell 12 may be coupled to a carrier injection line (EP) 34 via a P+ region 234. The P+ region 234 may be directly coupled to the P- region 126 of the memory cell 12.

The first N- region 120, the second N- region 122, and the third N- region 124 may be formed of the same material or different materials. Also, the first N- region 120, the second N- region 122, and the third N- region 124 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 120, the second N- region 122, and the third N- region 124 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 120, the second N- region 122, and/or the third N- region 124 may be formed of a silicon material with donor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³.

The P- region 126 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising acceptor impurities. For example, the P- region 126 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the P- region 126 may be formed of a silicon material with acceptor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³.
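For illustration, the doping polarities and concentration ranges stated above for the Figure 2 cell can be tabulated and checked; the dictionary keys follow the reference numerals, and the helper function is hypothetical:

```python
# Doping polarity and stated concentration range (atoms/cm^3) for the
# doped regions of the Figure 2 memory cell; keys use the reference
# numerals from the description.
REGIONS = {
    "first N- region 120":  {"polarity": "N", "min": 1e15, "max": 1e18},
    "second N- region 122": {"polarity": "N", "min": 1e15, "max": 1e18},
    "third N- region 124":  {"polarity": "N", "min": 1e15, "max": 1e18},
    "P- region 126":        {"polarity": "P", "min": 1e15, "max": 1e18},
}

def concentration_in_range(region: str, concentration: float) -> bool:
    # Check a proposed doping concentration against the stated range.
    spec = REGIONS[region]
    return spec["min"] <= concentration <= spec["max"]
```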
In another exemplary embodiment, the P- region 126 may be formed of an undoped semiconductor material (e.g., intrinsic silicon).

The first N+ poly plug 232 and the second N+ poly plug 230 may be formed of the same material or different materials. The first N+ poly plug 232 and the second N+ poly plug 230 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. The first N+ poly plug 232 and the second N+ poly plug 230 may couple voltage potentials from the source line (EN) 32 and the bit line (CN) 30, respectively, to the first N- region 120 and the third N- region 124 of the memory cell 12. In another exemplary embodiment, the first N+ poly plug 232 and the second N+ poly plug 230 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof. The first N+ poly plug 232 and the second N+ poly plug 230 may have a height extending from the first N- region 120 and the third N- region 124, respectively, to the source line (EN) 32 and the bit line (CN) 30.

The gate region 228 may be formed of a polycide material, a silicon material, a metal material, and/or a combination thereof. In another exemplary embodiment, the gate region 228 may be formed of a doped silicon layer. The gate region 228 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the gate region 228 may be formed of a silicon material doped with boron impurities.

The P+ region 234 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the P+ region 234 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the P+ region 234 may be doped with acceptor impurities having a concentration of 10²⁰ atoms/cm³ or higher.

The oxide layer 128 may be formed on the P- substrate 130. For example, the oxide layer 128 may be formed of an insulating material.
The oxide layer 128 may include a continuous planar region configured above the P- substrate 130. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12.

In an exemplary embodiment, the P- substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P- substrates 130 may form the base of the memory cell array 20 or a single P- substrate 130 may form the base of the memory cell array 20. Also, the P- substrate 130 may be made in the form of a P-well substrate.

An insulating layer 132 may be formed on top of the oxide layer 128. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the oxide layer 128 to electrically insulate the first N+ poly plug 232, the gate region 228, the second N+ poly plug 230, and/or the P+ region 234.

Referring to Figure 3, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. The memory cell 12 illustrated in Figure 3 may be similar to the memory cell 12 illustrated in Figure 2, except that the memory cell 12 may comprise a plurality of undoped regions.
The plurality of undoped regions may comprise a first undoped region 320 coupled to a corresponding first N+ poly plug 232, a second undoped region 322 capacitively coupled to a corresponding gate region 228, and/or a third undoped region 324 coupled to a corresponding second N+ poly plug 230. The plurality of undoped regions may be formed of the same material or different materials. For example, the plurality of undoped regions (e.g., the first undoped region 320, the second undoped region 322, and/or the third undoped region 324) may be formed of an undoped semiconductor material (e.g., intrinsic silicon).

Referring to Figure 4, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an embodiment of the present disclosure. The memory cell 12 illustrated in Figure 4 may be similar to the memory cell 12 illustrated in Figure 2, except that the memory cell 12 may comprise a first P- region 420, a second P- region 422, a third P- region 424, and/or an N- region 426. The first P- region 420, the second P- region 422, the third P- region 424, and/or the N- region 426 may be disposed in sequential contiguous relationship within a planar configuration that may extend horizontally or parallel to a plane defined by an oxide region 128 and/or a P- substrate 130. In an exemplary embodiment, the second P- region 422 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges that may be spaced apart from and capacitively coupled to the word line (WL) 28.

The first P- region 420 of the memory cell 12 may be coupled to the source line (EN) 32 via a first P+ poly plug 432. The first P+ poly plug 432 may be directly coupled to the first P- region 420 of the memory cell 12. The second P- region 422 of the memory cell 12 may be coupled to the word line (WL) 28 via a gate region 428. The gate region 428 may be capacitively coupled to the second P- region 422 of the memory cell 12.
The third P- region 424 of the memory cell 12 may be coupled to a bit line (CN) 30 via a second P+ poly plug 430. The second P+ poly plug 430 may be directly coupled to the third P- region 424 of the memory cell 12. The N- region 426 of the memory cell 12 may be coupled to a carrier injection line (EP) 34 via an N+ region 434. The N+ region 434 may be directly coupled to the N- region 426 of the memory cell 12.

The first P- region 420, the second P- region 422, and the third P- region 424 may be formed of the same material or different materials. Also, the first P- region 420, the second P- region 422, and the third P- region 424 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 420, the second P- region 422, and the third P- region 424 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the first P- region 420, the second P- region 422, and/or the third P- region 424 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first P- region 420, the second P- region 422, and/or the third P- region 424 may be formed of a silicon material with acceptor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³.

The N- region 426 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the N- region 426 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorous impurities. In an exemplary embodiment, the N- region 426 may be formed of a silicon material with donor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³. In another exemplary embodiment, the N- region 426 may be formed of an undoped semiconductor material (e.g., intrinsic silicon).

The first P+ poly plug 432 and/or the second P+ poly plug 430 may be formed of the same material or different materials.
The first P+ poly plug 432 and the second P+ poly plug 430 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. The first P+ poly plug 432 and/or the second P+ poly plug 430 may couple voltage potentials from the source line (EN) 32 and the bit line (CN) 30, respectively, to the first P- region 420 and the third P- region 424 of the memory cell 12. In another exemplary embodiment, the first P+ poly plug 432 and/or the second P+ poly plug 430 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof. The first P+ poly plug 432 and/or the second P+ poly plug 430 may have a height extending from the first P- region 420 and the third P- region 424, respectively, to the source line (EN) 32 and the bit line (CN) 30.

The gate region 428 may be formed of a polycide material, a silicon material, a metal material, and/or a combination thereof. In another exemplary embodiment, the gate region 428 may be formed of a doped silicon layer. The gate region 428 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the gate region 428 may be formed of a silicon material doped with boron impurities.

The N+ region 434 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities. For example, the N+ region 434 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorous impurities. In an exemplary embodiment, the N+ region 434 may be formed of a silicon material with donor impurities having a concentration of 10²⁰ atoms/cm³ or higher.

Referring to Figure 5, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. The memory cell 12 illustrated in Figure 5 may be similar to the memory cell 12 illustrated in Figure 4, except that the memory cell 12 may comprise a plurality of undoped regions.
The plurality of undoped regions may comprise a first undoped region 520 coupled to a corresponding first P+ poly plug 432, a second undoped region 522 capacitively coupled to a corresponding gate region 428, and/or a third undoped region 524 coupled to a corresponding second P+ poly plug 430. The plurality of undoped regions may be formed of the same material or different materials. For example, the plurality of undoped regions (e.g., the first undoped region 520, the second undoped region 522, and/or the third undoped region 524) may be formed of an undoped semiconductor material (e.g., intrinsic silicon).

Referring to Figure 6, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an embodiment of the present disclosure. Figure 6 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first N- region 620, a second N- region 622, a third N- region 624, and/or a P+ region 626. The first N- region 620, the second N- region 622, the third N- region 624, and/or the P+ region 626 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by a P- substrate 130. In an exemplary embodiment, the second N- region 622 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28.

The first N- region 620 of the memory cell 12 may be coupled to the source line (EN) 32. The second N- region 622 of the memory cell 12 may be capacitively coupled to the word line (WL) 28.
The third N- region 624 of the memory cell 12 may be coupled to a bit line (CN) 30. The P+ region 626 of the memory cell 12 may be coupled to a carrier injection line (EP) 34.

The first N- region 620, the second N- region 622, and the third N- region 624 may be formed of the same material or different materials. Also, the first N- region 620, the second N- region 622, and the third N- region 624 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 620, the second N- region 622, and the third N- region 624 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 620, the second N- region 622, and/or the third N- region 624 may be formed of a silicon material with donor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³.

The P+ region 626 may be formed of at least one layer. In an exemplary embodiment, the P+ region 626 may comprise a plurality of layers. For example, the first layer of the P+ region 626 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the P+ region 626 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising acceptor impurities. For example, the first layer of the P+ region 626 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first layer of the P+ region 626 may be formed of a silicon material with acceptor impurities having a concentration of 10¹⁸ atoms/cm³ or above. The second layer of the P+ region 626 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof.
In an exemplary embodiment, the second layer of the P+ region 626 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof.The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of an N+ doped silicon layer. The source line (EN) 32 may provide voltage potentials to the first N- region 620 of the memory cells 12. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The source line (EN) 32 may be configured on a side portion of the first N- region 620.The word lines (WL) 28 may be capacitively coupled to the second N- region 622. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second N- region 622 of the memory cells 12.For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage/current source of the memory cell selection and control circuitry 38 to the second N- region 622 of the memory cell 12. 
In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation.

The bit line (CN) 30 may be coupled to the third N- region 624 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of an N+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12. The bit line (CN) 30 may be configured on a side portion of the third N- region 624. In an exemplary embodiment, the bit line (CN) 30 may be configured on an opposite side portion from the source line (EN) 32.

An oxide layer 128 may be formed on the P- substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein.
For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12.

In an exemplary embodiment, the P- substrate 130 may be made in the form of a P-well substrate. In another exemplary embodiment, the P- substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P- substrates 130 may form the base of the memory cell array 20 or a single P- substrate 130 may form the base of the memory cell array 20.

An insulating layer 132 may be formed on top of the P+ region 626. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the P+ region 626 to electrically insulate the P+ region 626.

Referring to Figure 7, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 7 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first N- region 720, a second N- region 722, a third N- region 724, and/or a P+ region 726. The first N- region 720, the second N- region 722, the third N- region 724, and/or the P+ region 726 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by an N+ substrate 130.
In an exemplary embodiment, the second N- region 722 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28.

The first N- region 720 of the memory cell 12 may be coupled to the source line (EN) 32. The second N- region 722 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third N- region 724 of the memory cell 12 may be coupled to a bit line (CN) 30. The P+ region 726 of the memory cell 12 may be coupled to a carrier injection line (EP) 34.

The first N- region 720, the second N- region 722, and the third N- region 724 may be formed of the same material or different materials. Also, the first N- region 720, the second N- region 722, and the third N- region 724 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 720, the second N- region 722, and the third N- region 724 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 720, the second N- region 722, and/or the third N- region 724 may be formed of a silicon material with donor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3.

The P+ region 726 may be made in the form of a P-well region. In another exemplary embodiment, the P+ region 726 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the one or more memory cells 12. For example, the P+ region 726 may form the base of a row or a column of memory cells 12 of the memory cell array 20. The P+ region 726 may comprise a continuous planar region configured above the N+ substrate 130. The P+ region 726 may also comprise a plurality of barrier walls formed on the continuous planar region.
The plurality of barrier walls of the P+ region 726 may be oriented in a column direction and/or a row direction of the memory cell array 20.

The source line (EN) 32 may be formed of at least one layer. In an exemplary embodiment, the source line (EN) 32 may comprise a plurality of layers. For example, the first layer of the source line (EN) 32 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the source line (EN) 32 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the first layer of the source line (EN) 32 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorus impurities. In an exemplary embodiment, the first layer of the source line (EN) 32 may be formed of a silicon material with donor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the source line (EN) 32 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the source line (EN) 32 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The source line (EN) 32 may be configured above the first N- region 720.

The word lines (WL) 28 may be capacitively coupled to the second N- region 722. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20).
For example, the word lines (WL) 28 may be arranged at two side portions of the second N- region 722 of the memory cells 12.

For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage potential/current source of the memory cell selection and control circuitry 38 to the second N- region 722 of the memory cell 12. In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation.

The bit line (CN) 30 may be coupled to the third N- region 724 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of an N+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12. The bit line (CN) 30 may be configured on a side portion of the third N- region 724.

An oxide layer 128 may be formed on the P+ region 726 and/or the N+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20.
For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. The first barrier wall oriented in a column direction may have a different height from the second barrier wall oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12.

In an exemplary embodiment, the N+ substrate 130 may be made in the form of an N-well substrate. In another exemplary embodiment, the N+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of N+ substrates 130 may form the base of the memory cell array 20 or a single N+ substrate 130 may form the base of the memory cell array 20.

An insulating layer 132 may be formed on top of the first N- region 720. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the first N- region 720 to electrically insulate the source line (EN) 32.

Referring to Figure 8, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an embodiment of the present disclosure.
Figure 8 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first P- region 820, a second P- region 822, a third P- region 824, and/or an N+ region 826. The first P- region 820, the second P- region 822, the third P- region 824, and/or the N+ region 826 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by an N+ substrate 130. In an exemplary embodiment, the second P- region 822 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28.

The first P- region 820 of the memory cell 12 may be coupled to the source line (EN) 32. The second P- region 822 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third P- region 824 of the memory cell 12 may be coupled to a bit line (CN) 30. The N+ region 826 of the memory cell 12 may be coupled to a carrier injection line (EP) 34.

The first P- region 820, the second P- region 822, and the third P- region 824 may be formed of the same material or different materials. Also, the first P- region 820, the second P- region 822, and the third P- region 824 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 820, the second P- region 822, and the third P- region 824 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. The first P- region 820, the second P- region 822, and/or the third P- region 824 may be formed of a silicon material doped with boron impurities.
In an exemplary embodiment, the first P- region 820, the second P- region 822, and/or the third P- region 824 may be formed of a silicon material with acceptor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3.

The N+ region 826 may be formed of at least one layer. In an exemplary embodiment, the N+ region 826 may comprise a plurality of layers. For example, the first layer of the N+ region 826 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the N+ region 826 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the first layer of the N+ region 826 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorus impurities. In an exemplary embodiment, the first layer of the N+ region 826 may be formed of a silicon material with donor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the N+ region 826 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the N+ region 826 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof.

The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of a P+ doped silicon layer. The source line (EN) 32 may provide voltage potentials to the first P- region 820 of the memory cells 12. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20).
The source line (EN) 32 may be configured on a side portion of the first P- region 820.

The word lines (WL) 28 may be capacitively coupled to the second P- region 822. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second P- region 822 of the memory cells 12.

For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of a P+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage/current source of the memory cell selection and control circuitry 38 to the second P- region 822 of the memory cell 12. In an exemplary embodiment, the first word line (WL) 28 arranged on a side portion of the second P- region 822 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 arranged on an opposite side portion of the second P- region 822 may implement a write logic high (e.g., binary "1" data state) operation.

The bit line (CN) 30 may be coupled to the third P- region 824 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of a P+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12.
The bit line (CN) 30 may be configured on a side portion of the third P- region 824. In an exemplary embodiment, the bit line (CN) 30 may be configured on an opposite side portion from the source line (EN) 32.

An oxide layer 128 may be formed on the N+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12.

In an exemplary embodiment, the N+ substrate 130 may be made in the form of an N-well substrate. In another exemplary embodiment, the N+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of N+ substrates 130 may form the base of the memory cell array 20 or a single N+ substrate 130 may form the base of the memory cell array 20.

An insulating layer 132 may be formed on top of the N+ region 826.
For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the N+ region 826 to electrically insulate the N+ region 826.

Referring to Figure 9, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 9 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first P- region 920, a second P- region 922, a third P- region 924, and/or an N+ region 926. The first P- region 920, the second P- region 922, the third P- region 924, and/or the N+ region 926 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by a P+ substrate 130. In an exemplary embodiment, the second P- region 922 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28.

The first P- region 920 of the memory cell 12 may be coupled to the bit line (CN) 30. The second P- region 922 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third P- region 924 of the memory cell 12 may be coupled to the source line (EN) 32. The N+ region 926 of the memory cell 12 may be coupled to a carrier injection line (EP) 34.

The first P- region 920, the second P- region 922, and the third P- region 924 may be formed of the same material or different materials.
Also, the first P- region 920, the second P- region 922, and the third P- region 924 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 920, the second P- region 922, and the third P- region 924 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the first P- region 920, the second P- region 922, and/or the third P- region 924 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first P- region 920, the second P- region 922, and/or the third P- region 924 may be formed of a silicon material with acceptor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3.

The N+ region 926 may be made in the form of an N-well region. In another exemplary embodiment, the N+ region 926 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the one or more memory cells 12. For example, the N+ region 926 may form the base of a row or a column of memory cells 12 of the memory cell array 20. The N+ region 926 may comprise a continuous planar region configured above the P+ substrate 130. The N+ region 926 may also comprise a plurality of barrier walls formed on the continuous planar region. The plurality of barrier walls of the N+ region 926 may be oriented in a column direction and/or a row direction of the memory cell array 20.

The bit line (CN) 30 may be formed of at least one layer. In an exemplary embodiment, the bit line (CN) 30 may comprise a plurality of layers. For example, the first layer of the bit line (CN) 30 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the bit line (CN) 30 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities.
For example, the first layer of the bit line (CN) 30 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorus impurities. In an exemplary embodiment, the first layer of the bit line (CN) 30 may be formed of a silicon material with donor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the bit line (CN) 30 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the bit line (CN) 30 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The bit line (CN) 30 may be configured above the first P- region 920.

The word lines (WL) 28 may be capacitively coupled to the second P- region 922. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second P- region 922 of the memory cells 12.

For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage potential/current source of the memory cell selection and control circuitry 38 to the second P- region 922 of the memory cell 12.
In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation.

The source line (EN) 32 may be coupled to the third P- region 924 of the memory cell 12. The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of a P+ doped silicon layer. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12. The source line (EN) 32 may be configured on a side portion of the third P- region 924.

An oxide layer 128 may be formed on the N+ region 926 and/or the P+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. The first barrier wall oriented in a column direction may have a different height from the second barrier wall oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein.
For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12.

In an exemplary embodiment, the P+ substrate 130 may be made in the form of a P-well substrate. In another exemplary embodiment, the P+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P+ substrates 130 may form the base of the memory cell array 20 or a single P+ substrate 130 may form the base of the memory cell array 20.

An insulating layer 132 may be formed on top of the first P- region 920. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the first P- region 920 to electrically insulate the bit line (CN) 30.

Referring to Figure 10, there are shown control signal voltage waveforms for performing a write operation on a memory cell 12 shown in Figure 2 in accordance with an embodiment of the present disclosure. For example, the various control signals may be configured to perform a write logic low (e.g., binary "0" data state) operation and/or a write logic high (e.g., binary "1" data state) operation. In an exemplary embodiment, various control signals may be applied to the memory cell 12 to perform one or more write logic low (e.g., binary "0" data state) operations on one or more selected memory cells 12. For example, the write logic low (e.g., binary "0" data state) operation may be performed on one or more selected memory cells 12 in order to deplete charge carriers that may have accumulated/stored in the floating body regions of the one or more selected memory cells 12.
Various voltage potentials may be applied to the various regions of the memory cell 12. In an exemplary embodiment, the voltage potentials applied to the first N- region 120, the third N- region 124, and/or the P- region 126 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised from the voltage potential applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised to -0.5V.

Under such biasing, the junction between the first N- region 120 and the second N- region 122 and the junction between the second N- region 122 and the third N- region 124 may be forward biased. The junction between the third N- region 124 and the P- region 126 may be reverse biased or weakly forward biased (e.g., above a reverse bias voltage and below a forward bias threshold voltage potential). The hole charge carriers that may have accumulated/stored in the second N- region 122 may flow to the first N- region 120 and/or the third N- region 124. Thus, the hole charge carriers that may have accumulated/stored in the second N- region 122 may be depleted via the first N- region 120 and/or the third N- region 124. By removing the hole charge carriers that may have accumulated/stored in the second N- region 122, a logic low (e.g., binary "0" data state) may be written to the memory cell 12.

After performing a write logic low (e.g., binary "0" data state) operation, the control signals may be configured to perform a hold operation in order to maintain a data state (e.g., a logic high (binary "1" data state)) stored in the memory cell 12. In particular, the control signals may be configured to perform a hold operation in order to maximize a retention time of a data state (e.g., a logic low (binary "0" data state)) stored in the memory cell 12.
Also, the control signals for the hold operation may be configured to eliminate or reduce activities or fields (e.g., electrical fields between junctions, which may lead to leakage of charges) within the memory cell 12. In an exemplary embodiment, during a hold operation, a negative voltage potential may be applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 of the memory cell 12, while the voltage potentials applied to the first N- region 120 via the source line (EN) 32, the third N- region 124 via the bit line (CN) 30, and/or the P- region 126 via the carrier injection line (EP) 34 may be maintained at 0V.

For example, the negative voltage potential applied to the word line (WL) 28 (e.g., capacitively coupled to the second N- region 122 of the memory cell 12) may be -2.0V. During the hold operation, the junction between the first N- region 120 and the second N- region 122 and the junction between the third N- region 124 and the second N- region 122 may be reverse biased in order to retain a data state (e.g., a logic high (binary "1" data state) or a logic low (binary "0" data state)) stored in the memory cell 12.

In another exemplary embodiment, control signals may be configured to write a logic high (e.g., binary "1" data state) to one or more selected memory cells 12 of one or more selected rows of the memory cell array 20. For example, the write logic high (e.g., binary "1" data state) operation may be performed on one or more selected rows of the memory cell array 20 or the entire memory cell array 20.
In another exemplary embodiment, a write logic high (e.g., binary "1" data state) operation may have control signals configured to cause accumulation/storage of hole charge carriers in the second N- region 122. In an exemplary embodiment, the voltage potential applied to the first N- region 120 of the memory cell 12 via the source line (EN) 32 and the voltage potential applied to the third N- region 124 via the bit line (CN) 30 may be maintained at the same voltage potentials as during the hold operation. For example, the voltage potentials applied to the first N- region 120 via the source line (EN) 32 and to the third N- region 124 via the bit line (CN) 30 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may also be maintained the same as during the hold operation. For example, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be maintained at -2.0V.

The voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be raised from the voltage potential applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be raised to approximately 0.7V to 0.9V from 0V. Under such biasing, the junction between the third N- region 124 and the P- region 126 may become forward biased. For example, the majority charge carriers (e.g., holes) may flow from the P- region 126 to the second N- region 122 via the third N- region 124. Thus, a predetermined amount of hole charge carriers may be accumulated/stored in the second N- region 122 via the P- region 126 and the third N- region 124.
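The hold, write logic low, and write logic high bias conditions described above can be summarized in a short sketch. The voltage values are the exemplary ones stated in the text; the dictionary layout and helper name are illustrative assumptions, not part of the disclosure.

```python
# Illustrative summary of the exemplary bias conditions described above.
# Terminals: WL = word line 28, EN = source line 32 (first N- region 120),
# CN = bit line 30 (third N- region 124), EP = carrier injection line 34
# (P- region 126). Voltages in volts; values are the exemplary ones from
# the text, not normative.

BIAS = {
    "hold":    {"WL": -2.0, "EN": 0.0, "CN": 0.0, "EP": 0.0},
    "write_0": {"WL": -0.5, "EN": 0.0, "CN": 0.0, "EP": 0.0},
    "write_1": {"WL": -2.0, "EN": 0.0, "CN": 0.0, "EP": 0.8},  # EP raised to ~0.7-0.9 V
}

def delta_from_hold(op):
    """Return the terminals whose potentials change relative to the hold state."""
    hold = BIAS["hold"]
    return {t: v for t, v in BIAS[op].items() if v != hold[t]}

print(delta_from_hold("write_0"))  # only WL is raised
print(delta_from_hold("write_1"))  # only EP is raised
```

As the sketch makes explicit, each write operation perturbs exactly one terminal relative to the hold state, which is why the remaining control signals can simply be "maintained" during the operation.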
The predetermined amount of charge carriers accumulated/stored in the second N- region 122 (e.g., capacitively coupled to the word line (WL) 28) may represent that a logic high (e.g., binary "1" data state) has been written to the memory cell 12.

Referring to Figure 11, there are shown control signal voltage waveforms for performing a read operation on a memory cell 12 shown in Figure 2 in accordance with an embodiment of the present disclosure. In an exemplary embodiment, control signals may be configured to perform a read operation of a data state (e.g., a logic low (binary "0" data state) and/or a logic high (binary "1" data state)) stored in one or more selected memory cells 12 of one or more selected rows of the memory cell array 20. The control signals may be configured to apply a predetermined voltage potential to implement a read operation via the bit line (CN) 30. In an exemplary embodiment, the voltage potential applied to the first N- region 120 via the source line (EN) 32 and the voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 and the voltage potential applied to the third N- region 124 may be raised from the voltage potentials applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised to -1.0V from -2.0V. The voltage potential applied to the third N- region 124 via the bit line (CN) 30 may be raised to 1.0V from 0V.

Under such biasing, when a logic low (e.g., binary "0" data state) is stored in the memory cell 12, the hole charge carriers accumulated/stored in the second N- region 122 during the hold operation may flow toward the third N- region 124.
The hole charge carriers that flowed to the third N- region 124 may cause an injection of electron charge carriers from the third N- region 124. The injection of electron charge carriers from the third N- region 124 may cause a current spike and may change a voltage potential on the bit line (CN) 30. A data sense amplifier in the data write and sense circuitry 36 may detect the small amount of voltage potential or current (e.g., compared to a reference voltage potential or current), or no voltage potential or current, via the bit line (CN) 30 coupled to the third N- region 124.

When a logic high (e.g., binary "1" data state) is stored in the memory cell 12, the predetermined amount of hole charge carriers (e.g., that may represent a logic high (binary "1" data state)) accumulated/stored in the second N- region 122 may flow toward the third N- region 124. The predetermined amount of hole charge carriers injected into the third N- region 124 may also cause an injection of electron charge carriers into the third N- region 124. The injection of electron charge carriers into the third N- region 124 may cause a current spike and may change a voltage potential on the bit line (CN) 30. A data sense amplifier in the data write and sense circuitry 36 may detect the generated voltage potential or current (e.g., compared to a reference voltage potential or current) via the bit line (CN) 30.

At this point it should be noted that the techniques for providing a semiconductor memory device in accordance with the present disclosure as described above typically involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software.
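The sense decision described above reduces to comparing the voltage or current generated on the bit line (CN) 30 against a reference. The sketch below illustrates only that comparison; the reference level and the current values are hypothetical placeholders, not values taken from the disclosure.

```python
# Minimal sketch of the data-sense decision described above: the sense
# amplifier in the data write and sense circuitry 36 compares the bit-line
# (CN) 30 response against a reference. All numeric values are hypothetical.

REFERENCE_CURRENT_UA = 5.0  # hypothetical reference level, microamps

def sense(bit_line_current_ua):
    """Classify the stored data state from the bit-line current spike."""
    # A stored "1" (holes accumulated in region 122) injects enough electron
    # charge carriers into region 124 to exceed the reference; a stored "0"
    # produces a small spike or none at all.
    return 1 if bit_line_current_ua > REFERENCE_CURRENT_UA else 0

print(sense(12.0))  # read as logic high
print(sense(0.3))   # read as logic low
```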
For example, specific electronic components may be employed in a semiconductor memory device or similar or related circuitry for implementing the functions associated with providing a semiconductor memory device in accordance with the present disclosure as described above. Alternatively, one or more processors operating in accordance with instructions may implement the functions associated with providing a semiconductor memory device in accordance with the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more processor readable media (e.g., a magnetic disk or other storage medium), or transmitted to one or more processors via one or more signals embodied in one or more carrier waves.

Further embodiments are set out in the following clauses:

1. A semiconductor memory device comprising: a plurality of memory cells arranged in an array of rows and columns, each memory cell comprising: a first region; a second region; a body region capacitively coupled to at least one word line and disposed between the first region and the second region; and a third region, wherein the third region is doped differently than the first region, the second region, and the body region.

2. The semiconductor memory device according to clause 1, wherein the first region is coupled to a first poly plug and the second region is coupled to a second poly plug.

3. The semiconductor memory device according to clause 1, wherein the first region, the second region, the body region, and the third region are arranged in a planar configuration.

4. The semiconductor memory device according to clause 3, wherein the first region, the second region, and the body region are doped with donor impurities.

5. The semiconductor memory device according to clause 4, wherein the third region is doped with acceptor impurities.

6.
The semiconductor memory device according to clause 5, wherein the first region, the second region, and the body region are undoped regions.

7. The semiconductor memory device according to clause 3, wherein the body region is coupled to a first doped region and the third region is coupled to a second doped region.

8. The semiconductor memory device according to clause 7, wherein the second doped region is doped with acceptor impurities having a concentration higher than the doped third region.

9. The semiconductor memory device according to clause 3, wherein the first region, the second region, and the body region are doped with acceptor impurities.

10. The semiconductor memory device according to clause 3, wherein the third region is doped with donor impurities.

11. The semiconductor memory device according to clause 10, wherein the first region, the second region, and the body region are undoped regions.

12. The semiconductor memory device according to clause 1, wherein the first region, the second region, and the body region are arranged in a vertical configuration.

13. The semiconductor memory device according to clause 12, wherein the first region, the second region, and the body region are doped with donor impurities.

14. The semiconductor memory device according to clause 13, wherein the third region is doped with acceptor impurities.

15. The semiconductor memory device according to clause 14, wherein the third region is made of a P-well region.

16. The semiconductor memory device according to clause 13, wherein the first region is coupled to a source line and the second region is coupled to a bit line.

17. The semiconductor memory device according to clause 16, wherein the source line and the bit line are arranged on opposite sides of the memory cell.

18. The semiconductor memory device according to clause 12, wherein the first region, the second region, and the body region are doped with acceptor impurities.

19.
The semiconductor memory device according to clause 18, wherein the third region is doped with donor impurities.

20. The semiconductor memory device according to clause 19, wherein the third region is made of an N-well region.

21. A method for biasing a semiconductor memory device comprising the steps of: applying a plurality of voltage potentials to a plurality of memory cells arranged in an array of rows and columns, wherein applying the plurality of voltage potentials to the plurality of memory cells comprises: applying a first voltage potential to a first region of each of the plurality of memory cells; applying a second voltage potential to a second region of each of the plurality of memory cells; applying a third voltage potential to a body region of each of the plurality of memory cells via at least one respective word line of the array that is capacitively coupled to the body region; and applying a fourth voltage potential to a third region.

22. The method according to clause 21, further comprising increasing the third voltage potential applied to the at least one respective word line during a hold operation in order to perform a write logic low operation.

23. The method according to clause 21, further comprising maintaining the first voltage potential, the second voltage potential, and the fourth voltage potential applied during a hold operation in order to perform a write logic low operation.

24. The method according to clause 21, further comprising increasing the fourth voltage potential applied during a hold operation in order to perform a write logic high operation.

25. The method according to clause 21, further comprising maintaining the first voltage potential, the second voltage potential, and the third voltage potential applied during a hold operation in order to perform a write logic high operation.

26.
The method according to clause 21, further comprising increasing the second voltage potential applied during a hold operation in order to perform a read operation.

27. The method according to clause 21, further comprising increasing the third voltage potential applied during a hold operation in order to perform a read operation.

The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.
Systems and methods are provided for conductively coupling at least three semiconductor dies included in a semiconductor package using a multi-die interconnect bridge that is embedded, disposed, or otherwise integrated into the semiconductor package substrate. The multi-die interconnect bridge is a passive device that includes passive electronic components such as conductors, resistors, capacitors, and inductors. The multi-die interconnect bridge communicably couples each of the at least three semiconductor dies to each of at least some of the remaining at least three semiconductor dies. The multi-die interconnect bridge occupies a first area on the surface of the semiconductor package substrate. The smallest of the at least three semiconductor dies coupled to the multi-die interconnect bridge occupies a second area on the surface of the semiconductor package substrate, where the second area is greater than the first area.
1. A semiconductor package comprising:

a semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness;

at least three semiconductor dies coupled to the semiconductor package substrate, wherein the smallest of the at least three semiconductor dies occupies a first physical area on the first surface of the semiconductor package substrate; and

a multi-die interconnect bridge comprising one or more electrically conductive members, the multi-die interconnect bridge disposed adjacent the first surface of the semiconductor package substrate and occupying a second physical area of the first surface of the semiconductor package substrate;

wherein the multi-die interconnect bridge electrically couples each of the at least three semiconductor dies to each of the remaining at least three semiconductor dies; and

wherein the second physical area occupied by the multi-die interconnect bridge is smaller than the first physical area of the smallest of the at least three semiconductor dies.

2. The semiconductor package of claim 1, wherein the one or more electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

3. The semiconductor package of claim 1, wherein the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining of the at least three semiconductor dies.

4. The semiconductor package of claim 1, wherein the multi-die interconnect bridge comprises at least one of: a silicon die at least partially embedded in the first surface of the semiconductor package substrate, and a silicon bridge integrally formed with the semiconductor package substrate.

5. The semiconductor package of any of claims 1 to 4, further comprising an active die communicably coupled to the multi-die interconnect bridge.

6. The semiconductor package of claim 5, wherein
the active die comprises at least one of: control circuitry or repeater circuitry.

7. A method of fabricating a semiconductor package, comprising:

disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent a first surface of a semiconductor package substrate, wherein the multi-die interconnect bridge occupies a first physical area of the first surface of the semiconductor package substrate; and

conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of electrically conductive members electrically couple each of the at least three semiconductor dies to the remaining of the at least three semiconductor dies;

wherein the smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and

wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area of the smallest of the at least three semiconductor dies.

8. The method of claim 7, wherein disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate further comprises: forming the multi-die interconnect bridge such that the plurality of electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

9. The method of claim 7, wherein conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge further comprises: conductively coupling each of the at least three semiconductor dies to a multi-die interconnect bridge that defines a shortest distance between each of the at least three semiconductor dies and the remaining of the at least three semiconductor dies.

10. The method of claim 7, wherein disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate comprises at least one of the following: at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge; and forming an integrated silicon bridge in the thickness of the semiconductor package substrate.

11. The method of any of claims 7 to 10, further comprising: conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

12. The method of claim 11, wherein conductively coupling at least one active semiconductor die to the multi-die interconnect bridge comprises: conductively coupling at least one active semiconductor die including at least one of control circuitry and repeater circuitry to the multi-die interconnect bridge.

13.
A semiconductor package fabrication system comprising:

means for disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent a first surface of a semiconductor package substrate, the multi-die interconnect bridge occupying a first physical area of the first surface of the semiconductor package substrate; and

means for conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of electrically conductive members electrically couple each of the at least three semiconductor dies to the remaining of the at least three semiconductor dies;

wherein the smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and

wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area of the smallest of the at least three semiconductor dies.

14. The system of claim 13, wherein the means for disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate further comprises: means for forming the multi-die interconnect bridge such that the plurality of electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

15. The system of claim 13, wherein the means for conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge further comprises: means for conductively coupling each of the at least three semiconductor dies to a multi-die interconnect bridge that defines a shortest distance between each of the at least three semiconductor dies and the remaining of the at least three semiconductor dies.

16. The system of claim 13, wherein the means for disposing a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate comprises at least one of the following: means for at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge; and means for forming an integrated silicon bridge in the thickness of the semiconductor package substrate.

17. The system of any of claims 13 to 16, further comprising: means for conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

18. The system of claim 17, wherein the means for conductively coupling at least one active semiconductor die to the multi-die interconnect bridge comprises: means for conductively coupling at least one active semiconductor die including at least one of control circuitry and repeater circuitry to the multi-die interconnect bridge.

19. An electronic device comprising:

a printed circuit board; and

a semiconductor package conductively coupled to the printed circuit board, the semiconductor package comprising:

a semiconductor package substrate coupled to the printed circuit board, the semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness;

at least three semiconductor dies included in the semiconductor package and coupled to the first surface of the semiconductor package substrate, wherein the smallest of the at least three semiconductor dies occupies a first physical area on the first surface of the semiconductor package substrate; and

a multi-die interconnect bridge disposed adjacent the first surface of the semiconductor package substrate, the multi-die interconnect bridge including one or more electrically conductive members and occupying a second physical area of the first surface of the semiconductor package substrate;

wherein the multi-die interconnect bridge electrically couples each of the at least three semiconductor dies to each of the remaining at least three semiconductor dies; and

wherein the second physical area occupied by the multi-die interconnect bridge is smaller than the first physical area of the smallest of the at least three semiconductor dies.

20. The electronic device of claim 19, wherein the one or more electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

21. The electronic device of claim 19, wherein the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining of the at least three semiconductor dies.

22. The electronic device of claim 19, wherein the multi-die interconnect bridge comprises at least one of: a silicon die at least partially embedded in the first surface of the semiconductor package substrate, and a silicon bridge integrally formed with the semiconductor package substrate.

23. The electronic device of any of claims 19 to 22, wherein the semiconductor package further comprises an active die communicably coupled to the multi-die interconnect bridge.

24. The electronic device of claim 23, wherein the active die comprises at least one of: control circuitry and repeater circuitry.
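The claims recite two structural constraints that can be checked mechanically: the multi-die interconnect bridge must occupy less area than the smallest of the (at least three) dies it couples, and it must couple every die directly to every other die, without routing through an intermediate die. The sketch below illustrates both checks; the function name, die names, and area values are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the two structural constraints recited in the claims: bridge
# area smaller than the smallest die, and a direct link between every die
# pair. All names and numeric values below are illustrative only.

from itertools import combinations

def check_package(die_areas_mm2, bridge_area_mm2, bridge_links):
    """die_areas_mm2: dict die_name -> area; bridge_links: set of frozenset pairs."""
    assert len(die_areas_mm2) >= 3, "the claims require at least three dies"
    # Area constraint: bridge smaller than the smallest die it couples.
    area_ok = bridge_area_mm2 < min(die_areas_mm2.values())
    # Direct coupling: the bridge must provide a link for every die pair,
    # so no signal ever passes through an intermediate die.
    needed = {frozenset(p) for p in combinations(die_areas_mm2, 2)}
    connectivity_ok = needed <= bridge_links
    return area_ok and connectivity_ok

dies = {"cpu": 100.0, "gpu": 80.0, "mem": 25.0}      # hypothetical areas, mm^2
links = {frozenset(p) for p in combinations(dies, 2)}  # all three die pairs
print(check_package(dies, 10.0, links))   # bridge smaller than the 25 mm^2 die
print(check_package(dies, 30.0, links))   # bridge larger than the smallest die
```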
Bridge hub splicing architecture

Technical field

The present disclosure relates to semiconductor packages.

Background

Next-generation data centers are trending toward systems that offer greater computing power, operational flexibility, and improved power efficiency. The combination of requirements presented by next-generation data centers poses a considerable challenge to current general-purpose servers. The ever-increasing demand for reduced system complexity, commercial agility, and scalability has increased the need for virtualized data center infrastructure, which will place additional demands on next-generation data servers. To meet these changing demands, next-generation servers can be designed to address specific workload matrices. However, such task-oriented or service-oriented designs compromise the long-term flexibility of next-generation servers even as they improve power efficiency. Therefore, servers used in next-generation data centers must provide cost-effective solutions that address current and future computing needs while delivering improved power efficiency over traditional servers and a flexible platform able to meet evolving operational requirements.

The challenges presented by the increasing popularity of Internet of Things (IoT) devices are strikingly similar to those presented by next-generation data centers. With billions of connected devices, cloud-based infrastructure must quickly evaluate high-bandwidth data streams and determine which data should be processed and which data can be safely discarded.

Next-generation platforms share several common requirements: increased bandwidth; increased flexibility for increased functionality; improved power efficiency (or reduced power consumption); and a reduced footprint. To date, designers have addressed these changing needs by packaging additional components on a standard printed circuit board.
The limitations inherent in such single-board solutions make them unsatisfactory for addressing the multiple demands placed on next-generation devices. Such limitations include: chip-to-chip bandwidth limits based on interconnect density; the power required to drive long traces between chips; and the increased physical size of the printed circuit board needed to accommodate the chips. Monolithic integration of system components provides a potential solution; however, such integration is not always practical, because the individual system components may evolve at different rates. For example, a logic chip built using newer process technology may not be easily integrated into, or suitable for monolithic fabrication with, a memory chip built using an older process technology.

Conventional solutions therefore fail to meet all future demands for higher bandwidth, greater power efficiency, increased functionality, and increased operational flexibility in physically smaller packages.

Drawings

The features and advantages of the various embodiments of the claimed subject matter will be apparent from the accompanying drawings, in which:

FIG. 1A is a schematic diagram of an illustrative system including at least three semiconductor dies, each semiconductor die being electrically coupled to the remaining semiconductor dies using a multi-die interconnect bridge at least partially disposed in a semiconductor package substrate, in accordance with at least one embodiment described herein;

FIG. 1B is a cross-sectional elevational view of the illustrative system depicted in FIG. 1A along section line 1B-1B, in accordance with at least one embodiment described herein;

FIG. 2A is a plan view of an illustrative semiconductor package 200 including four semiconductor dies, each having a corresponding PHY layer transceiver electrically coupled to a single centrally located multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 2B is a schematic illustration of the communication paths provided by the single centrally located multi-die interconnect bridge depicted in FIG. 2A, in accordance with at least one embodiment described herein;

FIG. 3 is a plan view of a system including a semiconductor package in which a total of nine semiconductor dies are communicably coupled together using only four multi-die interconnect bridges, in accordance with at least one embodiment described herein;

FIG. 4 is a plan view of a system including a semiconductor package in which a total of sixteen semiconductor dies are communicably coupled together using only five multi-die interconnect bridges, in accordance with at least one embodiment described herein;

FIG. 5A is a plan view of an illustrative semiconductor package having a first non-conventional configuration that includes three rectangular semiconductor dies conductively coupled to a single triangular multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 5B is a plan view of an illustrative semiconductor package having a second non-conventional configuration that includes four rectangular semiconductor dies conductively coupled to a single cross-shaped multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 5C is a plan view of an illustrative semiconductor package having a third non-conventional configuration that includes three triangular semiconductor dies conductively coupled to a single triangular multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 5D is a plan view of an illustrative semiconductor package having a fourth non-conventional configuration that includes six triangular semiconductor dies conductively coupled to a single hexagonal multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 5E is a plan view of an illustrative semiconductor package having a fifth non-conventional configuration that includes four semiconductor dies conductively coupled to a single cross-shaped multi-die interconnect bridge, in accordance with at least one embodiment described herein;

FIG. 6A is a plan view of an illustrative system including a semiconductor package in which a single multi-die interconnect bridge that includes an active die is electrically coupled to four semiconductor dies, in accordance with at least one embodiment described herein;

FIG. 6B is a cross-sectional elevational view of the illustrative semiconductor package depicted in FIG. 6A along section line 6B-6B, in accordance with at least one embodiment described herein;

FIG. 7 is a schematic diagram of an illustrative electronic device including a system in chip (SiC) that includes a multi-die interconnect bridge conductively coupling a graphics processing unit, processor circuitry, and system memory, as described in FIGS. 1 through 6, in accordance with at least one embodiment described herein;

FIG. 8 is a flow chart of an illustrative method of fabricating a semiconductor package (such as an in-chip system) incorporating at least one multi-die interconnect bridge communicably coupling at least three semiconductor dies, in accordance with at least one embodiment described herein; and

FIG. 9 is a high-level flow diagram of an illustrative method of electrically coupling one or more active dies to a passive multi-die interconnect bridge, in accordance with at least one embodiment described herein.

While the following detailed description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

Detailed description

The systems and methods described herein facilitate the coupling of various semiconductor dies ("chiplets") within a semiconductor package using a multi-die interconnect bridge disposed in the surface of a semiconductor package substrate. The bridge communicably couples three or more semiconductor dies such that each of the three or more semiconductor dies is conductively coupled to each of the remaining three or more semiconductor dies, and such that the conductive coupling between any two of the at least three semiconductor dies does not pass through any other semiconductor die included in the at least three semiconductor dies. The multi-die interconnect bridge can be formed as a separate silicon die that is at least partially embedded in the semiconductor package substrate during the package fabrication process, or it can be integrally formed with the semiconductor package substrate during the substrate fabrication process. Each of the at least three semiconductor dies conductively coupled to the multi-die interconnect bridge can occupy the same or different physical areas on the surface of the semiconductor package substrate. The multi-die interconnect bridge can occupy a physical area of the surface of the semiconductor package substrate that is smaller than that of the physically smallest semiconductor die of the at least three semiconductor dies.

The use of a multi-die interconnect bridge that electrically couples at least three semiconductor dies occupies a very small footprint on the semiconductor package substrate, thereby permitting greater density and thus a reduced package footprint.
The physical proximity of the component semiconductor dies coupled to the bridge shortens the interconnect length, beneficially improving bandwidth and power efficiency. A multi-die interconnect bridge that conductively couples at least three semiconductor dies does not require through-silicon vias (TSVs), which beneficially improves signal quality and bandwidth when compared to conventional silicon interposer layers. The use of such a bridge also permits the selective use of fine-pitch microbumps for high-density communications and coarser-pitch flip-chip bumps for power and ground connections. The multi-die interconnect bridge may include only conductors that directly couple each of the at least three semiconductor dies to the remaining semiconductor dies. Alternatively, the multi-die interconnect bridge may include one or more active components, such as control circuitry and/or repeater circuitry, electrically coupled between one or more of the at least three semiconductor dies and the remaining semiconductor dies. The use of a multi-die interconnect bridge also decouples each of the at least three semiconductor dies from the others, permitting the use of hybrid-architecture dies in a single package, something that is not possible with monolithic fabrication techniques. For example, the use of multi-die interconnect bridges permits a logic chip fabricated using 14 nanometer (nm) technology to interoperate and be conductively coupled with a memory chip fabricated using 40 nm technology and a graphics processing unit (GPU) fabricated using 28 nm technology.
Thus, individual semiconductor die assemblies can be mixed and matched as needed to provide a flexible system architecture that meets energy and performance criteria. Smaller multi-die interconnect bridges are generally less expensive and less prone to manufacturing problems, such as warpage, than physically larger silicon interposers. Further, for each signal connected to a ball coupled to the semiconductor package substrate, a silicon interposer requires a corresponding through-silicon via (TSV). Such TSVs increase package manufacturing complexity, which in turn increases yield loss and adversely affects overall commercial viability. In addition, the use of a large number of TSVs results in poor signal integrity for high-speed signals and causes IR drops in the power delivery network. TSVs also add series resistance and capacitance, which compromises the high-speed design of transceiver blocks on the semiconductor die. A semiconductor package is provided.
The semiconductor package can include: a semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness; at least three semiconductor dies coupled to the semiconductor package substrate, wherein a smallest of the at least three semiconductor dies occupies a first physical area on the first surface of the semiconductor package substrate; and a multi-die interconnect bridge including one or more electrically conductive members, the bridge disposed adjacent the first surface of the semiconductor package substrate and occupying a second physical area of the first surface of the semiconductor package substrate; wherein the multi-die interconnect bridge conductively couples each of the at least three semiconductor dies to each of the remaining semiconductor dies; and wherein the second physical area occupied by the multi-die interconnect bridge is less than the first physical area occupied by the smallest of the at least three semiconductor dies. A method of fabricating a semiconductor package is provided.
The method can include: disposing a multi-die interconnect bridge that includes a plurality of conductive members adjacent a first surface of a semiconductor package substrate, the multi-die interconnect bridge occupying a first physical area of the first surface of the semiconductor package substrate; and conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of conductive members conductively couple each of the at least three semiconductor dies to the remaining semiconductor dies; wherein a smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area occupied by the smallest of the at least three semiconductor dies. A semiconductor package fabrication system is provided. The semiconductor package fabrication system can include: a component for disposing a multi-die interconnect bridge that includes a plurality of conductive members adjacent a first surface of a semiconductor package substrate, the multi-die interconnect bridge occupying a first physical area of the first surface of the semiconductor package substrate; and a component for conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of conductive members conductively couple each of the at least three semiconductor dies to the remaining semiconductor dies; wherein a smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area occupied by the smallest of the at least three semiconductor dies. An
electronic device including a semiconductor package having at least one multi-die interconnect bridge is provided. The electronic device can include: a printed circuit board; and a semiconductor package electrically coupled to the printed circuit board, the semiconductor package comprising: a semiconductor package substrate coupled to the printed circuit board, the semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness; at least three semiconductor dies included in the semiconductor package and coupled to the first surface of the semiconductor package substrate, a smallest of the at least three semiconductor dies occupying a first physical area on the first surface of the semiconductor package substrate; and a multi-die interconnect bridge disposed adjacent the first surface of the semiconductor package substrate, the bridge comprising one or more electrically conductive members and occupying a second physical area of the first surface of the semiconductor package substrate; wherein the multi-die interconnect bridge conductively couples each of the at least three semiconductor dies to each of the remaining semiconductor dies; and wherein the second physical area occupied by the multi-die interconnect bridge is less than the first physical area occupied by the smallest of the at least three semiconductor dies. As used herein, the terms "top," "bottom," "upper," "lower," "lowermost," and "uppermost," when used with respect to one or more elements, are intended to convey a relative rather than absolute physical configuration. Thus, an element described as the "uppermost element" or "top element" in a device may instead form the "lowermost element" or "bottom element" in the device when the device is inverted.
Similarly, an element described as the "lowermost element" or "bottom element" in a device may instead form the "uppermost element" or "top element" in the device when the device is inverted. As used herein, the term "logically associated," when used in reference to a number of objects, systems, or elements, is intended to convey the existence of a relationship between the objects, systems, or elements such that access to one object, system, or element exposes the remaining objects, systems, or elements having a "logical association" with the accessed object, system, or element. An example "logical association" exists between relational databases, where access to an element in a first database can provide information and/or data from one or more additional databases, each having an identified relationship to the accessed element. In another example, if "A" is logically associated with "B," accessing "A" will expose or otherwise retrieve information and/or data from "B," and vice versa. FIG. 1A is a schematic diagram of an illustrative system 100 that includes at least three semiconductor dies 110A-110D (collectively, "semiconductor dies 110") and a multi-die interconnect bridge 120 at least partially disposed in a semiconductor package substrate 130, the bridge conductively coupling each semiconductor die to the remaining semiconductor dies, in accordance with at least one embodiment described herein. FIG. 1B is a cross-sectional elevational view of the illustrative system 100 depicted in FIG. 1A along section line 1B-1B, in accordance with at least one embodiment described herein. In an embodiment, the semiconductor package 102 can be electrically coupled to a substrate 140, such as a circuit board or the like.
In an embodiment, the multi-die interconnect bridge 120 can include a silicon die fabricated separately from the semiconductor package substrate 130 and at least partially embedded in the first surface 132 of the semiconductor package substrate 130. In other embodiments, the multi-die interconnect bridge 120 can be fabricated integrally with the semiconductor package substrate 130. The multi-die interconnect bridge 120 provides bidirectional communication paths 122A-122n (collectively, "communication paths 122") between each of the at least three semiconductor dies 110 and some or all of the remaining semiconductor dies 110. Beneficially, a bidirectional communication path 122 conductively coupled between any of the semiconductor dies 110 through the multi-die interconnect bridge 120 does not pass beneath any intermediate die included in the at least three semiconductor dies 110. In an embodiment, the multi-die interconnect bridge 120 defines a shortest communication path 122 between any two of the at least three semiconductor dies 110. By providing the shortest direct communication path 122 between each of the at least three semiconductor dies 110 coupled to the multi-die interconnect bridge 120, power loss is reduced, power efficiency is increased, and communication bandwidth is maximized. The semiconductor dies 110 can include any number, combination, and/or type of currently available and/or future-developed dies. Example semiconductor dies 110 include, but are not limited to, one or more of the following: a central processing unit (CPU); an application-specific integrated circuit (ASIC); a field-programmable gate array (FPGA); a transceiver; a flash memory; a dynamic random access memory (DRAM); and the like. In an embodiment, the semiconductor dies 110 may form a system-in-package (SiP) semiconductor package 102.
In an embodiment, the semiconductor dies 110 disposed on the semiconductor package substrate 130 may integrate chiplets or semiconductor dies 110 from different process nodes in a single package. For example, in contrast to a monolithic semiconductor package architecture, the systems and methods described herein permit integration of semiconductor dies 110 having different process geometries (14 nanometers, 20 nanometers, 28 nanometers, 40 nanometers, etc.) in a single semiconductor package 102. The ability to quickly and efficiently change the semiconductor dies 110 included in the semiconductor package 102 beneficially improves manufacturing flexibility and market responsiveness. For example, a semiconductor package 102 fabricated for a first customer may include one or more Peripheral Component Interconnect Express (PCIe) third-generation transceivers, while a second customer requires the same semiconductor package 102 with one or more semiconductor dies 110 having optical and/or pulse-amplitude-modulated (e.g., PAM-4) transceivers. Because a semiconductor die 110 can be easily replaced without completely reworking the semiconductor package 102, time to market is reduced and market responsiveness is beneficially improved. Similarly, generational technology improvements (e.g., 3G to 4G to 5G improvements in cellular communication technology) can be easily incorporated into the semiconductor package 102 without costly and time-consuming redesign of the entire package. Each of the at least three semiconductor dies 110 includes any number of contact elements 112A-112n (tabs, lands, pads, trenches, pins, slots, etc., collectively referred to as "contact elements 112") disposed in, on, around, or across at least a portion of a lower surface of the semiconductor die 110.
In an embodiment, the semiconductor dies 110 may be communicably coupled to the multi-die interconnect bridge 120 using relatively small conductive structures 154A-154n (collectively, "conductive structures 154"), such as microbumps compatible with fine-pitch and/or high-density connection configurations. In some embodiments, such high-density connections can be used for inter-die communication among some or all of the at least three semiconductor dies 110 coupled to the multi-die interconnect bridge 120. In an embodiment, the semiconductor dies 110 may be conductively coupled to the multi-die interconnect bridge 120 using relatively large conductive structures 152A-152n (collectively, "conductive structures 152"), such as solder balls compatible with relatively coarse-pitch and/or relatively low-density connection configurations. In such embodiments, the low-density connections may be useful for inter-die power distribution and/or grounding among some or all of the at least three semiconductor dies 110 coupled to the multi-die interconnect bridge 120. In an embodiment, some or all of the at least three semiconductor dies 110 may be fabricated using flip-chip fabrication techniques in which circuitry is formed in each die on a wafer, metallization pads are formed over the dies, conductive structures 152 and/or 154 are deposited, patterned, or otherwise formed on the metallization pads, and the dies are singulated to form the semiconductor dies 110.
The singulated semiconductor dies 110 are positioned on the semiconductor package substrate 130 and/or the multi-die interconnect bridge 120, and the conductive structures 152 and/or 154 are reflowed to physically attach and conductively bond the at least three semiconductor dies 110 to the multi-die interconnect bridge 120 and the semiconductor package substrate 130. The use of a central multi-die interconnect bridge 120 that conductively couples at least three semiconductor dies 110 can reduce the number of transceivers patterned or otherwise formed in, on, or around each of the at least three semiconductor dies 110. For example, in a conventional layout, a multi-die interconnect bridge connects two die-to-die transceivers patterned or otherwise formed in adjacent semiconductor dies. Thus, a square semiconductor die surrounded by four other semiconductor dies (one on each side) can require up to four (4) different die-to-die transceivers, one for each multi-die interconnect bridge. The systems and methods described herein reduce the number of required transceivers on each die to one, freeing the die area previously occupied by the three (3) additional die-to-die transceivers. In an embodiment, each of the at least three semiconductor dies 110 occupies a defined area on the upper or first surface 132 of the semiconductor package substrate 130. Each of the at least three semiconductor dies 110 can occupy the same or a different area on the first surface 132 of the semiconductor package substrate 130. At least one of the at least three semiconductor dies 110 may occupy less surface area on the first surface 132 of the semiconductor package substrate 130 than the remaining semiconductor dies 110.
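The per-die transceiver savings described above can be illustrated with a short counting sketch. This is a hypothetical model, not part of the disclosed package: with conventional two-die bridges, a die needs one die-to-die transceiver per bridge it participates in, while a shared multi-die interconnect bridge requires only one transceiver per die.

```python
# Hypothetical sketch of the per-die transceiver count discussed above.
# Conventional layout: one transceiver per two-die bridge attached to a die.
# Shared multi-die bridge: a single transceiver per die serves every path.

def transceivers_conventional(num_bridges_on_die: int) -> int:
    # one die-to-die transceiver for each two-die bridge this die touches
    return num_bridges_on_die

def transceivers_shared_bridge() -> int:
    # one transceiver serves all communication through the shared bridge
    return 1

# A square die surrounded by four other dies (one on each side) would
# participate in four two-die bridges in the conventional layout:
saved = transceivers_conventional(4) - transceivers_shared_bridge()
print(saved)  # prints 3: the area of three transceivers is freed on the die
```

This matches the passage's count: four transceivers reduced to one frees the area previously occupied by three die-to-die transceivers.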
Thus, the one of the at least three semiconductor dies 110 occupying the least surface area of the first surface 132 of the semiconductor package substrate can be considered the "smallest" die of the at least three semiconductor dies 110. The multi-die interconnect bridge 120 conductively couples each of the at least three semiconductor dies 110 to each of the remaining semiconductor dies using contact elements 124A-124n (tabs, pads, trenches, pins, sockets, etc., collectively referred to as "contact elements 124") disposed in, on, around, or across at least a portion of the upper surface of the multi-die interconnect bridge 120. In an embodiment, the multi-die interconnect bridge 120 can be fabricated as a silicon die that is at least partially deposited in, inserted into, or otherwise embedded in the first surface 132 of the semiconductor package substrate 130 during substrate fabrication. In other embodiments, the multi-die interconnect bridge 120 can be fabricated integrally in the first surface 132 of the semiconductor package substrate 130. In still other embodiments, the multi-die interconnect bridge 120 can include a silicon member that is electrically isolated from the semiconductor package substrate 130 and disposed adjacent the first surface 132 of the semiconductor package substrate 130. The multi-die interconnect bridge 120 can include any number of conductive members (traces, components, wires, conductors, etc.) that provide any number of communication paths 122A-122n between at least some of the at least three semiconductor dies 110A-110n. The conductive members (which form the communication paths 122) may be formed, patterned, deposited, or otherwise disposed in any number of layers or similar structures. Further, the multi-die interconnect bridge 120 includes any number and/or combination of contact elements 124A-124n (tabs, pads, trenches, slots, etc.)
that are conductively coupled to the at least three semiconductor dies 110 (collectively, "contact elements 124"). The multi-die interconnect bridge 120 can have any physical geometry, size, and/or shape suitable for physical and conductive coupling to the at least three semiconductor dies 110. For example, the multi-die interconnect bridge 120 can have a square, rectangular, triangular, circular, elliptical, or polygonal configuration. As depicted in FIGS. 1A and 1B, the multi-die interconnect bridge 120 includes a plurality of relatively short communication paths 122 that conductively couple the semiconductor die 110A to some or all of the remaining semiconductor dies 110B-110D. In an embodiment, a communication path 122 may define the shortest distance between any two of the at least three semiconductor dies 110. The multi-die interconnect bridge 120 facilitates integration in a heterogeneous package by conductively coupling the at least three semiconductor dies 110 using ultra-high-density interconnects. Due to the overall reduction in input/output (I/O), the multi-die interconnect bridge 120 enables placement or positioning of the contact elements 112 near the edges of the at least three semiconductor dies 110. This geometry promotes precise physical coupling of the at least three semiconductor dies 110 and creates the shortest possible communication paths 122 between them. The shortened communication paths 122 reduce the load on the drive buffers, thereby improving performance relative to other solutions, such as silicon interposers, in which a substantially increased communication path length increases the load on the drive buffers and impedes performance. In an embodiment, the multi-die interconnect bridge 120 can include any number and/or combination of contact elements (tabs, pads, trenches, slots, etc.)
to accept insertion of one or more active components (not depicted in FIGS. 1A and 1B). In an embodiment, such an active component can be disposed such that communication between any one of the at least three semiconductor dies coupled to the multi-die interconnect bridge 120 and any other of the at least three semiconductor dies takes place through the active component. In other embodiments, such active components can be disposed such that communication between selected dies of the at least three semiconductor dies 110 passes through the active components, while communication between the remaining semiconductor dies 110 does not pass through the active components. Example active components include, but are not limited to, silicon dies that include control circuitry and/or repeater circuitry. The multi-die interconnect bridge 120 can be fabricated using one or more dielectric or electrically insulating materials. In some embodiments, the multi-die interconnect bridge 120 can be fabricated as a silicon die. In some embodiments, the multi-die interconnect bridge 120 can be fabricated as a structure or member that includes one or more conductive layers and one or more dielectric layers. In some embodiments, the communication paths 122 through the multi-die interconnect bridge 120 can include a plurality of traces patterned and/or deposited using any currently available or future-developed patterning and/or deposition process. The conductive elements forming the communication paths 122 may include one or more metallic or non-metallic electrically conductive materials.
Example electrically conductive materials include, but are not limited to: copper; alloys or compounds comprising copper; aluminum; alloys or compounds comprising aluminum; electrically conductive polymers; and the like. In an embodiment, the area of the first surface 132 of the semiconductor package substrate 130 occupied by the multi-die interconnect bridge 120 is less than the area of the first surface 132 occupied by the smallest of the at least three semiconductor dies 110. Thus, unlike conventional silicon interposers, the multi-die interconnect bridge 120 is less prone to quality issues such as warpage and does not require the use of through-silicon vias, thereby simplifying the manufacturing process and reducing overall manufacturing complexity. The at least three semiconductor dies 110 are conductively coupled to contact elements 174A-174n (tabs, pads, trenches, pins, slots, etc., collectively referred to as "contact elements 174") disposed in, on, around, or across all or a portion of the first surface 132 of the semiconductor package substrate 130. Conductive elements 172A-172n conductively couple the contact elements 174 on the first surface 132 of the semiconductor package substrate 130 to contact elements 176A-176n (tabs, pads, trenches, pins, slots, etc., collectively referred to as "contact elements 176") disposed in, on, around, or across all or a portion of the lower or second surface 134 of the semiconductor package substrate 130.
The first surface 132 and the second surface 134 are laterally opposed across the thickness of the semiconductor package substrate 130. In an embodiment, the multi-die interconnect bridge 120 can include a semiconductor die or similar pre-fabricated structure that is disposed, positioned, placed, or otherwise adhered to the semiconductor package substrate 130 such that the upper surface of the multi-die interconnect bridge 120 is parallel to the first surface 132 of the semiconductor package substrate 130. In an embodiment, the upper surface of the multi-die interconnect bridge 120 can be coplanar with the first surface 132 of the semiconductor package substrate 130. In other embodiments, the upper surface of the multi-die interconnect bridge 120 may protrude from or be recessed beneath the first surface 132 of the semiconductor package substrate 130. In other embodiments, the multi-die interconnect bridge 120 can include one or more structures (conductors, vias, etc.) that are integrally formed with the semiconductor package substrate 130. In such embodiments, the upper surface of the multi-die interconnect bridge 120 can be coplanar with the first surface 132 of the semiconductor package substrate 130. In some embodiments, the multi-die interconnect bridge 120 can be electrically coupled to one or more circuits and/or conductive elements 172 disposed in the semiconductor package substrate 130. In some embodiments, the multi-die interconnect bridge 120 can be electrically coupled to one or more of: the contact elements 174 disposed on the upper surface 132 of the semiconductor package substrate 130 and/or the contact elements 176 disposed on the lower surface 134 of the semiconductor package substrate 130. The semiconductor package substrate 130 can include any number and/or combination of electronic components, semiconductor devices, and/or logic elements formed into one or more circuits.
In some embodiments, the semiconductor package substrate 130 can include any number of interleaved patterned conductive and dielectric layers. Any number of conductive structures 170A-170n (solder balls, solder bumps, clips, wires, etc., collectively referred to as "conductive structures 170") can physically and/or electrically couple the semiconductor package substrate to a substrate 140, such as a printed circuit board, a motherboard, a daughterboard, or the like. In at least some embodiments, the substrate 140 can form all or a portion of a processor-based electronic device, such as a portable electronic device or smartphone. FIG. 2A is a plan view of an illustrative semiconductor package 200 including four semiconductor dies 110A-110D having corresponding PHY-layer transceivers 210A-210D (collectively, "transceivers 210") conductively coupled to a single central multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. FIG. 2B is a schematic diagram of the communication paths provided by the single, centrally located multi-die interconnect bridge 120 depicted in FIG. 2A, in accordance with at least one embodiment described herein. As depicted in FIGS. 2A and 2B, using only a single PHY-layer transceiver 210 on each of the semiconductor dies 110, direct bidirectional communication paths 122A-122F exist between each semiconductor die and the remaining semiconductor dies. In an embodiment, using only a single transceiver 210 on each die, the multi-die interconnect bridge 120 permits direct two-way communication between any two of the semiconductor dies 110A-110D. This represents a significant reduction in the die surface area dedicated to the transceivers 210 as compared to previous designs, in which each communication path 122 between two semiconductor dies 110 required a separate transceiver on each die. Thus, instead of the single transceiver depicted in FIG.
2A, in a conventional multi-die semiconductor package arrangement, semiconductor die 110A would have a first transceiver for conductive coupling to semiconductor die 110B and a second transceiver for conductive coupling to semiconductor die 110C. Input/output contacts can be disposed in peripheral regions 220A-220D of each of the semiconductor dies 110A-110D. FIG. 3 is a plan view of a system 300 including a semiconductor package 102 in which a total of nine semiconductor dies 110A-110I are communicatively coupled together using only four multi-die interconnect bridges 120A-120D, in accordance with at least one embodiment described herein. As depicted in FIG. 3, each of the multi-die interconnect bridges 120A-120D is conductively coupled to four different semiconductor dies 110. Thus, communication between any two of the nine semiconductor dies 110 requires passing through at most a single intermediate semiconductor die 110. Using the configuration depicted in FIG. 3, the systems and methods described herein provide communication between the diagonally opposite dies 110A and 110I using only communication paths 122A and 122B, which traverse two multi-die interconnect bridges 120A and 120D and a single intermediate semiconductor die 110E. Under a more conventional bridging architecture (shown by the dashed lines in FIG. 3), the interconnect bridges conductively couple only laterally (non-diagonally) adjacent semiconductor dies 110. Such an arrangement or architecture would require communication paths 320A, 320B, 320C, and 320D to pass through a total of eight (8) transceivers 322A-322H and across three intermediate semiconductor dies 110D, 110G, and 110H.
Thus, in comparison to conventional bridges that conductively couple only laterally adjacent semiconductor dies (as opposed to the present systems and methods, which conductively couple semiconductor dies both laterally and diagonally), the systems and methods described herein reduce power consumption, reduce latency, increase the available semiconductor die area, and improve performance, addressing major communication and performance issues in multi-die semiconductor packages. FIG. 4 is a plan view of a system 400 including a semiconductor package 102 in which a total of sixteen semiconductor dies 110A-110P are communicatively coupled together using only six multi-die interconnect bridges 120A-120E, in accordance with at least one embodiment described herein. As depicted in FIG. 4, each of the multi-die interconnect bridges 120A-120E is conductively coupled to four different semiconductor dies 110. Thus, communication between any two of the sixteen semiconductor dies 110 requires passing through at most two intermediate semiconductor dies 110. For example, using the configuration depicted in FIG. 4, the systems and methods described herein provide communication between the diagonally opposite dies 110A and 110P using communication paths 122A, 122B, and 122C, which traverse three multi-die interconnect bridges (120A, 120C, and 120D) and two intermediate semiconductor dies (110F and 110K). Under a more conventional bridging architecture (shown by the dashed lines in FIG. 4), the interconnect bridges conductively couple only laterally (non-diagonally) adjacent semiconductor dies 110. Such an arrangement or architecture would require six communication paths 420A-420F passing through a total of twelve transceivers 422A-422L and across five intermediate semiconductor dies (110E, 110I, 110M, 110N, and 110O).
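The hop-count advantage of the bridged arrangements can be sketched with a small graph model. This is an illustrative model of a 3 x 3 die grid only (grid layout and adjacency rules are assumptions, not taken from the figures): a four-die bridge at each 2 x 2 cluster makes any two dies sharing a bridge one hop apart, whereas lateral-only bridging links only orthogonally adjacent dies.

```python
# Hypothetical sketch of the hop-count comparison for a 3 x 3 die grid.
# "Diagonal" bridging: a four-die interconnect bridge at each 2 x 2 cluster,
# so any two dies within one row AND one column of each other share a bridge.
# "Lateral" bridging: only orthogonally adjacent dies are linked.
from collections import deque

DIES = [(r, c) for r in range(3) for c in range(3)]

def neighbors(die, diagonal_bridges):
    r, c = die
    if diagonal_bridges:
        # dies sharing some 2x2 bridge with this die (king-move adjacency)
        return [(r2, c2) for (r2, c2) in DIES
                if (r2, c2) != die and abs(r2 - r) <= 1 and abs(c2 - c) <= 1]
    # lateral-only: orthogonal adjacency
    return [(r2, c2) for (r2, c2) in DIES
            if abs(r2 - r) + abs(c2 - c) == 1]

def hops(src, dst, diagonal_bridges):
    """Breadth-first search for the minimum number of die-to-die hops."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        die, dist = queue.popleft()
        if die == dst:
            return dist
        for n in neighbors(die, diagonal_bridges):
            if n not in seen:
                seen.add(n)
                queue.append((n, dist + 1))

# Diagonally opposite corner dies (analogous to 110A and 110I in FIG. 3):
bridged = hops((0, 0), (2, 2), diagonal_bridges=True)   # 2 hops
lateral = hops((0, 0), (2, 2), diagonal_bridges=False)  # 4 hops
print(bridged - 1, lateral - 1)  # intermediate dies: 1 vs 3
```

The result (one intermediate die with diagonal bridging versus three with lateral-only bridging) matches the FIG. 3 comparison in the passage; the 4 x 4 case of FIG. 4 follows the same pattern with larger counts.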
Again, in comparison to conventional bridges that conductively couple only laterally adjacent semiconductor dies, the systems and methods described herein reduce power consumption, reduce latency, increase the available semiconductor die area, and improve performance in multi-die semiconductor packages. FIG. 5A is a plan view of an illustrative semiconductor package 500A having a first non-conventional configuration including three rectangular semiconductor dies 110A-110C conductively coupled to a single triangular multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. FIG. 5B is a plan view of an illustrative semiconductor package 500B having a second non-conventional configuration including four rectangular semiconductor dies 110A-110D conductively coupled to a single cross-shaped multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. FIG. 5C is a plan view of an illustrative semiconductor package 500C having a third non-conventional configuration including three triangular semiconductor dies 110A-110C conductively coupled to a single triangular multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. FIG. 5D is a plan view of an illustrative semiconductor package 500D having a fourth non-conventional configuration including six triangular semiconductor dies 110A-110F conductively coupled to a single hexagonal multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. FIG. 5E is a plan view of an illustrative semiconductor package 500E having a fifth non-conventional configuration including four semiconductor dies 110A-110D conductively coupled to a single cross-shaped multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. As depicted by the illustrative semiconductor package configurations of FIGS. 5A-5E, the systems and methods described herein are not limited to conventional geometries and can be adapted to a variety of semiconductor die shapes, sizes, and configurations. Similarly, the multi-die interconnect bridge 120 can have any shape, size, or physical geometry that provides sufficient overlap with each semiconductor die 110 to permit the semiconductor die 110 to be attached to the multi-die interconnect bridge 120 via the conductive structures 154. FIG. 6A is a plan view of an illustrative system 600 including a semiconductor package 102 in which a single multi-die interconnect bridge 120 that includes an active die 610 is conductively coupled to four semiconductor dies 110A-110D, in accordance with at least one embodiment described herein. FIG. 6B is a cross-sectional elevational view of the illustrative semiconductor package 102 depicted in FIG. 6A along section line 6B-6B, in accordance with at least one embodiment described herein. In an embodiment, the multi-die interconnect bridge 120 can include passive electronic components. Passive electronic components include, but are not limited to, passive electrical components such as conductors, resistors, inductors, capacitors, and the like. One or more active components, such as the active die 610, may be electrically coupled to the multi-die interconnect bridge 120. Such an active die 610 can include circuitry such as controller circuitry, repeater circuitry, filter circuitry, amplifier circuitry, and the like.
Power for the active die 610 may be supplied by one or more semiconductor dies 110 via the multi-die interconnect bridge 120. In at least some embodiments, one or more signals 620A can be supplied to the multi-die interconnect bridge 120 by the first semiconductor die 110A. All or a portion of the signal may be provided to the active die 610 as input signal 630. The active die 610 can provide output signal 640 to the multi-die interconnect bridge 120. Signal 620B can then be provided to the second semiconductor die 110B via the multi-die interconnect bridge 120. For example, the first semiconductor die 110A can generate a signal 620A that is provided to the multi-die interconnect bridge 120. One or more filters (LC filters, RC filters, RL filters, RLC filters, etc.) may be formed in the multi-die interconnect bridge 120 using any number and/or combination of passive components, such as resistors, capacitors, and/or inductors. The filtered signal forms an input signal 630 to the active repeater die 610. The higher-energy output signal 640 from the repeater die is passed to the second semiconductor die 110B via the multi-die interconnect bridge 120 as signal 620B. While the active component 610 is depicted as electrically coupled to the upper surface of the multi-die interconnect bridge 120, in other embodiments the active component 610 can be electrically coupled to the upper surface of the multi-die interconnect bridge 120, the lower surface of the multi-die interconnect bridge 120, or any combination thereof.

FIG. 7 is a schematic diagram of an illustrative electronic device 700 including a system-in-chip (SiC) 102 with a multi-die interconnect bridge 120 conductively coupled to a graphics processing unit 710, processor circuitry 712, and system memory 740, as described in FIGS. 1 through 6, in accordance with at least one embodiment described herein.
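As a concrete illustration of the filter example above, a first-order RC low-pass section built from the bridge's passive resistors and capacitors has a cutoff frequency f_c = 1/(2*pi*R*C) and magnitude response 1/sqrt(1 + (f/f_c)^2). The component values below are assumptions chosen for illustration only, not values from this disclosure:

```python
import math

def rc_lowpass_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC low-pass cutoff: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_lowpass_gain(f_hz: float, r_ohms: float, c_farads: float) -> float:
    """Magnitude response |H(f)| = 1 / sqrt(1 + (f/f_c)^2)."""
    fc = rc_lowpass_cutoff_hz(r_ohms, c_farads)
    return 1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2)

# Hypothetical values: 50-ohm trace resistance, 3.2 pF bridge capacitance.
fc = rc_lowpass_cutoff_hz(50.0, 3.2e-12)
print(f"cutoff ~ {fc / 1e9:.2f} GHz")
print(f"gain at f_c: {rc_lowpass_gain(fc, 50.0, 3.2e-12):.3f}")  # ~0.707 (-3 dB)
```

With these assumed values the cutoff lands near 1 GHz, and the gain at the cutoff is 1/sqrt(2), the usual -3 dB point of a first-order section.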
The following discussion provides a brief, general description of the components forming an illustrative electronic device 702, such as a smart phone, a wearable computing device, a portable computing device, or any similar device having at least one system-in-chip (SiC) 102 that includes a multi-die interconnect bridge 120. In an embodiment, the multi-die interconnect bridge 120 may be partially or fully deployed in the substrate 130, to which the graphics processing unit 710, the processor circuitry 712, and the system memory 740 are operatively coupled and physically adhered.

The electronic device 702 includes processor circuitry 712 capable of executing machine-readable instruction sets 714, reading data and/or instruction sets 714 from one or more storage devices 760, and writing data to the one or more storage devices 760. Those skilled in the relevant art will appreciate that the illustrated embodiments, as well as other embodiments, can be practiced with other circuit-based device configurations, including portable electronic or handheld electronic devices, such as smart phones, portable computers, wearable computers, microprocessor-based or programmable consumer electronics, personal computers ("PCs"), network PCs, minicomputers, mainframe computers, and the like.

The processor circuitry 712 can include any number of hardwired or configurable circuits, some or all of which can include a programmable and/or configurable combination of electronic components, semiconductor devices, and/or logic elements partially or fully deployed in a PC, server, or other computing system capable of executing processor-readable instructions.

The electronic device 702 includes a bus or similar communication link 716 that communicatively couples the various system components (including the SiC 102, one or more wireless I/O interfaces 720, one or more wired I/O interfaces 730, one or more storage devices 760, and/or one or more network interfaces 770) and facilitates the exchange of information and/or data between them.
The electronic device 702 may be referred to herein in the singular, but this is not intended to limit the embodiments to a single electronic device and/or system, since in some embodiments there may be more than one electronic device 702 that incorporates, includes, or contains any number of communicatively coupled, collocated, or remotely networked circuits or devices.

The SiC 102 includes a multi-die interconnect bridge 120 communicatively coupled to the graphics processing unit 710, the processor circuitry 712, and the system memory 740. In embodiments, a greater or lesser number of components may be included in the SiC 102. The graphics processing unit ("GPU") 710 can include any number and/or combination of systems and/or devices capable of generating video output signals at a wired or wireless video output interface 711.

The processor circuitry 712 can include any number, type, or combination of devices. At times, the processor circuitry 712 may be implemented in whole or in part in the form of semiconductor devices such as diodes, transistors, inductors, capacitors, and resistors. Such implementations may include, but are not limited to, any current or future developed single- or multi-core processor or microprocessor, such as one or more systems-on-a-chip (SOCs), central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), programmable logic units, field programmable gate arrays (FPGAs), and the like. Unless described otherwise, the construction and operation of the various blocks shown in FIG. 7 are of conventional design. Consequently, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art.
The bus 716 that interconnects at least some of the components of the electronic device 702 can employ any known serial or parallel bus structure or architecture. The system memory 740 can include read-only memory ("ROM") 742 and random access memory ("RAM") 746. A portion of the ROM 742 can be used to store or otherwise retain a basic input/output system ("BIOS") 744. The BIOS 744 provides basic functionality to the electronic device 702, for example by causing the processor circuitry 712 to load one or more machine-readable instruction sets 714. In embodiments, at least one of the one or more machine-readable instruction sets 714 causes at least a portion of the processor circuitry 712 to be provided, created, produced, converted, and/or function as a dedicated, specific, and particular machine, for example a word processing machine, a digital image acquisition machine, a media playback machine, a gaming system, a communication device, or the like.

The electronic device 702 can include at least one wireless input/output (I/O) interface 720. The at least one wireless I/O interface 720 can be communicatively coupled to one or more physical output devices 722 (haptic devices, video displays, audio output devices, hard copy output devices, etc.). The at least one wireless I/O interface 720 can be communicatively coupled to one or more physical input devices 724 (pointing devices, touch screens, keyboards, haptic devices, etc.). The at least one wireless I/O interface 720 can include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to, BLUETOOTH®, Near Field Communication (NFC), and the like.

The electronic device 702 can include one or more wired input/output (I/O) interfaces 730. At least one wired I/O interface 730 can be communicatively coupled to one or more physical output devices 722 (haptic devices, video displays, audio output devices, hard copy output devices, etc.).
At least one wired I/O interface 730 can be communicatively coupled to one or more physical input devices 724 (pointing devices, touch screens, keyboards, haptic devices, etc.). The wired I/O interface 730 can include any currently available or future developed I/O interface. Example wired I/O interfaces include, but are not limited to, Universal Serial Bus (USB) and the like.

The electronic device 702 can include one or more communicatively coupled non-transitory data storage devices 760. The data storage devices 760 can include one or more hard disk drives and/or one or more solid-state storage devices. The one or more data storage devices 760 can include any current or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 760 include any current or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some implementations, the one or more data storage devices 760 can include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of being communicatively coupled to and decoupled from the electronic device 702.

The one or more data storage devices 760 can include interfaces or controllers (not shown) that communicatively couple the respective storage device or system to the bus 716. The one or more data storage devices 760 can store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor circuitry 712 and/or one or more applications executed on or by the processor circuitry 712.
In some instances, one or more data storage devices 760 can be communicatively coupled to the processor circuitry 712, for example via the bus 716 or via one or more wired communication interfaces 730 (e.g., Universal Serial Bus or USB), one or more wireless communication interfaces 720 (e.g., Bluetooth®, Near Field Communication, or NFC), and/or one or more network interfaces 770 (IEEE 802.3 or Ethernet, IEEE 802.11 or WiFi®, etc.).

Processor-readable instruction sets 714 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 740. Such instruction sets 714 may be transferred, in whole or in part, from the one or more data storage devices 760. The instruction sets 714 may be loaded, stored, or otherwise retained in the system memory 740, in whole or in part, during execution by the processor circuitry 712. The processor-readable instruction sets 714 can include machine-readable and/or processor-readable code, instructions, or similar logic capable of providing the functions and capabilities described herein.

The electronic device 702 can include power management circuitry 750 that controls one or more operational aspects of the energy storage device 752. In embodiments, the energy storage device 752 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, the energy storage device 752 can include one or more supercapacitors or ultracapacitors. In embodiments, the power management circuitry 750 can alter, adjust, or control the flow of energy from an external power source 754 to the energy storage device 752 and/or to the electronic device 702.
The power source 754 can include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.

For convenience, the SiC 102, the wireless I/O interface 720, the wired I/O interface 730, the power management circuitry 750, the storage device 760, and the network interface 770 are illustrated as communicatively coupled to each other via the bus 716, thereby providing connectivity between the above-described components. In alternative embodiments, the above-described components may be communicatively coupled in a different manner than illustrated in FIG. 7. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other via one or more intermediary components (not shown). In another example, one or more of the above-described components may be integrated into the SiC 102 and communicatively coupled to other components via the multi-die interconnect bridge 120. In some embodiments, all or a portion of the bus 716 can be omitted and the components coupled directly to each other using suitable wired or wireless connections.

FIG. 8 is a high-level flow diagram of an illustrative method 800 of fabricating a semiconductor package 102, such as a system-in-chip, incorporating at least one multi-die interconnect bridge 120 that communicatively couples at least three semiconductor dies 110, in accordance with at least one embodiment described herein. The multi-die interconnect bridge 120 can be at least partially embedded in or otherwise incorporated into the semiconductor package substrate 130. The multi-die interconnect bridge 120 electrically couples each of the at least three semiconductor dies 110 to each of the remaining semiconductor dies 110, advantageously providing the shortest distance between each of the semiconductor dies 110.
In an embodiment, the multi-die interconnect bridge 120 is a passive bridge formed in, on, or about a silicon die (i.e., a bridge that does not include inherently active components). In an embodiment, the multi-die interconnect bridge 120 can be fabricated independently of the semiconductor package 102 and incorporated into the semiconductor package substrate 130 during the package fabrication or assembly process. The method 800 begins at 802.

At 804, the multi-die interconnect bridge 120 is deployed, positioned, patterned, or otherwise adhered, soldered, or attached to the first surface of the semiconductor package substrate 130, deposited thereupon, or at least partially embedded therein. In an embodiment, the multi-die interconnect bridge 120 can include any number and/or combination of passive components (conductors, resistors, capacitors, inductors, etc.) deployed in, on, or about a silicon die. In an embodiment, the multi-die interconnect bridge 120 can include any number and/or combination of passive components integrally formed in, on, or about the semiconductor package substrate 130. In an embodiment, the multi-die interconnect bridge 120 can include any number and/or combination of passive components deployed as a single layer or multiple layers in a layered dielectric structure, such as a circuit board.

In some embodiments, the multi-die interconnect bridge 120 can be deployed in a recessed region formed in the first surface 132 of the semiconductor package substrate 130. In such embodiments, the upper surface of the multi-die interconnect bridge 120 may protrude above the first surface 132, may be recessed below the first surface 132, or may be coplanar with the first surface 132 of the semiconductor package substrate 130.
In some embodiments, the multi-die interconnect bridge 120 can be physically adhered to the first surface 132 of the semiconductor package substrate 130, for example via chemical bonding.

The multi-die interconnect bridge 120 can have any physical geometry, size, and/or shape. For example, the multi-die interconnect bridge 120 can have a rectangular, circular, elliptical, triangular, polygonal, or trapezoidal physical geometry. The multi-die interconnect bridge 120 can have any thickness and any longitudinal and lateral dimensions. In at least some embodiments, the physical geometry, thickness, lateral dimensions, and longitudinal dimensions of the multi-die interconnect bridge 120 can be based at least in part on the physical size, shape, and/or configuration of the multi-die interconnect bridge 120 and of the external contact elements 112A-112n disposed on the semiconductor dies 110. In an embodiment, the multi-die interconnect bridge 120 can electrically couple each of the at least three semiconductor dies 110 to each of the remaining at least three semiconductor dies 110. In other embodiments, the multi-die interconnect bridge 120 can selectively electrically couple each of some or all of the at least three semiconductor dies 110 to each of at least some of the remaining at least three semiconductor dies 110.

At 806, the at least three semiconductor dies 110 are electrically coupled to the multi-die interconnect bridge 120. In an embodiment, each of the semiconductor dies 110 can have a plurality of contact elements 112 patterned, deposited, formed, or otherwise deployed in, on, about, or across at least a portion of an exterior surface of the respective semiconductor die 110. Conductive structures, including solder balls 152 and/or solder bumps 154, can be electrically coupled to some or all of the contact elements 112.
In at least some embodiments, at least some of the electrically conductive structures (e.g., solder bumps 154) can be reflow soldered to electrically couple the semiconductor dies 110 to the multi-die interconnect bridge 120. In at least some embodiments, at least some of the electrically conductive structures (e.g., solder balls 152) can be reflow soldered to electrically couple the semiconductor dies 110 to the semiconductor package substrate 130. In embodiments, other conductive coupling methods can be used to electrically couple the semiconductor dies 110 to the multi-die interconnect bridge 120.

The multi-die interconnect bridge 120 occupies a first area on the first surface 132 of the semiconductor package substrate 130. The smallest of the at least three semiconductor dies 110 occupies a second area on the first surface 132 of the semiconductor package substrate 130. In an embodiment, the first area (occupied by the multi-die interconnect bridge 120) is smaller than the second area (occupied by the smallest of the at least three semiconductor dies 110). The method 800 ends at 808.

FIG. 9 is a high-level flow diagram of an illustrative method 900 of electrically coupling one or more active dies 610 to a passive multi-die interconnect bridge 120, in accordance with at least one embodiment described herein. The method 900 can be used in conjunction with the method 800 discussed above with respect to FIG. 8. In an embodiment, one or more active dies 610, such as one or more dies including control circuitry and/or repeater circuitry, may be electrically coupled to the passive multi-die interconnect bridge 120 to provide additional functionality. The method 900 begins at 902.

At 904, the active die 610 (i.e., a die including at least one active electronic and/or semiconductor component) is conductively coupled to the multi-die interconnect bridge 120. In an embodiment, at least a portion of the communication path 122 through the multi-die interconnect bridge 120 passes through the active die 610.
In other embodiments, signals transmitted between selected semiconductor dies 110 pass through the active die 610. For example, semiconductor dies 110A, 110B, and 110C may be conductively coupled to the multi-die interconnect bridge 120. The communication path 122A-B between dies 110A and 110B passes through the active die 610 coupled to the multi-die interconnect bridge 120, while the communication path 122A-C between dies 110A and 110C and the communication path 122B-C between dies 110B and 110C pass through the multi-die interconnect bridge 120 but do not pass through the active die 610. In other embodiments, all of the communication paths 122 through the multi-die interconnect bridge 120 pass through the active die 610. The method 900 ends at 912.

Although FIGS. 8 and 9 illustrate various operations in accordance with one or more embodiments, it is to be understood that not all of the operations depicted in FIGS. 8 and 9 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGS. 8 and 9, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.

As used in this application and in the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B, and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C. As used in this application and in the claims, a list of items joined by the term "at least one of" can mean any combination of the listed terms.
For example, the phrase "at least one of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C.

Any of the operations described herein may be implemented in a system that includes one or more storage media (e.g., non-transitory storage media) having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that the operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk (including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks), semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, solid-state disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software executed by a programmable control device.

Thus, the present disclosure is directed to systems and methods for electrically coupling at least three semiconductor dies included in a semiconductor package using a multi-die interconnect bridge that is embedded, deployed, or otherwise incorporated into the semiconductor package substrate. The multi-die interconnect bridge is a passive device that includes passive electronic components such as conductors, resistors, capacitors, and inductors.
The multi-die interconnect bridge communicatively couples each semiconductor die included in the at least three semiconductor dies to each of at least some of the remaining at least three semiconductor dies. Active silicon dies, such as dies containing control circuitry and/or repeater circuitry, can be coupled to the multi-die interconnect bridge to provide additional functionality. The multi-die interconnect bridge occupies a first area on the surface of the semiconductor package substrate. The smallest of the at least three semiconductor dies coupled to the multi-die interconnect bridge 120 occupies a second area on the surface of the semiconductor package substrate, wherein the second area is larger than the first area.

The application provides the following technical solutions:

Technical Solution 1. A semiconductor package comprising:
a semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness;
at least three semiconductor dies coupled to the semiconductor package substrate, wherein the smallest of the at least three semiconductor dies occupies a first physical area on the first surface of the semiconductor package substrate; and
a multi-die interconnect bridge comprising one or more electrically conductive members, disposed adjacent the first surface of the semiconductor package substrate and occupying a second physical area on the first surface of the semiconductor package substrate;
wherein the multi-die interconnect bridge electrically couples each of the at least three semiconductor dies to each of the remaining at least three semiconductor dies; and
wherein the second physical area occupied by the multi-die interconnect bridge is smaller than the first physical area of the smallest of the at least three semiconductor dies.

Technical Solution 2. The semiconductor package of claim 1, wherein the one or more conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Technical Solution 3. The semiconductor package of claim 1, further comprising an active die communicably coupled to the multi-die interconnect bridge.

Technical Solution 4. The semiconductor package of claim 3, wherein the active die comprises at least one of control circuitry and repeater circuitry.

Technical Solution 5. The semiconductor package of claim 1, wherein the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Technical Solution 6. The semiconductor package of claim 1, wherein the multi-die interconnect bridge comprises a silicon die at least partially embedded in the first surface of the semiconductor package substrate.

Technical Solution 7. The semiconductor package of claim 1, wherein the multi-die interconnect bridge comprises a silicon bridge integrally formed with the semiconductor package substrate.

Technical Solution 8. A method of fabricating a semiconductor package, comprising:
deploying a multi-die interconnect bridge comprising a plurality of conductive members adjacent a first surface of a semiconductor package substrate, wherein the multi-die interconnect bridge occupies a first physical area on the first surface of the semiconductor package substrate; and
conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of conductive members electrically couple each of the at least three semiconductor dies to the remaining ones of the at least three semiconductor dies;
wherein the smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and
wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area of the smallest of the at least three semiconductor dies.

Technical Solution 9. The method of claim 8, wherein deploying the multi-die interconnect bridge comprising the plurality of conductive members near the first surface of the semiconductor package substrate further comprises:
forming the multi-die interconnect bridge comprising the plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate such that the plurality of electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Technical Solution 10. The method of claim 8, further comprising:
conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

Technical Solution 11. The method of claim 10, wherein electrically coupling the at least one active semiconductor die to the multi-die interconnect bridge comprises:
electrically coupling at least one active semiconductor die including at least one of control circuitry and repeater circuitry to the multi-die interconnect bridge.

Technical Solution 12. The method of claim 8, wherein electrically coupling each of the at least three semiconductor dies to the multi-die interconnect bridge further comprises:
conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge, the multi-die interconnect bridge defining a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Technical Solution 13. The method of claim 8, wherein deploying the multi-die interconnect bridge comprising the plurality of conductive members near the first surface of the semiconductor package substrate comprises:
at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge.

Technical Solution 14. The method of claim 8, wherein deploying the multi-die interconnect bridge comprising the plurality of conductive members near the first surface of the semiconductor package substrate comprises:
forming an integrated silicon bridge in the thickness of the semiconductor package substrate.

Technical Solution 15. A semiconductor package fabrication system, comprising:
means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent a first surface of a semiconductor package substrate, wherein the multi-die interconnect bridge occupies a first physical area on the first surface of the semiconductor package substrate; and
means for conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of conductive members electrically couple each of the at least three semiconductor dies to the remaining ones of the at least three semiconductor dies;
wherein the smallest of the at least three semiconductor dies occupies a second physical area on the first surface of the semiconductor package substrate; and
wherein the first physical area occupied by the multi-die interconnect bridge is smaller than the second physical area of the smallest of the at least three semiconductor dies.

Technical Solution 16. The system of claim 15, wherein the means for deploying the multi-die interconnect bridge comprising the plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate further comprises:
means for forming the multi-die interconnect bridge comprising the plurality of conductive members adjacent the first surface of the semiconductor package substrate such that the plurality of conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Technical Solution 17.
The system of claim 15, further comprising: means for conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

Technical Solution 18. The system of claim 17, wherein the means for conductively coupling the at least one active semiconductor die to the multi-die interconnect bridge comprises: means for conductively coupling at least one active semiconductor die that includes at least one of control circuitry or repeater circuitry to the multi-die interconnect bridge.

Technical Solution 19. The system of claim 15, wherein the means for conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge further comprises: means for conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge such that the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Technical Solution 20. The system of claim 15, wherein the means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate comprises: means for at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge.

Technical Solution 21. The system of claim 15, wherein the means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate comprises: means for forming an integrated silicon bridge in the thickness of the semiconductor package substrate.

Technical Solution 22. 
An electronic device, comprising: a printed circuit board; and a semiconductor package conductively coupled to the printed circuit board, the semiconductor package comprising: a semiconductor package substrate coupled to the printed circuit board, the semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness; at least three semiconductor dies included in the semiconductor package and coupled to the first surface of the semiconductor package substrate, wherein a smallest of the at least three semiconductor dies occupies a first physical region on the first surface of the semiconductor package substrate; and a multi-die interconnect bridge disposed adjacent the first surface of the semiconductor package substrate, the multi-die interconnect bridge including one or more electrically conductive members and occupying a second physical region of the first surface of the semiconductor package substrate; wherein the multi-die interconnect bridge electrically couples each of the at least three semiconductor dies to each of the remaining ones of the at least three semiconductor dies; and wherein the second physical region occupied by the multi-die interconnect bridge is smaller than the first physical region occupied by the smallest of the at least three semiconductor dies.

Technical Solution 23. The electronic device of claim 22, wherein the one or more electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Technical Solution 24. The electronic device of claim 23, wherein the semiconductor package further comprises an active die communicatively coupled to the multi-die interconnect bridge.

Technical Solution 25. The electronic device of claim 24, wherein the active die comprises at least one of the following: control circuitry and repeater circuitry.

The following examples relate to further embodiments. 
The following examples of the present disclosure may include subject matter such as at least one apparatus, a method, at least one machine-readable medium storing instructions that, when executed, cause a machine to perform acts based on the method, means for performing acts based on the method, and/or a system for providing an externally accessible test wire bond in a semiconductor package mounted on a substrate.

According to Example 1, a semiconductor package is provided. The semiconductor package includes: a semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness; at least three semiconductor dies coupled to the semiconductor package substrate, wherein a smallest of the at least three semiconductor dies occupies a first physical region on the first surface of the semiconductor package substrate; and a multi-die interconnect bridge comprising one or more electrically conductive members deployed adjacent the first surface of the semiconductor package substrate and occupying a second physical region of the first surface of the semiconductor package substrate; wherein the multi-die interconnect bridge conductively couples each of the at least three semiconductor dies to each of the remaining ones of the at least three semiconductor dies; and wherein the second physical region occupied by the multi-die interconnect bridge is smaller than the first physical region occupied by the smallest of the at least three semiconductor dies.

Example 2 can include the elements of Example 1, wherein the one or more conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Example 3 can include the elements of any of Examples 1 and 2, and the semiconductor package can additionally include an active die communicatively coupled to the 
multi-die interconnect bridge.

Example 4 can include the elements of any of Examples 1 to 3, wherein the active die comprises control circuitry.

Example 5 can include the elements of any of Examples 1 to 4, wherein the active die comprises repeater circuitry.

Example 6 can include the elements of any of Examples 1 to 5, wherein the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Example 7 can include the elements of any of Examples 1 to 6, wherein the multi-die interconnect bridge comprises a silicon die at least partially embedded in the first surface of the semiconductor package substrate.

Example 8 can include the elements of any of Examples 1 to 7, wherein the multi-die interconnect bridge comprises a silicon bridge integrally formed with the semiconductor package substrate.

According to Example 9, a semiconductor package fabrication method is provided. The method includes: deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent a first surface of a semiconductor package substrate, the multi-die interconnect bridge occupying a first physical region of the first surface of the semiconductor package substrate; and conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of electrically conductive members electrically couple each of the at least three semiconductor dies to the remaining ones of the at least three semiconductor dies; wherein a smallest of the at least three semiconductor dies occupies a second physical region on the first surface of the semiconductor package substrate; and wherein the first physical region occupied by the multi-die interconnect bridge is smaller than the second physical region occupied by the smallest of the at least three semiconductor dies.

Example 10 can include the elements of Example 9, wherein forming the multi-die interconnect bridge 
including the plurality of conductive members near the first surface of the semiconductor package substrate further comprises: forming the multi-die interconnect bridge comprising the plurality of conductive members adjacent the first surface of the semiconductor package substrate such that the plurality of conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Example 11 can include the elements of any of Examples 9 or 10, and the method can additionally include conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

Example 12 can include the elements of any of Examples 9 to 11, wherein conductively coupling the at least one active semiconductor die to the multi-die interconnect bridge can include: conductively coupling at least one active semiconductor die that includes control circuitry to the multi-die interconnect bridge.

Example 13 can include the elements of any of Examples 9 to 12, wherein conductively coupling the at least one active semiconductor die to the multi-die interconnect bridge can include: conductively coupling at least one active semiconductor die that includes repeater circuitry to the multi-die interconnect bridge.

Example 14 can include the elements of any of Examples 9 to 13, wherein conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge further comprises: conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge such that the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Example 15 can include the elements of any of Examples 9 to 14, wherein deploying a multi-die interconnect bridge including a plurality of 
conductive members near the first surface of the semiconductor package substrate can include: at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge.

Example 16 can include the elements of any of Examples 9 to 15, wherein deploying a multi-die interconnect bridge including a plurality of conductive members near the first surface of the semiconductor package substrate can include: forming an integral silicon bridge in the thickness of the semiconductor package substrate.

According to Example 17, a semiconductor package fabrication system is provided. The system can include: means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent a first surface of a semiconductor package substrate, the multi-die interconnect bridge occupying a first physical region of the first surface of the semiconductor package substrate; and means for conductively coupling each of at least three semiconductor dies to the multi-die interconnect bridge such that the plurality of electrically conductive members electrically couple each of the at least three semiconductor dies to the remaining ones of the at least three semiconductor dies; wherein a smallest of the at least three semiconductor dies occupies a second physical region on the first surface of the semiconductor package substrate; and wherein the first physical region occupied by the multi-die interconnect bridge is smaller than the second physical region occupied by the smallest of the at least three semiconductor dies.

Example 18 can include the elements of Example 17, wherein the means for forming a multi-die interconnect bridge comprising a plurality of conductive members 
adjacent the first surface of the substrate such that the plurality of conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Example 19 can include the elements of any of Examples 17 or 18, and the system can further include: means for conductively coupling at least one active semiconductor die to the multi-die interconnect bridge.

Example 20 can include the elements of any of Examples 17 to 19, wherein the means for conductively coupling the at least one active semiconductor die to the multi-die interconnect bridge can comprise: means for conductively coupling at least one active semiconductor die that includes control circuitry to the multi-die interconnect bridge.

Example 21 can include the elements of any of Examples 17 to 20, wherein the means for conductively coupling the at least one active semiconductor die to the multi-die interconnect bridge can comprise: means for conductively coupling at least one active semiconductor die that includes repeater circuitry to the multi-die interconnect bridge.

Example 22 can include the elements of any of Examples 17 to 21, wherein the means for conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge can further comprise: means for conductively coupling each of the at least three semiconductor dies to the multi-die interconnect bridge such that the multi-die interconnect bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Example 23 can include the elements of any of Examples 17 to 22, wherein the means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor 
package substrate can include: means for at least partially embedding a silicon die in the first surface of the semiconductor package substrate to provide the multi-die interconnect bridge.

Example 24 can include the elements of any of Examples 17 to 23, wherein the means for deploying a multi-die interconnect bridge comprising a plurality of electrically conductive members adjacent the first surface of the semiconductor package substrate can include: means for forming an integrated silicon bridge in the thickness of the semiconductor package substrate.

According to Example 25, an electronic device is provided. The electronic device can include: a printed circuit board; and a semiconductor package conductively coupled to the printed circuit board, the semiconductor package comprising: a semiconductor package substrate coupled to the printed circuit board, the semiconductor package substrate having a first surface and a laterally opposite second surface separated by a thickness; at least three semiconductor dies included in the semiconductor package and coupled to the first surface of the semiconductor package substrate, wherein a smallest of the at least three semiconductor dies occupies a first physical region on the first surface of the semiconductor package substrate; and a multi-die interconnect bridge disposed adjacent the first surface of the semiconductor package substrate, the multi-die interconnect bridge comprising one or more electrically conductive members and occupying a second physical region of the first surface of the semiconductor package substrate; wherein the multi-die interconnect bridge electrically couples each of the at least three semiconductor dies to each of the remaining ones of the at least three semiconductor dies; and wherein the second physical region occupied by the multi-die interconnect bridge is smaller than the first physical region occupied by the smallest of the at least three semiconductor dies.

Example 26 can include the elements of Example 25, wherein the one or more 
electrically conductive members included in the multi-die interconnect bridge electrically couple the at least three semiconductor dies without passing through any intermediate semiconductor die included in the at least three semiconductor dies.

Example 27 can include the elements of any of Examples 25 or 26, wherein the semiconductor package further comprises an active die communicatively coupled to the multi-die interconnect bridge.

Example 28 can include the elements of any of Examples 25 to 27, wherein the active die comprises control circuitry.

Example 29 can include the elements of any of Examples 25 to 28, wherein the active die comprises repeater circuitry.

Example 30 can include the elements of any of Examples 25 to 29, wherein the silicon bridge defines a shortest distance between each of the at least three semiconductor dies and the remaining ones of the at least three semiconductor dies.

Example 31 can include the elements of any of Examples 25 to 30, wherein the multi-die interconnect bridge includes a silicon die at least partially embedded in the first surface of the semiconductor package substrate.

Example 32 can include the elements of any of Examples 25 to 31, wherein the multi-die interconnect bridge comprises a silicon bridge integrally formed with the semiconductor package substrate.

The terms and expressions used herein are terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. |
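The size relationship recited throughout the claims and examples above — the bridge's footprint must be smaller than that of the smallest die it connects, while still coupling every die directly to every other die — can be sketched numerically. The die and bridge dimensions below are hypothetical, chosen only to illustrate the area comparison and the pairwise-coupling count; the disclosure does not fix any of these values.

```python
from itertools import combinations

# Hypothetical die and bridge footprints in micrometers (width, height);
# illustrative values only, not taken from the disclosure.
dies = {
    "die_a": (10_000, 8_000),
    "die_b": (6_000, 6_000),
    "die_c": (4_000, 3_000),  # the smallest of the at least three dies
}
bridge = (2_000, 2_000)

die_areas = {name: w * h for name, (w, h) in dies.items()}
bridge_area = bridge[0] * bridge[1]

# The claimed relationship: the physical region occupied by the bridge is
# smaller than that of the smallest of the at least three semiconductor dies.
assert bridge_area < min(die_areas.values())

# Coupling each die to each remaining die, without passing through any
# intermediate die, implies n*(n-1)/2 direct pairwise links.
links = list(combinations(dies, 2))
print(f"bridge area {bridge_area} um^2, pairwise links: {links}")
```

For three dies the bridge carries three direct pairwise links; the count grows quadratically if more dies share one bridge.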
A fin field effect transistor (FinFET) includes a reversed T-shaped fin. The FinFET further includes source and drain regions formed adjacent the reversed T-shaped fin. The FinFET further includes a dielectric layer formed adjacent surfaces of the fin and a gate formed adjacent the dielectric layer. |
What is claimed is:

1. A fin field effect transistor (FinFET), comprising: a reversed T-shaped fin, wherein the reversed T-shaped fin comprises an upper portion and a lower portion, wherein a height of the upper portion ranges from about 200 Å to about 1500 Å and wherein a height of the lower portion ranges from about 100 Å to about 1000 Å; source and drain regions formed adjacent the reversed T-shaped fin; a dielectric layer formed adjacent surfaces of the fin; and a gate formed adjacent the dielectric layer.

2. The FinFET of claim 1, wherein the fin comprises at least one of silicon and germanium.

3. The FinFET of claim 1, wherein the dielectric layer comprises at least one of SiO, SiO2, SiN, SiON, HfO2, ZrO2, Al2O3, HfSiO(x), ZnS and MgF2.

4. The FinFET of claim 1, wherein a width of the upper portion ranges from about 100 Å to about 1000 Å.

5. The FinFET of claim 4, wherein a width of the lower portion ranges from about 100 Å to about 1000 Å.

6. The FinFET of claim 1, wherein the gate comprises polysilicon.

7. The FinFET of claim 1, wherein the gate comprises a metal.

8. The FinFET of claim 7, wherein the metal comprises TiN.

9. The FinFET of claim 1, wherein a thickness of the dielectric layer ranges from about 10 Å to about 50 Å.

10. The FinFET of claim 1, wherein a thickness of the gate ranges from about 200 Å to about 1000 Å.

11. A semiconductor device, comprising: a fin structure including an upper portion and a lower portion, a width of the upper portion of the fin structure being smaller than a width of the lower portion of the fin structure, wherein the width of the upper portion ranges from about 100 Å to about 1000 Å and wherein the width of the lower portion ranges from about 100 Å to about 1000 Å; source and drain regions formed adjacent the fin structure; a dielectric layer formed over the fin structure; and a gate formed over the dielectric layer.

12. 
The semiconductor device of claim 11, wherein a height of the upper portion ranges from about 100 Å to about 1000 Å and wherein a height of the lower portion ranges from about 100 Å to about 1000 Å.

13. The semiconductor device of claim 11, wherein a thickness of the dielectric layer ranges from about 10 Å to about 50 Å and a thickness of the gate ranges from about 200 Å to about 1000 Å. |
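The claimed geometry — a narrow upper portion standing on a wider lower portion — is what the description credits with better current drivability. One rough, first-order way to see why: FinFET drive current scales approximately with the gate-covered perimeter of the fin cross-section (the effective channel width), and the base of the reversed T adds shoulder and sidewall segments that a plain rectangular fin of the same upper dimensions lacks. The sketch below illustrates only that perimeter bookkeeping, under that first-order assumption; the function names and example dimensions (chosen within the claimed ranges) are ours, not the patent's.

```python
def w_eff_rect(fin_width, fin_height):
    """Effective width of a plain rectangular fin: two sidewalls plus the top."""
    return 2 * fin_height + fin_width

def w_eff_reversed_t(w_upper, h_upper, w_lower, h_lower):
    """Rough effective width of a reversed-T fin: the upper fin's sidewalls and
    top, plus the exposed shoulders and sidewalls of the wider base."""
    shoulders = w_lower - w_upper  # total exposed top surface of the base
    return (2 * h_upper + w_upper) + shoulders + 2 * h_lower

# Hypothetical dimensions in angstroms, within the ranges recited in the claims.
plain = w_eff_rect(fin_width=300, fin_height=1000)
rev_t = w_eff_reversed_t(w_upper=300, h_upper=1000, w_lower=900, h_lower=500)

# The reversed-T profile exposes more gate-covered perimeter, hence (to first
# order) more drive current for the same upper-fin footprint.
assert rev_t > plain
```

This ignores corner effects and non-uniform gate control, so it is a heuristic comparison, not a device simulation.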
TECHNICAL FIELD

The present invention relates generally to transistors and, more particularly, to fin field effect transistors (FinFETs).

BACKGROUND ART

The escalating demands for high density and performance associated with ultra large scale integration semiconductor devices require design features, such as gate lengths, below 100 nanometers (nm), high reliability and increased manufacturing throughput. The reduction of design features below 100 nm challenges the limitations of conventional methodology. For example, when the gate length of conventional planar metal oxide semiconductor field effect transistors (MOSFETs) is scaled below 100 nm, problems associated with short channel effects, such as excessive leakage between the source and drain, become increasingly difficult to overcome. In addition, mobility degradation and a number of process issues also make it difficult to scale conventional MOSFETs to include increasingly smaller device features. New device structures are, therefore, being explored to improve FET performance and allow further device scaling.

Double-gate MOSFETs represent structures that have been considered as candidates for succeeding existing planar MOSFETs. In double-gate MOSFETs, two gates may be used to control short channel effects. A FinFET is a recent double-gate structure that exhibits good short channel behavior. A FinFET includes a channel formed in a vertical fin. The FinFET structure may be fabricated using layout and process techniques similar to those used for conventional planar MOSFETs.

DISCLOSURE OF THE INVENTION

Implementations consistent with the present invention provide a reversed T-shaped FinFET. The exemplary FinFET includes a fin formed in a reversed T-shape and a dielectric layer formed over surfaces of the fin to conform with the shape of the fin. A gate is further formed over the dielectric layer to conform with the shape of the fin. 
A FinFET having a reversed T-shape, consistent with the invention, achieves better current drivability and short channel control than other conventional shaped FinFETs.Additional advantages and other features of the invention will be set forth in part in the description which follows and, in part, will become apparent to those having ordinary skill in the art upon examination of the following, or may be learned from the practice of the invention. The advantages and features of the invention may be realized and obtained as particularly pointed out in the appended claims.According to the present invention, the foregoing and other advantages are achieved in part by a fin field effect transistor (FinFET) that includes a reversed T-shaped fin. The FinFET further includes source and drain regions formed adjacent the reversed T-shaped fin and a dielectric layer formed adjacent surfaces of the fin. The FinFET also includes a gate formed adjacent the dielectric layer.According to another aspect of the invention, a method of forming a fin field effect transistor (FinFET) is provided. The method includes forming a reversed T-shaped fin and forming source and drain regions adjacent the reversed T-shaped fin. The method further includes forming a dielectric layer adjacent surfaces of the fin and forming a gate adjacent the dielectric layer.According to a further aspect of the invention, a semiconductor device is provided. The semiconductor device includes a fin structure including an upper portion and a lower portion, a width of the upper portion of the fin structure being smaller than a width of the lower portion of the fin structure. 
The semiconductor device further includes source and drain regions formed adjacent the fin structure, a dielectric layer formed over the fin structure, and a gate formed over the dielectric layer.

Other advantages and features of the present invention will become readily apparent to those skilled in this art from the following detailed description. The embodiments shown and described provide illustration of the best mode contemplated for carrying out the invention. The invention is capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference is made to the attached drawings, wherein elements having the same reference number designation may represent like elements throughout.

FIG. 1 illustrates exemplary layers of a silicon-on-insulator (SOI) wafer that may be used for forming a fin of a FinFET consistent with the present invention;

FIGS. 2A and 2B illustrate the formation of a mesa from the fin layer of FIG. 1 consistent with the invention;

FIGS. 3A and 3B illustrate the formation of a TEOS layer adjacent the mesa of FIGS. 2A and 2B consistent with the invention;

FIGS. 4A and 4B illustrate the formation of a T-shaped fin from the mesa of FIGS. 3A and 3B consistent with the invention;

FIGS. 5A and 5B illustrate the removal of the TEOS layer of FIGS. 4A and 4B consistent with the invention;

FIG. 6 illustrates a cross-sectional view of a dielectric layer formed adjacent the T-shaped fin of FIGS. 5A and 5B consistent with the invention;

FIG. 7 illustrates the formation of a gate layer over the T-shaped fin of FIG. 6 consistent with the invention;

FIG. 8 illustrates the formation of another mesa from the gate layer of FIG. 7 consistent with the present invention;

FIGS. 9A and 9B illustrate the formation of a gate from the mesa of FIG. 8 consistent with the present invention;

FIG. 
10 illustrates a starting buried oxide layer, seed layer, and oxide layer consistent with another embodiment of the invention;

FIG. 11 illustrates formation of a trench within the oxide layer of FIG. 10 consistent with another embodiment of the invention;

FIGS. 12A and 12B illustrate the formation of a stresser layer and a strained channel layer in the trench of FIG. 11 consistent with another embodiment of the invention;

FIG. 13 illustrates the removal of the oxide layer of FIGS. 12A and 12B consistent with another embodiment of the invention;

FIG. 14 illustrates the oxidization of the stresser layer of FIG. 13 consistent with another embodiment of the invention; and

FIGS. 15A and 15B illustrate the removal of the oxide layer and stresser layer of FIG. 14 consistent with another embodiment of the invention.

BEST MODE FOR CARRYING OUT THE INVENTION

The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents.

Consistent with the present invention, an exemplary reversed T-shaped FinFET is provided that achieves better current drivability and short channel control than conventional shaped FinFETs.

FIG. 1 illustrates a cross-section of a silicon-on-insulator (SOI) wafer 100 formed in accordance with an exemplary embodiment of the present invention. SOI wafer 100, consistent with the present invention, may include a buried oxide layer 110 formed on a substrate 115. A fin layer 105 may further be formed on buried oxide layer 110. The thickness of fin layer 105 may range, for example, from about 200 Å to about 1500 Å, and the thickness of buried oxide layer 110 may range, for example, from about 1000 Å to about 3000 Å. 
Fin layer 105 and substrate 115 may include, for example, silicon, though other semiconducting materials, such as germanium, may be used.

As shown in FIGS. 2A and 2B, a mesa 205 may be formed from fin layer 105. Mesa 205 may be formed, for example, using a rectangular active mask and conventional etching processes. For example, a conventional photoresist material may be patterned and etched to define a rectangular mask having dimensions ranging from about 100 Å to about 1000 Å in length and about 100 Å to about 1000 Å in width. The areas not covered by the mask may then be etched, with the etching terminating on buried oxide layer 110. The photoresist material may then be removed.

As shown in FIGS. 3A and 3B, a layer 305 of tetraethylorthosilicate (TEOS), or any other dielectric material, may then be formed around mesa 205. Layer 305 may then be polished back to make the upper surface of layer 305 co-planar with the upper surface of mesa 205 using, for example, a conventional chemical-mechanical polishing (CMP) process, as shown in FIG. 3A. The thickness of layer 305 may range, for example, from about 200 Å to about 2000 Å.

As further shown in FIGS. 4A and 4B, an active mask 400 may be formed over mesa 205. Mask 400 may be formed using a conventional photoresist material; its length may extend about 100 nm beyond mesa 205 on each end, and its width may range from about 100 Å to about 1000 Å after photoresist trimming. Mask 400 may be used to etch away exposed portions of mesa 205 to a depth d1 on either side of mask 400, where depth d1 may range from about 100 Å to about 1000 Å below the upper surface of mesa 205. In one implementation, the depth d1 may be about 1000 Å. Etching the exposed portions of mesa 205 produces a reversed T-shaped fin 405, as shown in FIG. 4B. Subsequent to etching, mask 400 may be removed.

TEOS layer 305 may then be removed from around fin 405, as shown in FIGS. 5A and 5B, leaving reversed T-shaped fin 405. As further shown in FIG. 
6, a layer 605 of gate insulation may then be formed over fin 405. Gate insulation layer 605 may be thermally grown or deposited using conventional deposition processes. Gate insulation layer 605 may include SiO, SiO2, SiN, SiON, HfO2, ZrO2, Al2O3, HfSiO(x), ZnS, MgF2, or other high-K dielectric materials. The thickness of layer 605 may range, for example, from about 10 Å to about 50 Å.

A layer of gate material 705 may then be formed over reversed T-shaped fin 405, as shown in FIG. 7. Gate material 705 may include, for example, polysilicon or a metal material, such as, for example, TiN, though other materials may be used. The thickness of layer 705 may range, for example, from about 200 Å to about 1000 Å.

Another mesa 805, comprising the gate material 705, may be formed, for example, using a rectangular active mask and conventional etching processes. For example, a conventional photoresist material may be patterned and etched to define a rectangular mask (not shown) having dimensions ranging from about 100 Å to about 1000 Å in length and about 100 Å to about 1000 Å in width. The areas not covered by the mask may then be etched, with the etching terminating on buried oxide layer 110. The photoresist material may then be removed.

As further shown in FIGS. 9A and 9B, another active mask (not shown) may be formed over mesa 805. The mask may be formed using a conventional photoresist material; its length may extend about 100 nm beyond mesa 805 on each end, and its width may range from about 100 Å to about 1000 Å after photoresist trimming. The mask may be used to etch away exposed portions of mesa 805 to a depth d2 on either side of the mask, where depth d2 may range from about 100 Å to about 1000 Å below the upper surface of mesa 805. In one implementation, the depth d2 may be about 1000 Å. Etching the exposed portions of mesa 805 conforms gate material 705 to the reversed T-shape of fin 405, as shown in FIG. 9B. 
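The heights of the two fin portions follow directly from the etch depths described above: on the reading that d1 (and likewise d2 for the gate) is measured down from the top of the mesa, the masked center keeps the full layer thickness, the upper portion stands d1 above the etched base, and the base keeps whatever thickness remains. A small sketch of that bookkeeping, using example values consistent with the stated ranges (the function name is ours, not the patent's):

```python
def reversed_t_heights(layer_thickness, etch_depth):
    """Heights (upper, lower) of the reversed-T portions, in angstroms.

    The masked center keeps the full layer thickness; the exposed sides are
    etched down by etch_depth, so the upper portion stands etch_depth above a
    base whose height is the remaining thickness.
    """
    if not 0 < etch_depth < layer_thickness:
        raise ValueError("etch depth must be less than the layer thickness")
    upper = etch_depth
    lower = layer_thickness - etch_depth
    return upper, lower

# A 1500 Å fin layer etched to the example depth d1 of about 1000 Å:
upper, lower = reversed_t_heights(1500, 1000)
# Both results fall within the ranges recited in claim 1.
assert 200 <= upper <= 1500 and 100 <= lower <= 1000
```

The same arithmetic applies to the gate mesa 805 with d2 in place of d1.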
Subsequent to etching, the mask may be removed.

Source and drain regions (not shown) may be formed adjacent respective ends of fin 405. The source and drain regions may be formed by, for example, deposition of a layer of semi-conducting material over fin 405. The source and drain regions may be formed from the layer of semi-conducting material using, for example, conventional photolithographic and etching processes. One skilled in the art will recognize, however, that other existing techniques may be used for forming the source and drain regions. For example, the source and drain regions may be formed by patterning and etching fin 405. The source and drain regions may include a material such as, for example, silicon, germanium, or silicon-germanium (Si-Ge).

The reversed T-shaped FinFET, formed in accordance with the exemplary process described above, achieves optimized current drivability and short channel control, particularly as compared to more conventionally shaped FinFETs.

Exemplary Silicon-Germanium Fin Stresser Removal for U-Gate/Round Gate FinFET

FIGS. 10-15 illustrate an exemplary process for forming a strained fin for a U-gate/round gate FinFET using a silicon-germanium fin stresser consistent with another embodiment of the invention. As shown in FIG. 10, the exemplary process may begin with the formation of a seed layer 910 and an oxide layer 905 on a buried oxide (BOX) layer 915. Seed layer 910 may include, for example, germanium (Ge), though other semiconducting materials may be used, and may be formed using, for example, conventional deposition processes. Seed layer 910 may range, for example, from about 200 Å to about 1000 Å in thickness. Oxide layer 905 may include, for example, SiO or SiO2, though other oxide materials may be used, and may be formed, for example, from a conventional CVD process. Oxide layer 905 may range, for example, from about 800 Å to about 1200 Å in thickness. As shown in FIG. 
11, a trench 1005 may be formed in oxide layer 905 using, for example, conventional photolithographic and etching processes. Trench 1005 may range, for example, from about 500 Å to about 5000 Å in width. As further shown in FIGS. 12A and 12B, a stresser layer 1110 may be formed within trench 1005. Stresser layer 1110 may be formed, for example, using selective epitaxy and may range from about 100 Å to about 1000 Å in thickness. A strained channel layer 1105 may then be formed over stresser layer 1110. Strained channel layer 1105 may be formed, for example, using selective epitaxy and may range from about 100 Å to about 1000 Å in thickness. After formation of layer 1105, excess material of layer 1105 may be polished off using, for example, a conventional CMP process.

Stresser layer 1110 may include a crystalline material with a lattice constant larger than the lattice constant of a crystalline material selected for strained channel layer 1105. If, for example, silicon is selected for the strained channel layer, stresser layer 1110 may include a crystalline material with a lattice constant larger than the lattice constant of silicon. Stresser layer 1110 may include, for example, SixGe(1-x) with x approximately equal to 0.7. Other values of x may be appropriately selected. One skilled in the art will recognize that crystalline materials other than SixGe(1-x) may be used such that the material's lattice constant is larger than the lattice constant of the crystalline material selected for the strained channel layer. Since strained channel layer 1105 may include a crystalline material that is lattice-constant mismatched with the crystalline material of stresser layer 1110, tensile strain is induced within strained channel layer 1105, which increases carrier mobility. Increasing the carrier mobility, in turn, increases the drive current of the resulting FinFET transistor, thus improving FinFET performance.

As further shown in FIG.
13, oxide layer 905 may be removed using, for example, a conventional etching process to form a fin 1305. Stresser layer 1110 may then be oxidized, as shown in FIG. 14, forming an oxide layer 1405. Oxide layer 1405 may include, for example, SiO or SiO2, though other oxide materials may be used. Oxide layer 1405 and stresser layer 1110 may then be removed, as shown in FIGS. 15A and 15B, using, for example, a conventional etching process. After removal of stresser layer 1110, fin 1305 may include strained channel 1105 left remaining between the FinFET source 1505 and drain 1510 regions. The open portion 1515 below strained channel 1105, resulting from etching away oxide layer 1405 and stresser layer 1110, may be used for subsequently forming a U-shaped or round FinFET gate (not shown) to complete the FinFET structure. The exemplary process, as described above with respect to FIGS. 10-15, thus can be used to produce a U-gate or round gate FinFET that has a strained channel with increased carrier mobility and increased drive current.

In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth herein. In other instances, well-known processing structures have not been described in detail, in order not to unnecessarily obscure the thrust of the present invention. In practicing the present invention, conventional photolithographic, etching, and deposition techniques may be employed, and hence the details of such techniques have not been set forth herein in detail.

Only the preferred embodiments of the invention and a few examples of its versatility are shown and described in the present disclosure.
It is to be understood that the invention is capable of use in various other combinations and environments and is capable of modifications within the scope of the inventive concept as expressed herein. |
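As a back-of-the-envelope illustration of the strain mechanism in the SiGe-stresser process above, the lattice mismatch between a SixGe(1-x) layer and pure silicon can be estimated with Vegard's law, a standard linear interpolation for alloy lattice constants. The lattice constants and helper names below are textbook values and illustrative choices, not figures taken from the disclosure:

```python
# Illustrative estimate of the lattice mismatch that strains a silicon
# channel grown against a SiGe stresser layer. Lattice constants are in
# angstroms; Vegard's law linearly interpolates between the pure elements.
A_SI = 5.431   # relaxed silicon
A_GE = 5.658   # relaxed germanium

def sige_lattice_constant(x: float) -> float:
    """Vegard's law for Si(x)Ge(1-x): linear mix of the pure constants."""
    return x * A_SI + (1.0 - x) * A_GE

def mismatch_vs_silicon(x: float) -> float:
    """Fractional lattice mismatch between Si(x)Ge(1-x) and pure Si."""
    return (sige_lattice_constant(x) - A_SI) / A_SI

# x approximately 0.7, as suggested in the text
m = mismatch_vs_silicon(0.7)
print(f"mismatch for Si0.7Ge0.3 vs Si: {m:.3%}")  # roughly 1.3%
```

A mismatch on the order of one percent is enough to induce appreciable tensile strain in a thin epitaxial silicon channel, which is the source of the carrier-mobility improvement described above.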
A processor includes a core that includes logic to determine that an instruction will require strided data converted from source data in memory, logic to load source data into a plurality of preliminary vector registers, and logic to apply permute instructions to the contents of the preliminary vector registers to cause corresponding indexed elements from a plurality of structures to be loaded into respective source vector registers. The strided data is to include corresponding indexed elements from the plurality of structures in the source data to be loaded into a same register to be used by the core to execute the instruction. The plurality of preliminary vector registers are to be loaded with a first indexed layout of elements. A common register of the preliminary vector registers is to be loaded with a second indexed layout of elements.
CLAIMS

What is claimed is:

1. A processor, comprising:
a front end to receive an instruction;
a decoder to decode the instruction;
a core to execute the instruction, including:
a first logic to determine that the instruction will require strided data converted from source data in memory, the strided data to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction;
a second logic to load source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements; wherein:
a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements; and
a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements;
a third logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into respective source vector registers; and
a retirement unit to retire the instruction.

2. The processor of Claim 1, wherein the core further includes a fourth logic to execute the instruction upon one or more source vector registers upon completion of conversion of source data to strided data.

3. The processor of Claim 1, wherein the core further includes:
a fourth logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
a fifth logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register; and
a sixth logic to selectively preserve indices of the index vector for subsequent use of the index vector.

4.
The processor of Claim 1, wherein the core further includes:
a fourth logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
a fifth logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register;
a sixth logic to selectively preserve indices of the index vector for a second permute instruction; and
a seventh logic to apply the second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted.

5. The processor of Claim 1, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers.

6. The processor of Claim 1, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers.

7. The processor of Claim 1, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
the core further includes a fourth logic to create six index vectors to be used with permute instructions to yield contents of the source vector registers.

8.
A system, comprising:
a front end to receive an instruction;
a decoder to decode the instruction;
a core to execute the instruction, including:
a first logic to determine that the instruction will require strided data converted from source data in memory, the strided data to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction;
a second logic to load source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements; wherein:
a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements; and
a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements;
a third logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into respective source vector registers; and
a retirement unit to retire the instruction.

9. The system of Claim 8, wherein the core further includes a fourth logic to execute the instruction upon one or more source vector registers upon completion of conversion of source data to strided data.

10. The system of Claim 8, wherein the core further includes:
a fourth logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
a fifth logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register; and
a sixth logic to selectively preserve indices of the index vector for subsequent use of the index vector.

11.
The system of Claim 8, wherein the core further includes:
a fourth logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
a fifth logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register;
a sixth logic to selectively preserve indices of the index vector for a second permute instruction; and
a seventh logic to apply the second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted.

12. The system of Claim 8, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers.

13. The system of Claim 8, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers.

14. The system of Claim 8, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
the core further includes a fourth logic to create six index vectors to be used with permute instructions to yield contents of the source vector registers.

15.
A method comprising, within a processor:
receiving an instruction;
decoding the instruction;
executing the instruction, including:
determining that the instruction will require strided data converted from source data in memory, the strided data to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction;
loading source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements; wherein:
a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements; and
a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements; and
applying permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into respective source vector registers.

16. The method of Claim 15, further comprising executing the instruction upon one or more source vector registers upon completion of conversion of source data to strided data.

17. The method of Claim 15, further comprising:
creating an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
selectively storing results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register; and
selectively preserving indices of the index vector for subsequent use of the index vector.

18.
The method of Claim 15, further comprising:
creating an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored;
selectively storing results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register;
selectively preserving indices of the index vector for a second permute instruction; and
applying the second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted.

19. The method of Claim 15, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers.

20. The method of Claim 15, wherein:
the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors; and
two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers.

21. An apparatus, comprising means for performing any of the methods of Claims 15-20.
INSTRUCTION AND LOGIC FOR PERMUTE WITH OUT-OF-ORDER LOADING

FIELD OF THE INVENTION

[0001] The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations.

DESCRIPTION OF RELATED ART

[0002] Multiprocessor systems are becoming more and more common. Applications of multiprocessor systems include dynamic domain partitioning all the way down to desktop computing. In order to take advantage of multiprocessor systems, code to be executed may be separated into multiple threads for execution by various processing entities. Each thread may be executed in parallel with one another. Instructions as they are received on a processor may be decoded into terms or instruction words that are native, or more native, for execution on the processor. Processors may be implemented in a system on chip. Data structures that are organized in tuples of three to five elements may be used in media applications, High Performance Computing applications, and molecular dynamics applications.

DESCRIPTION OF THE FIGURES

[0003] Embodiments are illustrated by way of example and not limitation in the Figures of the accompanying drawings:

[0004] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure;

[0005] FIGURE 1B illustrates a data processing system, in accordance with embodiments of the present disclosure;

[0006] FIGURE 1C illustrates other embodiments of a data processing system for performing text string comparison operations;

[0007] FIGURE 2 is a block diagram of the micro-architecture for a processor that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure;

[0008] FIGURE 3A illustrates various packed data type
representations in multimedia registers, in accordance with embodiments of the present disclosure;

[0009] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure;

[0010] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure;

[0011] FIGURE 3D illustrates an embodiment of an operation encoding format;

[0012] FIGURE 3E illustrates another possible operation encoding format having forty or more bits, in accordance with embodiments of the present disclosure;

[0013] FIGURE 3F illustrates yet another possible operation encoding format, in accordance with embodiments of the present disclosure;

[0014] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure;

[0015] FIGURE 4B is a block diagram illustrating an in-order architecture core and a register renaming logic, out-of-order issue/execution logic to be included in a processor, in accordance with embodiments of the present disclosure;

[0016] FIGURE 5A is a block diagram of a processor, in accordance with embodiments of the present disclosure;

[0017] FIGURE 5B is a block diagram of an example implementation of a core, in accordance with embodiments of the present disclosure;

[0018] FIGURE 6 is a block diagram of a system, in accordance with embodiments of the present disclosure;

[0019] FIGURE 7 is a block diagram of a second system, in accordance with embodiments of the present disclosure;

[0020] FIGURE 8 is a block diagram of a third system, in accordance with embodiments of the present disclosure;

[0021] FIGURE 9 is a block diagram of a system-on-a-chip, in accordance with embodiments of the present disclosure;

[0022] FIGURE 10 illustrates a processor containing a central processing unit and a graphics processing unit which may
perform at least one instruction, in accordance with embodiments of the present disclosure;

[0023] FIGURE 11 is a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure;

[0024] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure;

[0025] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure;

[0026] FIGURE 14 is a block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;

[0027] FIGURE 15 is a more detailed block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;

[0028] FIGURE 16 is a block diagram of an execution pipeline for an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;

[0029] FIGURE 17 is a block diagram of an electronic device for utilizing a processor, in accordance with embodiments of the present disclosure;

[0030] FIGURE 18 is an illustration of an example system for instructions and logic for permute sequences of instructions or operations, according to embodiments of the present disclosure;

[0031] FIGURE 19 illustrates an example processor core of a data processing system that performs vector operations, in accordance with embodiments of the present disclosure;

[0032] FIGURE 20 is a block diagram illustrating an example extended vector register file, in accordance with embodiments of the present disclosure;

[0033] FIGURE 21 is an illustration of the results of data conversion, according to embodiments of the present disclosure;

[0034] FIGURE 22 is an illustration of operation of blend and permute instructions,
according to embodiments of the present disclosure;

[0035] FIGURE 23 is an illustration of operation of permute instructions, according to embodiments of the present disclosure;

[0036] FIGURE 24 is an illustration of operation of data conversion using multiple gathers for an array of eight structures, according to embodiments of the present disclosure;

[0037] FIGURE 25 is an illustration of naive operation of data conversion for an array of eight structures, according to embodiments of the present disclosure;

[0038] FIGURE 26 is an illustration of operation of a system to perform data conversion using permute operations, in accordance with embodiments of the present disclosure;

[0039] FIGURE 27 is a more detailed view of the operation of a system as pictured to perform data conversion using permute operations, according to embodiments of the present disclosure;

[0040] FIGURE 28 is an illustration of further operation of a system to perform data conversion using out-of-order loads and fewer permute operations, in accordance with embodiments of the present disclosure;

[0041] FIGURE 29 is a more detailed view of the operation of a system to perform data conversion using permute operations, according to embodiments of the present disclosure;

[0042] FIGURE 30 is an illustration of example operation of a system to perform data conversion using even fewer permute operations, according to embodiments of the present disclosure;

[0043] FIGURE 31 illustrates an example method for performing permute operations to fulfill data conversion, according to embodiments of the present disclosure; and

[0044] FIGURE 32 illustrates another example method for performing permute operations to fulfill data conversion, according to embodiments of the present disclosure.

DETAILED DESCRIPTION

[0045] The following description describes embodiments of instructions and processing logic for performing permute sequences of operations on a processing apparatus.
The permute sequences may be part of a striding operation, such as Stride-5. Such a processing apparatus may include an out-of-order processor. In the following description, numerous specific details such as processing logic, processor types, microarchitectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of embodiments of the present disclosure. It will be appreciated, however, by one skilled in the art that the embodiments may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present disclosure.

[0046] Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present disclosure may be applied to other types of circuits or semiconductor devices that may benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present disclosure are applicable to any processor or machine that performs data manipulations. However, the embodiments are not limited to processors or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations and may be applied to any processor and machine in which manipulation or management of data may be performed. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present disclosure rather than to provide an exhaustive list of all possible implementations of embodiments of the present disclosure.
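The end result of a Stride-5 conversion can be modeled in plain Python. This is an illustrative sketch of what the permute sequences accomplish (gathering corresponding indexed elements from every structure into one register), not the processor's actual permute micro-operations; the names `aos_to_soa`, `STRIDE`, and the field labels are assumptions for the example:

```python
# A pure-Python model of stride-5 data conversion: an array of structures,
# each holding five elements, is rearranged so corresponding indexed
# elements from every structure land together in one "register" (a list).
STRIDE = 5       # elements per structure (Stride-5)
NUM_STRUCTS = 8  # eight structures, as in the examples of the disclosure

def aos_to_soa(flat, stride):
    """Deinterleave a flat array-of-structures into structure-of-arrays."""
    return [flat[i::stride] for i in range(stride)]

# Structures laid out in memory as x0, y0, z0, w0, v0, x1, y1, ...
flat = [f"{name}{i}" for i in range(NUM_STRUCTS)
        for name in ("x", "y", "z", "w", "v")]

soa = aos_to_soa(flat, STRIDE)
print(soa[0])  # ['x0', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']
```

On hardware, the same rearrangement is achieved by loading the source data into preliminary vector registers and applying permute instructions driven by index vectors; the slicing above corresponds to selecting every fifth element.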
[0047] Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present disclosure may be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the disclosure. In one embodiment, functions associated with embodiments of the present disclosure are embodied in machine-executable instructions. The instructions may be used to cause a general-purpose or special-purpose processor that may be programmed with the instructions to perform the steps of the present disclosure. Embodiments of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present disclosure. Furthermore, steps of embodiments of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.

[0048] Instructions used to program logic to perform embodiments of the present disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions may be distributed via a network or by way of other computer-readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, Read-Only Memories (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium may include any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

[0049] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as may be useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, designs, at some stage, may reach a level of data representing the physical placement of various devices in the hardware model. In cases wherein some semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium.
A memory or a magnetic or optical storage such as a disc may be the machine-readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or retransmission of the electrical signal is performed, a new copy may be made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

[0050] In modern processors, a number of different execution units may be used to process and execute a variety of code and instructions. Some instructions may be quicker to complete while others may take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there may be certain instructions that have greater complexity and require more in terms of execution time and processor resources, such as floating point instructions, load/store operations, data moves, etc.

[0051] As more computer systems are used in internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O).

[0052] In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which may include processor logic and circuits used to implement one or more instruction sets.
Accordingly, processors with different micro-architectures may share at least a portion of a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs. For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers, or one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB), and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.

[0053] An instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operands on which that operation will be performed. In a further embodiment, some instruction formats may be further defined by instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently.
In one embodiment, an instruction may be expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.[0054] Scientific, financial, auto-vectorized general purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements. SIMD technology may be used in processors that may logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as a 'packed' data type or 'vector' data type, and operands of this data type may be referred to as packed data operands or vector operands. In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or 'packed data instruction' or a 'vector instruction').
In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.[0055] SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors, such as the ARM Cortex® family of processors having an instruction set including the Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors, such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled a significant improvement in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).[0056] In one embodiment, destination and source registers/data may be generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be a first and second source storage register or other storage area, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register).
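The packed 'vector' operation described above can be sketched in ordinary Python. This is an illustrative model of my own, not any processor's implementation: a 64-bit operand is treated as four independent 16-bit lanes, and a single "instruction" applies the same add to every lane, with carries confined to each lane.

```python
# Illustrative model of a packed 16-bit add on 64-bit operands holding four
# 16-bit data elements. Lane arithmetic wraps modulo 2**16, so a carry out
# of one lane never disturbs its neighbor.
LANES, WIDTH = 4, 16
MASK = (1 << WIDTH) - 1

def unpack(value):
    """Split a 64-bit value into four 16-bit lanes (lane 0 = low bits)."""
    return [(value >> (i * WIDTH)) & MASK for i in range(LANES)]

def pack(lanes):
    """Reassemble four 16-bit lanes into a single 64-bit value."""
    result = 0
    for i, lane in enumerate(lanes):
        result |= (lane & MASK) << (i * WIDTH)
    return result

def packed_add(a, b):
    """One 'vector' operation: an independent add in every lane."""
    return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])

src1 = pack([1, 2, 3, 0xFFFF])
src2 = pack([10, 20, 30, 1])
print(unpack(packed_add(src1, src2)))  # lane 3 wraps: [11, 22, 33, 0]
```

The same source registers could serve as the destination, as the paragraph above notes, by writing the packed result back over one of the inputs.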
In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination register.[0057] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure. System 100 may include a component, such as a processor 102 to employ execution units including logic to perform algorithms to process data, in accordance with the present disclosure, such as in the embodiment described herein. System 100 may be representative of processing systems based on the PENTIUM® III, PENTIUM® 4, Xeon™, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software. [0058] Embodiments are not limited to computer systems. Embodiments of the present disclosure may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs.
Embedded applications may include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that may perform one or more instructions in accordance with at least one embodiment.[0059] Computer system 100 may include a processor 102 that may include one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present disclosure. One embodiment may be described in the context of a single processor desktop or server system, but other embodiments may be included in a multiprocessor system. System 100 may be an example of a 'hub' system architecture. System 100 may include a processor 102 for processing data signals. Processor 102 may include a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In one embodiment, processor 102 may be coupled to a processor bus 110 that may transmit data signals between processor 102 and other components in system 100. The elements of system 100 may perform conventional functions that are well known to those familiar with the art.[0060] In one embodiment, processor 102 may include a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal cache. In another embodiment, the cache memory may reside external to processor 102. Other embodiments may also include a combination of both internal and external caches depending on the particular implementation and needs.
Register file 106 may store different types of data in various registers including integer registers, floating point registers, status registers, and instruction pointer register. [0061] Execution unit 108, including logic to perform integer and floating point operations, also resides in processor 102. Processor 102 may also include a microcode (ucode) ROM that stores microcode for certain macroinstructions. In one embodiment, execution unit 108 may include logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications may be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This may eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.[0062] Embodiments of an execution unit 108 may also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 may include a memory 120. Memory 120 may be implemented as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 120 may store instructions 119 and/or data 121 represented by data signals that may be executed by processor 102.[0063] A system logic chip 116 may be coupled to processor bus 110 and memory 120. System logic chip 116 may include a memory controller hub (MCH). Processor 102 may communicate with MCH 116 via a processor bus 110. 
MCH 116 may provide a high bandwidth memory path 118 to memory 120 for storage of instructions 119 and data 121 and for storage of graphics commands, data and textures. MCH 116 may direct data signals between processor 102, memory 120, and other components in system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 may provide a graphics port for coupling to a graphics controller 112. MCH 116 may be coupled to memory 120 through a memory interface 118. Graphics card 112 may be coupled to MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114. [0064] System 100 may use a proprietary hub interface bus 122 to couple MCH 116 to I/O controller hub (ICH) 130. In one embodiment, ICH 130 may provide direct connections to some I/O devices via a local I/O bus. The local I/O bus may include a high-speed I/O bus for connecting peripherals to memory 120, chipset, and processor 102. Examples may include the audio controller 129, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller 123 containing user input interface 125 (which may include a keyboard interface), a serial expansion port 127 such as Universal Serial Bus (USB), and a network controller 134. Data storage device 124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[0065] For another embodiment of a system, an instruction in accordance with one embodiment may be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system may include a flash memory. The flash memory may be located on the same die as the processor and other system components.
Additionally, other logic blocks such as a memory controller or graphics controller may also be located on a system on a chip.[0066] FIGURE 1B illustrates a data processing system 140 which implements the principles of embodiments of the present disclosure. It will be readily appreciated by one of skill in the art that the embodiments described herein may operate with alternative processing systems without departure from the scope of embodiments of the disclosure.[0067] Computer system 140 comprises a processing core 159 for performing at least one instruction in accordance with one embodiment. In one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate said manufacture.[0068] Processing core 159 comprises an execution unit 142, a set of register files 145, and a decoder 144. Processing core 159 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure. Execution unit 142 may execute instructions received by processing core 159. In addition to performing typical processor instructions, execution unit 142 may perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 may include instructions for performing embodiments of the disclosure and other packed instructions. Execution unit 142 may be coupled to register file 145 by an internal bus. Register file 145 may represent a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the particular storage area used for storing the packed data might not be critical. Execution unit 142 may be coupled to decoder 144.
Decoder 144 may decode instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder may interpret the opcode of the instruction, which will indicate what operation should be performed on the corresponding data indicated within the instruction.[0069] Processing core 159 may be coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158.[0070] One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 that may perform SIMD operations including a text string comparison operation.
Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).[0071] FIGURE 1C illustrates other embodiments of a data processing system that performs SIMD text string comparison operations. In one embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. Input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 may perform operations including instructions in accordance with one embodiment. In one embodiment, processing core 170 may be suitable for manufacture in one or more process technologies and by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170.[0072] In one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register files 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment for execution by execution unit 162. In other embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165 (shown as 165B) to decode instructions of instruction set 163.
Processing core 170 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure.[0073] In operation, main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with cache memory 167, and input/output system 168. Embedded within the stream of data processing instructions may be SIMD coprocessor instructions. Decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 171. From coprocessor bus 171, these instructions may be received by any attached SIMD coprocessors. In this case, SIMD coprocessor 161 may accept and execute any received SIMD coprocessor instructions intended for it.[0074] Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames.
In one embodiment, main processor 166 and SIMD coprocessor 161 may be integrated into a single processing core 170 comprising an execution unit 162, a set of register files 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.[0075] FIGURE 2 is a block diagram of the micro-architecture for a processor 200 that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure. In some embodiments, an instruction in accordance with one embodiment may be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, in-order front end 201 may implement a part of processor 200 that may fetch instructions to be executed and prepares the instructions to be used later in the processor pipeline. Front end 201 may include several units. In one embodiment, instruction prefetcher 226 fetches instructions from memory and feeds the instructions to an instruction decoder 228 which in turn decodes or interprets the instructions. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "microinstructions" or "micro-operations" (also called micro-ops or uops) that the machine may execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that may be used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, trace cache 230 may assemble decoded uops into program ordered sequences or traces in uop queue 234 for execution.
When trace cache 230 encounters a complex instruction, microcode ROM 232 provides the uops needed to complete the operation.[0076] Some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, decoder 228 may access microcode ROM 232 to perform the instruction. In one embodiment, an instruction may be decoded into a small number of micro ops for processing at instruction decoder 228. In another embodiment, an instruction may be stored within microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. Trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct microinstruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from micro-code ROM 232. After microcode ROM 232 finishes sequencing micro-ops for an instruction, front end 201 of the machine may resume fetching micro-ops from trace cache 230.[0077] Out-of-order execution engine 203 may prepare instructions for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic in allocator/register renamer 215 allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic in allocator/register renamer 215 renames logic registers onto entries in a register file. 
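The decode policy in paragraph [0076] above can be sketched as a simple rule. This is a toy illustration of my own, and the instruction names and uop counts below are invented: an instruction expanding to at most four micro-ops is handled by the decoder, while a longer flow is sequenced from the microcode ROM.

```python
# Toy model of the decode path: instructions expanding to four or fewer
# micro-ops come from the decoder; longer sequences come from microcode ROM.
# The instruction names and uop counts here are invented for illustration.
UOP_COUNT = {"mov": 1, "add": 1, "push": 2, "rep_movs": 9}

def uop_source(instr, threshold=4):
    """Return which front-end unit supplies the micro-ops for this instruction."""
    return "decoder" if UOP_COUNT[instr] <= threshold else "microcode ROM"

print(uop_source("add"))       # -> decoder
print(uop_source("rep_movs"))  # -> microcode ROM
```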
The allocator 215 also allocates an entry for each uop in one of the two uop queues, one for memory operations (memory uop queue 207) and one for non-memory operations (integer/floating point uop queue 205), in front of the instruction schedulers: memory scheduler 209, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. Uop schedulers 202, 204, 206, determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. Fast scheduler 202 of one embodiment may schedule on each half of the main clock cycle while the other schedulers may only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.[0078] Register files 208, 210 may be arranged between schedulers 202, 204, 206, and execution units 212, 214, 216, 218, 220, 222, 224 in execution block 211. Register files 208, 210 store integer and floating point data operand values, respectively. Each register file 208, 210, may include a bypass network that may bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. Integer register file 208 and floating point register file 210 may communicate data with each other. In one embodiment, integer register file 208 may be split into two separate register files, one register file for the low-order thirty-two bits of data and a second register file for the high-order thirty-two bits of data. Floating point register file 210 may include 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.[0079] Execution block 211 may contain execution units 212, 214, 216, 218, 220, 222, 224. Execution units 212, 214, 216, 218, 220, 222, 224 may execute the instructions.
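The readiness rule the uop schedulers apply can be sketched as follows. This is a simplified model of my own with invented uops; real schedulers also track execution-unit availability and arbitrate for dispatch ports.

```python
# Simplified model of dependency-based scheduling: a uop may dispatch only
# once every source register it reads has been produced by an earlier uop.
# Uops are (name, source registers, destination register); values invented.
uops = [
    ("load r1", [],           "r1"),
    ("load r2", [],           "r2"),
    ("add r3",  ["r1", "r2"], "r3"),
    ("mul r4",  ["r3", "r2"], "r4"),
]

ready_regs = set()
dispatched = []
pending = list(uops)
while pending:
    # Each pass of the loop plays the role of one scheduling "cycle".
    issuable = [u for u in pending if all(s in ready_regs for s in u[1])]
    if not issuable:
        raise RuntimeError("unsatisfiable dependency")
    for u in issuable:
        dispatched.append(u[0])
        ready_regs.add(u[2])  # the destination becomes ready for dependents
        pending.remove(u)

print(dispatched)  # -> ['load r1', 'load r2', 'add r3', 'mul r4']
```

The two loads have no source dependencies and dispatch first; the add waits on both, and the multiply waits on the add's result.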
Execution block 211 may include register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. In one embodiment, processor 200 may comprise a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, floating point move unit 224. In another embodiment, floating point execution blocks 222, 224, may execute floating point, MMX, SIMD, and SSE, or other operations. In yet another embodiment, floating point ALU 222 may include a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. In various embodiments, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, ALU operations may be passed to high-speed ALU execution units 216, 218. High-speed ALUs 216, 218 may execute fast operations with an effective latency of half a clock cycle. In one embodiment, most complex integer operations go to slow ALU 220 as slow ALU 220 may include integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations may be executed by AGUs 212, 214. In one embodiment, integer ALUs 216, 218, 220 may perform integer operations on 64-bit data operands. In other embodiments, ALUs 216, 218, 220 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. Similarly, floating point units 222, 224 may be implemented to support a range of operands having bits of various widths. In one embodiment, floating point units 222, 224, may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.[0080] In one embodiment, uops schedulers 202, 204, 206, dispatch dependent operations before the parent load has finished executing. 
As uops may be speculatively scheduled and executed in processor 200, processor 200 may also include logic to handle memory misses. If a data load misses in the data cache, there may be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations might need to be replayed and the independent ones may be allowed to complete. The schedulers and replay mechanism of one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations.[0081] The term "registers" may refer to the on-board processor storage locations that may be used as part of instructions to identify operands. In other words, registers may be those that may be usable from the outside of the processor (from a programmer's perspective). However, in some embodiments registers might not be limited to a particular type of circuit. Rather, a register may store data, provide data, and perform the functions described herein. The registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store 32-bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers may be understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. 
These MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology may hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data may be contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.[0082] In the examples of the following figures, a number of data operands may be described. FIGURE 3A illustrates various packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. FIGURE 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128-bit wide operands. Packed byte format 310 of this example may be 128 bits long and contains sixteen packed byte data elements. A byte may be defined, for example, as eight bits of data. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 127 through bit 120 for byte 15. Thus, all available bits may be used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in parallel.[0083] Generally, a data element may include an individual piece of data that is stored in a single register or memory location with other data elements of the same length.
In packed data sequences relating to SSEx technology, the number of data elements stored in a XMM register may be 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register may be 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in FIGURE 3A may be 128 bits long, embodiments of the present disclosure may also operate with 64-bit wide or other sized operands. Packed word format 320 of this example may be 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. Packed doubleword format 330 of FIGURE 3A may be 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty-two bits of information. A packed quadword may be 128 bits long and contain two packed quad-word data elements.[0084] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure. Each packed data may include more than one independent data element. Three packed data formats are illustrated; packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contain fixed-point data elements. For another embodiment one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One embodiment of packed half 341 may be 128 bits long containing eight 16-bit data elements. One embodiment of packed single 342 may be 128 bits long and contains four 32-bit data elements. One embodiment of packed double 343 may be 128 bits long and contains two 64-bit data elements. 
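The element-count rule stated above (register width in bits divided by element width in bits) is easy to check directly; the following is a small illustrative sketch, not part of the source:

```python
# Number of packed data elements = register width / element width, in bits.
def element_count(register_bits, element_bits):
    assert register_bits % element_bits == 0, "elements must tile the register"
    return register_bits // element_bits

# 128-bit XMM (SSEx) register:
print(element_count(128, 8))   # packed byte: 16 elements
print(element_count(128, 16))  # packed word: 8 elements
print(element_count(128, 32))  # packed doubleword: 4 elements
print(element_count(128, 64))  # packed quadword: 2 elements
# 64-bit MMX register:
print(element_count(64, 16))   # 4 word elements
```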
It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits or more.[0085] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 127 through bit 120 for byte 15. Thus, all available bits may be used in the register. This storage arrangement may increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element may be the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero may be stored in a SIMD register. Signed packed word representation 347 may be similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element may be the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 may be similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit may be the thirty-second bit of each doubleword data element.[0086] FIGURE 3D illustrates an embodiment of an operation encoding (opcode) format 360.
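The signed versus unsigned readings described above can be sketched in Python (my own illustration): the same stored bit pattern is reinterpreted, with the highest bit of each element acting as the sign indicator in the signed representation.

```python
# Reinterpreting the same stored byte elements as unsigned or as signed
# (two's complement) values; the high bit of each byte is the sign indicator.
def as_unsigned(byte):
    return byte & 0xFF

def as_signed(byte):
    b = byte & 0xFF
    return b - 0x100 if b & 0x80 else b  # sign bit set => negative value

stored = [0x00, 0x7F, 0x80, 0xFF]        # raw byte data elements
print([as_unsigned(b) for b in stored])  # [0, 127, 128, 255]
print([as_signed(b) for b in stored])    # [0, 127, -128, -1]
```

The analogous rule applies to packed words and doublewords, with the sign indicator at bit 15 and bit 31 of each element, respectively.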
Furthermore, format 360 may include register/memory operand addressing modes corresponding with a type of opcode format described in the "IA-32 Intel Architecture Software Developer's Manual Volume 2: Instruction Set Reference," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/design/litcentr. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. In one embodiment, destination operand identifier 366 may be the same as source operand identifier 364, whereas in other embodiments they may be different. In another embodiment, destination operand identifier 366 may be the same as source operand identifier 365, whereas in other embodiments they may be different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 may be overwritten by the results of the text string comparison operations, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. In one embodiment, operand identifiers 364 and 365 may identify 32-bit or 64-bit source and destination operands.[0087] FIGURE 3E illustrates another possible operation encoding (opcode) format 370, having forty or more bits, in accordance with embodiments of the present disclosure. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. In one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. 
In one embodiment, destination operand identifier 376 may be the same as source operand identifier 374, whereas in other embodiments they may be different. For another embodiment, destination operand identifier 376 may be the same as source operand identifier 375, whereas in other embodiments they may be different. In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375, and one or more operands identified by operand identifiers 374 and 375 may be overwritten by the results of the instruction, whereas in other embodiments, operands identified by identifiers 374 and 375 may be written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.[0088] FIGURE 3F illustrates yet another possible operation encoding (opcode) format, in accordance with embodiments of the present disclosure. 64-bit single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. The type of CDP instruction and, for another embodiment, its operations may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor may operate on eight-, sixteen-, thirty-two-, and 64-bit values. In one embodiment, an instruction may be performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383.
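As an illustrative aid only, extracting a fixed-width field from an encoded instruction can be sketched as below; the bit positions shown are invented examples, not the actual layout of fields such as MOD 363/373 or the operand identifiers of formats 360, 370, and 380:

```python
# Hypothetical field layout for illustration only.
def extract_field(encoding: int, lo: int, width: int) -> int:
    """Return the `width`-bit field starting at bit `lo` of an encoding."""
    return (encoding >> lo) & ((1 << width) - 1)

# e.g. a 2-bit MOD-style field in bits 7:6 and a 3-bit identifier in bits 2:0
# (assumed positions, chosen only to demonstrate the extraction):
insn = 0b11_001_010
assert extract_field(insn, 6, 2) == 0b11
assert extract_field(insn, 0, 3) == 0b010
```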
In some embodiments, zero (Z), negative (N), carry (C), and overflow (V) detection may be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.[0089] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure. FIGURE 4B is a block diagram illustrating an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor, in accordance with embodiments of the present disclosure. The solid lined boxes in FIGURE 4A illustrate the in-order pipeline, while the dashed lined boxes illustrate the register renaming, out-of-order issue/execution pipeline. Similarly, the solid lined boxes in FIGURE 4B illustrate the in-order architecture logic, while the dashed lined boxes illustrate the register renaming logic and out-of-order issue/execution logic.[0090] In FIGURE 4A, a processor pipeline 400 may include a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write-back/memory-write stage 418, an exception handling stage 422, and a commit stage 424.[0091] In FIGURE 4B, arrows denote a coupling between two or more units and the direction of the arrow indicates a direction of data flow between those units. FIGURE 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both may be coupled to a memory unit 470.[0092] Core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
In one embodiment, core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like.[0093] Front end unit 430 may include a branch prediction unit 432 coupled to an instruction cache unit 434. Instruction cache unit 434 may be coupled to an instruction translation lookaside buffer (TLB) 436. TLB 436 may be coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. Decode unit 440 may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which may be decoded from, or which otherwise reflect, or may be derived from, the original instructions. The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, instruction cache unit 434 may be further coupled to a level 2 (L2) cache unit 476 in memory unit 470. Decode unit 440 may be coupled to a rename/allocator unit 452 in execution engine unit 450.[0094] Execution engine unit 450 may include rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler units 456. Scheduler units 456 represent any number of different schedulers, including reservation stations, central instruction window, etc. Scheduler units 456 may be coupled to physical register file units 458. Each of physical register file units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., and status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc.
Physical register file units 458 may be overlapped by retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using one or more reorder buffers and one or more retirement register files; using one or more future files, one or more history buffers, and one or more retirement register files; using register maps and a pool of registers; etc.). Generally, the architectural registers may be visible from the outside of the processor or from a programmer's perspective. The registers might not be limited to any known particular type of circuit. Various different types of registers may be suitable as long as they store and provide data as described herein. Examples of suitable registers include, but might not be limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. Retirement unit 454 and physical register file units 458 may be coupled to execution clusters 460. Execution clusters 460 may include a set of one or more execution units 462 and a set of one or more memory access units 464. Execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
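The "register maps and a pool of registers" renaming scheme mentioned above can be sketched as follows; the register names and pool size are illustrative assumptions, not part of the disclosed hardware:

```python
# Renaming with a map from architectural to physical registers and a
# pool of free physical registers (illustrative sketch).
free_pool = ["p0", "p1", "p2", "p3"]          # free physical registers
rename_map = {}                               # architectural -> physical

def rename_dest(arch_reg: str) -> str:
    phys = free_pool.pop(0)                   # allocate a fresh physical reg
    rename_map[arch_reg] = phys               # later readers see this mapping
    return phys

# Two writes to the same architectural register receive distinct physical
# registers, removing the write-after-write dependence between them:
assert rename_dest("eax") == "p0"
assert rename_dest("eax") == "p1"
assert rename_map["eax"] == "p1"
```

A retirement register file or reorder buffer (also mentioned above) would return physical registers to the pool once their values are architecturally committed.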
Scheduler units 456, physical register file units 458, and execution clusters 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments may be implemented in which only the execution cluster of this pipeline has memory access units 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[0095] The set of memory access units 464 may be coupled to memory unit 470, which may include a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which may be coupled to data TLB unit 472 in memory unit 470. 
L2 cache unit 476 may be coupled to one or more other levels of cache and eventually to a main memory.[0096] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement pipeline 400 as follows: 1) instruction fetch unit 438 may perform fetch and length decoding stages 402 and 404; 2) decode unit 440 may perform decode stage 406; 3) rename/allocator unit 452 may perform allocation stage 408 and renaming stage 410; 4) scheduler units 456 may perform schedule stage 412; 5) physical register file units 458 and memory unit 470 may perform register read/memory read stage 414, and execution cluster 460 may perform execute stage 416; 6) memory unit 470 and physical register file units 458 may perform write-back/memory-write stage 418; 7) various units may be involved in the performance of exception handling stage 422; and 8) retirement unit 454 and physical register file units 458 may perform commit stage 424.[0097] Core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA). [0098] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) in a variety of manners. Multithreading support may be provided by, for example, time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof.
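The stage-to-unit mapping of pipeline 400 enumerated above can be restated compactly as data; this listing is purely illustrative:

```python
# Pipeline 400: (stage, unit(s) that may perform it), in program order.
pipeline_400 = [
    ("fetch/length decode (402, 404)", "instruction fetch unit 438"),
    ("decode (406)", "decode unit 440"),
    ("allocation (408)", "rename/allocator unit 452"),
    ("renaming (410)", "rename/allocator unit 452"),
    ("schedule (412)", "scheduler units 456"),
    ("register read/memory read (414)", "physical register file units 458 + memory unit 470"),
    ("execute (416)", "execution cluster 460"),
    ("write-back/memory-write (418)", "memory unit 470 + physical register file units 458"),
    ("exception handling (422)", "various units"),
    ("commit (424)", "retirement unit 454 + physical register file units 458"),
]
assert len(pipeline_400) == 10    # the ten stages listed for pipeline 400
```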
Such a combination may include, for example, time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology.[0099] While register renaming may be described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units 434/474 and a shared L2 cache unit 476, other embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that may be external to the core and/or the processor. In other embodiments, all of the caches may be external to the core and/or the processor.[00100] FIGURE 5A is a block diagram of a processor 500, in accordance with embodiments of the present disclosure. In one embodiment, processor 500 may include a multicore processor. Processor 500 may include a system agent 510 communicatively coupled to one or more cores 502. Furthermore, cores 502 and system agent 510 may be communicatively coupled to one or more caches 506. Cores 502, system agent 510, and caches 506 may be communicatively coupled via one or more memory control units 552. Furthermore, cores 502, system agent 510, and caches 506 may be communicatively coupled to a graphics module 560 via memory control units 552.[00101] Processor 500 may include any suitable mechanism for interconnecting cores 502, system agent 510, caches 506, and graphics module 560. In one embodiment, processor 500 may include a ring-based interconnect unit 508 to interconnect cores 502, system agent 510, caches 506, and graphics module 560. In other embodiments, processor 500 may use any number of well-known techniques for interconnecting such units.
Ring-based interconnect unit 508 may utilize memory control units 552 to facilitate interconnections.[00102] Processor 500 may include a memory hierarchy comprising one or more levels of caches within the cores, one or more shared cache units such as caches 506, or external memory (not shown) coupled to the set of integrated memory controller units 552. Caches 506 may include any suitable cache. In one embodiment, caches 506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.[00103] In various embodiments, one or more of cores 502 may perform multithreading. System agent 510 may include components for coordinating and operating cores 502. System agent unit 510 may include, for example, a power control unit (PCU). The PCU may be or include logic and components needed for regulating the power state of cores 502. System agent 510 may include a display engine 512 for driving one or more externally connected displays or graphics module 560. System agent 510 may include an interface 514 for communications busses for graphics. In one embodiment, interface 514 may be implemented by PCI Express (PCIe). In a further embodiment, interface 514 may be implemented by PCI Express Graphics (PEG). System agent 510 may include a direct media interface (DMI) 516. DMI 516 may provide links between different bridges on a motherboard or other portion of a computer system. System agent 510 may include a PCIe bridge 518 for providing PCIe links to other elements of a computing system. PCIe bridge 518 may be implemented using a memory controller 520 and coherence logic 522.[00104] Cores 502 may be implemented in any suitable manner. Cores 502 may be homogenous or heterogeneous in terms of architecture and/or instruction set. In one embodiment, some of cores 502 may be in-order while others may be out-of-order.
In another embodiment, two or more of cores 502 may execute the same instruction set, while others may execute only a subset of that instruction set or a different instruction set.[00105] Processor 500 may include a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which may be available from Intel Corporation, of Santa Clara, Calif. Processor 500 may be provided from another company, such as ARM Holdings, Ltd, MIPS, etc. Processor 500 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. Processor 500 may be implemented on one or more chips. Processor 500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or MOS.[00106] In one embodiment, a given one of caches 506 may be shared by multiple ones of cores 502. In another embodiment, a given one of caches 506 may be dedicated to one of cores 502. The assignment of caches 506 to cores 502 may be handled by a cache controller or other suitable mechanism. A given one of caches 506 may be shared by two or more cores 502 by implementing time-slices of a given cache 506.[00107] Graphics module 560 may implement an integrated graphics processing subsystem. In one embodiment, graphics module 560 may include a graphics processor. Furthermore, graphics module 560 may include a media engine 565. Media engine 565 may provide media encoding and video decoding.[00108] FIGURE 5B is a block diagram of an example implementation of a core 502, in accordance with embodiments of the present disclosure. Core 502 may include a front end 570 communicatively coupled to an out-of-order engine 580. 
Core 502 may be communicatively coupled to other portions of processor 500 through cache hierarchy 503.[00109] Front end 570 may be implemented in any suitable manner, such as fully or in part by front end 201 as described above. In one embodiment, front end 570 may communicate with other portions of processor 500 through cache hierarchy 503. In a further embodiment, front end 570 may fetch instructions from portions of processor 500 and prepare the instructions to be used later in the processor pipeline as they are passed to out-of-order execution engine 580.[00110] Out-of-order execution engine 580 may be implemented in any suitable manner, such as fully or in part by out-of-order execution engine 203 as described above. Out-of-order execution engine 580 may prepare instructions received from front end 570 for execution. Out-of-order execution engine 580 may include an allocate module 582. In one embodiment, allocate module 582 may allocate resources of processor 500 or other resources, such as registers or buffers, to execute a given instruction. Allocate module 582 may make allocations in schedulers, such as a memory scheduler, fast scheduler, or floating point scheduler. Such schedulers may be represented in FIGURE 5B by resource schedulers 584. Allocate module 582 may be implemented fully or in part by the allocation logic described in conjunction with FIGURE 2. Resource schedulers 584 may determine when an instruction is ready to execute based on the readiness of a given instruction's sources and the availability of execution resources needed to execute the instruction. Resource schedulers 584 may be implemented by, for example, schedulers 202, 204, 206 as discussed above. Resource schedulers 584 may schedule the execution of instructions upon one or more resources. In one embodiment, such resources may be internal to core 502, and may be illustrated, for example, as resources 586.
In another embodiment, such resources may be external to core 502 and may be accessible by, for example, cache hierarchy 503. Resources may include, for example, memory, caches, register files, or registers. Resources internal to core 502 may be represented by resources 586 in FIGURE 5B. As necessary, values written to or read from resources 586 may be coordinated with other portions of processor 500 through, for example, cache hierarchy 503. As instructions are assigned resources, they may be placed into a reorder buffer 588. Reorder buffer 588 may track instructions as they are executed and may selectively reorder their execution based upon any suitable criteria of processor 500. In one embodiment, reorder buffer 588 may identify instructions or a series of instructions that may be executed independently. Such instructions or a series of instructions may be executed in parallel with other such instructions. Parallel execution in core 502 may be performed by any suitable number of separate execution blocks or virtual processors. In one embodiment, shared resources, such as memory, registers, and caches, may be accessible to multiple virtual processors within a given core 502. In other embodiments, shared resources may be accessible to multiple processing entities within processor 500.[00111] Cache hierarchy 503 may be implemented in any suitable manner. For example, cache hierarchy 503 may include one or more lower or mid-level caches, such as caches 572, 574. In one embodiment, cache hierarchy 503 may include an LLC 595 communicatively coupled to caches 572, 574. In another embodiment, LLC 595 may be implemented in a module 590 accessible to all processing entities of processor 500. In a further embodiment, module 590 may be implemented in an uncore module of processors from Intel, Inc. Module 590 may include portions or subsystems of processor 500 necessary for the execution of core 502 but might not be implemented within core 502.
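The reorder-buffer behavior described above, identifying instructions that may be executed independently of one another, can be sketched as follows; the dependence test and instruction format are simplified assumptions made only for illustration:

```python
# An instruction is "independent" here if none of its source registers is
# produced by an earlier, still-pending instruction (simplified sketch).
def independent(instrs):
    pending_dests = set()
    ready = []
    for dest, srcs in instrs:
        if not (set(srcs) & pending_dests):
            ready.append(dest)
        pending_dests.add(dest)
    return ready

# r3 reads r1, which is produced by the first instruction, so only the
# producers of r1 and r4 may issue in parallel:
assert independent([("r1", ["r0"]), ("r3", ["r1"]), ("r4", ["r2"])]) == ["r1", "r4"]
```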
Besides LLC 595, module 590 may include, for example, hardware interfaces, memory coherency coordinators, interprocessor interconnects, instruction pipelines, or memory controllers. Access to RAM 599 available to processor 500 may be made through module 590 and, more specifically, LLC 595. Furthermore, other instances of core 502 may similarly access module 590. Coordination of the instances of core 502 may be facilitated in part through module 590.[00112] FIGURES 6-8 may illustrate exemplary systems suitable for including processor 500, while FIGURE 9 may illustrate an exemplary system on a chip (SoC) that may include one or more of cores 502. Other system designs and implementations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices may also be suitable. In general, a huge variety of systems or electronic devices that incorporate a processor and/or other execution logic as disclosed herein may be suitable.[00113] FIGURE 6 illustrates a block diagram of a system 600, in accordance with embodiments of the present disclosure. System 600 may include one or more processors 610, 615, which may be coupled to graphics memory controller hub (GMCH) 620. The optional nature of additional processors 615 is denoted in FIGURE 6 with broken lines.[00114] Each processor 610, 615 may be some version of processor 500. However, it should be noted that integrated graphics logic and integrated memory control units might not exist in processors 610, 615. FIGURE 6 illustrates that GMCH 620 may be coupled to a memory 640 that may be, for example, a dynamic random access memory (DRAM).
The DRAM may, for at least one embodiment, be associated with a nonvolatile cache.[00115] GMCH 620 may be a chipset, or a portion of a chipset. GMCH 620 may communicate with processors 610, 615 and control interaction between processors 610, 615 and memory 640. GMCH 620 may also act as an accelerated bus interface between the processors 610, 615 and other elements of system 600. In one embodiment, GMCH 620 communicates with processors 610, 615 via a multi-drop bus, such as a frontside bus (FSB) 695.[00116] Furthermore, GMCH 620 may be coupled to a display 645 (such as a flat panel display). In one embodiment, GMCH 620 may include an integrated graphics accelerator. GMCH 620 may be further coupled to an input/output (I/O) controller hub (ICH) 650, which may be used to couple various peripheral devices to system 600. External graphics device 660 may include a discrete graphics device coupled to ICH 650 along with another peripheral device 670.[00117] In other embodiments, additional or different processors may also be present in system 600. For example, additional processors 610, 615 may include additional processors that may be the same as processor 610, additional processors that may be heterogeneous or asymmetric to processor 610, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There may be a variety of differences between the physical resources 610, 615 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst processors 610, 615. For at least one embodiment, various processors 610, 615 may reside in the same die package.[00118] FIGURE 7 illustrates a block diagram of a second system 700, in accordance with embodiments of the present disclosure. 
As shown in FIGURE 7, multiprocessor system 700 may include a point-to-point interconnect system, and may include a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. Each of processors 770 and 780 may be some version of processor 500, as may one or more of processors 610, 615. [00119] While FIGURE 7 may illustrate two processors 770, 780, it is to be understood that the scope of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor.[00120] Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 may also include as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 may include P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in FIGURE 7, IMCs 772 and 782 may couple the processors to respective memories, namely a memory 732 and a memory 734, which in one embodiment may be portions of main memory locally attached to the respective processors.[00121] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798. In one embodiment, chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739.[00122] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[00123] Chipset 790 may be coupled to a first bus 716 via an interface 796.
In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.[00124] As shown in FIGURE 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727, and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures may be possible. For example, instead of the point-to-point architecture of FIGURE 7, a system may implement a multi-drop bus or other such architecture.[00125] FIGURE 8 illustrates a block diagram of a third system 800, in accordance with embodiments of the present disclosure. Like elements in FIGURES 7 and 8 bear like reference numerals, and certain aspects of FIGURE 7 have been omitted from FIGURE 8 in order to avoid obscuring other aspects of FIGURE 8.[00126] FIGURE 8 illustrates that processors 770, 780 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. For at least one embodiment, CL 872, 882 may include integrated memory controller units such as that described above in connection with FIGURES 5 and 7. In addition, CL 872, 882 may also include I/O control logic. FIGURE 8 illustrates that not only may memories 732, 734 be coupled to CL 872, 882, but I/O devices 814 may also be coupled to control logic 872, 882. Legacy I/O devices 815 may be coupled to chipset 790.[00127] FIGURE 9 illustrates a block diagram of a SoC 900, in accordance with embodiments of the present disclosure.
Similar elements in FIGURE 5 bear like reference numerals. Also, dashed lined boxes may represent optional features on more advanced SoCs. Interconnect units 902 may be coupled to: an application processor 910 which may include a set of one or more cores 502A-N and shared cache units 506; a system agent unit 510; bus controller units 916; integrated memory controller units 914; a set of one or more media processors 920 which may include integrated graphics logic 908, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays.[00128] FIGURE 10 illustrates a processor containing a central processing unit (CPU) and a graphics processing unit (GPU), which may perform at least one instruction, in accordance with embodiments of the present disclosure. In one embodiment, an instruction to perform operations according to at least one embodiment could be performed by the CPU. In another embodiment, the instruction could be performed by the GPU. In still another embodiment, the instruction may be performed through a combination of operations performed by the GPU and the CPU. For example, in one embodiment, an instruction in accordance with one embodiment may be received and decoded for execution on the GPU. However, one or more operations within the decoded instruction may be performed by a CPU and the result returned to the GPU for final retirement of the instruction.
Conversely, in some embodiments, the CPU may act as the primary processor and the GPU as the co-processor.[00129] In some embodiments, instructions that benefit from highly parallel, throughput processors may be performed by the GPU, while instructions that benefit from deeply pipelined architectures may be performed by the CPU. For example, graphics, scientific applications, financial applications, and other parallel workloads may benefit from the performance of the GPU and be executed accordingly, whereas more sequential applications, such as operating system kernel or application code, may be better suited for the CPU.[00130] In FIGURE 10, processor 1000 includes a CPU 1005, GPU 1010, image processor 1015, video processor 1020, USB controller 1025, UART controller 1030, SPI/SDIO controller 1035, display device 1040, memory interface controller 1045, MIPI controller 1050, flash memory controller 1055, dual data rate (DDR) controller 1060, security engine 1065, and I2S/I2C controller 1070. Other logic and circuits may be included in the processor of FIGURE 10, including more CPUs or GPUs and other peripheral interface controllers.[00131] One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as the Cortex™ family of processors developed by ARM Holdings, Ltd.
and Loongson IP cores developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, may be licensed or sold to various customers or licensees, such as Texas Instruments, Qualcomm, Apple, or Samsung, and implemented in processors produced by these customers or licensees.[00132] FIGURE 11 is a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure. Storage 1100 may include simulation software 1120 and/or hardware or software model 1110. In one embodiment, the data representing the IP core design may be provided to storage 1100 via memory 1140 (e.g., hard disk), wired connection (e.g., internet) 1150, or wireless connection 1160. The IP core information generated by the simulation tool and model may then be transmitted to a fabrication facility 1165, where it may be fabricated by a third party to perform at least one instruction in accordance with at least one embodiment.[00133] In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and be translated or emulated on a processor of a different type or architecture (e.g., ARM). An instruction, according to one embodiment, may therefore be performed on any processor or processor type, including ARM, x86, MIPS, a GPU, or other processor type or architecture.[00134] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure. In FIGURE 12, program 1205 contains some instructions that may perform the same or substantially the same function as an instruction according to one embodiment. However, the instructions of program 1205 may be of a type and/or format that is different from or incompatible with processor 1215, meaning the instructions of the type in program 1205 may not be executed natively by processor 1215.
However, with the help of emulation logic 1210, the instructions of program 1205 may be translated into instructions that may be natively executed by processor 1215. In one embodiment, the emulation logic may be embodied in hardware. In another embodiment, the emulation logic may be embodied in a tangible, machine-readable medium containing software to translate instructions of the type in program 1205 into the type natively executable by processor 1215. In other embodiments, emulation logic may be a combination of fixed-function or programmable hardware and a program stored on a tangible, machine-readable medium. In one embodiment, the processor contains the emulation logic, whereas in other embodiments, the emulation logic exists outside of the processor and may be provided by a third party. In one embodiment, the processor may load the emulation logic embodied in a tangible, machine-readable medium containing software by executing microcode or firmware contained in or associated with the processor.[00135] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure. In the illustrated embodiment, the instruction converter may be a software instruction converter, although the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIGURE 13 shows that a program in a high level language 1302 may be compiled using an x86 compiler 1304 to generate x86 binary code 1306 that may be natively executed by a processor with at least one x86 instruction set core 1316.
The processor with at least one x86 instruction set core 1316 represents any processor that may perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. x86 compiler 1304 represents a compiler that may be operable to generate x86 binary code 1306 (e.g., object code) that may, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1316. Similarly, FIGURE 13 shows the program in high level language 1302 may be compiled using an alternative instruction set compiler 1308 to generate alternative instruction set binary code 1310 that may be natively executed by a processor without at least one x86 instruction set core 1314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). Instruction converter 1312 may be used to convert x86 binary code 1306 into code that may be natively executed by the processor without an x86 instruction set core 1314. This converted code might not be the same as alternative instruction set binary code 1310; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. 
Thus, instruction converter 1312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute x86 binary code 1306.[00136] FIGURE 14 is a block diagram of an instruction set architecture 1400 of a processor, in accordance with embodiments of the present disclosure. Instruction set architecture 1400 may include any suitable number or kind of components.[00137] For example, instruction set architecture 1400 may include processing entities such as one or more cores 1406, 1407 and a graphics processing unit 1415. Cores 1406, 1407 may be communicatively coupled to the rest of instruction set architecture 1400 through any suitable mechanism, such as through a bus or cache. In one embodiment, cores 1406, 1407 may be communicatively coupled through an L2 cache control 1408, which may include a bus interface unit 1409 and an L2 cache 1411. Cores 1406, 1407 and graphics processing unit 1415 may be communicatively coupled to each other and to the remainder of instruction set architecture 1400 through interconnect 1410. In one embodiment, graphics processing unit 1415 may use a video codec 1420 defining the manner in which particular video signals will be encoded and decoded for output.[00138] Instruction set architecture 1400 may also include any number or kind of interfaces, controllers, or other mechanisms for interfacing or communicating with other portions of an electronic device or system. Such mechanisms may facilitate interaction with, for example, peripherals, communications devices, other processors, or memory.
In the example of FIGURE 14, instruction set architecture 1400 may include a liquid crystal display (LCD) video interface 1425, a subscriber identity module (SIM) interface 1430, a boot ROM interface 1435, a synchronous dynamic random access memory (SDRAM) controller 1440, a flash controller 1445, and a serial peripheral interface (SPI) master unit 1450. LCD video interface 1425 may provide output of video signals from, for example, GPU 1415 and through, for example, a mobile industry processor interface (MIPI) 1490 or a high-definition multimedia interface (HDMI) 1495 to a display. Such a display may include, for example, an LCD. SIM interface 1430 may provide access to or from a SIM card or device. SDRAM controller 1440 may provide access to or from memory such as an SDRAM chip or module 1460. Flash controller 1445 may provide access to or from memory such as flash memory 1465 or other instances of RAM. SPI master unit 1450 may provide access to or from communications modules, such as a Bluetooth module 1470, high-speed 3G modem 1475, global positioning system module 1480, or wireless module 1485 implementing a communications standard such as 802.11.[00139] FIGURE 15 is a more detailed block diagram of an instruction set architecture 1500 of a processor, in accordance with embodiments of the present disclosure. Instruction architecture 1500 may implement one or more aspects of instruction set architecture 1400. Furthermore, instruction set architecture 1500 may illustrate modules and mechanisms for the execution of instructions within a processor.[00140] Instruction architecture 1500 may include a memory system 1540 communicatively coupled to one or more execution entities 1565. Furthermore, instruction architecture 1500 may include a caching and bus interface unit such as unit 1510 communicatively coupled to execution entities 1565 and memory system 1540.
In one embodiment, loading of instructions into execution entities 1565 may be performed by one or more stages of execution. Such stages may include, for example, instruction prefetch stage 1530, dual instruction decode stage 1550, register rename stage 1555, issue stage 1560, and writeback stage 1570.[00141] In one embodiment, memory system 1540 may include an executed instruction pointer 1580. Executed instruction pointer 1580 may store a value identifying the oldest, undispatched instruction within a batch of instructions. The oldest instruction may correspond to the lowest Program Order (PO) value. A PO may include a unique number of an instruction. Such an instruction may be a single instruction within a thread represented by multiple strands. A PO may be used in ordering instructions to ensure correct execution semantics of code. A PO may be reconstructed by mechanisms such as evaluating increments to PO encoded in the instruction rather than an absolute value. Such a reconstructed PO may be known as an "RPO." Although a PO may be referenced herein, such a PO may be used interchangeably with an RPO. A strand may include a sequence of instructions that are data dependent upon each other. The strand may be arranged by a binary translator at compilation time. Hardware executing a strand may execute the instructions of a given strand in order according to the PO of the various instructions. A thread may include multiple strands such that instructions of different strands may depend upon each other. A PO of a given strand may be the PO of the oldest instruction in the strand which has not yet been dispatched to execution from an issue stage. Accordingly, given a thread of multiple strands, each strand including instructions ordered by PO, executed instruction pointer 1580 may store the oldest— illustrated by the lowest number— PO in the thread.[00142] In another embodiment, memory system 1540 may include a retirement pointer 1582. 
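The program-order bookkeeping described above can be sketched as a small scalar model. This is illustrative only: the function name, the per-strand tuple layout, and the data are our assumptions, not the hardware's representation.

```python
# Illustrative model of executed instruction pointer 1580: it tracks the
# oldest undispatched instruction, i.e. the lowest program-order (PO)
# value among not-yet-dispatched instructions across all strands of a thread.

def executed_instruction_pointer(strands):
    """Each strand is a list of (po, dispatched) tuples ordered by PO.
    Returns the lowest PO among undispatched instructions, or None."""
    undispatched = [po for strand in strands
                    for (po, dispatched) in strand if not dispatched]
    return min(undispatched) if undispatched else None

# A thread of two strands; PO values are unique across the thread.
strands = [
    [(3, True), (7, False), (11, False)],   # strand A
    [(5, True), (9, False)],                # strand B
]
print(executed_instruction_pointer(strands))  # -> 7 (oldest undispatched PO)
```

In this toy thread the dispatched instructions (PO 3 and 5) are skipped, so the pointer holds PO 7, the lowest-numbered instruction still awaiting dispatch.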
Retirement pointer 1582 may store a value identifying the PO of the last retired instruction. Retirement pointer 1582 may be set by, for example, retirement unit 454. If no instructions have yet been retired, retirement pointer 1582 may include a null value.[00143] Execution entities 1565 may include any suitable number and kind of mechanisms by which a processor may execute instructions. In the example of FIGURE 15, execution entities 1565 may include ALU/multiplication units (MUL) 1566, ALUs 1567, and floating point units (FPU) 1568. In one embodiment, such entities may make use of information contained within a given address 1569. Execution entities 1565 in combination with stages 1530, 1550, 1555, 1560, 1570 may collectively form an execution unit.[00144] Unit 1510 may be implemented in any suitable manner. In one embodiment, unit 1510 may perform cache control. In such an embodiment, unit 1510 may thus include a cache 1525. Cache 1525 may be implemented, in a further embodiment, as an L2 unified cache with any suitable size, such as zero, 128k, 256k, 512k, 1M, or 2M bytes of memory. In another, further embodiment, cache 1525 may be implemented in error-correcting code memory. In another embodiment, unit 1510 may perform bus interfacing to other portions of a processor or electronic device. In such an embodiment, unit 1510 may thus include a bus interface unit 1520 for communicating over an interconnect, intraprocessor bus, interprocessor bus, or other communication bus, port, or line. 
Bus interface unit 1520 may provide interfacing in order to perform, for example, generation of the memory and input/output addresses for the transfer of data between execution entities 1565 and the portions of a system external to instruction architecture 1500.[00145] To further facilitate its functions, bus interface unit 1520 may include an interrupt control and distribution unit 1511 for generating interrupts and other communications to other portions of a processor or electronic device. In one embodiment, bus interface unit 1520 may include a snoop control unit 1512 that handles cache access and coherency for multiple processing cores. In a further embodiment, to provide such functionality, snoop control unit 1512 may include a cache-to-cache transfer unit that handles information exchanges between different caches. In another, further embodiment, snoop control unit 1512 may include one or more snoop filters 1514 that monitor the coherency of other caches (not shown) so that a cache controller, such as unit 1510, does not have to perform such monitoring directly. Unit 1510 may include any suitable number of timers 1515 for synchronizing the actions of instruction architecture 1500. Also, unit 1510 may include an AC port 1516.[00146] Memory system 1540 may include any suitable number and kind of mechanisms for storing information for the processing needs of instruction architecture 1500. In one embodiment, memory system 1540 may include a load store unit 1546 for storing information such as buffers written to or read back from memory or registers. In another embodiment, memory system 1540 may include a translation lookaside buffer (TLB) 1545 that provides look-up of address values between physical and virtual addresses. In yet another embodiment, memory system 1540 may include a memory management unit (MMU) 1544 for facilitating access to virtual memory.
In still yet another embodiment, memory system 1540 may include a prefetcher 1543 for requesting instructions from memory before such instructions are actually needed to be executed, in order to reduce latency.[00147] The operation of instruction architecture 1500 to execute an instruction may be performed through different stages. For example, using unit 1510, instruction prefetch stage 1530 may access an instruction through prefetcher 1543. Instructions retrieved may be stored in instruction cache 1532. Prefetch stage 1530 may enable an option 1531 for fast-loop mode, wherein a series of instructions forming a loop small enough to fit within a given cache is executed. In one embodiment, such an execution may be performed without needing to access additional instructions from, for example, instruction cache 1532. Determination of what instructions to prefetch may be made by, for example, branch prediction unit 1535, which may access indications of execution in global history 1536, indications of target addresses 1537, or contents of a return stack 1538 to determine which of branches 1557 of code will be executed next. Such branches may possibly be prefetched as a result. Branches 1557 may be produced through other stages of operation as described below. Instruction prefetch stage 1530 may provide instructions as well as any predictions about future instructions to dual instruction decode stage 1550.[00148] Dual instruction decode stage 1550 may translate a received instruction into microcode-based instructions that may be executed. Dual instruction decode stage 1550 may simultaneously decode two instructions per clock cycle. Furthermore, dual instruction decode stage 1550 may pass its results to register rename stage 1555. In addition, dual instruction decode stage 1550 may determine any resulting branches from its decoding and eventual execution of the microcode.
Such results may be input into branches 1557.[00149] Register rename stage 1555 may translate references to virtual registers or other resources into references to physical registers or resources. Register rename stage 1555 may include indications of such mapping in a register pool 1556. Register rename stage 1555 may alter the instructions as received and send the result to issue stage 1560.[00150] Issue stage 1560 may issue or dispatch commands to execution entities 1565. Such issuance may be performed in an out-of-order fashion. In one embodiment, multiple instructions may be held at issue stage 1560 before being executed. Issue stage 1560 may include an instruction queue 1561 for holding such multiple commands. Instructions may be issued by issue stage 1560 to a particular processing entity 1565 based upon any acceptable criteria, such as availability or suitability of resources for execution of a given instruction. In one embodiment, issue stage 1560 may reorder the instructions within instruction queue 1561 such that the first instructions received might not be the first instructions executed. Based upon the ordering of instruction queue 1561, additional branching information may be provided to branches 1557. Issue stage 1560 may pass instructions to execution entities 1565 for execution.[00151] Upon execution, writeback stage 1570 may write data into registers, queues, or other structures of instruction set architecture 1500 to communicate the completion of a given command. Depending upon the order of instructions arranged in issue stage 1560, the operation of writeback stage 1570 may enable additional instructions to be executed. Performance of instruction set architecture 1500 may be monitored or debugged by trace unit 1575.[00152] FIGURE 16 is a block diagram of an execution pipeline 1600 for an instruction set architecture of a processor, in accordance with embodiments of the present disclosure.
Execution pipeline 1600 may illustrate operation of, for example, instruction architecture 1500 of FIGURE 15.[00153] Execution pipeline 1600 may include any suitable combination of steps or operations. In 1605, predictions of the branch that is to be executed next may be made. In one embodiment, such predictions may be based upon previous executions of instructions and the results thereof. In 1610, instructions corresponding to the predicted branch of execution may be loaded into an instruction cache. In 1615, one or more such instructions in the instruction cache may be fetched for execution. In 1620, the instructions that have been fetched may be decoded into microcode or more specific machine language. In one embodiment, multiple instructions may be simultaneously decoded. In 1625, references to registers or other resources within the decoded instructions may be reassigned. For example, references to virtual registers may be replaced with references to corresponding physical registers. In 1630, the instructions may be dispatched to queues for execution. In 1640, the instructions may be executed. Such execution may be performed in any suitable manner. In 1650, the instructions may be issued to a suitable execution entity. The manner in which the instruction is executed may depend upon the specific entity executing the instruction. For example, at 1655, an ALU may perform arithmetic functions. The ALU may utilize a single clock cycle for its operation, as well as two shifters. In one embodiment, two ALUs may be employed, and thus two instructions may be executed at 1655. At 1660, a determination of a resulting branch may be made. A program counter may be used to designate the destination to which the branch will be made. 1660 may be executed within a single clock cycle. At 1665, floating point arithmetic may be performed by one or more FPUs. The floating point operation may require multiple clock cycles to execute, such as two to ten cycles. 
At 1670, multiplication and division operations may be performed. Such operations may be performed in four clock cycles. At 1675, loading and storing operations to registers or other portions of pipeline 1600 may be performed. The operations may include loading and storing addresses. Such operations may be performed in four clock cycles. At 1680, write-back operations may be performed as required by the resulting operations of 1655-1675.[00154] FIGURE 17 is a block diagram of an electronic device 1700 for utilizing a processor 1710, in accordance with embodiments of the present disclosure. Electronic device 1700 may include, for example, a notebook, an ultrabook, a computer, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.[00155] Electronic device 1700 may include processor 1710 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. 
Such coupling may be accomplished by any suitable kind of bus or interface, such as an I2C bus, system management bus (SMBus), low pin count (LPC) bus, SPI, high definition audio (HDA) bus, Serial Advance Technology Attachment (SATA) bus, USB bus (versions 1, 2, 3), or Universal Asynchronous Receiver/Transmitter (UART) bus.[00156] Such components may include, for example, a display 1724, a touch screen 1725, a touch pad 1730, a near field communications (NFC) unit 1745, a sensor hub 1740, a thermal sensor 1746, an express chipset (EC) 1735, a trusted platform module (TPM) 1738, BIOS/firmware/flash memory 1722, a digital signal processor 1760, a drive 1720 such as a solid state disk (SSD) or a hard disk drive (HDD), a wireless local area network (WLAN) unit 1750, a Bluetooth unit 1752, a wireless wide area network (WWAN) unit 1756, a global positioning system (GPS) 1775, a camera 1754 such as a USB 3.0 camera, or a low power double data rate (LPDDR) memory unit 1715 implemented in, for example, the LPDDR3 standard. These components may each be implemented in any suitable manner.[00157] Furthermore, in various embodiments other components may be communicatively coupled to processor 1710 through the components discussed above. For example, an accelerometer 1741, ambient light sensor (ALS) 1742, compass 1743, and gyroscope 1744 may be communicatively coupled to sensor hub 1740. A thermal sensor 1739, fan 1737, keyboard 1736, and touch pad 1730 may be communicatively coupled to EC 1735. Speakers 1763, headphones 1764, and a microphone 1765 may be communicatively coupled to an audio unit 1762, which may in turn be communicatively coupled to DSP 1760. Audio unit 1762 may include, for example, an audio codec and a class D amplifier. A SIM card 1757 may be communicatively coupled to WWAN unit 1756.
Components such as WLAN unit 1750 and Bluetooth unit 1752, as well as WWAN unit 1756, may be implemented in a next generation form factor (NGFF).[00158] FIGURE 18 is an illustration of an example system 1800 for instructions and logic for permute sequences of instructions or operations, according to embodiments of the present disclosure. Embodiments of the present disclosure involve instructions and processing logic for executing permute operations. In one embodiment, the number of permute operations needed for certain data conversions may be reduced or minimized using out-of-order loads. In yet another embodiment, the number of permute operations needed for certain data conversions may be reduced by using permute operations that can partially or fully (through masking) reuse an index vector as a destination vector, allowing it to function in essence as a three-source permute instruction.[00159] The operations that cause data conversion performed by permuting may implement instruction striding, wherein multiple operations are applied to different elements of a structure simultaneously. For example, the operations may implement in part a Stride-5 operation, although the principles of the present disclosure may be applied to stride operations on a different number of elements. In one embodiment, the operations might be made on five elements of the same type. Each different structure within the array may be denoted by a different shading or color, and each element within a given structure may be shown by its number (0...4).[00160] More specifically, the need to implement striding operations may arise when converting an array-of-structures (AOS) data format into a structure-of-arrays (SOA) data format. Such operations are shown briefly in FIGURE 21. Given an array 2102 in memory or in cache, data for eight separate structures may be contiguously (whether physically or virtually) arranged in memory.
In one embodiment, each structure (Structure 1...Structure 8) may have the same format as one another. The eight structures may each be, for example, a five-element structure, wherein each element is, for example, a double. In other examples, each element of the structure could be a float, single, or other data type. Each element may be of a same data type. Array 2102 may be referenced by a base location r in its memory.[00161] The process of converting AOS to SOA may be performed. System 1800 may perform such a conversion in an efficient manner.[00162] As a result, a structure of arrays 2104 may result. Each array (Array 1...Array 5) may be loaded into a different destination, such as a register or memory or cache location. Each array may include, for example, all the first elements from the structures, all the second elements from the structures, all the third elements from the structures, all the fourth elements from the structures, or all the fifth elements from the structures.[00163] By arranging the structure of arrays 2104 into different registers, each with all of the particularly indexed elements from all of the structures of the array of structures 2102, additional operations may be performed on each register with increased efficiency. For example, in a loop of executing code, the first element of each structure might be added to a second element of each structure, or the third element of each structure might be analyzed. By isolating all such elements into a single register or other location, vector operations can be performed. Such vector operations, using SIMD techniques, could perform the addition, analysis, or other execution upon all elements of the array at a single time, in a clock cycle. Transformation of AOS to SOA format may allow vectorized operations such as these.[00164] Returning to FIGURE 18, system 1800 may perform the AOS-SOA conversion shown in FIGURE 21.
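The AOS-to-SOA conversion of FIGURE 21 can be expressed as a scalar reference in Python. This is an illustrative sketch under our own naming; the hardware described here performs the equivalent rearrangement with vector loads and permute instructions rather than element-by-element slicing.

```python
# Scalar reference for AOS-to-SOA conversion: eight 5-element structures
# stored contiguously become five arrays, each holding one element position
# from every structure.

def aos_to_soa(aos, elems_per_struct):
    # Slice with a stride equal to the structure size: position i of every
    # structure lands in output array i.
    return [aos[i::elems_per_struct] for i in range(elems_per_struct)]

# Eight structures of five doubles each; element k of structure s is s*10+k.
aos = [s * 10 + k for s in range(8) for k in range(5)]
soa = aos_to_soa(aos, 5)
print(soa[0])  # all first elements -> [0, 10, 20, 30, 40, 50, 60, 70]
```

With strided data laid out this way, a single vector operation can touch the same field of all eight structures at once, which is the payoff described in paragraph [00163].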
In one embodiment, system 1800 may utilize permute operations in a sequence in order to perform the AOS-SOA conversion. In a further embodiment, system 1800 may utilize an optimized or improved permute sequence when compared to other systems that use permute sequences, by use of specific combinations of permute functions that can selectively reuse part or all of an index vector as a destination vector. In yet another, further embodiment, system 1800 may utilize out-of-order (OOO) loads to reduce or minimize the number of permutes needed to perform the AOS-SOA conversion.[00165] The AOS-SOA conversion may be made upon any suitable trigger. In one embodiment, system 1800 may perform AOS-SOA conversion upon a specific instruction in instruction stream 1802 that such conversion is to be performed. In another embodiment, system 1800 may infer that AOS-SOA conversion should be performed based upon the proposed execution of another instruction from instruction stream 1802. For example, upon determination that a stride operation, a vector operation, or an operation upon strided data is to be performed, system 1800 may recognize that such execution will be more efficient with data that has been converted to strided form and perform AOS-SOA conversion. Any suitable portion of system 1800 may determine that AOS-SOA conversion is to be performed, such as a front end, a decoder, a dynamic translator, or another suitable portion, such as a just-in-time interpreter or compiler.[00166] In some systems, an AOS-SOA conversion may be performed by gather instructions. In other systems, an AOS-SOA conversion may be performed by load, blend, and permute instructions. However, system 1800 may efficiently perform the conversion using permute instructions that reduce the total number of permute instructions that are needed.[00167] System 1800 may include a processor, SoC, integrated circuit, or other mechanism. For example, system 1800 may include processor 1804.
Although processor 1804 is shown and described as an example in FIGURE 18, any suitable mechanism may be used. Processor 1804 may include any suitable mechanisms for executing vector operations that target vector registers, including those that operate on structures stored in the vector registers that contain multiple elements. In one embodiment, such mechanisms may be implemented in hardware. Processor 1804 may be implemented fully or in part by the elements described in FIGURES 1-17.[00168] Instructions to be executed on processor 1804 may be included in instruction stream 1802. Instruction stream 1802 may be generated by, for example, a compiler, just-in-time interpreter, or other suitable mechanism (which might or might not be included in system 1800), or may be designated by a drafter of code resulting in instruction stream 1802. For example, a compiler may take application code and generate executable code in the form of instruction stream 1802. Instructions may be received by processor 1804 from instruction stream 1802. Instruction stream 1802 may be loaded to processor 1804 in any suitable manner. For example, instructions to be executed by processor 1804 may be loaded from storage, from other machines, or from other memory, such as memory system 1830. The instructions may arrive and be available in resident memory, such as RAM, wherein instructions are fetched from storage to be executed by processor 1804. The instructions may be fetched from resident memory by, for example, fetch unit 1808. In one embodiment, instruction stream 1802 may include an instruction 1822 that will trigger AOS-SOA conversion.[00169] Processor 1804 may include a front end 1806, which may include an instruction fetch pipeline stage and a decode pipeline stage. Front end 1806 may receive instructions with fetch unit 1808 and decode instructions from instruction stream 1802 using decode unit 1810.
The decoded instructions may be dispatched, allocated, and scheduled for execution by an allocation stage of a pipeline (such as allocator 1814) and allocated to specific execution units 1816 for execution. One or more specific instructions to be executed by processor 1804 may be included in a library defined for execution by processor 1804. In another embodiment, specific instructions may be targeted by particular portions of processor 1804. For example, processor 1804 may recognize an attempt in instruction stream 1802 to execute a vector operation in software and may issue the instruction to a particular one of execution units 1816.[00170] During execution, access to data or additional instructions (including data or instructions resident in memory system 1830) may be made through memory subsystem 1820. Moreover, results from execution may be stored in memory subsystem 1820 and may subsequently be flushed to other portions of memory. Memory subsystem 1820 may include, for example, memory, RAM, or a cache hierarchy, which may include one or more Level 1 (L1) caches or Level 2 (L2) caches, some of which may be shared by multiple cores 1812 or processors 1804. After execution by execution units 1816, instructions may be retired by a writeback stage or retirement stage in retirement unit 1818. Various portions of such execution pipelining may be performed by one or more cores 1812.[00171] An execution unit 1816 that executes vector instructions may be implemented in any suitable manner. In one embodiment, an execution unit 1816 may include or may be communicatively coupled to memory elements to store information necessary to perform one or more vector operations. In one embodiment, an execution unit 1816 may include circuitry to perform strided operations upon stride-5 or other data.
For example, an execution unit 1816 may include circuitry to implement an instruction upon multiple elements of data simultaneously within a given clock cycle.[00172] In embodiments of the present disclosure, the instruction set architecture of processor 1804 may implement one or more extended vector instructions that are defined as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions. Processor 1804 may recognize, either implicitly or through decoding and execution of specific instructions, that one of these extended vector operations is to be performed. In such cases, the extended vector operation may be directed to a particular one of the execution units 1816 for execution of the instruction. In one embodiment, the instruction set architecture may include support for 512-bit SIMD operations. For example, the instruction set architecture implemented by an execution unit 1816 may include 32 vector registers, each of which is 512 bits wide, and support for vectors that are up to 512 bits wide. The instruction set architecture implemented by an execution unit 1816 may include eight dedicated mask registers for conditional execution and efficient merging of destination operands. At least some extended vector instructions may include support for broadcasting. At least some extended vector instructions may include support for embedded masking to enable predication. [00173] At least some extended vector instructions may apply the same operation to each element of a vector stored in a vector register at the same time. Other extended vector instructions may apply the same operation to corresponding elements in multiple source vector registers. For example, the same operation may be applied to each of the individual data elements of a packed data item stored in a vector register by an extended vector instruction. 
In another example, an extended vector instruction may specify a single vector operation to be performed on the respective data elements of two source vector operands to generate a destination vector operand.[00174] In embodiments of the present disclosure, at least some extended vector instructions may be executed by a SIMD coprocessor within a processor core. For example, one or more of execution units 1816 within a core 1812 may implement the functionality of a SIMD coprocessor. The SIMD coprocessor may be implemented fully or in part by the elements described in FIGURES 1-17. In one embodiment, extended vector instructions that are received by processor 1804 within instruction stream 1802 may be directed to an execution unit 1816 that implements the functionality of a SIMD coprocessor.[00175] During execution, in response to an operation that may benefit from strided data, system 1800 may execute an instruction that causes AOS-SOA conversion 1830. Example operation of such conversion may be shown in the figures below.[00176] Some aspects of AOS-SOA conversion may utilize permute instructions. Permute instructions may selectively identify any combination of the elements of two or more source vectors to be stored in a destination vector. Moreover, the combination of the elements may be stored in any desired order. In order to perform such an operation, an index vector may be specified, wherein each element of the index vector specifies, for an element of the destination vector, which element among the combined sources will be stored in the destination vector.[00177] Several forms of permute instructions may be used. For example, a two-source permute instruction such as VPERMT2D may include a mask and three other operators or parameters. VPERMT2D may be called using, for example, VPERMT2D {mask} source1, index, source2, although the order of parameters may be in any suitable arrangement. Source1, index, and source2 may all be vectors of the same size.
The mask may be used to selectively write to the destination. Thus, if the mask is all 1's, all results will be written, but the binary mask may be set so as to selectively write a subset of the permutation. The permute operation will select values from the combination of source1 and source2 to write to the destination. Either source or the index may also serve as the destination of the permutation. For example, source1 may be used as the destination. In other examples, VPERMT2 may overwrite results on source registers, while VPERMI2 may overwrite results on index registers. The elements of the index may specify which elements of source1 and source2 are to be written to the destination. A given element of the index at a given position may specify which of source1 and source2 are to be written to the destination at a location in the destination at the given position. The element of the index may specify an offset within a combination of source1 and source2 that will be written to the destination.[00178] For example, consider a call to VPERMT2D {mask = 01111111} {source1 = zmm0 = {a b c d e f g h}} {index = zmm31 = {-1 11 6 1 15 10 5 0}} {source2 = zmm1 = {i j k l m n o p}}. The first seven elements of source1 (zmm0) will be written according to the mask. Furthermore, index may specify offsets (from right to left) within the combination of source1 and source2 that will be written to the destination. The combination may include the concatenation of source2 to source1, or {i j k l m n o p a b c d e f g h}. Thus, index may specify that the zeroth element of the destination will be written with the zeroth element of the combination of source2 and source1, or "h". The index may specify that the first element of the destination will be written with the fifth element of the combination of source2 and source1, or "c".
The index may specify (zero-based numbering) that the second element of the destination will be written with the tenth element of the combination of source2 and source1, or "n". The index may specify (zero-based numbering) that the third element of the destination will be written with the fifteenth element of the combination of source2 and source1, or "i". The index may specify (zero-based numbering) that the fourth element of the destination will be written with the first element of the combination of source2 and source1, or "g". The index may specify (zero-based numbering) that the fifth element of the destination will be written with the sixth element of the combination of source2 and source1, or "b". The index may specify (zero-based numbering) that the sixth element of the destination will be written with the eleventh element of the combination of source2 and source1, or "m". The index may specify (zero-based numbering) that the seventh element of the destination will not be written, as it is specified with a -1. Thus, as a result, the permute will yield {_ m b g i n c h} stored in source1, the zmm0 register.[00179] Different permute operations provide significant flexibility. For example, different permute operations shown in FIGURE 22 can be used to selectively extract the same element (the "x" element) from different registers, wherein the locations of such an element across the sources are known.[00180] In the present disclosure, example pseudocode, instructions, and parameters may be shown. However, other pseudocode, instructions, and parameters may be substituted and used as appropriate. The instructions may include Intel® instructions that are used for example purposes.[00181] FIGURE 19 illustrates an example processor core 1900 of a data processing system that performs SIMD operations, in accordance with embodiments of the present disclosure. Processor core 1900 may be implemented fully or in part by the elements described in FIGURES 1-18.
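The VPERMT2D walkthrough in paragraphs [00177]-[00178] above can be checked with a small software model. This is an illustrative sketch, not the hardware semantics; element 0 of each Python list corresponds to the rightmost (least significant) vector element, matching the right-to-left offsets used in the text.

```python
def vpermt2d(mask, src1, index, src2):
    # Software model of a two-source permute: for each destination position i
    # whose mask bit is set, copy the element at offset index[i] from the
    # concatenation of src1 (offsets 0-7) and src2 (offsets 8-15).
    combined = src1 + src2
    dest = list(src1)               # src1 also serves as the destination
    for i, off in enumerate(index):
        if (mask >> i) & 1:
            dest[i] = combined[off]
    return dest

src1 = list("hgfedcba")             # zmm0 = {a b c d e f g h}, rightmost first
src2 = list("ponmlkji")             # zmm1 = {i j k l m n o p}, rightmost first
index = [0, 5, 10, 15, 1, 6, 11, -1]
result = vpermt2d(0b01111111, src1, index, src2)
# Matches the text's result {_ m b g i n c h}: element 7 is left unwritten
# (it keeps the original "a" in this model because its mask bit is clear).
print(result)  # ['h', 'c', 'n', 'i', 'g', 'b', 'm', 'a']
```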
In one embodiment, processor core 1900 may include a main processor 1920 and a SIMD coprocessor 1910. SIMD coprocessor 1910 may be implemented fully or in part by the elements described in FIGURES 1-17. In one embodiment, SIMD coprocessor 1910 may implement at least a portion of one of the execution units 1816 illustrated in FIGURE 18. In one embodiment, SIMD coprocessor 1910 may include a SIMD execution unit 1912 and an extended vector register file 1914. SIMD coprocessor 1910 may perform operations of extended SIMD instruction set 1916. Extended SIMD instruction set 1916 may include one or more extended vector instructions. These extended vector instructions may control data processing operations that include interactions with data resident in extended vector register file 1914.[00182] In one embodiment, main processor 1920 may include a decoder 1922 to recognize instructions of extended SIMD instruction set 1916 for execution by SIMD coprocessor 1910. In other embodiments, SIMD coprocessor 1910 may include at least part of a decoder (not shown) to decode instructions of extended SIMD instruction set 1916. Processor core 1900 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure.[00183] In embodiments of the present disclosure, main processor 1920 may execute a stream of data processing instructions that control data processing operations of a general type, including interactions with cache(s) 1924 and/or register file 1926. Embedded within the stream of data processing instructions may be SIMD coprocessor instructions of extended SIMD instruction set 1916. Decoder 1922 of main processor 1920 may recognize these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 1910.
Accordingly, main processor 1920 may issue these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 1915. From coprocessor bus 1915, these instructions may be received by any attached SIMD coprocessor. In the example embodiment illustrated in FIGURE 19, SIMD coprocessor 1910 may accept and execute any received SIMD coprocessor instructions intended for execution on SIMD coprocessor 1910.[00184] In one embodiment, main processor 1920 and SIMD coprocessor 1910 may be integrated into a single processor core 1900 that includes an execution unit, a set of register files, and a decoder to recognize instructions of extended SIMD instruction set 1916.[00185] The example implementations depicted in FIGURES 18 and 19 are merely illustrative and are not meant to be limiting on the implementation of the mechanisms described herein for performing extended vector operations.[00186] FIGURE 20 is a block diagram illustrating an example extended vector register file 1914, in accordance with embodiments of the present disclosure. Extended vector register file 1914 may include 32 SIMD registers (ZMM0 - ZMM31), each of which is 512 bits wide. The lower 256 bits of each of the ZMM registers are aliased to a respective 256-bit YMM register. The lower 128 bits of each of the YMM registers are aliased to a respective 128-bit XMM register. For example, bits 255 to 0 of register ZMM0 (shown as 2001) are aliased to register YMM0, and bits 127 to 0 of register ZMM0 are aliased to register XMM0.
Similarly, bits 255 to 0 of register ZMM1 (shown as 2002) are aliased to register YMM1, bits 127 to 0 of register ZMM1 are aliased to register XMM1, bits 255 to 0 of register ZMM2 (shown as 2003) are aliased to register YMM2, bits 127 to 0 of the register ZMM2 are aliased to register XMM2, and so on.[00187] In one embodiment, extended vector instructions in extended SIMD instruction set 1916 may operate on any of the registers in extended vector register file 1914, including registers ZMM0 - ZMM31, registers YMM0 - YMM15, and registers XMM0 - XMM7. In another embodiment, legacy SIMD instructions implemented prior to the development of the Intel® AVX-512 instruction set architecture may operate on a subset of the YMM or XMM registers in extended vector register file 1914. For example, access by some legacy SIMD instructions may be limited to registers YMM0 - YMM15 or to registers XMM0 - XMM7, in some embodiments.[00188] In embodiments of the present disclosure, the instruction set architecture may support extended vector instructions that access up to four instruction operands. For example, in at least some embodiments, the extended vector instructions may access any of 32 extended vector registers ZMM0 - ZMM31 shown in FIGURE 20 as source or destination operands. In some embodiments, the extended vector instructions may access any one of eight dedicated mask registers. In some embodiments, the extended vector instructions may access any of sixteen general-purpose registers as source or destination operands.[00189] In embodiments of the present disclosure, encodings of the extended vector instructions may include an opcode specifying a particular vector operation to be performed. Encodings of the extended vector instructions may include an encoding identifying any of eight dedicated mask registers, k0 - k7.
Each bit of the identified mask register may govern the behavior of a vector operation as it is applied to a respective source vector element or destination vector element. For example, in one embodiment, seven of these mask registers (k1 - k7) may be used to conditionally govern the per-data-element computational operation of an extended vector instruction. In this example, the operation is not performed for a given vector element if the corresponding mask bit is not set. In another embodiment, mask registers k1 - k7 may be used to conditionally govern the per-element updates to the destination operand of an extended vector instruction. In this example, a given destination element is not updated with the result of the operation if the corresponding mask bit is not set.[00190] In one embodiment, encodings of the extended vector instructions may include an encoding specifying the type of masking to be applied to the destination (result) vector of an extended vector instruction. For example, this encoding may specify whether merging-masking or zero-masking is applied to the execution of a vector operation. If this encoding specifies merging-masking, the value of any destination vector element whose corresponding bit in the mask register is not set may be preserved in the destination vector. If this encoding specifies zero-masking, the value of any destination vector element whose corresponding bit in the mask register is not set may be replaced with a value of zero in the destination vector. In one example embodiment, mask register k0 is not used as a predicate operand for a vector operation. In this example, the encoding value that would otherwise select mask k0 may instead select an implicit mask value of all ones, thereby effectively disabling masking.
In this example, mask register k0 may be used for any instruction that takes one or more mask registers as a source or destination operand.[00191] One example of the use and syntax of an extended vector instruction is shown below:VADDPS zmm1, zmm2, zmm3[00192] In one embodiment, the instruction shown above would apply a vector addition operation to all of the elements of the source vector registers zmm2 and zmm3. In one embodiment, the instruction shown above would store the result vector in destination vector register zmm1. Alternatively, an instruction to conditionally apply a vector operation is shown below:VADDPS zmm1 {k1} {z}, zmm2, zmm3[00193] In this example, the instruction would apply a vector addition operation to the elements of the source vector registers zmm2 and zmm3 for which the corresponding bit in mask register k1 is set. In this example, if the {z} modifier is set, the values of the elements of the result vector stored in destination vector register zmm1 corresponding to bits in mask register k1 that are not set may be replaced with a value of zero. Otherwise, if the {z} modifier is not set, or if no {z} modifier is specified, the values of the elements of the result vector stored in destination vector register zmm1 corresponding to bits in mask register k1 that are not set may be preserved.[00194] In one embodiment, encodings of some extended vector instructions may include an encoding to specify the use of embedded broadcast. If an encoding specifying the use of embedded broadcast is included for an instruction that loads data from memory and performs some computational or data movement operation, a single source element from memory may be broadcast across all elements of the effective source operand. For example, embedded broadcast may be specified for a vector instruction when the same scalar operand is to be used in a computation that is applied to all of the elements of a source vector.
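The conditional VADDPS forms shown above can be modeled in software. This sketch is illustrative only, assuming four-element vectors for brevity; the distinction between merging-masking and zero-masking follows the description in paragraph [00190].

```python
def masked_add(dst, k, src1, src2, zero_masking=False):
    # Model of VADDPS dst {k}{z}, src1, src2: where mask bit i is set, write
    # src1[i] + src2[i]; elsewhere either zero the element ({z} set) or
    # preserve the existing destination element (merging-masking).
    out = []
    for i in range(len(src1)):
        if (k >> i) & 1:
            out.append(src1[i] + src2[i])
        else:
            out.append(0 if zero_masking else dst[i])
    return out

zmm1, zmm2, zmm3 = [9, 9, 9, 9], [1, 2, 3, 4], [10, 20, 30, 40]
print(masked_add(zmm1, 0b0101, zmm2, zmm3))                     # [11, 9, 33, 9]
print(masked_add(zmm1, 0b0101, zmm2, zmm3, zero_masking=True))  # [11, 0, 33, 0]
```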
In one embodiment, encodings of the extended vector instructions may include an encoding specifying the size of the data elements that are packed into a source vector register or that are to be packed into a destination vector register. For example, the encoding may specify that each data element is a byte, word, doubleword, or quadword, etc. In another embodiment, encodings of the extended vector instructions may include an encoding specifying the data type of the data elements that are packed into a source vector register or that are to be packed into a destination vector register. For example, the encoding may specify that the data represents single or double precision integers, or any of multiple supported floating point data types.[00195] In one embodiment, encodings of the extended vector instructions may include an encoding specifying a memory address or memory addressing mode with which to access a source or destination operand. In another embodiment, encodings of the extended vector instructions may include an encoding specifying a scalar integer or a scalar floating point number that is an operand of the instruction. While several specific extended vector instructions and their encodings are described herein, these are merely examples of the extended vector instructions that may be implemented in embodiments of the present disclosure. In other embodiments, more, fewer, or different extended vector instructions may be implemented in the instruction set architecture and their encodings may include more, less, or different information to control their execution.[00196] Data structures that are organized in tuples of three to five elements that can be accessed individually may be used in various applications. For example, RGB (Red-Green-Blue) is a common format in many encoding schemes used in media applications.
A data structure storing this type of information may consist of three data elements (an R component, a G component, and a B component), which are stored contiguously and are the same size (for example, they may all be 32-bit integers). A format that is common for encoding data in High Performance Computing applications includes two or more coordinate values that collectively represent a position within a multidimensional space. For example, a data structure may store X and Y coordinates representing a position within a 2D space or may store X, Y, and Z coordinates representing a position within a 3D space. Other common data structures having a higher number of elements may appear in these and other types of applications.[00197] In some cases, these types of data structures may be organized as arrays. In embodiments of the present disclosure, multiple ones of these data structures may be stored in a single vector register, such as one of the XMM, YMM, or ZMM vector registers described above. In one embodiment, the individual data elements within such data structures may be re-organized into vectors of like elements that can then be used in SIMD loops, as these elements might not be stored next to each other in the data structures themselves. An application may include instructions to operate on all of the data elements of one type in the same way and instructions to operate on all of the data elements of a different type in a different way. In one example, for an array of data structures that each include an R component, a G component, and a B component in an RGB color space, a different computational operation may be applied to the R components in each of the rows of the array (each data structure) than is applied to the G components or the B components in each of the rows of the array.[00198] In yet another example, many molecular dynamics applications operate on neighbor lists consisting of an array of XYZW data structures.
In this example, each of the data structures may include an X component, a Y component, a Z component, and a W component. In embodiments of the present disclosure, in order to operate on individual ones of these types of components, one or more even or odd vector GET instructions may be used to extract the X values, Y values, Z values, and W values from the array of XYZW data structures into separate vectors that contain elements of the same type. As a result, one of the vectors may include all of the X values, one may include all of the Y values, one may include all of the Z values, and one may include all of the W values. In some cases, after operating on at least some of the data elements within these separate vectors, an application may include instructions that operate on the XYZW data structures as a whole. For example, after updating at least some of the X, Y, Z, or W values in the separate vectors, the application may include instructions that access one of the data structures to retrieve or operate on an XYZW data structure as a whole. In this case, one or more other instructions may be called in order to store the XYZW values back in their original format.[00199] In embodiments of the present disclosure, the instructions that may cause AOS to SOA conversion may be implemented by a processor core (such as core 1812 in system 1800) or by a SIMD coprocessor (such as SIMD coprocessor 1910), and may include an instruction to perform an even vector GET operation or an odd vector GET operation. The instructions may store the extracted data elements into respective vectors containing the different data elements of a data structure in memory. In one embodiment, these instructions may be used to extract data elements from data structures whose data elements are stored together in contiguous locations within one or more source vector registers.
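The extraction and repacking described above amounts to a transpose between the AOS and SOA views of the data. The following sketch uses plain Python lists as stand-ins for the registers and for the vector GET instructions; the structure contents are illustrative.

```python
# Four hypothetical XYZW structures laid out contiguously, as in a neighbor list.
structs = [(float(s), s + 0.25, s + 0.5, s + 0.75) for s in range(4)]
flat = [v for st in structs for v in st]          # AOS layout

# Separate vectors of like elements, as the even/odd vector GETs would produce.
xs, ys, zs, ws = (flat[c::4] for c in range(4))
print(xs)  # all of the X values: [0.0, 1.0, 2.0, 3.0]

# Storing the values back in their original format is the inverse interleave.
repacked = [v for quad in zip(xs, ys, zs, ws) for v in quad]
print(repacked == flat)  # True
```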
In one embodiment, each of the multiple-element data structures may represent a row of an array.[00200] In embodiments of the present disclosure, different "lanes" within a vector register may be used to hold data elements of different types. In one embodiment, each lane may hold multiple data elements of a single type. In another embodiment, the data elements held in a single lane may not be of the same type, but they may be operated on by an application in the same way. For example, one lane may hold X values, one lane may hold Y values, and so on. In this context, the term "lane" may refer to a portion of the vector register that holds multiple data elements that are to be treated in the same way, rather than to a portion of the vector register that holds a single data element. In another embodiment, different "lanes" within a vector register may be used to hold the data elements of different data structures. In this context, the term "lane" may refer to a portion of the vector register that holds multiple data elements of a single data structure. In this example, the data elements stored in each lane may be of two or more different types. In one embodiment in which the vector registers are 512 bits wide, there may be four 128-bit lanes. For example, the lowest-order 128 bits within a 512-bit vector register may be referred to as the first lane, the next 128 bits may be referred to as the second lane, and so on. In this example, each of the 128-bit lanes may store two 64-bit data elements, four 32-bit data elements, eight 16-bit data elements, or sixteen 8-bit data elements. In another embodiment in which the vector registers are 512 bits wide, there may be two 256-bit lanes, each of which stores data elements of a respective data structure.
In this example, each of the 256-bit lanes may store multiple data elements of up to 128 bits each.[00201] FIGURE 21 is an illustration of the results of AOS-SOA conversion 1830, according to embodiments of the present disclosure. As described above, given an array 2102 in memory or in cache, data for eight separate structures may be contiguously (whether physically or virtually) arranged in memory. In one embodiment, each structure (Structure1...Structure8) may have the same format as one another. The eight structures may each be, for example, a five-element structure, wherein each element is, for example, a double. In other examples, each element of the structure could be a float, single, or other data type. Each element may be of a same data type. Array 2102 may be referenced by a base location r in its memory.[00202] The process of converting AOS to SOA may be performed. System 1800 may perform such a conversion in an efficient manner.[00203] As a result, a structure of arrays 2104 may result. Each array (Array1...Array5) may be loaded into a different destination, such as a register or memory or cache location. Each array may include, for example, all the first elements from the structures, all the second elements from the structures, all the third elements from the structures, all the fourth elements from the structures, or all the fifth elements from the structures.[00204] By arranging the structure of arrays 2104 into different registers, each with all of the particularly indexed elements from all of the structures of the array of structures 2102, additional operations may be performed on each register with increased efficiency. For example, in a loop of executing code, the first element of each structure might be added to a second element of each structure, or the third element of each structure might be analyzed. By isolating all such elements into a single register or other location, vector operations can be performed.
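The rearrangement of FIGURE 21 can be expressed compactly as a strided rearrangement of the flattened array. The sketch below uses plain Python lists as stand-ins for the registers; the element values are illustrative.

```python
# AOS: 8 structures of 5 elements each; element e of structure s is s*10 + e,
# flattened exactly as the structures would sit contiguously in memory.
aos = [s * 10 + e for s in range(8) for e in range(5)]

# SOA: one array per element position (Array1...Array5 of FIGURE 21).
soa = [aos[e::5] for e in range(5)]
print(soa[0])  # all the first elements: [0, 10, 20, 30, 40, 50, 60, 70]
print(soa[4])  # all the fifth elements: [4, 14, 24, 34, 44, 54, 64, 74]
```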
Such vector operations, using SIMD techniques, could perform the addition, analysis, or other execution upon all elements of the array at a single time, in a single clock cycle. Transformation of AOS to SOA format may allow vectorized operations such as these.[00205] FIGURE 22 is an illustration of operation of blend and permute instructions, according to embodiments of the present disclosure. The blend and permute instructions may be used to perform various aspects of AOS to SOA conversion.[00206] For example, given sources zmm1 and zmm0, each with register elements identified as x-, y-, z-, and w-coordinate elements, a permute instruction may be used to permute the x-coordinate and y-coordinate elements into a destination register. The destination register may include the source zmm0. As only seven x-coordinate and y-coordinate elements exist in the sources, a write to the last element of the destination may be masked off (mask = 0x7F). An index (stored in zmm31) may define which of the elements from the combination of zmm1 and zmm0 are to be stored in zmm0, and in what order. For example, the index vector may include corresponding positions for the x-coordinate elements, to be stored in the least significant positions of the destination register, and the y-coordinate elements, to be stored in the next significant positions of the destination register. As a result, VPERMT2D {0x7F} zmm0, zmm31, zmm1 may be called, resulting in zmm0 storing the results as shown in FIGURE 22.[00207] In another example, given sources zmm1 and zmm0, each with register elements identified as x-, y-, z-, and w-coordinate elements, a blend instruction may be used to select elements into a destination register. However, the order of the elements might not be arbitrarily selectable. For each relative position in the sources, an element from the source must be chosen to be written to the destination.
The mask may define, for a given relative position in the sources, which source will be written to the destination. As a result, VBLENDMPD {0x9c} zmm2, zmm0, zmm1 may be called, resulting in zmm2 storing the results as shown in FIGURE 22.[00208] Permute operations may be used to perform portions or all of the AOS-SOA conversion. These are described in more detail in subsequent figures. FIGURE 22 illustrates such operation on a smaller scale.[00209] Suppose it is a goal to obtain the x-coordinates stored in the registers zmm0, zmm1, zmm2, and zmm3. Each register might include contents loaded from memory and may contain more than one x-coordinate, as each register includes contents from more than one structure. The contents of each register may include an x-coordinate (albeit an x-coordinate from various structures) in the same relative position in each register. These positions may be, for example, the zeroth and fifth locations in a given index. Accordingly, given the flexibility of different permute functions, a single index vector (stored in zmm4) may be used to perform various permute operations. The index vector may define that x values are located, for a combination of any two of the sources, in the same locations (indices 0, 5, 8, 13). The index vector may repeat these values and rely upon selective usage of the permute operation (through masking) to arrive at the correct composition of the destination vector.[00210] For example, VPERMT2D may be called to permute zmm2 and zmm3 into zmm2 using the index zmm4. Furthermore, as these two source registers are the left-half of the source, their results may be stored in the left-half of the eventual destination. Accordingly, the permute operation may be masked with {0xF0} so that the left-half of zmm2 is filled with the x-coordinates from zmm2 and zmm3. VPERMI2D may be called to permute zmm0 and zmm1 into zmm4 using the index zmm4.
As these two source registers are the right-half of the source, their results may be stored in the right-half of the eventual destination. Accordingly, the permute operation may be masked with {0x0F} so that the right-half of zmm4 is filled with the x-coordinates from zmm0 and zmm1. Notably, each of the results in zmm2 and zmm4 includes x-coordinates from their respective sources in-order. The two results in zmm2 and zmm4 may then be blended. A blend operation such as VBLENDMPD may be called to blend zmm4 and zmm2 into zmm5. The blend may use a mask of {0xF0} to indicate that, for the right-half, zmm4 values should be used, and for the left-half, zmm2 values should be used. The result may be a collection of the x-coordinates from the sources ordered in zmm5.[00211] FIGURE 23 is an illustration of operation of permute instructions, according to embodiments of the present disclosure. The permute instructions may be used to perform various aspects of AOS to SOA conversion. The operation of permute instructions may improve upon the operation of blend and permute instructions shown in FIGURE 22 such that the same task may be accomplished using two permute instructions, instead of two permute instructions and a blend instruction.[00212] In one embodiment, operation of permute instructions to perform aspects of AOS to SOA conversion may rely upon a feature of permute instructions to reuse the index vector to store results. By selectively storing results in only part of the index vector and preserving the remainder of the index vector, an operation may be saved. As discussed above, as the same relative position of a given coordinate (such as the x-coordinate) may exist across multiple sources, reflecting portions of an AOS to be converted, an index vector might repeat part of itself (such as {13 8 5 0 13 8 5 0}) and the permute operation may be masked (such as with 0x0F or 0xF0) to arrive at a destination vector with all x-coordinates.
In such cases, the part of the index vector that repeats may be eliminated, and a permute operation masked for the remaining portion may be used. Conversely, data elements that are not needed may be overwritten with index values using a mask. The same write mask may be used with the permute instruction, which overwrites the index register as a destination, preserving some data values and overwriting unneeded index values with data combined from the other source registers. Consequently, the particular variant of permute instructions denoted by the "i" in VPERMI instructions may allow merging of writes, depositing data values mixed with index control values, converting the two-source instruction effectively into a three-source permute instruction.[00213] For example, given the same source vectors zmm0-zmm3 of FIGURE 22, and a similar index vector {13 8 5 0 13 8 5 0}, a call may be made to VPERMI2D with zmm0 and zmm1 as the sources, and zmm4 as the index. This permute instruction may write the results of the permute to the index vector as the destination. The permute operation may be masked (with 0x0F) to write only to the four least significant elements of the index vector zmm4, preserving the existing values. As zmm4 includes a repeat of its indices, indicating that the zeroth, fifth, eighth, and thirteenth locations of any combination of the sources will include x-coordinates, half of the index vector zmm4 will be sufficient for subsequent permute operations. Thus, zmm4 could be used again with the knowledge that half of it will be usable. The permute operation may thus copy the zeroth, fifth, eighth, and thirteenth elements of the combination of zmm0 and zmm1, specifically the x-coordinates from these source registers, into the least significant four locations of zmm4, the index vector.
The four most significant locations of zmm4 will be preserved, as they have been masked off in the permute operation. [00214] The resulting zmm4 register will serve as the index vector source for another call to VPERMI2PD. The zmm4 register will also be the destination of the permute operation. The other sources, zmm2 and zmm3, may be permuted according to the values of the left-half of zmm4, as the permute operation is masked with 0xF0. Thus, the least significant four locations in zmm4, which store the x-coordinates from zmm0 and zmm1, will be preserved. The additional elements (the x-coordinates) from zmm2 and zmm3 will be stored as the index values in the most significant four locations in zmm4 are overwritten. As a result, zmm4 will include the x-coordinates from all four sources, in-order. This result may be the same as that in FIGURE 22, but conducted with two permute operations rather than two permutes and a blend operation. [00215] The principles of this operation may be applied in the operations discussed further below. [00216] As shown in FIGURE 23, tuples of different elements in the array of structures may be converted so that resulting registers include elements of all the same type. These are referenced in FIGURE 23 as x-, y-, z-, w-, and v-elements or coordinates. These may be referenced by letter to avoid confusion with the offset numbers specified in the index vector. [00217] FIGURE 24 is an illustration of operation of AOS to SOA conversion using multiple gathers for an array of eight structures, wherein each structure includes five elements such as doubles. [00218] The conversion shown in FIGURE 24 may show a traditional sequence to perform the conversion with gather instructions. As with FIGURE 21, the top row may show the layout of the structure in memory, where the enumeration of 0...4 may identify equivalent elements of each vector.
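The index-register-reuse technique of paragraphs [00213]-[00214] can likewise be sketched in Python. This is an illustrative model under stated assumptions: register contents are invented, and masked VPERMI2PD is reduced to a loop in which masked-off lanes keep whatever the index register already holds.

```python
def vpermi2(mask, idxreg, a, b):
    # masked two-source permute that writes back into its index register:
    # lanes with a set mask bit are replaced by permuted data (indices 0-7
    # select from a, 8-15 from b); masked-off lanes keep their current
    # contents, which may be data deposited by an earlier permute
    out = list(idxreg)
    for i in range(8):
        if (mask >> i) & 1:
            j = idxreg[i]
            out[i] = a[j] if j < 8 else b[j - 8]
    return out

zmm0 = ["x0", "y0", "z0", "w0", "v0", "x1", "y1", "z1"]
zmm1 = ["x2", "y2", "z2", "w2", "v2", "x3", "y3", "z3"]
zmm2 = ["x4", "y4", "z4", "w4", "v4", "x5", "y5", "z5"]
zmm3 = ["x6", "y6", "z6", "w6", "v6", "x7", "y7", "z7"]

zmm4 = [0, 5, 8, 13, 0, 5, 8, 13]        # repeated index vector
zmm4 = vpermi2(0x0F, zmm4, zmm0, zmm1)   # low half <- x's; high half still indices
zmm4 = vpermi2(0xF0, zmm4, zmm2, zmm3)   # high half <- x's; low half preserved
```

The second call consumes the indices still sitting in the high half of zmm4, so the same register acts as index, destination, and de-facto third source: two permutes, no blend.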
Different colors or shading may indicate different structures laid out consecutively in memory. Each structure may include five doubles, yielding forty bytes. Eight such structures may be considered, for a total of 320 bytes of data. The final result will have all 0th elements in a first register, all 1st elements in a second register, and so on. [00219] The AOS may be loaded into the registers through the use of five gather instructions. Five KXNORW operations may be used to set masks. [00220] First, gather indices may be created. They may be created with the pseudocode:
__declspec(align(32)) const __int32 gather0_index[8] = {0, 5, 10, 15, 20, 25, 30, 35};
__declspec(align(32)) const __int32 gather1_index[8] = {1, 6, 11, 16, 21, 26, 31, 36};
__declspec(align(32)) const __int32 gather2_index[8] = {2, 7, 12, 17, 22, 27, 32, 37};
__declspec(align(32)) const __int32 gather3_index[8] = {3, 8, 13, 18, 23, 28, 33, 38};
__declspec(align(32)) const __int32 gather4_index[8] = {4, 9, 14, 19, 24, 29, 34, 39};
[00221] The index for gather0 may identify, in the AOS, the relative location of each "0" element. The index for gather1 may identify, in the AOS, the relative location of each "1" element. The index for gather2 may identify, in the AOS, the relative location of each "2" element. The index for gather3 may identify, in the AOS, the relative location of each "3" element. The index for gather4 may identify, in the AOS, the relative location of each "4" element. [00222] Given these, KXNORW may be called to generate masks, followed by five calls to VGATHERDPD. Each call to VGATHERDPD may gather packed values (in this case, doubles) based upon the indices supplied to each call. The indices provided (r8 + [ymm5...ymm9]*8) may be used to identify particular locations in memory (from a base address r8, scaled by the size of the doubles) from which the values will be gathered and loaded into respective registers.
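The effect of the five gathers can be modeled with plain indexing in Python. This is a sketch only: the element names are invented, and VGATHERDPD's scaled addressing is reduced to list lookups into a flat buffer.

```python
# flat AOS buffer: 8 structures of 5 "doubles" each (names invented)
aos = [f"s{s}e{e}" for s in range(8) for e in range(5)]

# one index vector per element slot, matching gather0_index..gather4_index
indices = [[e + 5 * s for s in range(8)] for e in range(5)]

# each simulated gather pulls the e-th element of every structure
soa = [[aos[i] for i in idx] for idx in indices]
```

The generated index vectors reproduce the tables above ({0, 5, 10, ...} for the "0" elements, {1, 6, 11, ...} for the "1" elements, and so on), and each row of soa collects one element slot across all eight structures.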
The calls may be expressed in the following pseudocode:
kxnorw k1, k0, k0
kxnorw k2, k0, k0
kxnorw k3, k0, k0
kxnorw k4, k0, k0
kxnorw k5, k0, k0
vgatherdpd zmm4{k1}, zmmword ptr [r8+ymm9*8]
vgatherdpd zmm3{k2}, zmmword ptr [r8+ymm8*8]
vgatherdpd zmm2{k3}, zmmword ptr [r8+ymm7*8]
vgatherdpd zmm1{k4}, zmmword ptr [r8+ymm6*8]
vgatherdpd zmm0{k5}, zmmword ptr [r8+ymm5*8]
[00223] FIGURE 25 is an illustration of operation of AOS to SOA conversion for an array of eight structures, wherein each structure includes five elements such as doubles. The conversion shown in FIGURE 25 may be referred to as a naive implementation, as such a conversion might not be as efficient as other conversions shown in later figures. The operation in FIGURE 25 may implement the conversion shown in FIGURE 24. [00224] Given the AOS of eight five-double structures in memory, five load operations may be made to load data into registers. While each structure might include five elements, a load operation may be made in multiples of eight. Consequently, rather than load each of the eight structures into its own register, wherein each register would include unused space, the eight structures may be loaded contiguously into five registers. Some structures may be broken up across multiple registers. The AOS to SOA conversion may then attempt to sort the contents of these five registers so that all (eight) of the first elements of the structures are in a common register, all (eight) of the second elements of the structures are in a common register, and so on. In other examples, where structures with another number of elements (such as four) will be processed, four registers might be needed to store the results. [00225] Five additional loads may be performed to load data from the memory into the registers. However, these loads may be performed with masks so that only some of the contents of a given memory section are loaded into the respective registers.
The specific masks may be selected according to those that are needed to filter the correct element (such as the first, second, third, fourth, or fifth) from a given segment into the register. As a given register will only contain the same indexed element (that is, all first elements, all second elements, etc.), the mask is selected to filter only that element into a corresponding register. In some cases, such as in the present figure, the same mask might be used in all of these load operations. For example, it may be observed that, for these particular structures, a mask of {01000010} may uniquely identify a different indexed element (first elements, second elements, etc.) for different memory segments. Thus, applying this same mask to the original memory segments that were loaded from memory will yield the appropriate indexed elements. Applying the mask to the appropriate register may then copy the required elements (that is, the first, second, or other elements). [00226] The same process may be repeated for different masks and combinations of sources, until the registers are each filled with respective elements (first elements, or second elements, and so on). The process may be repeated with five loads with a second mask, five loads with a third mask, and five loads with a fourth mask to accomplish the correct loading combinations. The result may be that each register is filled only with respective ones of first elements, second elements, third elements, fourth elements, or fifth elements of the original array of structures. However, the elements within a given register might not be ordered in the same way that they were ordered in the original array. [00227] Accordingly, a number of permute operations may be performed to reorder the contents of the registers to match the original order of the array of structures. For example, five permute operations may be performed. Interim registers may be used as needed.
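The masked loads those steps depend on behave like a per-lane select. A minimal Python model of that semantics follows; the mask value {01000010} is taken from the discussion above, while the register and memory contents are invented for illustration.

```python
def masked_move(dest, src, mask):
    # masked load/move: lanes whose mask bit is set are loaded from src;
    # all other lanes keep dest's previous contents
    return [src[i] if (mask >> i) & 1 else dest[i] for i in range(8)]

dest = ["d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"]   # prior register contents
src = ["m0", "m1", "m2", "m3", "m4", "m5", "m6", "m7"]    # a memory segment
out = masked_move(dest, src, 0b01000010)                  # lanes 1 and 6 updated
```

Only the two lanes selected by the mask are replaced, which is exactly how repeated masked loads can accumulate one element slot per register without disturbing lanes filled by earlier loads.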
A separate index vector may be needed for each permute to provide the order of the original array. As a result, the contents of each register may be reordered according to the order of the original array. The result may be the converted AOS, resulting in an SOA. The arrays may be represented in each respective register. The structure may be the combination of the arrays. [00228] In total, the operations of FIGURE 25 may include twenty-five move/load operations, along with five permutes. Example pseudocode for FIGURE 25 is shown below.
vmovups zmm5, zmmword ptr [r8]
vmovups zmm11, zmmword ptr [r8+0x40]
vmovups zmm7, zmmword ptr [r8+0x80]
vmovups zmm13, zmmword ptr [r8+0xc0]
vmovups zmm9, zmmword ptr [r8+0x100]
vmovapd zmm5{k4}, zmmword ptr [r8+0xc0]
vmovapd zmm11{k4}, zmmword ptr [r8+0x100]
vmovapd zmm7{k4}, zmmword ptr [r8]
vmovapd zmm13{k4}, zmmword ptr [r8+0x40]
vmovapd zmm9{k4}, zmmword ptr [r8+0x80]
vmovapd zmm5{k3}, zmmword ptr [r8+0x40]
vmovapd zmm11{k3}, zmmword ptr [r8+0x80]
vmovapd zmm7{k3}, zmmword ptr [r8+0xc0]
vmovapd zmm13{k3}, zmmword ptr [r8+0x100]
vmovapd zmm9{k3}, zmmword ptr [r8]
vmovapd zmm5{k2}, zmmword ptr [r8+0x100]
vmovapd zmm11{k2}, zmmword ptr [r8]
vmovapd zmm7{k2}, zmmword ptr [r8+0x40]
vmovapd zmm13{k2}, zmmword ptr [r8+0x80]
vmovapd zmm9{k2}, zmmword ptr [r8+0xc0]
vmovapd zmm5{k1}, zmmword ptr [r8+0x80]
vmovapd zmm11{k1}, zmmword ptr [r8+0xc0]
vmovapd zmm7{k1}, zmmword ptr [r8+0x100]
vmovapd zmm13{k1}, zmmword ptr [r8]
vmovapd zmm9{k1}, zmmword ptr [r8+0x40]
vpermpd zmm6, zmm4, zmm5
vpermpd zmm8, zmm3, zmm7
vpermpd zmm10, zmm2, zmm9
vpermpd zmm12, zmm1, zmm11
vpermpd zmm14, zmm0, zmm13
[00229] FIGURE 26 is an illustration of operation of system 1800 to perform the conversion using permute operations, in accordance with embodiments of the present disclosure. The same AOS source may be used.
The operation with permute instructions in FIGURE 26 may be more efficient than with the many move operations shown in FIGURE 25. [00230] First, the eight structures of the array may be loaded, unaligned, into five registers as previously shown. The registers may include mm0...mm4. This process may take five load operations. Some of the data to be permuted may be loaded into another register. That register is then partially overwritten with an index vector. The index vector may use half of the available space. The permute operation that results will be performed with a mask, so that the half with the original data elements is not overwritten but is instead preserved. This may be performed with a VPERMI instruction, which may use its index vector parameter as a destination vector. The same mask used to load the indices into the index vector register may then be used as the write mask, so that only index values in the index vector register are overwritten. [00231] Using this technique on data that is loaded from memory with five loads into each register, with the original order preserved across the registers, a total of fourteen permute operations may be needed to perform the AOS-SOA conversion. To perform these fourteen permute operations, a total of thirteen different index vectors and three different masks may be needed. [00232] FIGURE 27 is a more detailed view of the operation of system 1800 as pictured in FIGURE 26 to perform the conversion using permute operations, according to embodiments of the present disclosure. FIGURE 27 also illustrates creation of some index vectors, wherein the index vectors contain some offsets to be used as parameters for permute as well as some data to be preserved. As shown in FIGURE 27, tuples of different elements in the array of structures may be converted so that resulting registers include elements of all the same type. These are referenced in FIGURE 27 as x-, y-, z-, w-, and v-elements or coordinates.
These may be referenced by letter to avoid confusion with the offset numbers specified in the index vector. The conversion in the previous FIGURE 26 is equivalent to these, but the "0" elements in FIGURE 26 have been designated as "x" elements, "1" elements as "y" elements, and so forth. [00233] The operation of system 1800 in FIGURE 27 may be based upon the ability of some permute instructions to selectively overwrite components of the index vector parameter. By selectively overwriting part of the index vector, the index vector may continue to serve as an index vector while also carrying baseline source data to be preserved. The same mask that is used to mask the writing of the index vector may be used in a next permute to mask the operation of the permute. The index may be used again. The operation of such a permute instruction is shown in FIGURE 23. The operation of system 1800 in FIGURE 27 may be more efficient than the operation shown in FIGURE 26. [00234] Index vectors may be initialized as:
mm0 {0,2,4,6,8,9,14,12}
mm1 {9,11,13,15,3,2,7,5}
mm3 {0,2,4,5,8,10,12,14}
mm4 {9,11,13,15,1,3,5,7}
mm5 {3,4,8,9,13,14,-,-}
mm6 {2,3,7,8,12,13,-,-}
mm7 {2,3,7,8,12,13,-,-}
mm8 {0,1,5,6,8,9,10,11}
mm9 {2,3,4,5,9,10,14,15}
mm10 {0,2,4,6,8,10,12,14}
mm11 {1,3,5,7,9,11,13,15}
mm12 {0,2,4,6,8,9,12,14}
mm13 {1,3,5,7,10,11,13,15}
mm14 {2,-,12,7,8,3,-,13}
mm15 {4,-,-,5,11,12,-,-}
mm16 {0,3,2,1,6,5,4,7}
[00235] For example, mm7 may be created as a permute of mm3 into mm2 using the mm7 index vector. As a result, mm7 may consolidate the "w" and "v" elements from these registers. [00236] The register mm2 may be permuted with mm1 using the vector index mm6, storing the results into mm6. As a result, mm6 may consolidate the "x" and "y" elements from these registers. [00237] As the register mm2 has had its "x", "y", "w", and "v" elements permuted into other locations, it only needs to retain its "z" elements.
Accordingly, register mm2 may serve both as a source of "z" elements and be loaded with other index values to serve as an index vector for a subsequent permute. In particular, it may serve as an index vector for a permute operation wherein the "z" elements will be consolidated. Efficiency may be gained wherein register mm2 does not need to serve as a typical source in a permute, but may be added on as a de-facto third source for another permute operation to consolidate "z" elements from another two vectors. For example, mm2 may be loaded with offset values that identify the "z" element locations in mm3 and mm4. The register mm2 may be loaded with index elements in its locations that are not otherwise holding "z" elements. Subsequently, mm2 may be used as an index vector to permute the "z" elements from mm3 and mm4. The permute may have a write mask that matches the index vector elements stored in mm2, such as {0xB0}. Then, "z" elements from mm4 and mm3 may be stored into mm2, overwriting index elements but preserving the "z" elements already within mm2. [00238] The registers mm0 and mm1 may be permuted with an index vector in mm5 to consolidate the "v" and "w" elements therein into mm5. The resulting register mm5 may itself be permuted with mm7, which contained the consolidation of "v" and "w" from mm2 and mm3. This permutation may be performed with a new index vector, mm13. However, mm13 might not be big enough to hold all the "v" and "w" elements from all four original source registers. Accordingly, the "v" and "w" set that bridged the original mm2-mm3 may be dropped here, but may be consolidated in other permute operations. The result may be stored by a permute instruction that writes back into mm5. [00239] The registers mm7 and mm4 may be permuted with a new index vector in mm9 to consolidate the "v" and "w" elements therein into mm9.
This register mm9 with "v" and "w" elements may include the "v" and "w" element combination that bridged the original mm2-mm3 and that is missing from mm5. Furthermore, mm9 and mm5 may each include the "v" and "w" elements that are missing from the other register. Accordingly, these registers may be permuted twice according to different index vectors to return registers with all "v" elements or all "w" elements. For example, mm9 and mm5 may be permuted by index vector mm11, storing all "v" elements in mm11. In another example, mm9 and mm5 may be permuted by index vector mm10, storing all "w" elements into mm10. These may be copied back to original ones of mm0...mm4 as needed upon completion of the conversion. [00240] The registers mm3 and mm4 may be permuted to obtain the "z" elements. These may be permuted according to the contents of mm2, which, as shown above, may itself have been permuted to preserve "z" elements. Furthermore, mm2 may have been populated, in indices not containing "z" elements, with index values to reference "z" elements from mm3 and mm4. Accordingly, mm3 and mm4 may be permuted with mm2 as the index, storing the results back into mm2. Moreover, the permute may be performed with a mask, wherein the mask (0xB0) protects the already-existing "z" elements in mm2. Furthermore, the mask may also protect index elements not used in mm2 to obtain "z" elements from mm3 or mm4. Thus, at the end of the permute, mm2 may include the "z" elements consolidated from the original mm2, mm3, and mm4. Furthermore, mm2 may still retain two index elements to indicate the positions in subsequent permutes with mm1 and mm0 to obtain their "z" elements. [00241] The resulting mm2 may include the "z" elements consolidated from permute operations upon the original mm2, mm3, and mm4. Furthermore, mm2 may include indices for identifying the position of "z" elements in mm1 and mm0.
Thus, mm2 may be used as a vector index for a permute of mm1 and mm0 to consolidate the "z" elements from these additional registers. The permute may apply a mask (0xBD) based upon the location of "z" elements and indices within mm2. The result of the mask may be that the existing "z" elements are preserved while the indices indicating "z" element locations in mm1 and mm0 are overwritten with such "z" elements. The result may be mm2, filled with "z" elements from the original array. However, the order of the "z" elements might not match the order as presented in the original array. A permute operation may be called on mm2 with a vector index to reorder the "z" elements therein. The resulting mm2 may be the "z" array. These may be copied back to original ones of mm0...mm4 as needed upon completion of the conversion. [00242] As discussed above, mm6 may include "x" and "y" elements permuted from mm1 and the original mm2. Furthermore, "x" and "y" elements may be permuted from mm0 and mm6 using a new vector index in mm8. The result may be stored in mm8. The results may omit the "x" and "y" elements from the second half of the original mm2, as mm8 does not have room to store all "x" and "y" elements from the original mm1, mm2, and mm0. However, these may be recovered from mm6 in a separate permute operation as described below. [00243] The register mm3 may be converted to an index vector for use in an "x" and "y" element permute operation with mm4 and mm6. However, mm3 may still retain its own "x" and "y" elements, using the other positions for the index vector values. A load or move operation may be masked (0x39) to edit only the non-"x" and non-"y" elements in mm3. The index vector values may otherwise be loaded from a new index vector, mm15. The result may still be referenced as mm3. [00244] The resulting mm3 may be used as an index vector and source for a permute of mm4 and mm6 with respect to "x" and "y" elements.
The same mask (0x39) may be used to write the results of the permute back into mm3, such that the "x" and "y" elements from mm4 and mm6 may be consolidated into mm3 at the locations that previously served as index values. This version of mm3 may include "x" and "y" elements from the original mm4, the original mm3, and the original second half of mm2. [00245] Meanwhile, mm8 may include "x" and "y" elements from the other original register contents. Accordingly, mm3 and mm8 may be permuted with two different permute operations, each with its own index, to yield an array of "x" elements and an array of "y" elements. Register contents may be copied back to original ones of mm0...mm4 as needed. [00246] Accordingly, the AOS-SOA conversion may be complete. [00247] Pseudocode to perform this conversion may be specified as:
vmovups zmm10, zmmword ptr [r8+0x40]
vmovups zmm13, zmmword ptr [r8+0x80]
vmovups zmm17, zmmword ptr [r8]
vmovups zmm16, zmmword ptr [r8+0xc0]
vmovups zmm20, zmmword ptr [r8+0x100]
vmovaps zmm11, zmm10
vpermt2pd zmm11, zmm8, zmm13
vmovaps zmm19, zmm13
vmovapd zmm13{k3}, zmmword ptr [rip+0x76f2]
vpermt2pd zmm19, zmm8, zmm16
vpermi2pd zmm13{k3}, zmm17, zmm10
vmovapd zmm13{k2}, zmmword ptr [rip+0x775c]
vpermi2pd zmm13{k2}, zmm16, zmm20
vmovapd zmm16{k1}, zmmword ptr [rip+0x77cc]
vpermpd zmm14, zmm4, zmm13
vpermi2pd zmm16{k1}, zmm11, zmm20
vpermt2pd zmm20, zmm6, zmm19
vmovaps zmm12, zmm17
vpermt2pd zmm12, zmm9, zmm10
vpermt2pd zmm17, zmm7, zmm11
vpermt2pd zmm19, zmm5, zmm12
vmovaps zmm15, zmm16
vmovaps zmm18, zmm19
vpermt2pd zmm15, zmm3, zmm17
vpermt2pd zmm17, zmm2, zmm16
vpermt2pd zmm18, zmm1, zmm20
vpermt2pd zmm20, zmm0, zmm19
[00248] FIGURE 28 is an illustration of further operation of system 1800 to perform the conversion using out-of-order loads and fewer permute operations, in accordance with embodiments of the present disclosure.
The operation of system 1800 in FIGURE 28 may augment the operation shown in FIGURE 27. [00249] The operation of system 1800 in FIGURE 28 may be based upon loading data from the array into the registers in an out-of-order manner. This loading may differ from the loading shown in FIGURE 27 and in other conversion examples and embodiments. The loading may be out-of-order in that, once a first register is loaded with content from the array, the next register might be loaded with content that is not contiguous with the previously loaded content. In one embodiment, content may be loaded for registers wherein the content begins at the first respective element of the structures. [00250] For example, the array of structures may include eight structures, each with five elements denoted in FIGURE 28 as "4 3 2 1 0". A load operation may load eight elements. Thus, a given load operation can load an entire structure and part of another. In previous examples of conversion, subsequent load operations loaded content from the point at which the previous load operation stopped. However, in one embodiment, content may be loaded from the same relative element in each structure for the first four loads. As a result, gaps may exist in the loaded content. Specifically, elements "3" and "4" are left off from every other structure. These elements that were left off may be loaded instead, collectively, into a single register. [00251] As a result, mm0 through mm3 may have identical relative indices. Other loading schemes may be used depending upon the particular size of the structures and arrays. However, each may be performed according to the teachings of FIGURE 28 if they are designed so that multiple registers, after loading, include identical relative indices. Because multiple registers include identical relative indices, the number of permute operations may be reduced.
Whereas FIGURE 27 performed the conversion using fourteen permute operations, FIGURE 28 may accomplish the same conversion using ten permute operations. However, the number of load operations may need to be increased to accomplish the loading shown in FIGURE 28. The skipped "3" and "4" elements of each alternate structure may require such additional load operations. For example, eight total loads might be needed. [00252] FIGURE 29 is a more detailed view of the operation of system 1800 as pictured in FIGURE 28 to perform the conversion using permute operations, according to embodiments of the present disclosure. Elements may be referenced in FIGURE 29 as x-, y-, z-, w-, and v-elements or coordinates. These may be referenced by letter to avoid confusion with the offset numbers specified in the index vector. The conversion in the previous FIGURE 28 is equivalent to these, but the "0" elements in FIGURE 28 have been designated as "x" elements, "1" elements as "y" elements, and so forth. [00253] To perform the loading, four loads without masking may be executed. The first eight elements of the array may be loaded to mm0 using a load operation. Thus, mm0 may include elements of different structures including "z y x v w z y x". An unaligned load may be called to load the first five elements of the third structure of the array and the first three elements of the fourth structure. Another load may be called to load the first five elements of the fifth structure of the array and the first three elements of the sixth structure. Yet another load may be called to load the first five elements of the seventh structure and the first three elements of the eighth structure. Each of these, mm0...mm3, may include elements of different structures including "z y x v w z y x". [00254] The loading may also include loading the elements that were skipped in the out-of-order loading described above. These include elements "w" and "v" of every other structure in the array.
These may be loaded with four load operations, wherein each load operation uses a mask to identify the portion of the array segment that includes the missing "w" and "v" elements. The load operations may be made to mm4. [00255] The number of permutes may be reduced because mm0, mm1, mm2, and mm3 each have the same elements arranged at the same relative locations therein. Accordingly, an index vector such as mm9, defined as {13 8 5 0 13 8 5 0}, may define the respective locations of "x" elements within any pair of mm0, mm1, mm2, and mm3. Moreover, this index vector may be selectively overwritten during a permute to allow it to be a source for a subsequent permute. [00256] For example, mm0 and mm1 may be permuted so as to consolidate the "x" elements therein into the right-half of mm9. The selective write may be made through use of a mask such as (0x0F). The left-half of mm9 may maintain vector index values for "x" elements, which might be used with any combination of mm0, mm1, mm2, and mm3. Thus, the resulting mm9 may be used again as a vector index and a de-facto source for a permute to consolidate "x" elements from mm2 and mm3 back into mm9. The permute may selectively write to the left-half of mm9 using a mask (0xF0), thus preserving the previously written "x" elements from the previous permute operation. The result may be that mm9 includes an array entirely of "x" elements. This was accomplished with two permute operations, a vector index, and two masks. [00257] The process performed on mm0, mm1, mm2, and mm3 for the "x" elements may be repeated on mm0, mm1, mm2, and mm3 for the "y" elements and the "z" elements, yielding arrays entirely of "y" elements and "z" elements. Each such process may require two permute operations and a vector index. The vector index for each process may be unique, wherein each vector index identifies the respective locations of "y" and "z" elements within the registers.
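The out-of-order loads described above give mm0...mm3 identical layouts, which is what allows one index vector to serve every pair of registers. A short Python model can verify this; the element naming is illustrative, and plain list slicing stands in for the unaligned vector loads.

```python
# 8 structures of 5 elements (x y z w v), flattened in memory order
aos = [f"{n}{s}" for s in range(8) for n in "xyzwv"]

# four 8-element loads, each starting at the first element of an
# alternate structure, so every register shares the same layout
regs = [aos[10 * k : 10 * k + 8] for k in range(4)]

# the w/v pair of every other structure is skipped by those loads
skipped = [aos[10 * k + 8 : 10 * k + 10] for k in range(4)]
```

Every register ends up with x at lanes 0 and 5, y at lanes 1 and 6, and z at lanes 2 and 7, so a single repeated index vector locates a given coordinate in any pair of these registers; the skipped w/v pairs are exactly the elements that must be collected into a fifth register.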
While each such process may also require two masks, the same masks that were used for "x" permute operations may be reused for "y" and "z" permute operations. [00258] The process performed on mm0, mm1, mm2, and mm3 for the "x", "y", and "z" elements may be repeated, but to consolidate "v" and "w" values into a register. The vector index for the permute operations may identify the locations of "v" and "w" (4 and 5, respectively). As a result, mm4 may include "v" and "w" components from four structures, while the result of the permute operations performed on mm0...mm3 (mm5, for example) may include the "v" and "w" components from the structures within these registers. Accordingly, mm4 and mm5 may be permuted with two separate VPERM instructions and two indices, each identifying the location of "v" and "w" within the combination of the registers. One such permute may yield an array of "v" elements, and the other permute may yield an array of "w" elements. [00259] The data conversion may thus be complete. [00260] Pseudocode to perform this conversion may be specified as:
vmovups zmm10, zmmword ptr [r8+0x40]
vmovups zmm6, zmmword ptr [r8+0x50]
vmovups zmm7, zmmword ptr [r8+0xa0]
vmovups zmm8, zmmword ptr [r8]
vmovups zmm9, zmmword ptr [r8+0xf0]
vmovapd zmm10{k7}, zmmword ptr [r8+0x80]
vmovapd zmm10{k6}, zmmword ptr [r8+0xc0]
vmovaps zmm15, zmm2
vpermi2pd zmm15{k3}, zmm6, zmm7
vmovapd zmm10{k5}, zmmword ptr [r8+0x100]
vpermi2pd zmm15{k1}, zmm8, zmm9
vmovaps zmm11, zmm5
vmovaps zmm12, zmm4
vmovaps zmm13, zmm3
vpermi2pd zmm11{k4}, zmm8, zmm6
vpermi2pd zmm12{k4}, zmm8, zmm6
vpermi2pd zmm13{k4}, zmm8, zmm6
vpermi2pd zmm11{k2}, zmm7, zmm9
vpermi2pd zmm12{k2}, zmm7, zmm9
vpermi2pd zmm13{k2}, zmm7, zmm9
vmovaps zmm14, zmm15
vpermt2pd zmm14, zmm1, zmm10
vpermt2pd zmm15, zmm0, zmm10
[00261] FIGURE 30 is an illustration of example operation of system 1800 to perform data conversion using even fewer permute operations, according to embodiments of the present disclosure.
The operation shown in FIGURES 28-29 was made more efficient by reducing the required number of permute operations through arranging data in a particular manner before permuting; similarly, the operation shown in FIGURE 30 may be made more efficient by reducing the required number of load and permute operations through arranging data in yet another manner before permuting. In one embodiment, data may be loaded so as to reduce overall load and permute operations by loading the data with gaps in the vector registers. While a particular example number and kind of gaps are shown in FIGURE 30, others may be used. [00262] In one embodiment, data may be initially loaded into registers for data conversion with gaps that align with the vector position of certain elements in their final place. This may be performed using six move or load operations (VMOVUPS, from memory or cache, not counting moves between registers, as these have significantly less latency). These may use masks to accomplish the gaps and offsets. This may be fewer than the load operations needed in FIGURES 28-29. [00263] As shown in FIGURE 30, data may be loaded from the array into six registers. A gap at the end of mm0 and mm1 may be left. Accordingly, an extra register, mm5, may be needed to handle the overflow of the last two elements. Moreover, the gaps may cause an alignment of the "2" element in mm2 after loading that corresponds to its final position after data conversion. As this element is already loaded in its final place, no permute is necessary to extract this element for the array that will hold the "2" elements after data conversion.
Permute operations may still be applied to consolidate "2" elements from mm3 and mm4, as well as those from mm1 and mm0. [00264] After mm2 is permuted with other registers to consolidate the "0", "1", "3", and "4" elements therein into the other registers, mm2 may be available to serve as both a vector index and a de-facto source for permute operations to consolidate "2" elements from mm0, mm1, mm3, and mm4. The register mm2 may be loaded with vector index values identifying the location of "2" elements in these other registers. The already-set "2" element in mm2 may be preserved through masking, while, during consolidation, vector index elements may be reclaimed by writing "2" elements from the other registers over them. [00265] As shown in FIGURE 30, mm5 includes a single instance of the "4" and "3" elements after initial loading. The remaining space in mm5 may be used to hold indices of the relative locations of "4" and "3" elements in combinations of mm0...mm4. Thus, mm5 might serve as a vector index and de-facto source for permutes of these other registers.
The results may be stored within mm5 itself, selectively written to preserve the "4" and "3" elements while overwriting index values that have already been used.

[00266] The vector permute operations shown in the previous figures may be applied to consolidate the respective identified elements within individual registers, resulting in arrays.

[00267]

[00268] Pseudocode to perform this conversion may be specified as:

vmovups zmm9, zmmword ptr [r8+0x130]
// load the last "3" and "4" into mm9
vmovups zmm10, zmmword ptr [r8]
// load the lowest 8 elements to mm10
vmovups zmm13, zmmword ptr [r8+0x38]
// load 8 elements, starting with the second "1", to mm13
vmovups zmm7, zmmword ptr [r8+0x70]
// load 8 elements, starting with the third "4", to mm7
vmovups zmm5, zmmword ptr [r8+0xb0]
// load 8 elements, starting with the fifth "2", to mm5
vmovapd zmm9{k4}, zmmword ptr [rip+0x79a8]
// load mm9 with indices, saving the existing "3" and "4"
vmovups zmm6, zmmword ptr [r8+0xf0]
// load 8 elements, starting with the seventh "0", to mm6
vpermi2pd zmm9{k4}, zmm13, zmm7
// permute "3" and "4" from mm7 and mm13 according to indices in mm9,
// preserving "3" and "4" in mm9
vmovaps zmm12, zmm10
// save mm10 to mm12
vpermt2pd zmm12, zmm4, zmm7
// permute values in mm7 and mm12 according to the index in mm4
vmovapd zmm7{k3}, zmmword ptr [rip+0x79fb]
// create an index vector in mm7, saving unpermuted values
vpermi2pd zmm7{k3}, zmm10, zmm13
// permute values from mm13 and mm10 into mm7 according to mm7,
// preserving existing elements in mm7
vmovapd zmm10{k2}, zmmword ptr [rip+0x7a2b]
// create an index vector in mm10, saving unpermuted values
vmovapd zmm13{k2}, zmmword ptr [rip+0x7a61]
// create an index vector in mm13, saving unpermuted values
vmovapd zmm7{k1}, zmmword ptr [rip+0x7a97]
// create an index vector in mm7, saving unpermuted values
vpermi2pd zmm10{k2}, zmm5, zmm6
// permute mm5 and mm6 into mm10 according to indices in mm10,
// preserving existing elements in mm10
vpermi2pd zmm13{k2}, zmm5, zmm6
// permute mm5 and mm6 into mm13 according to indices in mm13,
// preserving existing elements in mm13
vpermi2pd zmm7{k1}, zmm5, zmm6
// permute mm5 and mm6 into mm7 according to indices in mm7,
// preserving existing elements in mm7
vmovaps zmm8, zmm10
// save mm10 to mm8
vmovaps zmm11, zmm12
// save mm12 to mm11
vpermt2pd zmm8, zmm3, zmm9
// permute mm8 and mm9 according to a new vector identifying locations
// of elements that need to be permuted
vpermt2pd zmm10, zmm2, zmm9
// permute mm10 and mm9 according to a new vector identifying locations
// of elements that need to be permuted
vpermt2pd zmm11, zmm1, zmm13
// permute mm11 and mm13 according to a new vector identifying locations
// of elements that need to be permuted
vpermt2pd zmm13, zmm0, zmm12
// permute mm13 and mm12 according to a new vector identifying locations
// of elements that need to be permuted

[00269] FIGURE 31 illustrates an example method 3100 for performing permute operations to fulfill AOS to SOA conversion, according to embodiments of the present disclosure. Method 3100 may be implemented by any suitable elements shown in FIGURES 1-30. Method 3100 may be initiated by any suitable criteria and may initiate operation at any suitable point. In one embodiment, method 3100 may initiate operation at 3105. Method 3100 may include greater or fewer steps than those illustrated. Moreover, method 3100 may execute its steps in an order different from that illustrated below. Method 3100 may terminate at any suitable step. Moreover, method 3100 may repeat operation at any suitable step. Method 3100 may perform any of its steps in parallel with other steps of method 3100, or in parallel with steps of other methods.
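The net effect of the register sequence above is a stride-5 AOS-to-SOA gather: field j of all eight structures ends up together in one destination register. As a hedged reference (plain Python lists standing in for zmm registers), the conversion the assembly implements can be written as:

```python
def aos_to_soa(array, stride=5, lanes=8):
    # Gather field j of every structure into its own "register" (list):
    # element j of structure s lives at AOS index stride*s + j, so a
    # stride-spaced slice collects one field across all structures.
    assert len(array) == stride * lanes
    return [array[j::stride] for j in range(stride)]
```

For a 40-element array, `aos_to_soa(list(range(40)))[2]` collects the eight "2" elements [2, 7, 12, 17, 22, 27, 32, 37], matching the consolidation the permutes perform.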
Furthermore, method 3100 may be executed multiple times to perform multiple operations requiring strided data that needs to be converted.

[00270] At 3105, in one embodiment, an instruction may be loaded, and at 3110 the instruction may be decoded.

[00271] At 3115, it may be determined that the instruction requires AOS-SOA conversion of data. Such data may include strided data. In one embodiment, the strided data may include stride-5 data. The instruction may be determined to require such data because vector operations on the data are to be performed. The data conversion may result in the data being in an appropriate format so that a vectorized operation may be applied simultaneously, in a clock cycle, to each element of a bank of data. The instruction may specifically identify that the AOS-SOA conversion is to be performed, or the need for AOS-SOA conversion may be inferred from the instruction to be executed.

[00272] At 3120, an array to be converted may be loaded into registers. In one embodiment, structures in the array may be loaded into registers such that as many registers as possible have the same element layout. For example, "1" elements are all in the same relative positions, "2" elements are all in the same relative positions, and so on. The load operations may be performed with masks. The load operations may cut off certain elements from every other register that would have otherwise been loaded. These may be referred to as excess elements. The excess elements may be the same for every other register.

[00273] At 3125, the excess elements may be loaded into a common register using masked load operations. A larger number of load operations may be performed as a consequence. This common register may have a different element layout than the registers with the common element layout.

[00274] At 3130, index vectors may be generated for the common element layouts. An index vector may be created identifying relative positions in the common element layouts for a given element.
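The masked loads of 3120 and 3125 can be modeled minimally: lanes whose mask bit is set are filled from memory, while cleared lanes keep the register's previous contents, which is how the excess elements from several windows can be merged into one common register. A sketch under assumed 8-lane registers, not any particular intrinsic:

```python
def masked_load(dest, memory, base, mask, lanes=8):
    # Lanes with the mask bit set load memory[base + lane]; other lanes
    # keep their old value, so repeated masked loads can accumulate the
    # "excess" elements of several windows into one common register.
    return [memory[base + i] if mask[i] else dest[i] for i in range(lanes)]
```

Two masked loads with complementary masks fill one register from two different memory windows without either load disturbing the other's lanes.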
The index vector may be used as both an index vector and a partial source for a permute function to consolidate given elements. At 3135, permutes may be performed on registers with the common layout using these index vectors. 3135 may be repeated as necessary to generate arrays of elements within the common layout other than those among the excess elements. These generated arrays may represent a partial output of the data conversion.

[00275] At 3140, index vectors for the elements among the excess elements and the common register may be generated. The index vectors may also serve as de-facto sources. At 3145, permutes may be performed on a combination of the common register and various appropriate results from 3135. The elements among the excess elements may be consolidated into arrays. These generated arrays may represent the remainder of the output of the data conversion.

[00276] At 3150, execution upon the different registers may be performed. As a given register is used with the vector instruction for execution, each of its elements may be executed upon in parallel. Results may be stored as necessary. At 3155, it may be determined whether subsequent vector execution is to be performed on the same converted data. If so, method 3100 may return to 3150. Otherwise, method 3100 may proceed to 3160.

[00277] At 3160, it may be determined whether additional execution is needed for other stride-5 data. If so, method 3100 may proceed to 3120. Otherwise, at 3165 the instruction may be retired. Method 3100 may optionally repeat or terminate.

[00278] FIGURE 32 illustrates another example method 3200 for performing permute operations to fulfill AOS to SOA conversion, according to embodiments of the present disclosure. Method 3200 may be implemented by any suitable elements shown in FIGURES 1-30. Method 3200 may be initiated by any suitable criteria and may initiate operation at any suitable point. In one embodiment, method 3200 may initiate operation at 3205.
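The consolidating permutes at 3135 and 3145 draw lanes from two source registers under control of an index vector. A minimal emulation of such a two-source permute, modeled loosely on VPERMT2PD semantics with the result overwriting the first source; the 4-bit selector is a simplifying assumption:

```python
def two_source_permute(a, idx, b, lanes=8):
    # Each output lane i copies element idx[i] from the 16-element
    # concatenation of a (selectors 0-7) and b (selectors 8-15),
    # so one permute can interleave lanes from both source registers.
    both = list(a) + list(b)
    return [both[int(idx[i]) & 0xF] for i in range(lanes)]
```

With indices alternating below and above 8, a single call interleaves the first four lanes of each source, which is the kind of cross-register consolidation the method performs per element value.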
Method 3200 may include greater or fewer steps than those illustrated. Moreover, method 3200 may execute its steps in an order different from that illustrated below. Method 3200 may terminate at any suitable step. Moreover, method 3200 may repeat operation at any suitable step. Method 3200 may perform any of its steps in parallel with other steps of method 3200, or in parallel with steps of other methods. Furthermore, method 3200 may be executed multiple times to perform multiple operations requiring strided data that needs to be converted.

[00279] At 3205, in one embodiment, an instruction may be loaded, and at 3210 the instruction may be decoded.

[00280] At 3215, it may be determined that the instruction requires AOS-SOA conversion of data. Such data may include strided data. In one embodiment, the strided data may include stride-5 data. The instruction may be determined to require such data because vector operations on the data are to be performed. The data conversion may result in the data being in an appropriate format so that a vectorized operation may be applied simultaneously, in a clock cycle, to each element of a bank of data. The instruction may specifically identify that the AOS-SOA conversion is to be performed, or the need for AOS-SOA conversion may be inferred from the instruction to be executed.

[00281] At 3220, an array to be converted may be prepared to be loaded into registers. The mapping of the array to the registers may be evaluated in view of the final conversion of the data. One or more elements may be identified that can be initially loaded into a given vector register at a given location that matches the same position and vector register that is to contain the element after data conversion. At 3225, load operations may be performed to load the array into the registers such that each identified element is loaded to its designated register and position.
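The alignment idea of 3220 can be illustrated with a toy search: pick a hypothetical load window for each field and report which structures' elements already land in their final SOA lane, so those elements need no later permute. The window choice base = j (start each window at the first field-j element) is an assumption for illustration, not the patent's specific layout.

```python
def aligned_lanes(stride=5, lanes=8):
    # An element of structure s, field j sits at AOS index stride*s + j.
    # If the window for field j is loaded at offset base, that element
    # occupies lane (stride*s + j - base); it is "aligned" when that
    # lane equals s, its final SOA lane, and lies inside the window.
    aligned = []
    for j in range(stride):
        base = j                       # hypothetical window per field
        for s in range(lanes):
            lane = stride * s + j - base
            if 0 <= lane < lanes and lane == s:
                aligned.append((j, s))
    return aligned
```

Under these assumptions only structure 0's elements are pre-aligned, which illustrates why most lanes still require the consolidating permutes while an aligned element does not.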
Such load operations may require shifting of data or leaving gaps in various registers so that the alignment occurs. At 3230, permute operations may be performed to consolidate given elements from each of the registers into a single register. These arrays of elements may be generated and used for vector execution. However, an aligned element might not require a permute operation.

[00282] At 3250, execution upon the different registers may be performed. As a given register is used with the vector instruction for execution, each of its elements may be executed upon in parallel. Results may be stored as necessary. At 3255, it may be determined whether subsequent vector execution is to be performed on the same converted data. If so, method 3200 may return to 3250. Otherwise, method 3200 may proceed to 3260.

[00283] At 3260, it may be determined whether additional execution is needed for other stride-5 data. If so, method 3200 may proceed to 3220. Otherwise, at 3265 the instruction may be retired. Method 3200 may optionally repeat or terminate.

[00284] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[00285] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion.
For purposes of this application, a processing system may include any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.

[00286] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[00287] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

[00288] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

[00289]
Accordingly, embodiments of the disclosure may also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

[00290] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

[00291] Thus, techniques for performing one or more instructions according to at least one embodiment are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on other embodiments, and that such embodiments are not to be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.

[00292] Some embodiments of the present disclosure include a processor.
The processor may include a front end to receive an instruction, a decoder to decode the instruction, a core to execute the instruction, and a retirement unit to retire the instruction. In combination with any of the above embodiments, the core may include logic to determine that the instruction will require strided data converted from source data in memory. In combination with any of the above embodiments, the strided data is to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction. In combination with any of the above embodiments, the core includes logic to load source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements. In combination with any of the above embodiments, a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements. In combination with any of the above embodiments, a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements. In combination with any of the above embodiments, the core includes logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into a respective source vector register. In combination with any of the above embodiments, the core further includes logic to execute the instruction upon one or more source vector registers upon completion of conversion of source data to strided data. In combination with any of the above embodiments, the core further includes logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored.
In combination with any of the above embodiments, the core further includes logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register. In combination with any of the above embodiments, the core further includes logic to selectively preserve indices of the index vector for subsequent use of the index vector. In combination with any of the above embodiments, the core further includes logic to selectively preserve indices of the index vector for a second permute instruction. In combination with any of the above embodiments, the core further includes logic to apply a second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted. In combination with any of the above embodiments, the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors. In combination with any of the above embodiments, eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers. In combination with any of the above embodiments, two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers. In combination with any of the above embodiments, the core further includes logic to create six index vectors to be used with permute instructions to yield contents of the source vector registers.

[00293] Some embodiments of the present disclosure may include a system. The system may include a front end to receive an instruction, a decoder to decode the instruction, a core to execute the instruction, and a retirement unit to retire the instruction.
In combination with any of the above embodiments, the core may include logic to determine that the instruction will require strided data converted from source data in memory. In combination with any of the above embodiments, the strided data is to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction. In combination with any of the above embodiments, the core includes logic to load source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements. In combination with any of the above embodiments, a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements. In combination with any of the above embodiments, a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements. In combination with any of the above embodiments, the core includes logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into a respective source vector register. In combination with any of the above embodiments, the core further includes logic to execute the instruction upon one or more source vector registers upon completion of conversion of source data to strided data. In combination with any of the above embodiments, the core further includes logic to create an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored.
In combination with any of the above embodiments, the core further includes logic to selectively store results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register. In combination with any of the above embodiments, the core further includes logic to selectively preserve indices of the index vector for subsequent use of the index vector. In combination with any of the above embodiments, the core further includes logic to selectively preserve indices of the index vector for a second permute instruction. In combination with any of the above embodiments, the core further includes logic to apply a second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted. In combination with any of the above embodiments, the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors. In combination with any of the above embodiments, eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers. In combination with any of the above embodiments, two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers. In combination with any of the above embodiments, the core further includes logic to create six index vectors to be used with permute instructions to yield contents of the source vector registers.

[00294] Some embodiments of the present disclosure may include an apparatus. The apparatus may include means for receiving an instruction, decoding the instruction, and executing the instruction.
In combination with any of the above embodiments, the apparatus may include means for determining that the instruction will require strided data converted from source data in memory. In combination with any of the above embodiments, the strided data is to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction. In combination with any of the above embodiments, the apparatus may include means for loading source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements. In combination with any of the above embodiments, a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements. In combination with any of the above embodiments, a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements. In combination with any of the above embodiments, the apparatus may include means for applying permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into a respective source vector register. In combination with any of the above embodiments, the apparatus may include means for executing the instruction upon one or more source vector registers upon completion of conversion of source data to strided data. In combination with any of the above embodiments, the apparatus may include means for creating an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored.
In combination with any of the above embodiments, the apparatus may include means for selectively storing results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register. In combination with any of the above embodiments, the apparatus may include means for selectively preserving indices of the index vector for subsequent use of the index vector. In combination with any of the above embodiments, the apparatus may include means for selectively preserving indices of the index vector for a second permute instruction. In combination with any of the above embodiments, the apparatus may include means for applying a second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted. In combination with any of the above embodiments, the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors. In combination with any of the above embodiments, eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers. In combination with any of the above embodiments, two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers. In combination with any of the above embodiments, the apparatus may include means for creating six index vectors to be used with permute instructions to yield contents of the source vector registers.

[00295] Some embodiments of the present disclosure may include a method. The method may include receiving an instruction, decoding the instruction, and executing the instruction.
In combination with any of the above embodiments, the method may include determining that the instruction will require strided data converted from source data in memory. In combination with any of the above embodiments, the strided data is to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction. In combination with any of the above embodiments, the method may include loading source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements. In combination with any of the above embodiments, a plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements. In combination with any of the above embodiments, a common register of the preliminary vector registers is to be loaded with the second indexed layout of elements. In combination with any of the above embodiments, the method may include applying permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into a respective source vector register. In combination with any of the above embodiments, the method may include executing the instruction upon one or more source vector registers upon completion of conversion of source data to strided data. In combination with any of the above embodiments, the method may include creating an index vector based upon the first indexed layout of elements with indices to indicate which elements of two preliminary vector registers are to be stored. In combination with any of the above embodiments, the method may include selectively storing results of a first permute instruction in the index vector, the first permute instruction to permute contents in the first indexed layout of elements between a first preliminary vector register and a second preliminary vector register.
In combination with any of the above embodiments, the method may include selectively preserving indices of the index vector for subsequent use of the index vector. In combination with any of the above embodiments, the method may include selectively preserving indices of the index vector for a second permute instruction. In combination with any of the above embodiments, the method may include applying a second permute instruction with the preserved indices of the index vector to indicate elements of a third preliminary vector register and the common vector register to be permuted. In combination with any of the above embodiments, the strided data is to include eight registers of vectors, each vector to include five elements that correspond with the other vectors. In combination with any of the above embodiments, eight permute operations are to be applied to contents of the preliminary vector registers to yield contents of the respective source vector registers. In combination with any of the above embodiments, two permute operations are to be applied to contents of the common vector register to yield contents of the respective source vector registers. In combination with any of the above embodiments, the method may include creating six index vectors to be used with permute instructions to yield contents of the source vector registers. |
Shorting between a transistor gate electrode and associated source/drain regions due to metal silicide formation on the sidewall spacers is prevented by passivating the sidewall spacer surfaces with a solution of iodine and ethanol. Embodiments of the invention include spraying the wafer with, or immersing the wafer in, a solution of iodine in ethanol. |
What is claimed is:

1. A method of manufacturing a semiconductor device, the method comprising: providing an intermediate product comprising a silicon-containing gate electrode on a silicon-containing semiconductor substrate with a gate insulating layer therebetween, the silicon-containing gate electrode having an upper surface and opposing side surfaces; and contacting the intermediate product with a solution comprising iodine and ethanol for a period of time sufficient to passivate surface regions of the intermediate product.

2. The method according to claim 1, wherein sidewall spacers are formed on the opposing side surfaces prior to contacting the intermediate product with the solution.

3. The method according to claim 2, wherein the sidewall spacers comprise silicon nitride.

4. The method according to claim 3, wherein the concentration of iodine in the solution is from about 0.01 moles/liter to about 20.0 moles/liter.

5. The method according to claim 4, wherein the concentration of iodine in the solution is from about 2.0 moles/liter to about 20.0 moles/liter.

6. The method according to claim 3, comprising contacting the intermediate product by immersing the intermediate product in the solution.

7. The method according to claim 3, comprising contacting the intermediate product by spraying the solution onto the intermediate product.

8. The method according to claim 3, comprising contacting the intermediate product with the solution for about 1 minute to about 60 minutes.

9. The method according to claim 3, wherein the solution is applied to the substrate at a temperature of about 1° C. to about 40° C.

10. The method according to claim 3, further comprising forming source/drain regions adjacent to the sidewall spacers.

11.
The method according to claim 10, further comprising forming silicide contacts on the semiconductor device by: depositing a metal layer on the upper surface of the silicon-containing gate electrode, the sidewall spacers, and the source/drain regions; heating to react the metal with silicon in the silicon-containing gate electrode and the silicon-containing semiconductor substrate to form a metal silicide layer on the gate electrode and metal silicide layers on the source/drain regions; and removing unreacted metal from the sidewall spacers.

12. The method according to claim 11, wherein the metal is selected from the group consisting of Co, Ni, Ti, Ta, Mo, W, Cr, Pt, and Pd.

13. The method according to claim 12, wherein the metal is Ni. |
RELATED APPLICATIONS

This application contains subject matter similar to that disclosed in U.S. patent application Ser. No. 09/664,714, filed on Sep. 19, 2000.

TECHNICAL FIELD

The present invention relates to the field of manufacturing semiconductor devices and, more particularly, to an improved salicide process of forming metal silicide contacts.

BACKGROUND OF THE INVENTION

An important aim of ongoing research in the semiconductor industry is the reduction in the dimensions of the devices used in integrated circuits. Planar transistors, such as metal oxide semiconductor (MOS) transistors, are particularly suited for use in high-density integrated circuits. As the size of the MOS transistors and other active devices decreases, the dimensions of the source/drain regions and gate electrodes, and the channel region of each device, decrease correspondingly.

The design of ever-smaller planar transistors with short channel lengths makes it necessary to provide very shallow source/drain junctions. Shallow junctions are necessary to avoid lateral diffusion of implanted dopants into the channel, since such diffusion disadvantageously contributes to leakage currents and poor breakdown performance. Shallow source/drain junctions of less than 1,000 Å, e.g., less than 800 Å, are required for acceptable performance in short channel devices.

Metal silicide contacts are typically used to provide low resistance contacts to source/drain regions and gate electrodes. The metal silicide contacts are conventionally formed by depositing a conductive metal, such as titanium, cobalt, tungsten, or nickel, on the source/drain regions and gate electrodes by physical vapor deposition (PVD), e.g., sputtering or evaporation, or by a chemical vapor deposition (CVD) technique. Subsequently, heating is performed to react the metal with underlying silicon to form a metal silicide layer on the source/drain regions and gate electrodes.
The metal silicide has a substantially lower sheet resistance than the silicon to which it is bonded. Desirably, the metal silicide is formed only on the underlying silicon, not on the dielectric sidewall spacers. Selective etching is then conducted to remove unreacted metal from the non-silicided areas, such as the dielectric sidewall spacers. Thus, the silicide regions are aligned only on the electrically conductive areas. This self-aligned silicide process is generally referred to as the "salicide" process.
A portion of a typical semiconductor device 10 is schematically illustrated in FIG. 1 and comprises a silicon-containing substrate 12 with shallow source/drain extensions 15A and source/drain regions 15B formed therein. Gate oxide 24 and gate electrode 28 are formed on the silicon-containing substrate 12. Sidewall spacers 18 are formed on opposing side surfaces 29 of gate electrode 28. Sidewall spacers 18 typically comprise silicon based insulators, such as silicon nitride, silicon oxide, or silicon carbide. The sidewall spacers 18 function to mask the shallow source/drain extensions 15A during ion implantation to form source/drain regions 15B. The sidewall spacers 18 also mask the side surfaces 29 of the gate 28 when metal layer 16 is deposited, thereby preventing silicide from forming on the side surfaces 29.
After metal layer 16 is deposited, heating is conducted at a temperature sufficient to react the metal with underlying silicon in the gate electrode and substrate surface to form conductive metal silicide contacts 26. After the metal silicide contacts 26 are formed, the unreacted metal 16 is removed by etching, as with a wet etchant, e.g., an aqueous H2O2/NH4OH solution. The sidewall spacer 18, therefore, acts as an electrical insulator separating the silicide contact 26 on the gate electrode 28 from the metal silicide contacts 26 on the source/drain regions 15B, as shown in FIG. 
2.
Difficulties are encountered in such a conventional silicidation process, particularly when employing silicon nitride sidewall spacers and nickel as the metal. Specifically, it was found that nickel reacts with dangling silicon bonds in the silicon nitride sidewall spacers during heating to form nickel silicide layers on the sidewall spacer surface 20, forming an electrical bridge between the nickel silicide contact 26 on the gate electrode 28 and the nickel silicide contacts 26 on the source/drain regions 15B. This undesirable effect is particularly problematic as device design rules plunge into the deep sub-micron range and is schematically illustrated in FIG. 3, wherein sidewall spacer surface 20 contains dangling silicon bonds 21. When the metal layer 16 is deposited on the sidewall spacer surface 20 and heated, a metal silicide layer 26 remains on the surface of the sidewall spacer 20 after etching. Bridging between the gate electrode and the associated source/drain regions results in diminished device performance and device failure.
Additional difficulties encountered in the silicidation process include oxidation of the gate electrode and source/drain surfaces. Surface oxides on the gate electrode and source/drain regions can inhibit the silicidation reaction between the metal and silicon. Metals that cannot diffuse through a silicon oxide surface film, such as titanium, will not readily react with the underlying silicon when heated, resulting in inadequate metal silicide formation. Surface oxides readily form on exposed silicon surfaces under ambient environmental conditions. Aqueous HF is conventionally used to remove surface oxides prior to depositing the silicidation metal. However, if the metal layer is not deposited in a timely manner after HF treatment, the surface oxide layer will be regenerated, requiring additional HF treatment. 
In addition, the use of aqueous HF to remove surface oxide films from the gate and source and drain regions also undesirably removes surface oxides and leaves dangling silicon bonds on the sidewall spacers.
The term semiconductor devices, as used herein, is not to be limited to the specifically disclosed embodiments. Semiconductor devices, as used herein, include a wide variety of electronic devices including flip chip, flip chip/package assemblies, transistors, capacitors, microprocessors, random access memories, etc. In general, semiconductor devices include any electronic devices comprising semiconductors.
SUMMARY OF THE INVENTION
There exists a need for efficient methodology to produce highly reliable semiconductor devices with ultra-shallow junctions by eliminating bridging between transistor gate electrodes and associated source/drain regions and preventing surface oxidation of gate electrodes and source/drain regions. There exists a particular need in this art to eliminate nickel silicide formation on silicon nitride sidewall spacer surfaces.
These and other needs are met by the embodiments of the present invention, which provide a method of passivating a semiconductor device comprising: providing an intermediate product comprising a gate electrode on a semiconductor substrate with a gate insulating layer therebetween. The gate electrode has an upper surface and opposing side surfaces. The intermediate product is contacted with a solution comprising iodine and ethanol for a period of time sufficient to passivate surface regions of the intermediate product.
The earlier stated needs are also met by another embodiment of the instant invention, which provides a method of manufacturing a semiconductor device comprising forming silicide contacts on a semiconductor device comprising: providing an intermediate product comprising a gate electrode and source/drain regions, wherein sidewall spacers are formed on the side surfaces of the gate electrode. 
The surfaces of the intermediate product are contacted with an iodine and ethanol solution for a period of time sufficient to passivate the surfaces of the intermediate product. A metal layer is deposited over the intermediate product, and the metal layer is subsequently heated at a temperature sufficient to cause the metal to react with silicon in the gate electrode and source/drain regions to form metal silicide. Unreacted metal is subsequently removed from the intermediate product.
The earlier stated needs are further met by another embodiment of the instant invention that provides a semiconductor device comprising a gate electrode on a semiconductor substrate with a gate insulating layer therebetween. The gate electrode has an upper surface and opposing side surfaces. The surfaces of the semiconductor device are passivated by contacting them with an iodine and ethanol solution.
The foregoing and other features, aspects, and advantages of the present invention will become apparent in the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 schematically illustrates a gate/source/drain region of a semiconductor device formed by a conventional method.
FIG. 2 schematically illustrates a gate/source/drain region of a semiconductor device formed by a conventional salicide method.
FIG. 3 schematically illustrates the formation of metal silicide on sidewall spacers.
FIG. 4 schematically illustrates sidewall spacer passivation according to an embodiment of the present invention.
FIG. 5 illustrates the passivation of sidewall spacers of a semiconductor device by immersing the semiconductor device in an iodine/ethanol solution.
FIG. 
6 illustrates the passivation of sidewall spacers of a semiconductor device by spraying the device with an iodine/ethanol solution.
DETAILED DESCRIPTION OF THE INVENTION
The present invention enables the production of semiconductor devices with improved performance and reduced failure rates by preventing electrical bridging between a transistor gate electrode and associated source/drain regions and by preventing surface oxidation of the gate electrode and source/drain regions. These objectives are achieved by strategically passivating the surfaces of a semiconductor device intermediate product to limit metal silicide formation to the upper surfaces of the gate electrode and substrate during the silicidation process.
As shown in FIG. 3, dangling silicon bonds 21 on the sidewall spacer surface 20 react with deposited metal layer 16 to form a metal silicide layer 26 on the sidewall spacer surface 20. The present invention prevents formation of metal silicide layer 26 on sidewall spacer surface 20 by passivating the sidewall spacer surface prior to depositing metal layer 16. Such passivation substantially eliminates the dangling silicon bonds, making them unavailable to react with the deposited metal.
Dangling silicon bonds can be passivated by reaction with oxygen to form an oxide that does not react with metal layer 16 during the heating process, thereby preventing conductive metal silicide from forming on the sidewall spacers. The passivated sidewall spacers then satisfy their intended function as insulators between the gate electrode and the source/drain regions.
Several techniques can be used to passivate the dangling silicon bonds with oxygen. For example, the intermediate product can be exposed to a microwave oxygen plasma. The oxygen plasma provides highly reactive oxygen ions that react with the dangling silicon bonds to form stable silicon oxides. 
Another technique of passivation involves contacting the semiconductor device with a solution of hydrogen peroxide and sulfuric acid. The use of oxygen plasma to passivate the spacers requires an expensive oxygen plasma generating apparatus, while the use of a hydrogen peroxide and sulfuric acid solution requires handling and subsequent disposal of highly corrosive chemicals. Furthermore, oxygen passivation techniques also form an oxide film on the gate electrode and source/drain regions that can interfere with subsequent silicidation.
The present invention provides an elegant, economical method of passivating semiconductor devices. The present invention effects semiconductor device surface passivation using an iodine (I2) and ethanol solution to introduce hydroxyl (OH) groups into the dangling silicon bonds.
An embodiment of the present invention is schematically illustrated in FIG. 4. A sidewall spacer surface 20 with dangling silicon bonds 21 is contacted with a solution of iodine and ethanol for a period of time sufficient to passivate the sidewall spacer surface. Passivation is effected by contacting the dangling silicon bonds 21 with the I2/ethanol solution to generate hydroxyl groups that react with the dangling silicon bonds 21. The passivation step leaves the sidewall spacer surface 20 substantially free of dangling silicon bonds 21, which would otherwise react with the subsequently deposited metal, e.g., nickel, during silicidation. After passivation, metal layer 16 is deposited over the intermediate product 10, including the sidewall spacer surface 20. The metal does not react with the passivated sidewall surface during subsequent heating. 
As a result, the unreacted metal is easily removed during etching, leaving the sidewall spacer surface 20 substantially free of metal silicide.
Silicon bonds on the surface of the gate electrode and source/drain regions will also react with the I2/ethanol solution to form a thin hydroxide layer on their respective surfaces. However, the hydroxide layer formed on the surfaces of the gate electrode and source/drain regions does not prevent subsequent silicide formation thereon because these regions are predominantly elemental silicon available for reaction with the metal layer. The sidewall spacers, on the other hand, are predominantly relatively inert silicon compounds, where only dangling silicon bonds are available for reaction with the metal layer. While a portion of the silicon on the surface of the gate electrode and source/drain regions forms a thin hydroxide layer when exposed to the I2/ethanol solution, there is abundant silicon remaining to form the metal silicide contacts.
The concentration of I2 in the I2/ethanol solution ranges from about 0.01 moles/liter to about 20.0 moles/liter. In certain embodiments of the present invention, a concentration of iodine in the solution of about 2.0 moles/liter to about 20.0 moles/liter efficiently passivates the surfaces of the semiconductor devices.
The solution is contacted with the surfaces of the semiconductor device for a period of time sufficient to passivate the sidewall spacer surfaces, gate electrode upper surface, and substrate upper surface. The length of time to effect passivation can be determined for a particular situation and ranges from about 1 minute to about 60 minutes. In certain embodiments, for example in passivating silicon nitride sidewall spacers, an exposure time of about 5 minutes to about 35 minutes, e.g. 
about 10 minutes, is sufficient to prevent nickel silicide formation thereon.
The semiconductor device can be contacted with the I2/ethanol solution by immersing the intermediate product 10 in a vessel 34 containing the I2/ethanol solution 32, as illustrated in FIG. 5. Alternatively, the I2/ethanol solution 44 can be sprayed onto the surface of the intermediate product 10 by a spray nozzle 42, as shown in FIG. 6.
The temperature of the I2/ethanol solution employed for passivation in accordance with embodiments of the present invention can range from about 1[deg.] C. to about 80[deg.] C. When passivating silicon nitride sidewall spacers, for example, the solution is advantageously applied between about 5[deg.] C. and 35[deg.] C., or more advantageously between about 17[deg.] C. and 27[deg.] C., which provides the convenience of room temperature processing.
The metal layer 16 comprises a metal that forms a metal silicide with high conductivity. Typical silicidation metals include Co, Ni, Ti, W, Ta, Mo, Cr, Pt, and Pd. In certain embodiments, Co, Ni, and Ti have been found to provide high reliability, high conductivity silicide contacts. Nickel has been found to be particularly advantageous because it enables low temperature salicide processing.
Metal layer 16 is deposited by a PVD method, such as sputtering or evaporation, or a CVD method. The metal layer is deposited to a thickness of about 100 Å to about 500 Å. The metal is heated at a temperature ranging from about 300[deg.] C. to about 1000[deg.] C., depending on the metal deposited. For example, if Co is deposited, the Co is heated for about 10 seconds to 60 seconds at about 600[deg.] C. to about 850[deg.] C. to form CoSi2. When Ni is the metal deposited, the metal layer is heated for about 15 seconds to about 120 seconds at about 350[deg.] C. to about 700[deg.] C. to form NiSi.
The method of the present invention prevents metal silicide bridging across sidewall spacers. 
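The process windows quoted above can be collected into a simple parameter check. The following is a minimal illustrative sketch, not part of the disclosure; the function and dictionary names are invented, and it assumes the quoted ranges are treated as hard limits.

```python
# Hypothetical helper collecting the process windows quoted in the text.
# I2/ethanol passivation broad window: 0.01-20.0 mol/L I2, 1-60 min, 1-80 deg C.
# Silicidation anneal windows, from the Co and Ni examples above.
ANNEAL_WINDOWS = {
    "Co": {"temp_c": (600, 850), "time_s": (10, 60), "silicide": "CoSi2"},
    "Ni": {"temp_c": (350, 700), "time_s": (15, 120), "silicide": "NiSi"},
}

def passivation_ok(i2_molarity, minutes, temp_c):
    """True if the I2/ethanol recipe falls inside the broad quoted window."""
    return (0.01 <= i2_molarity <= 20.0
            and 1 <= minutes <= 60
            and 1 <= temp_c <= 80)

def anneal_ok(metal, temp_c, time_s):
    """True if (temp_c, time_s) falls inside the quoted anneal window for the metal."""
    w = ANNEAL_WINDOWS[metal]
    return (w["temp_c"][0] <= temp_c <= w["temp_c"][1]
            and w["time_s"][0] <= time_s <= w["time_s"][1])
```

For example, a 2.0 mol/L solution applied for 10 minutes at room temperature falls inside the broad window, while an 800[deg.] C. anneal falls outside the quoted Ni window.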
The invention prevents device failure due to electrical bridging between the gate electrode and the source/drain regions. Because the dangling silicon bonds are reacted with a solution of iodine and ethanol to form hydroxyl groups at the dangling silicon bonds, the deposited metal does not react with the dangling silicon bonds to form conductive silicides. The present invention provides sidewall spacer surfaces that are substantially free of dangling silicon bonds and metal silicide. This invention further prevents surface oxide formation on the gate electrode and source/drain regions, which enables the formation of improved silicide contacts. This invention increases the yield of semiconductor fabrication processes in a novel and elegant manner.
The embodiments illustrated in the instant disclosure are for illustrative purposes only. The embodiments illustrated should not be construed to limit the scope of the claims. As is clear to one of ordinary skill in the art, the instant disclosure encompasses a wide variety of embodiments not specifically illustrated herein. |
The streaming engine (3100) fetches a fixed read-only data stream and packs the stream head data into two head-end registers (3118, 3119). The instruction decoder (113) decodes the instruction operand field (1305) to control the supply of data to the functional units (3230, 3240). The sub-decoder (3211) supplies data from the general register file (231) to the functional unit (3230). The read-only operand sub-decoder (3215) supplies data from the first head-end register (3118). The read/advance operand sub-decoder (3216) supplies that data and also advances the stream. The corresponding read-only operand sub-decoder (3217) and read/advance operand sub-decoder (3218) operate similarly using the second head-end register (3119). The read-only operand sub-decoder (3213) supplies double-width data from the two head-end registers (3118, 3119) to the functional unit (3230) and the paired functional unit (3240). The read/advance operand sub-decoder (3214) supplies double-width data and advances the stream. |
1. A digital signal processor comprising: an instruction memory that stores instructions, each of which specifies a data processing operation and at least one data operand field; an instruction decoder connected to the instruction memory for sequentially recalling instructions from the instruction memory and determining the specified data processing operation and the specified at least one operand; at least one functional unit connected to a data register file and the instruction decoder for performing a data processing operation upon at least one operand corresponding to an instruction decoded by the instruction decoder and storing the result in an instruction-designated data register; and a stream engine connected to the instruction decoder, the stream engine operable, in response to a stream start instruction, to recall from memory an instruction-specified sequence of a plurality of data elements, the stream engine comprising: a first stream head register that stores the next data element to be used in the stream, and a second stream head register that stores the next data element to be used in the stream after the data stored in the first stream head register; wherein the instruction decoder is operable to: decode an instruction having a predetermined first stream read encoding to supply the data stored in the first stream head register to the corresponding functional unit, and decode an instruction having a predetermined second stream read encoding to supply the data stored in the second stream head register to the corresponding functional unit. 2. The digital signal processor of claim 1, wherein: each of the instructions includes at least one data operand field specifying the at least one operand; the data register file includes a plurality of data registers that store data indicated by a register number; the predetermined first stream read encoding consists of the data operand field encoded with a first stream read bit code; the predetermined 
second stream read encoding consists of the data operand field encoded with a second stream read bit code; and the instruction decoder is further operable to: decode an instruction containing a data operand field having one of a first subset of bit codes to supply the data stored in the corresponding data register of the data register file to the corresponding functional unit, decode an instruction containing a data operand field having the predetermined first stream read bit code to supply the data stored in the first stream head register to the corresponding functional unit, and decode an instruction containing a data operand field having the predetermined second stream read bit code to supply the data stored in the second stream head register to the corresponding functional unit. 3. The digital signal processor of claim 1, wherein: the instruction decoder is further operable to: decode an instruction having a predetermined first stream read/advance encoding to supply the data stored in the first stream head register to the corresponding functional unit and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register, and decode an instruction having a predetermined second stream read/advance encoding to supply the data stored in the second stream head register to the corresponding functional unit and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register. 4. The digital signal processor of claim 3, wherein: each of the instructions includes at least one data operand field specifying the at least one operand; the data register file includes a plurality of data registers that store data as indicated by a register number; the predetermined first stream read/advance encoding consists of the data operand field having a first 
stream read/advance bit encoding; the predetermined second stream read/advance encoding consists of the data operand field having a second stream read/advance bit encoding; and the instruction decoder is further operable to: decode an instruction containing a data operand field having one of a first subset of bit codes to supply data stored in the corresponding data register of the data register file to the corresponding functional unit, decode an instruction containing a data operand field having the predetermined first stream read/advance bit code to supply the data stored in the first stream head register to the corresponding functional unit and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register, and decode an instruction containing a data operand field having the predetermined second stream read/advance bit code to supply the data stored in the second stream head register to the corresponding functional unit and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register. 5. The digital signal processor of claim 1, wherein: the at least one functional unit comprises at least one functional unit operable on double-width data; and the instruction decoder is further operable to: decode an instruction having a predetermined third stream read encoding to supply the data stored in both the first stream head register and the second stream head register to the functional unit operable on double-width data. 6. The digital signal processor of claim 5, wherein: each of the instructions includes at least one data operand field specifying the at least one operand; the data register file includes a plurality of data registers that store data as indicated by a register number; the 
predetermined third stream read encoding consists of the data operand field encoded with a third stream read bit code; and the instruction decoder is further operable to: decode an instruction containing a data operand field having one of a first subset of bit codes to supply data stored in the corresponding data register of the data register file to the corresponding functional unit, and decode an instruction having the predetermined third stream read bit code to supply the data stored in both the first stream head register and the second stream head register to the functional unit operable on double-width data. 7. The digital signal processor of claim 5, wherein: the instruction decoder is further operable to: decode an instruction having a predetermined third stream read/advance encoding to supply the data stored in both the first stream head register and the second stream head register to the functional unit operable on double-width data, and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register. 8. The digital signal processor of claim 7, wherein: each of the instructions includes at least one data operand field specifying the at least one operand; the data register file includes a plurality of data registers that store data as indicated by a register number; the predetermined third stream read/advance encoding consists of the data operand field having a third stream read/advance bit encoding; and the instruction decoder is further operable to: decode an instruction containing a data operand field having one of a first subset of bit codes to supply data stored in the corresponding data register of the data register file to the corresponding functional unit, and decode an instruction having the predetermined third stream read/advance bit code to supply the data stored in both the first stream head register and the second stream head register 
to the functional unit operable on double-width data, and to advance the stream engine by storing the next sequential data elements of the stream in the first stream head register and the second stream head register. 9. The digital signal processor of claim 1, wherein: the instruction-specified sequence of the plurality of data elements of a stream is specified by a start address and an element size. 10. The digital signal processor of claim 9, wherein: each stream head register is divided into a plurality of lanes having the element data size; and the stream engine stores one data element of the stream in each lane of the stream head register. 11. The digital signal processor of claim 10, wherein: if fewer data elements remain in the stream than there are lanes, the stream engine stores all zeros in the excess lanes. 12. The digital signal processor of claim 10, wherein the stream engine includes: an address generator that generates the memory address of the next data element in the stream in order to fetch that element from memory; a data FIFO buffer that temporarily stores data elements fetched from memory; and a formatter connected to the data FIFO buffer and to the stream head registers, the formatter recalling data elements from the data FIFO buffer, performing instruction-specified formatting on each data element, and storing the formatted data elements in the stream head registers. 13. The digital signal processor of claim 1, wherein: the instruction memory stores at least one stream save instruction and at least one stream restore instruction; the stream engine, in response to a stream save instruction, saves metadata describing the current state of an open stream and stops the currently open stream; and the stream engine, in response to a stream restore instruction, recalls the previously saved state metadata, reopens the stream, and restores the state of the open stream corresponding to the stored metadata. 14. The digital signal 
processor of claim 1, further comprising: a primary data buffer connected to said at least one functional unit, the primary data buffer temporarily storing data manipulated by said at least one functional unit, wherein if the corresponding data is stored therein (a cache hit) the primary data buffer services the at least one functional unit's memory reads and writes, and otherwise (a cache miss) delegates the at least one functional unit's memory reads and writes to a higher level memory; and a secondary buffer connected to the primary data buffer and to the stream engine, the secondary buffer temporarily storing data manipulated by said at least one functional unit, wherein if the corresponding data is stored therein (a cache hit) the secondary buffer services the primary data buffer's miss memory reads and writes and the stream engine's memory reads, and otherwise (a cache miss) delegates the at least one functional unit's memory reads and writes and the stream engine's reads to a higher level memory. |
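The dual head-register behavior recited in the claims can be sketched in software. The following is a minimal illustrative model, not the claimed hardware; the class and method names are invented, and it assumes a single-element advance per read/advance instruction, in stream order.

```python
from collections import deque

class StreamEngineModel:
    """Toy model of the claimed dual stream head registers.

    head0 holds the next data element of the stream; head1 holds the
    element after it. Read instructions return register contents without
    consuming them; read/advance instructions also step the stream
    forward by one element, refilling both head registers.
    """
    def __init__(self, elements):
        self.fifo = deque(elements)
        self.head0 = self.fifo.popleft() if self.fifo else 0
        self.head1 = self.fifo.popleft() if self.fifo else 0

    def read0(self):            # first stream read encoding
        return self.head0

    def read1(self):            # second stream read encoding
        return self.head1

    def read_double(self):      # third stream read encoding: double-width
        return (self.head0, self.head1)

    def advance(self):          # common tail of the read/advance encodings
        self.head0 = self.head1
        self.head1 = self.fifo.popleft() if self.fifo else 0

    def read0_advance(self):    # first stream read/advance encoding
        value = self.head0
        self.advance()
        return value
```

In use, repeated plain reads of head0 return the same element, while a read/advance shifts head1 into head0 and pulls the next stream element into head1.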
Stream reference register with dual vector and dual single vector operation modes
RELATED APPLICATION
This patent application is an improvement over U.S. Patent Application Ser. No. 14/331,986, filed on July 15, 2014 and entitled "HIGHLY INTEGRATED SCALABLE, FLEXIBLE DSP MEGAMODULE ARCHITECTURE", which claims priority to U.S. Provisional Patent Application Ser. No. 61/846,148, filed on July 15, 2013.
TECHNICAL FIELD
The technical field of the present invention is digital data processing and, more specifically, control of a streaming engine for operand fetch.
BACKGROUND
Modern digital signal processors (DSPs) face multiple challenges. Increasing workloads require increased bandwidth. Systems-on-chip (SOCs) continue to grow in size and complexity. Memory system latency severely affects some types of algorithms. As transistors become smaller, memories and registers become less reliable. As software stacks get larger, the number of possible interactions and errors becomes larger.
For digital signal processors operating on real-time data, memory bandwidth and scheduling are a problem. A digital signal processor that operates on real-time data typically receives an input data stream, performs a filtering function such as encoding or decoding on the data stream, and outputs the converted data stream. Such systems are called real-time because the application fails if the converted data stream is not available for output when it is scheduled. Typical video coding requires a predictable but non-sequential input data pattern. It is often difficult to match the available address generation and memory access resources to the required memory accesses. A typical application requires memory accesses to load data registers in a data register file (RF) and then supply the data to functional units that perform the data processing.
SUMMARY OF THE INVENTION
The present invention is a streaming engine used in digital signal processors. 
The fixed stream sequence is specified by storing the corresponding parameters in a control register. Once started, the data stream is read-only and cannot be written. This generally corresponds to the needs of real-time filtering operations.
Once extracted, the data stream is stored in a first-in, first-out buffer before being supplied to the functional units. Data can only be presented to the functional units in the fixed order. The exemplary embodiment packs the data elements of the specified data stream sequentially into a pair of head-end registers, each of which has the data width of the functional units.
The pair of head-end registers allows various accesses to the data stream. A first stream read instruction reads from the first head-end register, which stores the most recent data element of the data stream. A second stream read instruction reads from the second head-end register, which stores the next data element after that in the first head-end register. This allows a slight rearrangement of accesses within the data stream sequence based on how the streaming data is used.
An exemplary embodiment uses the pair of head-end registers to supply data to double data width instructions. At least one functional unit is capable of operating on data of twice the normal data width. This can be achieved using paired functional units, each operating on the normal data width. A third stream read instruction reads from both the first head-end register and the second head-end register to supply double-width data.
In a preferred embodiment, each of the first stream read instruction, the second stream read instruction, and the third stream read instruction has an associated stream read/advance instruction. Stream read/advance instructions supply data like their associated stream read instructions. 
Each stream read/advance instruction also advances the data stream by storing the next sequential data elements of the stream in the first stream head register and the second stream head register.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are illustrated in the drawings, in which:
FIG. 1 illustrates a dual scalar/vector data path processor in accordance with one embodiment of the present invention;
FIG. 2 illustrates registers and functional units in the dual scalar/vector data path processor illustrated in FIG. 1;
FIG. 3 illustrates the global scalar register file;
FIG. 4 illustrates the local scalar register file shared by the arithmetic functional units;
FIG. 5 illustrates the local scalar register file shared by the multiplication functional units;
FIG. 6 illustrates the local scalar register file shared by the load/store units;
FIG. 7 illustrates the global vector register file;
FIG. 8 illustrates the predicate register file;
FIG. 9 illustrates the local vector register file shared by the arithmetic functional units;
FIG. 10 illustrates the local vector register file shared by the multiplication functional unit and related functional units;
FIG. 11 illustrates pipeline stages of a central processing unit according to a preferred embodiment of the present invention;
FIG. 12 illustrates sixteen instructions of a single fetch packet;
FIG. 13 illustrates an example of instruction encoding of instructions used by the present invention;
FIG. 14 illustrates the bit encoding of condition code extension slot 0;
FIG. 15 illustrates the bit encoding of condition code extension slot 1;
FIG. 16 illustrates the bit encoding of constant extension slot 0;
FIG. 17 is a block diagram illustrating constant extension;
FIG. 18 illustrates carry control for SIMD operation according to the present invention;
FIG. 
19 illustrates a conceptual view of a flow engine of the present invention; FIG.Figure 20 illustrates a first example of channel assignment in a vector;Figure 21 illustrates a second example of channel assignment in a vector;Figure 22 illustrates the basic two-dimensional flow;FIG. 23 illustrates the sequence of elements in the example stream of FIG. 21; FIG.Figure 24 illustrates the removal of a smaller rectangle from a larger rectangle;Figure 25 illustrates how the stream engine will extract a stream with 4 bytes of transfer granularity in this example;Figure 26 illustrates how the stream engine will extract a stream with 8 bytes of transfer granularity in this example;FIG. 27 illustrates the details of the flow engine of the present invention; FIG.FIG. 28 illustrates the flow template register of the present invention; FIG.Figure 29 illustrates the subfield definition of the tag field of the stream template register of the present invention;FIG. 30 illustrates a partial schematic diagram showing flow engine supply data of the present invention; FIG.FIG. 31 illustrates details of a flow engine of an alternative embodiment of the present invention; andFIG. 32 illustrates a partial schematic diagram showing flow engine provisioning data in the example of FIG.detailed descriptionFIG. 1 illustrates a dual scalar/vector data path processor in accordance with a preferred embodiment of the present invention. The processor 100 includes a separate primary instruction buffer (L1I) 121 and a primary data buffer (L1D) 123. The processor 100 includes a secondary combined instruction/data buffer (L2) 130 that holds both instructions and data. FIG. 1 illustrates the connection between the primary instruction cache 121 and the secondary combined instruction/data buffer 130 (bus 142). FIG. 1 illustrates the connection between the primary data buffer 123 and the secondary combined instruction/data buffer 130 (bus 145 ). 
In the preferred embodiment of the processor 100, the level two combined instruction/data cache 130 stores both instructions to back up the level one instruction cache 121 and data to back up the level one data cache 123. In the preferred embodiment, the level two combined instruction/data cache 130 is further connected to higher level cache and/or main memory in a manner known in the art but not illustrated in FIG. 1. In the preferred embodiment, the central processing unit core 110, the level one instruction cache 121, the level one data cache 123, and the level two combined instruction/data cache 130 are formed on a single integrated circuit. This single integrated circuit optionally includes other circuits. The central processing unit core 110 fetches instructions from the level one instruction cache 121 under control of the instruction fetch unit 111. The instruction fetch unit 111 determines the next instructions to be executed and recalls a fetch-packet-sized set of such instructions. The nature and size of fetch packets are further detailed below. As known in the art, instructions are fetched directly from the level one instruction cache 121 upon a cache hit (if these instructions are stored in the level one instruction cache 121). Upon a cache miss (the specified instruction fetch packet is not stored in the level one instruction cache 121), these instructions are sought in the level two combined cache 130. In the preferred embodiment, the size of a cache line in the level one instruction cache 121 equals the size of a fetch packet. The memory locations of these instructions are either a hit or a miss in the level two combined cache 130. A hit is serviced from the level two combined cache 130. A miss is serviced from a higher level of cache (not illustrated) or from main memory (not illustrated).
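The two-level lookup just described (the level one instruction cache first, then the level two combined cache, then a higher level or main memory) can be sketched in C. This is an illustrative model only; the `cache_t`, `lookup`, and `fetch_instruction_packet` names are hypothetical and do not come from any TI source.

```c
#include <stdbool.h>
#include <stdint.h>

#define FETCH_PACKET_BYTES 64   /* 512 bits: fetch packet == L1I line size */

/* A cache level is modeled as a lookup callback returning hit/miss. */
typedef struct {
    bool (*lookup)(uint64_t addr, uint8_t out[FETCH_PACKET_BYTES]);
} cache_t;

/* Returns the level that serviced the request: 1 for L1I, 2 for the
 * combined L2, 0 for a higher level of cache or main memory. */
static int fetch_instruction_packet(const cache_t *l1i, const cache_t *l2,
                                    uint64_t addr,
                                    uint8_t out[FETCH_PACKET_BYTES])
{
    /* Fetch packets are aligned on 512-bit boundaries. */
    uint64_t line = addr & ~(uint64_t)(FETCH_PACKET_BYTES - 1);
    if (l1i->lookup(line, out))
        return 1;               /* L1I hit: serviced directly          */
    if (l2->lookup(line, out))
        return 2;               /* L1I miss, L2 hit: L2 services it    */
    return 0;                   /* L2 miss: higher level or main memory */
}
```

A real implementation would also fill the missed line back into the level one instruction cache; the sketch omits that bookkeeping.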
As is known in the art, the requested instructions may be simultaneously supplied to both the level one instruction cache 121 and the central processing unit core 110 to speed their use. In a preferred embodiment of the present invention, the central processing unit core 110 includes plural functional units to perform instruction-specified data processing tasks. The instruction dispatch unit 112 determines the target functional unit of each fetched instruction. In a preferred embodiment, the central processing unit 110 operates as a very long instruction word (VLIW) processor capable of operating on plural instructions in corresponding functional units simultaneously. Preferably, a compiler organizes instructions in execute packets that are executed together. The instruction dispatch unit 112 directs each instruction to its target functional unit. The functional unit assigned to an instruction is completely specified by the instruction produced by the compiler. The hardware of the central processing unit core 110 has no part in this functional unit assignment. In a preferred embodiment, the instruction dispatch unit 112 may operate on plural instructions in parallel. The number of such parallel instructions is set by the size of the execute packet. This will be further detailed below. One part of the dispatch task of the instruction dispatch unit 112 is determining whether the instruction is to execute on a functional unit in the scalar data path side A 115 or in the vector data path side B 116. An instruction bit within each instruction, called the s bit, determines which data path the instruction controls. This will be further detailed below. The instruction decode unit 113 decodes each instruction in a current execute packet.
Decoding includes identification of the functional unit performing the instruction, identification of the registers used to supply data for the corresponding data processing operation from among the possible register files (RF), and identification of the register destination of the result of the corresponding data processing operation. As further explained below, instructions may include a constant field that supplies a numeric operand in place of a register number. The result of this decoding is signals for control of the target functional unit, causing it to perform the data processing operation specified by the corresponding instruction on the specified data. The central processing unit core 110 includes a control register 114. The control register 114 stores information for control of the functional units in the scalar data path side A 115 and the vector data path side B 116 in a manner not relevant to the present invention. This information may be mode information or the like. The decoded instructions from the instruction decode unit 113 and the information stored in the control register 114 are supplied to the scalar data path side A 115 and the vector data path side B 116. As a result, functional units within the scalar data path side A 115 and the vector data path side B 116 perform instruction-specified data processing operations upon instruction-specified data and store the results in one or more instruction-specified data registers. Each of the scalar data path side A 115 and the vector data path side B 116 includes plural functional units that preferably operate in parallel. These will be further detailed below in conjunction with FIG. 2. There is a data path 117 permitting data exchange between the scalar data path side A 115 and the vector data path side B 116. The central processing unit core 110 further includes non-instruction-based modules.
The emulation unit 118 permits determination of the machine state of the central processing unit core 110 in response to instructions. This capability will typically be employed for algorithmic development. The interrupt/exception unit 119 enables the central processing unit core 110 to be responsive to external, asynchronous events (interrupts) and to respond to attempts to perform improper operations (exceptions). The central processing unit core 110 includes a stream engine 125. The stream engine 125 supplies two data streams from predetermined addresses, typically cached in the level two combined cache 130, to register files of the vector data path side B. This provides controlled data movement from memory (as cached in the level two combined cache 130) directly to functional unit operand inputs. This is further detailed below. FIG. 1 illustrates exemplary data widths of busses between various parts. The level one instruction cache 121 supplies instructions to the instruction fetch unit 111 via bus 141. Preferably, bus 141 is a 512-bit bus. Bus 141 is unidirectional from the level one instruction cache 121 to the central processing unit 110. The level two combined cache 130 supplies instructions to the level one instruction cache 121 via bus 142. Preferably, bus 142 is a 512-bit bus. Bus 142 is unidirectional from the level two combined cache 130 to the level one instruction cache 121. The level one data cache 123 exchanges data with register files in the scalar data path side A 115 via bus 143. Preferably, bus 143 is a 64-bit bus. The level one data cache 123 exchanges data with register files in the vector data path side B 116 via bus 144. Preferably, bus 144 is a 512-bit bus. Busses 143 and 144 are illustrated as bidirectional, supporting both central processing unit 110 data reads and data writes.
The level one data cache 123 and the level two combined cache 130 exchange data via bus 145. Bus 145 is preferably a 512-bit bus. Bus 145 is illustrated as bidirectional, supporting cache service for both central processing unit 110 data reads and data writes. As known in the art, CPU data requests are fetched directly from the level one data cache 123 upon a cache hit (if the requested data is stored in the level one data cache 123). Upon a cache miss (the specified data is not stored in the level one data cache 123), the data is sought in the level two combined cache 130. The memory locations of the requested data are either a hit or a miss in the level two combined cache 130. A hit is serviced from the level two combined cache 130. A miss is serviced from another level of cache (not illustrated) or from main memory (not illustrated). As is known in the art, the requested data may be simultaneously supplied to both the level one data cache 123 and the central processing unit core 110 to speed its use. The level two combined cache 130 supplies data of a first data stream to the stream engine 125 via bus 146. Bus 146 is preferably a 512-bit bus. The stream engine 125 supplies data of this first data stream to functional units of the vector data path side B 116 via bus 147. Preferably, bus 147 is a 512-bit bus. The level two combined cache 130 supplies data of a second data stream to the stream engine 125 via bus 148. Preferably, bus 148 is a 512-bit bus. The stream engine 125 supplies data of this second data stream to functional units of the vector data path side B 116 via bus 149. Preferably, bus 149 is a 512-bit bus.
According to a preferred embodiment of the present invention, busses 146, 147, 148, and 149 are illustrated as unidirectional from the level two combined cache 130 to the stream engine 125 and to the vector data path side B 116. Stream engine data requests are fetched directly from the level two combined cache 130 upon a cache hit (if the requested data is stored in the level two combined cache 130). Upon a cache miss (the specified data is not stored in the level two combined cache 130), the data is sought from another level of cache (not illustrated) or from main memory (not illustrated). In some embodiments, it is technically feasible for the level one data cache 123 to cache data not stored in the level two combined cache 130. If such operation is supported, then upon a stream engine data request that misses in the level two combined cache 130, the level two combined cache 130 should snoop the level one data cache 123 for the data requested by the stream engine. If the level one data cache 123 stores this data, its snoop response includes the data, which is then supplied to service the stream engine request. If the level one data cache 123 does not store this data, its snoop response indicates this result, and the level two combined cache 130 must service the stream engine request from another level of cache (not illustrated) or from main memory (not illustrated). In a preferred embodiment of the present invention, both the level one data cache 123 and the level two combined cache 130 may be configured as selected amounts of cache or directly addressable memory in accordance with U.S. Pat. No. 6,606,686, entitled "UNIFIED MEMORY SYSTEM ARCHITECTURE INCLUDING CACHE AND DIRECTLY ADDRESSABLE STATIC RANDOM ACCESS MEMORY". FIG. 2 further illustrates details of the functional units and register files within the scalar data path side A 115 and the vector data path side B 116.
The scalar data path side A 115 includes a global scalar register file (RF) 211, an L1/S1 local register file 212, an M1/N1 local register file 213, and a D1/D2 local register file 214. The scalar data path side A 115 includes an L1 unit 221, an S1 unit 222, an M1 unit 223, an N1 unit 224, a D1 unit 225, and a D2 unit 226. The vector data path side B 116 includes a global vector register file 231, an L2/S2 local register file 232, an M2/N2/C local register file 233, and a predicate register file 234. The vector data path side B 116 includes an L2 unit 241, an S2 unit 242, an M2 unit 243, an N2 unit 244, a C unit 245, and a P unit 246. There are limitations upon which register files each functional unit may read from or write to. These will be detailed below. The scalar data path side A 115 includes the L1 unit 221. The L1 unit 221 generally accepts two 64-bit operands and produces one 64-bit result. Both operands are recalled from registers specified by the instruction in either the global scalar register file 211 or the L1/S1 local register file 212. The L1 unit 221 preferably performs the following instruction-selected operations: 64-bit add/subtract operations; 32-bit min/max operations; 8-bit single instruction multiple data (SIMD) instructions such as sum of absolute value, minimum, and maximum determinations; circular min/max operations; and various moves between register files. The result may be written into a register specified by the instruction in the global scalar register file 211, the L1/S1 local register file 212, the M1/N1 local register file 213, or the D1/D2 local register file 214. The scalar data path side A 115 includes the S1 unit 222. The S1 unit 222 generally accepts two 64-bit operands and produces one 64-bit result. Both operands are recalled from registers specified by the instruction in either the global scalar register file 211 or the L1/S1 local register file 212.
The S1 unit 222 preferably performs the same type of operations as the L1 unit 221. There may optionally be slight variations between the data processing operations supported by the L1 unit 221 and the S1 unit 222. The result may be written into a register specified by the instruction in the global scalar register file 211, the L1/S1 local register file 212, the M1/N1 local register file 213, or the D1/D2 local register file 214. The scalar data path side A 115 includes the M1 unit 223. The M1 unit 223 generally accepts two 64-bit operands and produces one 64-bit result. Both operands are recalled from registers specified by the instruction in either the global scalar register file 211 or the M1/N1 local register file 213. The M1 unit 223 preferably performs the following instruction-selected operations: 8-bit multiply operations; complex dot product operations; 32-bit bit count operations; complex conjugate multiply operations; and bit-wise logical operations, shifts, adds, and subtracts. The result may be written into a register specified by the instruction in the global scalar register file 211, the L1/S1 local register file 212, the M1/N1 local register file 213, or the D1/D2 local register file 214. The scalar data path side A 115 includes the N1 unit 224. The N1 unit 224 generally accepts two 64-bit operands and produces one 64-bit result. Both operands are recalled from registers specified by the instruction in either the global scalar register file 211 or the M1/N1 local register file 213. The N1 unit 224 preferably performs the same type of operations as the M1 unit 223. There may be certain double operations (called dual-issued instructions) that employ both the M1 unit 223 and the N1 unit 224 together.
The result may be written into a register specified by the instruction in the global scalar register file 211, the L1/S1 local register file 212, the M1/N1 local register file 213, or the D1/D2 local register file 214. The scalar data path side A 115 includes the D1 unit 225 and the D2 unit 226. The D1 unit 225 and the D2 unit 226 each generally accept two 64-bit operands and each produce one 64-bit result. The D1 unit 225 and the D2 unit 226 generally perform address calculations and the corresponding load and store operations. The D1 unit 225 is used for scalar loads and stores of 64 bits. The D2 unit 226 is used for vector loads and stores of 512 bits. Preferably, the D1 unit 225 and the D2 unit 226 also perform: swapping, packing, and unpacking of load and store data; 64-bit SIMD arithmetic operations; and 64-bit bit-wise logical operations. The D1/D2 local register file 214 will generally store base and offset addresses used in address calculations for the corresponding loads and stores. Both operands are recalled from registers specified by the instruction in either the global scalar register file 211 or the D1/D2 local register file 214. The calculated result may be written into a register specified by the instruction in the global scalar register file 211, the L1/S1 local register file 212, the M1/N1 local register file 213, or the D1/D2 local register file 214. The vector data path side B 116 includes the L2 unit 241. The L2 unit 241 generally accepts two 512-bit operands and produces one 512-bit result. Both operands are recalled from registers specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, or the predicate register file 234. The L2 unit 241 preferably performs instructions similar to those of the L1 unit 221, except on wider 512-bit data.
The result may be written into a register specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, the M2/N2/C local register file 233, or the predicate register file 234. The vector data path side B 116 includes the S2 unit 242. The S2 unit 242 generally accepts two 512-bit operands and produces one 512-bit result. Both operands are recalled from registers specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, or the predicate register file 234. The S2 unit 242 preferably performs instructions similar to those of the S1 unit 222, except on wider 512-bit data. The result may be written into a register specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, the M2/N2/C local register file 233, or the predicate register file 234. The vector data path side B 116 includes the M2 unit 243. The M2 unit 243 generally accepts two 512-bit operands and produces one 512-bit result. Both operands are recalled from registers specified by the instruction in either the global vector register file 231 or the M2/N2/C local register file 233. The M2 unit 243 preferably performs instructions similar to those of the M1 unit 223, except on wider 512-bit data. The result may be written into a register specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, or the M2/N2/C local register file 233. The vector data path side B 116 includes the N2 unit 244. The N2 unit 244 generally accepts two 512-bit operands and produces one 512-bit result. Both operands are recalled from registers specified by the instruction in either the global vector register file 231 or the M2/N2/C local register file 233. The N2 unit 244 preferably performs the same type of operations as the M2 unit 243.
There may be certain double operations (called dual-issued instructions) that employ both the M2 unit 243 and the N2 unit 244 together. The result may be written into a register specified by the instruction in the global vector register file 231, the L2/S2 local register file 232, or the M2/N2/C local register file 233. The vector data path side B 116 includes the C unit 245. The C unit 245 generally accepts two 512-bit operands and produces one 512-bit result. Both operands are recalled from registers specified by the instruction in either the global vector register file 231 or the M2/N2/C local register file 233. The C unit 245 preferably performs: "rake" and "search" instructions; up to 512 2-bit PN * 8-bit multiply I/Q complex multiplies per clock cycle; 8-bit and 16-bit sum-of-absolute-difference (SAD) calculations, up to 512 SADs per clock cycle; horizontal add and horizontal min/max instructions; and vector permute instructions. The C unit 245 also contains four vector control registers (CUCR0 to CUCR3) used to control certain operations of C unit 245 instructions. The control registers CUCR0 to CUCR3 are used as operands in certain C unit 245 operations. The control registers CUCR0 to CUCR3 are preferably used in control of a general permutation instruction (VPERM) and as masks for SIMD multiple dot product operations (DOTPM) and SIMD multiple sum-of-absolute-difference (SAD) operations. The control register CUCR0 is preferably used to store the polynomial for Galois field multiply operations (GFMPY). The control register CUCR1 is preferably used to store a Galois field polynomial generator function. The vector data path side B 116 includes the P unit 246. The P unit 246 performs basic logic operations on registers of the local predicate register file 234. The P unit 246 has direct access to read from and write to the predicate register file 234.
These operations include AND, ANDN, OR, XOR, NOR, BITR, NEG, SET, BITCNT, RMBD, bit decimate, and expand. A commonly expected use of the P unit 246 includes manipulation of SIMD vector comparison results for use in control of a further SIMD vector operation. FIG. 3 illustrates the global scalar register file 211. There are 16 independent 64-bit wide scalar registers, labeled A0-A15. Each register of the global scalar register file 211 may be read from or written to as 64-bit scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may read from or write to the global scalar register file 211. The global scalar register file 211 may be read as 32 bits or as 64 bits and may only be written to as 64 bits. The executing instruction determines the read data size. The vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246) may read from the global scalar register file 211 via the cross path 117 under restrictions detailed below. FIG. 4 illustrates the D1/D2 local register file 214. There are 16 independent 64-bit wide scalar registers, labeled D0-D15. Each register of the D1/D2 local register file 214 may be read from or written to as 64-bit scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may write to the D1/D2 local register file 214. Only the D1 unit 225 and the D2 unit 226 may read from the D1/D2 local register file 214. It is expected that data stored in the D1/D2 local register file 214 will include base addresses and offset addresses used in address calculation. FIG. 5 illustrates the L1/S1 local register file 212. The embodiment illustrated in FIG. 5 has eight independent 64-bit wide scalar registers, labeled AL0-AL7. The preferred instruction coding (see FIG.
13) permits the L1/S1 local register file 212 to include up to 16 registers. The embodiment of FIG. 5 implements only 8 registers to reduce circuit size and complexity. Each register of the L1/S1 local register file 212 may be read from or written to as 64-bit scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may write to the L1/S1 local register file 212. Only the L1 unit 221 and the S1 unit 222 may read from the L1/S1 local register file 212. FIG. 6 illustrates the M1/N1 local register file 213. The embodiment illustrated in FIG. 6 has eight independent 64-bit wide scalar registers, labeled AM0-AM7. The preferred instruction coding (see FIG. 13) permits the M1/N1 local register file 213 to include up to 16 registers. The embodiment of FIG. 6 implements only 8 registers to reduce circuit size and complexity. Each register of the M1/N1 local register file 213 may be read from or written to as 64-bit scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may write to the M1/N1 local register file 213. Only the M1 unit 223 and the N1 unit 224 may read from the M1/N1 local register file 213. FIG. 7 illustrates the global vector register file 231. There are 16 independent 512-bit wide vector registers, labeled B0-B15. Each register of the global vector register file 231 may be read from or written to as 64-bit scalar data or as 512-bit vector data. The instruction type determines the data size. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246) may read from or write to the global vector register file 231.
The scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may read from the global vector register file 231 via the cross path 117 under restrictions detailed below. FIG. 8 illustrates the P local register file 234. There are 8 independent 64-bit wide registers, labeled P0-P7. Each register of the P local register file 234 may be read from or written to as 64-bit scalar data. The vector data path side B 116 functional units L2 unit 241, S2 unit 242, C unit 245, and P unit 246 may write to the P local register file 234. Only the L2 unit 241, the S2 unit 242, and the P unit 246 may read from the P local register file 234. A commonly expected use of the P local register file 234 includes writing SIMD vector comparison results from the L2 unit 241, the S2 unit 242, or the C unit 245, manipulation of the SIMD vector comparison results by the P unit 246, and use of the manipulated results in control of a further SIMD vector operation. FIG. 9 illustrates the L2/S2 local register file 232. The embodiment illustrated in FIG. 9 has 8 independent 512-bit wide vector registers. The preferred instruction coding (see FIG. 13) permits the L2/S2 local register file 232 to include up to 16 registers. The embodiment of FIG. 9 implements only 8 registers to reduce circuit size and complexity. Each register of the L2/S2 local vector register file 232 may be read from or written to as 64-bit scalar data, labeled BL0-BL7. Each register of the L2/S2 local vector register file 232 may be read from or written to as 512-bit vector data, labeled VBL0-VBL7. The instruction type determines the data size. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246) may write to the L2/S2 local vector register file 232. Only the L2 unit 241 and the S2 unit 242 may read from the L2/S2 local vector register file 232. FIG. 10 illustrates the M2/N2/C local register file 233. The embodiment illustrated in FIG.
10 has 8 independent 512-bit wide vector registers. The preferred instruction coding (see FIG. 13) permits the M2/N2/C local register file 233 to include up to 16 registers. The embodiment of FIG. 10 implements only 8 registers to reduce circuit size and complexity. Each register of the M2/N2/C local vector register file 233 may be read from or written to as 64-bit scalar data, labeled BM0-BM7. Each register of the M2/N2/C local vector register file 233 may be read from or written to as 512-bit vector data, labeled VBM0-VBM7. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246) may write to the M2/N2/C local vector register file 233. Only the M2 unit 243, the N2 unit 244, and the C unit 245 may read from the M2/N2/C local vector register file 233. The provision of global register files accessible by all functional units of a side, and local register files accessible by only some of the functional units of a side, is a design choice. The present invention could be practiced employing only one type of register file corresponding to the disclosed global register files. The cross path 117 permits limited exchange of data between the scalar data path side A 115 and the vector data path side B 116. During each operational cycle, one 64-bit data word can be recalled from the global scalar register file 211 for use as an operand by one or more functional units of the vector data path side B 116, and one 64-bit data word can be recalled from the global vector register file 231 for use as an operand by one or more functional units of the scalar data path side A 115. Any scalar data path side A 115 functional unit (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) may read a 64-bit operand from the global vector register file 231. This 64-bit operand is the least significant bits of the 512-bit data in the accessed register of the global vector register file 231.
Plural scalar data path side A 115 functional units may employ the same 64-bit cross-path data as an operand during the same operational cycle. However, only one 64-bit operand is transferred from the vector data path side B 116 to the scalar data path side A 115 in any single operational cycle. Any vector data path side B 116 functional unit (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246) may read a 64-bit operand from the global scalar register file 211. If the corresponding instruction is a scalar instruction, the cross-path operand data is treated as any other 64-bit operand. If the corresponding instruction is a vector instruction, the upper 448 bits of the operand are zero-filled. Plural vector data path side B 116 functional units may employ the same 64-bit cross-path data as an operand during the same operational cycle. Only one 64-bit operand is transferred from the scalar data path side A 115 to the vector data path side B 116 in any single operational cycle. The stream engine 125 transfers data under certain restricted circumstances. The stream engine 125 controls two data streams. A stream consists of a sequence of elements of a particular type. Programs that operate on streams read the data sequentially, operating on each element in turn. Every stream has the following basic properties. The stream data has a well-defined beginning and ending in time. The stream data has a fixed element size and type throughout the stream. The stream data has a fixed sequence of elements; thus programs cannot seek randomly within the stream. The stream data is read-only while active. A program cannot write to a stream while simultaneously reading from it.
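These stream properties, together with the pair of stream head registers and the read and read/advance instructions introduced earlier, can be modeled with a small C sketch. All names here (`stream_t`, `stream_read0`, and so on) are hypothetical; the model only illustrates the non-destructive reads and the advance semantics, not the hardware.

```c
#include <stdint.h>
#include <stddef.h>

/* A stream is a fixed sequence of elements read in order. head0 holds
 * the most recent element; head1 holds the next one after it. */
typedef struct {
    const uint64_t *elements;   /* fetched stream data, fixed order     */
    size_t count;
    size_t next;                /* index that refills head1 on advance  */
    uint64_t head0, head1;      /* the pair of stream head registers    */
} stream_t;

static void stream_open(stream_t *s, const uint64_t *data, size_t n)
{
    s->elements = data; s->count = n; s->next = 2;
    s->head0 = n > 0 ? data[0] : 0;
    s->head1 = n > 1 ? data[1] : 0;
}

/* Plain stream reads: non-destructive, do not advance the stream. */
static uint64_t stream_read0(const stream_t *s) { return s->head0; }
static uint64_t stream_read1(const stream_t *s) { return s->head1; }

/* Advance: shift head1 into head0 and refill head1 from the sequence. */
static void stream_advance(stream_t *s)
{
    s->head0 = s->head1;
    s->head1 = s->next < s->count ? s->elements[s->next++] : 0;
}

/* Read/advance: supply data like the plain read, then step the stream. */
static uint64_t stream_read0_advance(stream_t *s)
{
    uint64_t v = s->head0;
    stream_advance(s);
    return v;
}
```

A double-width read in this model would simply return both `head0` and `head1` together, matching the third stream read instruction described above.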
Once a stream is opened, the stream engine 125: calculates the address; fetches the defined data type from the level two unified cache (which may require cache service from a higher level memory); performs data type manipulation such as zero extension, sign extension, and data element sorting/swapping (such as matrix transposition); and delivers the data directly to the programmed data register file within the CPU 110. The stream engine 125 is thus useful for real-time digital filtering operations on well-behaved data. The stream engine 125 frees the CPU of these memory fetch tasks, enabling other processing functions. The stream engine 125 provides the following benefits. The stream engine 125 permits multi-dimensional memory accesses. The stream engine 125 increases the available bandwidth to the functional units. The stream engine 125 minimizes the number of cache miss stalls, since the stream buffer bypasses the level one data cache 123. The stream engine 125 reduces the number of scalar operations required to maintain a loop. The stream engine 125 manages the address pointers. The stream engine 125 handles address generation, automatically freeing up the address generation instruction slots and the D1 unit 225 and the D2 unit 226 for other computations. The CPU 110 operates on an instruction pipeline. Instructions are fetched in instruction packets of fixed length, as further described below. All instructions require the same number of pipeline phases for fetch and decode but require a varying number of execute phases. FIG. 11 illustrates the following pipeline phases: program fetch phase 1110, dispatch and decode phase 1120, and execution phase 1130. The program fetch phase 1110 includes three stages for all instructions. The dispatch and decode phase 1120 includes three stages for all instructions.
Execution stage 1130 includes one to four levels, depending on the instruction.

Fetch stage 1110 includes program address generation stage 1111 (PG), program access stage 1112 (PA), and program receive stage 1113 (PR). During program address generation stage 1111 (PG), the program address is generated in the CPU and the read request is sent to the memory controller of the level one instruction cache L1I. During program access stage 1112 (PA), the level one instruction cache L1I processes the request, accesses the data in its memory and sends a fetch packet to the CPU boundary. During program receive stage 1113 (PR), the CPU registers the fetch packet.

Instructions are always fetched sixteen 32-bit wide slots at a time, forming a fetch packet. FIG. 12 illustrates sixteen instructions 1201 to 1216 of a single fetch packet. Fetch packets are aligned on 512-bit (16-word) boundaries. The preferred embodiment employs a fixed 32-bit instruction length. Fixed-length instructions are advantageous for several reasons. Fixed-length instructions enable easy decoder alignment. A properly aligned instruction fetch can load plural instructions into parallel instruction decoders. Such a properly aligned instruction fetch can be achieved by predetermined instruction alignment when stored in memory coupled with fixed instruction packet fetch (fetch packets are aligned on 512-bit boundaries). Aligned instruction fetch permits the parallel decoders to operate on instruction-sized fetched bits. Variable-length instructions require an initial step of locating each instruction boundary before each instruction can be decoded. A fixed-length instruction set generally permits a more regular layout of instruction fields.
This simplifies the construction of each decoder, which benefits a wide-issue VLIW central processor.

The execution of individual instructions is partially controlled by a p bit in each instruction. Preferably, this p bit is bit 0 of the 32-bit wide slot. The p bit determines whether an instruction executes in parallel with the next instruction. Instructions are scanned from lower to higher address. If the p bit of an instruction is 1, then the next following instruction (higher memory address) is executed in parallel with (in the same cycle as) that instruction. If the p bit of an instruction is 0, then the next following instruction is executed in the cycle after that instruction.

The CPU 110 and level one instruction cache L1I 121 pipelines are decoupled from each other. Fetch packet returns from the level one instruction cache L1I can take a different number of clock cycles depending on external circumstances, such as whether there is a hit in level one instruction cache 121 or a hit in level two combined cache 130. Therefore program access stage 1112 (PA) can take several clock cycles instead of one clock cycle as in the other stages.

The instructions executing in parallel constitute an execute packet. In a preferred embodiment, an execute packet can contain up to sixteen instructions. No two instructions in an execute packet may use the same functional unit. A slot is one of the following five types: 1) a self-contained instruction executed on one of the functional units of CPU 110 (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, D2 unit 226, L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246); 2) a unitless instruction such as a NOP (no operation) instruction or a multiple NOP instruction; 3) a branch instruction; 4) a constant field extension; and 5) a condition code extension.
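As an illustrative sketch (not the patent's hardware), the p-bit scan that groups a fetch packet into execute packets can be modeled as follows; the instruction-word list representation is an assumption made for the example:

```python
# Group a fetch packet's instructions into execute packets by scanning p-bits
# from lower to higher address. An instruction whose p bit (bit 0) is 1
# executes in parallel with the next instruction; p == 0 ends the packet.

def split_execute_packets(fetch_packet):
    """fetch_packet: list of 32-bit instruction words, lowest address first."""
    packets, current = [], []
    for word in fetch_packet:
        current.append(word)
        if word & 1 == 0:          # p == 0: this instruction closes the packet
            packets.append(current)
            current = []
    if current:                    # trailing instructions with p == 1
        packets.append(current)
    return packets

# Four instructions with p-bits 1, 1, 0, 0 form two execute packets (3 + 1).
words = [0x11, 0x21, 0x30, 0x40]
print([len(p) for p in split_execute_packets(words)])  # [3, 1]
```

The same scan underlies both the fetch-packet description here and the p-bit discussion of the instruction coding below.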
Some of these slot types are further explained below.

Dispatch and decode stage 1120 includes instruction dispatch to appropriate execution unit stage 1121 (DS), instruction pre-decode stage 1122 (D1), and instruction decode and operand read stage 1123 (D2). During instruction dispatch to appropriate execution unit stage 1121 (DS), the fetch packets are split into execute packets and assigned to the appropriate functional units. During instruction pre-decode stage 1122 (D1), the source registers, destination registers, and associated paths are decoded for the execution of the instructions in the functional units. During instruction decode and operand read stage 1123 (D2), more detailed unit decode is done and operands are read from the register files.

Execution stage 1130 includes execution stages 1131 to 1135 (E1 to E5). Different types of instructions require different numbers of these stages to complete their execution. These stages of the pipeline play an important role in understanding the device state at CPU cycle boundaries.

During execute 1 stage 1131 (E1), the conditions for the instructions are evaluated and operands are operated on. As illustrated in FIG. 11, execute 1 stage 1131 may receive operands from a stream buffer 1141 and one of the register files (shown schematically as 1142). For load and store instructions, address generation is performed and address modifications are written to a register file. For branch instructions, the branch fetch packet in the PG stage is affected. As illustrated in FIG. 11, load and store instructions access memory (shown schematically here as memory 1151). For single-cycle instructions, results are written to a destination register file. This assumes that any conditions for the instructions are evaluated as true.
If a condition is evaluated as false, the instruction does not write any results or have any pipeline operation after execute 1 stage 1131.

During execute 2 stage 1132 (E2), load instructions send the address to memory. Store instructions send the address and data to memory. Single-cycle instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 2-cycle instructions, results are written to a destination register file.

During execute 3 stage 1133 (E3), data memory accesses are performed. Any multiply instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 3-cycle instructions, results are written to a destination register file.

During execute 4 stage 1134 (E4), load instructions bring data to the CPU boundary. For 4-cycle instructions, results are written to a destination register file.

During execute 5 stage 1135 (E5), load instructions write data into a register. This is illustrated in FIG. 11 by the input from memory 1151 to execute 5 stage 1135.

FIG. 13 illustrates an example of instruction coding 1300 of functional unit instructions used by the present invention. Those skilled in the art would realize that other instruction codings are feasible and within the scope of the present invention. Each instruction consists of 32 bits and controls the operation of one of the individually controllable functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, D2 unit 226, L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245, and P unit 246). The bit fields are defined as follows.

The creg field 1301 (bits 29 to 31) and the z bit 1302 (bit 28) are optional fields used in conditional instructions. These bits are used in conditional instructions to identify the predicate register and the condition.
The z bit 1302 (bit 28) indicates whether the predication is based upon zero or not zero in the predicate register. If z=1, the test is for equality with zero. If z=0, the test is for nonzero. The case of creg=0 and z=0 is treated as always true to allow unconditional instruction execution. The creg field 1301 and the z field 1302 are encoded in the instruction as shown in Table 1.

Table 1

Execution of a conditional instruction is conditional upon the value stored in the specified data register. This data register is in the global scalar register file 211 for all functional units. Note that "z" in the z bit column refers to the zero/not zero comparison selection noted above and "x" is a don't care state. This coding can only specify a subset of the 16 global registers as predicate registers. This selection was made to preserve bits in the instruction coding. Note that unconditional instructions do not have these optional bits. For unconditional instructions, these bits (28 to 31) in fields 1301 and 1302 are preferably used as additional opcode bits.

The dst field 1303 (bits 23 to 27) specifies a register in a corresponding register file as the destination of the instruction results.

The src2/cst field 1304 (bits 18 to 22) has several meanings depending on the instruction opcode field (bits 4 to 12 for all instructions and additionally bits 28 to 31 for unconditional instructions). The first meaning specifies a register of a corresponding register file as the second operand. The second meaning is an immediate constant.
Depending on the instruction type, this is treated as an unsigned integer that is zero extended to the specified data length or as a signed integer that is sign extended to the specified data length.

The src1 field 1305 (bits 13 to 17) specifies a register in a corresponding register file as the first source operand.

The opcode field 1306 (bits 4 to 12) for all instructions (and additionally bits 28 to 31 for unconditional instructions) specifies the type of instruction and designates appropriate instruction options. This includes unambiguous designation of the functional unit used and the operation performed. A detailed explanation of the opcode, except for the instruction options detailed below, is beyond the scope of the present invention.

The e bit 1307 (bit 2) is used only for immediate constant instructions where the constant may be extended. If e=1, the immediate constant is extended in a manner detailed below. If e=0, the immediate constant is not extended. In that case the immediate constant is specified by the src2/cst field 1304 (bits 18 to 22). Note that the e bit 1307 is used for only some instructions. Accordingly, with proper coding the e bit 1307 can be omitted from instructions which do not need it, and this bit used as an additional opcode bit.

The s bit 1308 (bit 1) designates scalar data path side A 115 or vector data path side B 116. If s=0, scalar data path side A 115 is selected. This limits the functional units to the L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226 and the corresponding register files illustrated in FIG. 2. Similarly, s=1 selects vector data path side B 116, limiting the functional units to the L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, and P unit 246 and the corresponding register files illustrated in FIG. 2.

The p bit 1309 (bit 0) marks the execute packet.
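A hypothetical decoder sketch following the bit positions listed above can make the field layout concrete; the dictionary representation is an assumption of the example, not the patent's decode hardware:

```python
# Extract the FIG. 13 instruction fields from a 32-bit word: creg (bits 29-31),
# z (bit 28), dst (bits 23-27), src2/cst (bits 18-22), src1 (bits 13-17),
# opcode (bits 4-12), e (bit 2), s (bit 1), p (bit 0).

def decode_fields(word):
    return {
        "creg":   (word >> 29) & 0x7,
        "z":      (word >> 28) & 0x1,
        "dst":    (word >> 23) & 0x1F,
        "src2":   (word >> 18) & 0x1F,
        "src1":   (word >> 13) & 0x1F,
        "opcode": (word >> 4)  & 0x1FF,   # 9-bit opcode field
        "e":      (word >> 2)  & 0x1,
        "s":      (word >> 1)  & 0x1,
        "p":      word & 0x1,
    }

# An instruction with dst = 5, z = 1, p = 1 (all other fields zero):
fields = decode_fields((5 << 23) | (1 << 28) | 1)
print(fields["dst"], fields["z"], fields["p"])  # 5 1 1
```

Note that bit 3 is not covered by any of the fields named above; the coding description leaves it unspecified here.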
The p bit determines whether the instruction executes in parallel with the following instruction. The p bits are scanned from lower to higher address. If p=1 for the current instruction, then the next instruction executes in parallel with the current instruction. If p=0 for the current instruction, then the next instruction executes in the cycle after the current instruction. All instructions executing in parallel constitute an execute packet. An execute packet can contain up to twelve instructions. Each instruction in an execute packet must use a different functional unit.

There are two different condition code extension slots. Each execute packet can contain one each of these unique 32-bit condition code extension slots, which contain the 4-bit creg/z fields for the instructions in the same execute packet. FIG. 14 illustrates the coding for condition code extension slot 0 and FIG. 15 illustrates the coding for condition code extension slot 1.

FIG. 14 illustrates the coding for condition code extension slot 0 having 32 bits. Field 1401 (bits 28 to 31) specifies 4 creg/z bits assigned to the L1 unit 221 instruction in the same execute packet. Field 1402 (bits 24 to 27) specifies 4 creg/z bits assigned to the L2 unit 241 instruction in the same execute packet. Field 1403 (bits 20 to 23) specifies 4 creg/z bits assigned to the S1 unit 222 instruction in the same execute packet. Field 1404 (bits 16 to 19) specifies 4 creg/z bits assigned to the S2 unit 242 instruction in the same execute packet. Field 1405 (bits 12 to 15) specifies 4 creg/z bits assigned to the D1 unit 225 instruction in the same execute packet. Field 1406 (bits 8 to 11) specifies 4 creg/z bits assigned to the D2 unit 226 instruction in the same execute packet. Field 1407 (bits 6 and 7) is unused/reserved. Field 1408 (bits 0 to 5) is coded with a set of unique bits (CCEX0) to identify condition code extension slot 0.
Once the unique ID of condition code extension slot 0 is detected, the corresponding creg/z bits are employed to control the conditional execution of any L1 unit 221, L2 unit 241, S1 unit 222, S2 unit 242, D1 unit 225, and D2 unit 226 instruction in the same execute packet. These creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits), the corresponding bits in condition code extension slot 0 override the condition code bits in the instruction. Note that no execute packet can have more than one instruction directed to a particular execution unit, and no execute packet of instructions can contain more than one condition code extension slot 0. Thus the mapping of creg/z bits to functional unit instructions is unambiguous. Setting the creg/z bits equal to "0000" makes the instruction unconditional. Thus a properly coded condition code extension slot 0 can make some corresponding instructions conditional and some unconditional.

FIG. 15 illustrates the coding for condition code extension slot 1 having 32 bits. Field 1501 (bits 28 to 31) specifies 4 creg/z bits assigned to the M1 unit 223 instruction in the same execute packet. Field 1502 (bits 24 to 27) specifies 4 creg/z bits assigned to the M2 unit 243 instruction in the same execute packet. Field 1503 (bits 20 to 23) specifies 4 creg/z bits assigned to the C unit 245 instruction in the same execute packet. Field 1504 (bits 16 to 19) specifies 4 creg/z bits assigned to the N1 unit 224 instruction in the same execute packet. Field 1505 (bits 12 to 15) specifies 4 creg/z bits assigned to the N2 unit 244 instruction in the same execute packet. Field 1506 (bits 6 to 11) is unused/reserved. Field 1507 (bits 0 to 5) is coded with a set of unique bits (CCEX1) to identify condition code extension slot 1.
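As a sketch of the slot 0 override described above (the packet representation and field names are assumptions of the example, not the patent's hardware):

```python
# Condition code extension slot 0 carries one 4-bit creg/z field per unit,
# packed from bit 31 downward in the order L1, L2, S1, S2, D1, D2. Each field
# overrides the creg/z bits of that unit's instruction in the same execute
# packet; "0000" makes the instruction unconditional (always true).

UNITS_SLOT0 = ["L1", "L2", "S1", "S2", "D1", "D2"]   # fields 1401..1406

def apply_ccex0(slot_word, packet):
    """packet: dict mapping unit name -> instruction dict with creg/z keys."""
    for i, unit in enumerate(UNITS_SLOT0):
        bits = (slot_word >> (28 - 4 * i)) & 0xF     # this unit's 4-bit field
        if unit in packet:
            packet[unit]["creg"] = bits >> 1         # upper 3 bits: register
            packet[unit]["z"] = bits & 1             # low bit: zero/not-zero
    return packet

# Slot word with field 1403 (S1, bits 20-23) set to 0b0101: creg=2, z=1.
pkt = apply_ccex0(0b0101 << 20, {"S1": {"creg": 0, "z": 0}})
print(pkt["S1"])  # {'creg': 2, 'z': 1}
```

Condition code extension slot 1 works identically with the M1, M2, C, N1, N2 field order of FIG. 15.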
Once the unique ID of condition code extension slot 1 is detected, the corresponding creg/z bits are employed to control the conditional execution of any M1 unit 223, M2 unit 243, C unit 245, N1 unit 224, and N2 unit 244 instruction in the same execute packet. These creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits), the corresponding bits in condition code extension slot 1 override the condition code bits in the instruction. Note that no execute packet can have more than one instruction directed to a particular execution unit, and no execute packet of instructions can contain more than one condition code extension slot 1. Thus the mapping of creg/z bits to functional unit instructions is unambiguous. Setting the creg/z bits equal to "0000" makes the instruction unconditional. Thus a properly coded condition code extension slot 1 can make some instructions conditional and some unconditional.

As described above in conjunction with FIG. 13, both condition code extension slot 0 and condition code extension slot 1 may include a p bit to define an execute packet. In the preferred embodiment, as illustrated in FIGS. 14 and 15, condition code extension slot 0 and condition code extension slot 1 preferably have bit 0 (the p bit) always encoded as 1. Thus neither condition code extension slot 0 nor condition code extension slot 1 can be in the last instruction slot of an execute packet.

There are two different constant extension slots. Each execute packet can contain one each of these unique 32-bit constant extension slots, which contain 27 bits to be concatenated as high-order bits with the 5-bit constant field 1304 to form a 32-bit constant. As noted in the instruction coding description above, only some instructions define the src2/cst field 1304 as a constant rather than a source register identifier.
At least some of those instructions may use a constant extension slot to extend the constant to 32 bits.

FIG. 16 illustrates the fields of constant extension slot 0. Each execute packet may include one instance of constant extension slot 0 and one instance of constant extension slot 1. FIG. 16 illustrates that constant extension slot 0 1600 includes two fields. Field 1601 (bits 5 to 31) constitutes the 27 most significant bits of an extended 32-bit constant that includes the target instruction src2/cst field 1304 as the five least significant bits. Field 1602 (bits 0 to 4) is coded with a set of unique bits (CSTX0) to identify constant extension slot 0. In the preferred embodiment, constant extension slot 0 1600 can only be used to extend the constant of one of an L1 unit 221 instruction, data in a D1 unit 225 instruction, an offset in an S2 unit 242 instruction, an offset in a D2 unit 226 instruction, an M2 unit 243 instruction, an N2 unit 244 instruction, a branch instruction, or a C unit 245 instruction in the same execute packet. Constant extension slot 1 is similar to constant extension slot 0 except that bits 0 to 4 are coded with a set of unique bits (CSTX1) to identify constant extension slot 1. In the preferred embodiment, constant extension slot 1 can only be used to extend the constant of one of an L2 unit 241 instruction, data in a D2 unit 226 instruction, an offset in an S1 unit 222 instruction, an offset in a D1 unit 225 instruction, an M1 unit 223 instruction, or an N1 unit 224 instruction in the same execute packet.

Constant extension slot 0 and constant extension slot 1 are used as follows. The target instruction must be of a type permitting constant specification. As known in the art, this is implemented by replacing one input operand register specification field with the least significant bits of the constant as described above with respect to the src2/cst field 1304.
Instruction decoder 113 determines this case, known as an immediate field, from the instruction opcode bits. The target instruction also includes one constant extension bit (e bit 1307) dedicated to signaling whether the specified constant is not extended (preferably constant extension bit=0) or extended (preferably constant extension bit=1). If instruction decoder 113 detects constant extension slot 0 or constant extension slot 1, it further checks the other instructions within that execute packet for an instruction corresponding to the detected constant extension slot. A constant extension is made only if one corresponding instruction has a constant extension bit (e bit 1307) equal to 1.

FIG. 17 is a partial block diagram 1700 illustrating constant extension. FIG. 17 assumes that instruction decoder 113 detects a constant extension slot and a corresponding instruction in the same execute packet. Instruction decoder 113 supplies the 27 extension bits from the constant extension slot (bit field 1601) and the 5 constant bits (bit field 1305) from the corresponding instruction to concatenator 1701. Concatenator 1701 forms a single 32-bit word from these two parts. In the preferred embodiment, the 27 extension bits from the constant extension slot (bit field 1601) are the most significant bits and the 5 constant bits (bit field 1305) are the least significant bits. This combined 32-bit word is supplied to one input of multiplexer 1702. The 5 constant bits from the corresponding instruction field 1305 supply a second input to multiplexer 1702. Selection of multiplexer 1702 is controlled by the status of the constant extension bit. If the constant extension bit (e bit 1307) is 1 (extended), multiplexer 1702 selects the concatenated 32-bit input.
If the constant extension bit is 0 (not extended), multiplexer 1702 selects the 5 constant bits from the corresponding instruction field 1305. Multiplexer 1702 supplies this output to an input of sign extension unit 1703.

Sign extension unit 1703 forms the final operand value from the input from multiplexer 1702. Sign extension unit 1703 receives the control inputs scalar/vector and data size. The scalar/vector input indicates whether the corresponding instruction is a scalar instruction or a vector instruction. The functional units of data path side A 115 (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, and D2 unit 226) can only perform scalar instructions. Any instruction directed to one of these functional units is a scalar instruction. Data path side B 116 functional units L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, and C unit 245 may perform scalar instructions or vector instructions. Instruction decoder 113 determines from the opcode bits whether the instruction is a scalar instruction or a vector instruction. P unit 246 may only perform scalar instructions. The data size may be 8 bits (byte B), 16 bits (half-word H), 32 bits (word W), or 64 bits (double word D).

Table 2 lists the operation of sign extension unit 1703 for the various options.

Table 2

As described above in conjunction with FIG. 13, both constant extension slot 0 and constant extension slot 1 may include a p bit to define an execute packet. In the preferred embodiment, as in the case of the condition code extension slots, constant extension slot 0 and constant extension slot 1 preferably have bit 0 (the p bit) always encoded as 1.
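A minimal sketch of the constant extension path of FIG. 17 (concatenator 1701, multiplexer 1702, and integer sign/zero extension); the scalar/vector handling of Table 2 is omitted, and the function signature is an assumption of the example:

```python
# Form the final constant operand: if e == 1, concatenate the 27 extension
# bits above the instruction's 5-bit constant field; then sign- or
# zero-extend the result to the operand width.

def extend_constant(ext27, cst5, e_bit, signed, width):
    value = ((ext27 << 5) | cst5) if e_bit else cst5   # concatenator + mux
    nbits = 32 if e_bit else 5                          # bits that carry meaning
    if signed and (value >> (nbits - 1)) & 1:           # sign bit set?
        value -= 1 << nbits                             # sign-extend as negative
    return value & ((1 << width) - 1)                   # wrap to operand width

# e = 0, unsigned: the 5-bit constant passes through unchanged.
print(extend_constant(0, 0b10110, e_bit=0, signed=False, width=64))  # 22
```

With e=1 the 27 extension bits become the most significant bits of the 32-bit constant, matching the concatenation order stated for the preferred embodiment.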
Since the p bit is always coded as 1, neither constant extension slot 0 nor constant extension slot 1 can be in the last instruction slot of an execute packet.

It is technically feasible for an execute packet to include a constant extension slot 0 or 1 and more than one corresponding instruction marked constant extended (e bit=1). For constant extension slot 0, this would mean that more than one of an L1 unit 221 instruction, data in a D1 unit 225 instruction, an offset in an S2 unit 242 instruction, an offset in a D2 unit 226 instruction, an M2 unit 243 instruction, or an N2 unit 244 instruction in an execute packet has an e bit of 1. For constant extension slot 1, this would mean that more than one of an L2 unit 241 instruction, data in a D2 unit 226 instruction, an offset in an S1 unit 222 instruction, an offset in a D1 unit 225 instruction, an M1 unit 223 instruction, or an N1 unit 224 instruction in an execute packet has an e bit of 1. Supplying the same constant extension to more than one instruction is not expected to be a useful function. Accordingly, in one embodiment, instruction decoder 113 may determine that this case is an invalid operation and not supported. Alternately, this combination may be supported with the extension bits of the constant extension slot applied to each corresponding functional unit instruction marked constant extended.

Special vector predicate instructions use registers in predicate register file 234 to control vector operations. In the current embodiment, all these SIMD vector predicate instructions operate on a selected data size. The data sizes may include byte (8-bit) data, half-word (16-bit) data, word (32-bit) data, double word (64-bit) data, quad word (128-bit) data, and half-vector (256-bit) data. Each bit of the predicate register controls whether a SIMD operation is performed upon the corresponding byte of data.
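The per-bit predication just described can be sketched as follows; the byte-wise add and the leave-destination-unchanged behavior for cleared bits are illustrative assumptions, not the patent's definition:

```python
# Byte-wise SIMD add gated by a predicate register: bit i of the predicate
# controls whether the operation result is produced for byte i.

def predicated_add_bytes(a, b, predicate):
    """a, b: lists of byte values (0..255); predicate bit i gates byte i."""
    out = []
    for i, (x, y) in enumerate(zip(a, b)):
        if (predicate >> i) & 1:            # predicate bit set: perform the op
            out.append((x + y) & 0xFF)      # byte-wise wraparound add
        else:
            out.append(x)                   # bit clear: byte left unchanged
    return out

# Predicate 0b0101 enables bytes 0 and 2 only.
print(predicated_add_bytes([1, 2, 3, 4], [10, 10, 10, 10], 0b0101))
# [11, 2, 13, 4]
```

For larger selected data sizes, several adjacent predicate bits would gate each element; only the byte case is shown here.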
The operations of P unit 246 permit a variety of compound vector SIMD operations based upon more than one vector comparison. For example, a range determination can be made using two comparisons. A candidate vector is compared with a first vector reference having the minimum of the range packed within a first data register. A second comparison of the candidate vector is made with a second reference vector having the maximum of the range packed within a second data register. Logical combinations of the two resulting predicate registers would permit a vector conditional operation to determine whether each data part of the candidate vector is within range or out of range.

L1 unit 221, S1 unit 222, L2 unit 241, S2 unit 242, and C unit 245 often operate in a single instruction multiple data (SIMD) mode. In this SIMD mode, the same instruction is applied to packed data from the two operands. Each operand holds plural data elements disposed in predetermined slots. SIMD operation is enabled by carry control at the data boundaries. Such carry control enables operations on varying data widths.

FIG. 18 illustrates the carry control. AND gate 1801 receives the carry output of bit N within the operand-wide arithmetic logic unit (64 bits for scalar data path side A 115 functional units and 512 bits for vector data path side B 116 functional units). AND gate 1801 also receives a carry control signal, which is explained further below. The output of AND gate 1801 is supplied to the carry input of bit N+1 of the operand-wide arithmetic logic unit. AND gates such as AND gate 1801 are disposed between every pair of bits at a possible data boundary. For example, for 8-bit data such an AND gate would be between bits 7 and 8, bits 15 and 16, bits 23 and 24, etc. Each such AND gate receives a corresponding carry control signal.
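A sketch (in software, not hardware) of these carry-control signals, whose values correspond to those enumerated in Table 3 below; the mask representation is an assumption of the example:

```python
# The datapath is split into 8-bit sections with an AND gate at each section
# boundary. A gate's control signal is 1 (carry passes) only when the
# boundary falls inside a data element of the selected size; element
# boundaries get 0, which blocks carry propagation between elements.

def carry_control_mask(element_bits, width=512):
    """Return boundary signals as an int; bit i is the gate after bit 8*(i+1)."""
    mask = 0
    for i in range(width // 8 - 1):          # 63 boundaries for a 512-bit ALU
        boundary = 8 * (i + 1)
        if boundary % element_bits != 0:     # inside an element: carry passes
            mask |= 1 << i
    return mask

# 32-bit ALU (3 boundaries) for illustration:
print(bin(carry_control_mask(8, 32)))    # 0b0   : all carries blocked
print(bin(carry_control_mask(16, 32)))   # 0b101 : alternate boundaries pass
```

For a 512-bit operand this produces the 63 signals described below; the most significant bit's carry out needs no gate.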
If the data size is the minimum, each carry control signal is 0, effectively blocking carry transmission between the adjacent bits. If the selected data size requires two or more arithmetic logic unit sections, the corresponding carry control signals are 1. Table 3 below shows example carry control signals for the case of a 512-bit wide operand such as used by vector data path side B 116 functional units, which may be divided into sections of 8 bits, 16 bits, 32 bits, 64 bits, 128 bits, or 256 bits. In Table 3, the upper 32 bits control the upper (bits 128 to 511) carries and the lower 32 bits control the lower (bits 0 to 127) carries. No control of the carry output of the most significant bit is needed, thus only 63 carry control signals are required.

Table 3

It is typical in the art to operate on data sizes that are integer powers of two (2^N). However, this carry control technique is not limited to integer powers of 2. One skilled in the art would understand how to apply this technique to other data sizes and other operand widths.

FIG. 19 illustrates a conceptual view of the stream engine of the present invention. FIG. 19 illustrates the processing of a single stream. Stream engine 1900 includes stream address generator 1901. Stream address generator 1901 sequentially generates addresses of the elements of the stream and supplies these element addresses to system memory 1910. Memory 1910 recalls data stored at the element addresses (data elements) and supplies these data elements to data first-in-first-out (FIFO) memory 1902. Data FIFO 1902 provides buffering between memory 1910 and CPU 1920. Data formatter 1903 receives the data elements from data FIFO memory 1902 and provides data formatting according to the stream definition. This process is described below. Stream engine 1900 supplies the formatted data elements from data formatter 1903 to CPU 1920.
The program running on CPU 1920 consumes the stream data and generates outputs.

Stream elements typically reside in normal memory. The memory itself imposes no particular structure upon the stream. Programs define streams and therefore impose structure by specifying the following stream attributes: the address of the first element of the stream; the size and type of the elements in the stream; the formatting of data in the stream; and the address sequence associated with the stream.

The stream engine defines an address sequence for the elements of the stream in terms of a pointer walking through memory. A multiple-level nested loop controls the path the pointer takes. An iteration count for a loop level indicates the number of times that level repeats. A dimension gives the distance between pointer positions of that loop level.

In a basic forward stream, the innermost loop always consumes physically contiguous elements from memory. The implicit dimension of this innermost loop is one element. The pointer itself moves from element to element in consecutive, increasing order. In each level outside the inner loop, that loop moves the pointer to a new location based on the size of that loop level's dimension.

This form of addressing allows programs to specify regular paths through memory with a small number of parameters. Table 4 lists the addressing parameters of a basic stream.

Table 4

The definition above maps consecutive elements of the stream to increasing addresses in memory. This works well for most (but not all) algorithms. Some algorithms are better served by reading elements in decreasing memory address order, reverse stream addressing. For example, a discrete convolution computes vector dot products of the form (f*g)[t] = Σ f[x]·g[t−x], summed over x. In most DSP code, f[] and g[] represent arrays in memory. For each output, the algorithm reads f[] in the forward direction but reads g[] in the reverse direction.
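The nested-loop addressing described above (Table 4's per-level iteration count and dimension) can be sketched as a generator of element addresses; the parameter names and list representation are assumptions of the example:

```python
# Generate the element addresses of a basic stream: the innermost loop steps
# through icnt0 contiguous elements; each outer loop level (icnt, dim) offsets
# the pointer by dim bytes per iteration.

def stream_addresses(base, elem_bytes, icnt0, loops=()):
    """loops: outer-to-inner sequence of (iteration_count, dim_bytes) pairs."""
    def walk(addr, remaining):
        if not remaining:                         # innermost: contiguous elements
            for i in range(icnt0):
                yield addr + i * elem_bytes
        else:
            (icnt, dim), rest = remaining[0], remaining[1:]
            for i in range(icnt):                 # outer level: offset by dim
                yield from walk(addr + i * dim, rest)
    return list(walk(base, tuple(loops)))

# A 2-D stream: 3 rows of 4 one-byte elements, rows spaced 16 bytes apart.
print(stream_addresses(0x1000, 1, 4, [(3, 16)]))
# [4096, 4097, 4098, 4099, 4112, 4113, 4114, 4115, 4128, 4129, 4130, 4131]
```

A negative dimension or element stride would model the reverse (decreasing-address) stream addressing discussed in the surrounding text.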
Practical filters limit the range of the indices [x] and [t−x] to a finite number of elements. To support this pattern, the stream engine supports reading elements in decreasing address order.

Matrix multiplication presents a unique problem to the stream engine. Each element in the matrix product is a vector dot product between a row from the first matrix and a column from the second. Programs typically store matrices all in row-major or all in column-major order. Row-major order stores all the elements of a single row contiguously in memory. Column-major order stores all the elements of a single column contiguously in memory. Matrices are typically stored in the same order as the default array order of the language. As a result, only one of the two matrices in a matrix multiplication maps onto the stream engine's 2-dimensional stream definition. In a typical example, a first index steps through columns on one array and rows on the other array. This problem is not unique to the stream engine; matrix multiplication's access pattern fits poorly with most general-purpose memory hierarchies. Some software libraries transpose one of the two matrices so that both are accessed row-wise (or column-wise) during the multiplication. The stream engine supports implicit matrix transposition with transposed streams. Transposed streams avoid the cost of explicitly transforming the data in memory. Instead of accessing data in strictly consecutive-element order, the stream engine effectively interchanges the inner two loop dimensions in its traversal order, fetching elements along the second dimension into contiguous vector lanes.

This algorithm works, but is impractical to implement for small element sizes. Some algorithms work on matrix tiles that are multiple columns and rows together. Therefore, the stream engine defines a separate transposition granularity.
The hardware imposes a minimum granularity. The transpose granularity must also be at least as large as the element size. Transposition granularity causes the stream engine to fetch one or more consecutive elements from dimension 0 before moving along dimension 1. When the granularity equals the element size, this results in fetching a single column from a row-major array. Otherwise, the granularity specifies fetching two, four, or more columns at a time from a row-major array. This also applies to column-major layout by exchanging row and column in the description. A parameter GRANULE indicates the transposition granularity in bytes.

Another common matrix multiplication technique exchanges the innermost two loops of the matrix multiply. The resulting inner loop no longer reads down the column of one matrix while reading across the row of the other. For example, the algorithm may hoist one term outside the inner loop, replacing it with a scalar value. On a vector machine, the innermost loop can be implemented very efficiently with a single scalar-by-vector multiply followed by a vector add. The DSP CPU of the present invention lacks a scalar-by-vector multiply. Instead, programs must duplicate the scalar value across the length of the vector and use a vector-by-vector multiply. The stream engine of the present invention directly supports this and related use models with an element duplication mode. In this mode, the stream engine reads a granule smaller than the full vector size and replicates that granule to fill the next vector output.

The stream engine treats each complex number as a single element with two sub-elements that give the real and imaginary (rectangular) or magnitude and angle (polar) portions of the complex number. Not all programs or peripherals agree on what order these sub-elements should appear in memory.
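A sketch of the transposed fetch order with a transpose granularity, under the simplifying assumption that the granule is expressed in whole elements of a row-major array:

```python
# Transposed-stream extraction: fetch `granule` contiguous elements along
# dimension 0, then step down dimension 1, effectively swapping the two
# inner loop dimensions of the traversal.

def transposed_order(rows, cols, granule=1):
    """Yield (row, col) fetch order for a row-major rows x cols array."""
    order = []
    for col0 in range(0, cols, granule):          # walk along dimension 0
        for row in range(rows):                   # then down dimension 1
            for col in range(col0, col0 + granule):
                order.append((row, col))          # granule contiguous elements
    return order

# granule = 1 on a 2x3 row-major array reads it column by column.
print(transposed_order(2, 3, 1))
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

With granule = 2, two columns are fetched at a time before moving down the rows, matching the multi-column tiling described above.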
Therefore, the streaming engine offers the ability to swap the two sub-elements of a complex number at no cost. This feature swaps the halves of an element without interpreting the element's contents and can be used to swap pairs of sub-elements of any type, not just complex numbers.

Algorithms generally prefer to work at high precision, but higher-precision values require more storage and bandwidth than lower-precision values. Commonly, programs store data in memory at low precision, promote those values to a higher precision for calculation, and then demote the values back to the lower precision for storage. The streaming engine supports this directly by allowing algorithms to specify one level of type promotion. In the preferred embodiment of the present invention, every sub-element may be promoted to the next larger type size with either sign extension or zero extension for integer types. It is also feasible for the streaming engine to support floating-point promotion, promoting 16-bit and 32-bit floating-point values to 32-bit and 64-bit formats, respectively.

The streaming engine defines a stream as a discrete sequence of elements; the DSP CPU consumes elements packed contiguously into vectors. Vectors resemble streams in that they contain multiple homogeneous elements with some implicit sequence. Because the streaming engine reads streams but the DSP CPU consumes vectors, the streaming engine must map streams onto vectors in a consistent way.

Vectors consist of equal-sized channels, each containing a sub-element. The DSP CPU designates the rightmost channel of a vector as channel 0, regardless of the device's current endian mode. Channel numbers increase from right to left. The actual number of channels within a vector varies with the length of the vector and the data size of the sub-elements.

FIG. 20 illustrates a first example of channel allocation in a vector.
The vector 2000 is divided into 8 64-bit channels (8 channels x 64 bits = 512-bit vector length). Channel 0 includes bits 0 to 63; channel 1 includes bits 64 to 127; channel 2 includes bits 128 to 191; channel 3 includes bits 192 to 255; channel 4 includes bits 256 to 319; channel 5 includes bits 320 to 383; channel 6 includes bits 384 to 447; and channel 7 includes bits 448 to 511.

FIG. 21 illustrates a second example of channel allocation in a vector. Vector 2100 is divided into 16 32-bit channels (16 channels x 32 bits = 512-bit vector length). Channel 0 includes bits 0 to 31; channel 1 includes bits 32 to 63; channel 2 includes bits 64 to 95; channel 3 includes bits 96 to 127; channel 4 includes bits 128 to 159; channel 5 includes bits 160 to 191; channel 6 includes bits 192 to 223; channel 7 includes bits 224 to 255; channel 8 includes bits 256 to 287; channel 9 includes bits 288 to 319; channel 10 includes bits 320 to 351; channel 11 includes bits 352 to 383; channel 12 includes bits 384 to 415; channel 13 includes bits 416 to 447; channel 14 includes bits 448 to 479; and channel 15 includes bits 480 to 511.

The streaming engine maps the innermost stream dimension directly to vector channels. It maps earlier elements within that dimension to lower channel numbers and later elements to higher channel numbers. This is true whether the particular stream advances in increasing or decreasing address order. Whatever order the stream defines, the streaming engine deposits elements in vectors in increasing-channel order. For non-complex data, it places the first element in channel 0 of the first vector the CPU fetches, the second element in channel 1, and so on. For complex data, the streaming engine places the first element in channels 0 and 1, the second element in channels 2 and 3, and so on.
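The channel boundaries enumerated above follow directly from the channel number and channel width. A small sketch of that arithmetic (the function name is illustrative, not part of the architecture):

```python
def channel_bits(channel, channel_width_bits):
    """Bit range (lo, hi) occupied by a channel of a 512-bit vector.

    Channel 0 is the rightmost (least significant) channel, matching
    the numbering shown in FIGS. 20 and 21.
    """
    lo = channel * channel_width_bits
    return lo, lo + channel_width_bits - 1

# 8 channels of 64 bits: channel 1 spans bits 64..127.
# 16 channels of 32 bits: channel 15 spans bits 480..511.
```

The same formula reproduces every entry of both examples, e.g. channel 5 of the 64-bit layout spans bits 320 to 383.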
Sub-elements within an element retain the same relative ordering regardless of stream direction. For non-swapped complex elements, this places the sub-element with the lower address of each pair in an even-numbered channel and the sub-element with the higher address of each pair in an odd-numbered channel. Swapped complex elements reverse this mapping.

The streaming engine fills each vector the CPU fetches with as many elements as it can from the innermost stream dimension. If the innermost dimension is not a multiple of the vector length, the streaming engine pads that dimension out to a multiple of the vector length with zeros. Thus, for higher-dimensional streams, the first element from each iteration of an outer dimension arrives in channel 0 of a vector. The streaming engine always maps the innermost dimension to consecutive channels in a vector. For transposed streams, the innermost dimension consists of groups of sub-elements along dimension 1, not dimension 0, because transposition exchanges these two dimensions.

Two-dimensional streams exhibit much greater variety than one-dimensional streams. A basic two-dimensional stream extracts a smaller rectangle from a larger rectangle. A transposed two-dimensional stream reads the rectangle column-by-column instead of row-by-row. A looping stream, in which the second dimension overlaps the first, executes a finite impulse response (FIR) filter: it either iterates repeatedly over the filter taps or reads input samples through a sliding window.

FIG. 22 illustrates a basic two-dimensional stream. The inner two dimensions, represented by ELEM_BYTES, ICNT0, DIM1, and ICNT1, give sufficient flexibility to describe extracting a smaller rectangle 2220 having dimensions 2221 and 2222 from a larger rectangle 2210 having dimensions 2211 and 2212.
In this example, rectangle 2220 is a 9 x 13 rectangle of 64-bit values and rectangle 2210 is the larger 11 x 19 rectangle. The following stream parameters define this stream:

ICNT0 = 9
ELEM_BYTES = 8
ICNT1 = 13
DIM1 = 88 (11 times 8)

Thus the iteration count in dimension 0 (2221) is 9. The iteration count in dimension 1 (2222) is 13. Note that ELEM_BYTES only scales the innermost dimension. The first dimension has ICNT0 elements of size ELEM_BYTES. The stream address generator does not scale the outer dimensions. Therefore DIM1 = 88, which is 11 elements scaled by 8 bytes each.

FIG. 23 illustrates the order of elements within this example stream. The streaming engine fetches the elements of the stream in the order illustrated as order 2300. The first 9 elements come from the first row of rectangle 2220, left to right in hops 1 through 8. Elements 10 through 24 come from the second row, and so on. When the stream moves from the 9th element to the 10th element (hop 9 in FIG. 23), the streaming engine computes the new location based on the pointer's position at the start of the inner loop, not where the pointer ended up at the end of the first dimension. This makes DIM1 independent of ELEM_BYTES and ICNT0. DIM1 always represents the distance between the first bytes of each consecutive row.

Transposed streams access along dimension 1 before dimension 0. The following examples illustrate a pair of transposed streams with varying transposition granularity. FIG. 24 illustrates extracting a smaller rectangle 2420 (12 x 8) having dimensions 2421 and 2422 from a larger rectangle 2410 (14 x 13) having dimensions 2411 and 2412. In FIG. 24, ELEM_BYTES equals 2.

FIG. 25 illustrates how the streaming engine would fetch the stream of this example with a transposition granularity of 4 bytes.
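The address arithmetic of the basic two-dimensional stream above can be sketched as follows. This is an illustrative model of the addressing rules (base address 0 is assumed for simplicity), not the hardware address generator.

```python
def element_addresses(base, elem_bytes, icnt0, icnt1, dim1):
    """Byte address of each element of a basic 2-D stream, in fetch order.

    DIM1 is a byte distance between the first bytes of successive rows;
    it is NOT scaled by ELEM_BYTES. Only dimension 0 is scaled.
    """
    addrs = []
    for i1 in range(icnt1):
        # Pointer position at the start of the inner loop, not where the
        # previous inner loop ended.
        row_start = base + i1 * dim1
        for i0 in range(icnt0):
            addrs.append(row_start + i0 * elem_bytes)
    return addrs

# The example stream: a 9 x 13 rectangle of 64-bit values inside an
# 11-element-wide array, so DIM1 = 11 * 8 = 88.
addrs = element_addresses(0, elem_bytes=8, icnt0=9, icnt1=13, dim1=88)
# The 10th element (first of the second row) starts at byte 88, not 72.
```

This makes concrete why DIM1 is independent of ELEM_BYTES and ICNT0: the hop to the next row is taken from the start of the previous row.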
Fetch pattern 2500 fetches pairs of elements from each row (since the granularity of 4 is twice the ELEM_BYTES of 2), but otherwise moves down the columns. Once it reaches the bottom of a pair of columns, it repeats this pattern with the next pair of columns.

FIG. 26 illustrates how the streaming engine would fetch the stream of this example with a transposition granularity of 8 bytes. The overall structure remains the same. As shown in fetch pattern 2600, the streaming engine fetches 4 elements from each row (since the granularity of 8 is four times the ELEM_BYTES of 2) before moving down the column to the next row.

The streams examined so far read each element from memory exactly once. A stream can read a given element from memory multiple times, in effect looping over a piece of memory. FIR filters exhibit two common looping patterns: re-reading the same filter taps for each output, and reading input samples from a sliding window. Two consecutive outputs will need inputs from two overlapping windows.

FIG. 27 illustrates the details of streaming engine 2700. Streaming engine 2700 contains three major sections: stream 0 2710; stream 1 2720; and shared L2 interfaces 2730. Stream 0 2710 and stream 1 2720 both contain identical hardware operating in parallel. Stream 0 2710 and stream 1 2720 both share the L2 interfaces 2730. Each stream 2710 and 2720 provides the CPU with up to 512 bits per cycle, every cycle. The streaming engine architecture enables this through its dedicated stream paths and shared dual L2 interfaces.

Streaming engine 2700 includes dedicated 4-dimensional stream address generators 2711/2721, each of which can generate one new non-aligned request per cycle. Address generators 2711/2721 output 512-bit aligned addresses that overlap the elements in the sequence defined by the stream parameters. This is further described below.

Each address generator 2711/2721 connects to a dedicated micro table look-aside buffer (μTLB) 2712/2722.
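The two transposed fetch patterns above differ only in how many adjacent columns one granule covers. A sketch of that traversal order (illustrative only; the function name and index convention are hypothetical, and the column count is assumed to be a multiple of the granule):

```python
def transposed_order(rows, cols, elem_bytes, granule):
    """Element (row, col) fetch order for a transposed stream.

    A granule of N bytes fetches N // elem_bytes adjacent columns from
    each row before stepping down to the next row; after reaching the
    bottom, the pattern repeats with the next column group.
    """
    per_granule = granule // elem_bytes
    order = []
    for col0 in range(0, cols, per_granule):
        for row in range(rows):
            for c in range(col0, col0 + per_granule):
                order.append((row, c))
    return order

# ELEM_BYTES = 2, granule = 4: pairs of columns, walked downward, as in
# fetch pattern 2500.
order = transposed_order(rows=3, cols=4, elem_bytes=2, granule=4)
# [(0,0),(0,1),(1,0),(1,1),(2,0),(2,1),(0,2),(0,3),...]
```

With granule = elem_bytes this degenerates to a pure column-by-column walk; with a larger granule it fetches 2, 4, or more columns at a time.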
The μTLB 2712/2722 translates a single 48-bit virtual address to a 44-bit physical address each cycle. Each μTLB 2712/2722 has 8 entries, covering a minimum of 32 kB with 4 kB pages or a maximum of 16 MB with 2 MB pages. Each address generator 2711/2721 generates 2 addresses per cycle. The μTLB 2712/2722 only translates 1 address per cycle. To maintain throughput, streaming engine 2700 exploits the fact that most stream references will fall within the same 4 kB page. Thus the address translation does not modify bits 0 to 11 of the address. If aout0 and aout1 fall in the same 4 kB page (aout0[47:12] is the same as aout1[47:12]), then the μTLB 2712/2722 only translates aout0 and reuses that translation for the upper bits of both addresses.

Translated addresses are queued in command queue 2713/2723. These addresses are aligned with information from the corresponding storage allocation and tracking block 2714/2724. Streaming engine 2700 does not explicitly manage μTLB 2712/2722. The system memory management unit (MMU) invalidates the μTLBs as necessary during context switches.

Storage allocation and tracking block 2714/2724 manages the stream's internal storage, discovering data reuse and tracking the lifetime of each piece of data. Storage allocation and tracking block 2714/2724 accepts 2 virtual addresses per cycle and binds those addresses to slots in the stream's data storage. Streaming engine 2700 organizes its data storage as an array of slots. Streaming engine 2700 maintains the following metadata, listed in Table 5, to track the contents and lifetime of the data in each slot.

Table 5

Table 6 details the interaction of the Valid, Ready, and Active bits.

Table 6

Using this metadata, storage allocation and tracking block 2714/2724 can identify data reuse opportunities in the stream. Storage allocation and tracking block 2714/2724 performs the following steps for each address.
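The same-page check that lets one μTLB translation serve two addresses reduces to comparing everything above the 12 page-offset bits. A minimal sketch (the function name is illustrative):

```python
def same_4kb_page(aout0, aout1):
    """True when two 48-bit virtual addresses share bits 47:12, so one
    translation can be reused for both; bits 11:0 pass through untouched."""
    return (aout0 >> 12) == (aout1 >> 12)

# Adjacent references within one 4 kB page reuse the translation;
# crossing a page boundary forces a second translation.
assert same_4kb_page(0x12345678, 0x12345FFF)
assert not same_4kb_page(0x12345FFF, 0x12346000)
```

Since most stream references are sequential, the second-address translation is almost always elided, sustaining two addresses per cycle from a single-ported μTLB.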
It compares the address against the relevant tags in its tag array. On a hit, it cancels the command associated with that address. On a miss, it allocates a free slot, sets Valid = 1 and Ready = 0, and updates the outgoing command to direct the data being fetched to that slot. In either case, a slot number becomes associated with the address. Storage allocation and tracking block 2714/2724 inserts the reference in the reference queue. Storage allocation and tracking block 2714/2724 sets Active = 1 and updates Last Reference to the position of the reference in the reference queue. This is the value of the reference queue's insertion pointer at the time of insertion. This process converts the generated addresses into the slot numbers that represent the data. From this point forward, the streaming engine need not track addresses directly.

To maximize reuse and minimize stalls, streaming engine 2700 allocates slots in the following order: the slot one after the most recent allocation, if available, following FIFO order; the lowest-numbered available slot, if any; and, if no slot is available, stall and iterate these two steps until allocation succeeds. This tends to allocate slots in FIFO order, but avoids stalling if a particular reuse pattern works against that order.

Reference queue 2715/2725 stores the sequence of references generated by the corresponding address generator 2711/2721. This information drives the data formatting network so that it can present data to the CPU in the correct order. Each entry in reference queue 2715/2725 contains the information necessary to read data out of the data store and align it for the CPU. Reference queue 2715/2725 maintains the information listed in Table 7 in each slot:

Table 7

Storage allocation and tracking block 2714/2724 inserts references in reference queue 2715/2725 as address generator 2711/2721 generates new addresses.
Storage allocation and tracking block 2714/2724 removes references from reference queue 2715/2725 when the data becomes available and there is room in the stream holding registers. As storage allocation and tracking block 2714/2724 removes slot references from reference queue 2715/2725 and formats data, it checks whether the references represent the last reference to the corresponding slots. Storage allocation and tracking block 2714/2724 compares the pointer removed from the queue with the slot's recorded Last Reference. If they match, storage allocation and tracking block 2714/2724 marks the slot inactive once the data has been processed.

Streaming engine 2700 has data storage 2716/2726 for an arbitrary number of elements. Deep buffering allows the streaming engine to fetch far ahead in the stream, hiding memory system latency. The right amount of buffering may vary from product generation to generation. In the currently preferred embodiment, streaming engine 2700 dedicates 32 slots to each stream. Each slot holds 64 bytes of data.

Butterfly network 2717/2727 consists of a 7-stage butterfly network. Butterfly network 2717/2727 receives 128 bytes of input and generates 64 bytes of output. The first stage of the butterfly is actually a half-stage. It collects bytes from the two slots that match a non-aligned fetch and merges them into a single, rotated 64-byte array. The remaining 6 stages form a standard butterfly network. Butterfly network 2717/2727 performs the following operations: rotates the next element down to byte channel 0; promotes data types by one power of 2, if requested; swaps the real and imaginary components of complex numbers, if requested; and converts big endian to little endian if the CPU is currently in big endian mode.
The user specifies element size, type promotion, and real/imaginary swap as part of the stream's parameters.

Streaming engine 2700 attempts to fetch and format data ahead of the CPU's demand for it, so that it can maintain full throughput. Holding registers 2718/2728 provide a small amount of buffering so that the process remains fully pipelined. Apart from the fact that streaming engine 2700 provides full throughput, holding registers 2718/2728 are not directly architecturally visible.

The two streams 2710/2720 share a pair of independent L2 interfaces 2730: L2 interface A (IFA) 2733 and L2 interface B (IFB) 2734. Each L2 interface provides 512 bits per cycle of throughput directly to the L2 controller, for an aggregate bandwidth of 1024 bits per cycle. The L2 interfaces use the credit-based multicore bus architecture (MBA) protocol. The L2 controller assigns each interface its own pool of command credits. The pool should have sufficient credits so that each interface can send enough requests to achieve full read-return bandwidth when reading L2 RAM, L2 cache, and MSMC RAM.

To maximize performance, both streams can use both L2 interfaces, allowing a single stream to send a peak command rate of 2 commands per cycle. Each interface prefers one stream over the other, but this preference changes dynamically on each request. IFA 2733 and IFB 2734 always prefer opposite streams: when IFA 2733 prefers stream 0, IFB 2734 prefers stream 1, and vice versa.

Arbiter 2731/2732, ahead of each interface 2733/2734, applies the following basic protocol on every cycle it has credits available. Arbiter 2731/2732 checks whether the preferred stream has a command ready to send. If so, arbiter 2731/2732 chooses that command. Arbiter 2731/2732 next checks whether the alternate stream has at least two commands ready to send, or one command and no credits.
If so, arbiter 2731/2732 pulls a command from the alternate stream. If either interface issues a command, the notion of preferred and alternate streams is swapped for the next request. Using this simple algorithm, the two interfaces dispatch requests as quickly as possible while retaining fairness between the two streams. The first rule ensures that each stream can send a request on every cycle that has available credits. The second rule provides a mechanism for one stream to borrow the other's interface when the second interface is idle. The third rule spreads the bandwidth demand for each stream across both interfaces, ensuring that neither interface becomes a bottleneck by itself.

Coarse-grain rotator 2735/2736 enables streaming engine 2700 to support the transposed matrix addressing mode. In this mode, streaming engine 2700 interchanges the two innermost dimensions of its multidimensional loop. This accesses an array column-wise rather than row-wise. Rotator 2735/2736 is not architecturally visible except as it enables this transposed access mode.

The stream definition template provides the full structure of a stream that contains data. The iteration counts and dimensions provide most of the structure, while the various flags provide the remaining details. For all data-containing streams, the streaming engine defines a single stream template. All the stream types it supports fit this template. The streaming engine defines a four-level loop nest for addressing the elements within the stream. Most of the fields in the stream template map directly onto the parameters in that algorithm. FIG. 28 illustrates stream template register 2800. The numbers marked on the fields indicate byte numbers within the 256-bit vector.
Table 8 shows the stream field definitions of the stream template.

Field Name  Description                                   Size (bits)
ICNT0       Iteration count for loop 0 (innermost)        32
ICNT1       Iteration count for loop 1                    32
ICNT2       Iteration count for loop 2                    32
ICNT3       Iteration count for loop 3 (outermost)        8
DIM1        Signed dimension for loop 1                   32
DIM2        Signed dimension for loop 2                   32
DIM3        Signed dimension for loop 3                   32
FLAGS       Stream modifier flags                         24

Table 8

In the current example, DIM0 always equals ELEM_BYTES, defining physically contiguous data. The stream template contains mostly 32-bit fields. The stream template limits ICNT3 to 8 bits and the FLAGS field to 24 bits. Streaming engine 2700 interprets all iteration counts as unsigned integers and all dimensions as unscaled signed integers. The template above fully specifies the type of the elements, the length, and the dimensions of the stream. The stream instructions separately specify the start address. This would typically be a scalar register in scalar register file 211 that stores the start address. This allows a program to open multiple streams using the same template.

FIG. 29 illustrates the sub-field definitions of the flags field 2900. As shown in FIG. 28, the flags field 2900 is 3 bytes or 24 bits. FIG. 29 shows the bit numbers of the fields. Table 9 shows the definitions of these fields.

Table 9

The ELTYPE field defines the data type of the elements in the stream. The coding of the four bits of this field is defined as shown in Table 10.

Table 10

The sub-element size determines the type for purposes of type promotion and vector channel width. For example, 16-bit sub-elements get promoted to 32-bit sub-elements when the stream requests type promotion. The vector channel width matters when the DSP CPU operates in big endian mode, as the DSP CPU always lays out vectors in little endian order.

The total element size determines the minimum granularity of the stream.
In the stream addressing model, it determines the number of bytes the stream fetches per iteration of the innermost loop. Streams always read whole elements, either in increasing or decreasing order. Therefore, the innermost dimension of a stream spans ICNT0 x total-element-size bytes.

The Real-Complex type determines whether the streaming engine treats each element as a real number or as two parts (real/imaginary or magnitude/angle) of a complex number. This field also specifies whether to swap the two parts of complex numbers. Complex types have a total element size that is twice their sub-element size. Otherwise, the sub-element size equals the total element size.

The TRANSPOSE field determines whether the streaming engine accesses the stream in a transposed order. The transposed order exchanges the inner two addressing levels. The TRANSPOSE field also indicates the granularity at which it transposes the stream. The coding of the four bits of this field is defined as shown in Table 11.

Table 11

Streaming engine 2700 actually transposes at a granularity that may differ from the element size. This allows programs to fetch multiple columns of elements from each row. The transposition granularity must be no smaller than the element size.

The PROMOTE field controls whether the streaming engine promotes the sub-elements in the stream, and the type of promotion. When enabled, streaming engine 2700 promotes types by a single power of 2 in size. The coding of the two bits of this field is defined as shown in Table 12.

PROMOTE  Description
00       No promotion
01       Unsigned integer promotion, zero extend
10       Signed integer promotion, sign extend
11       Floating point promotion

Table 12

When the stream specifies no promotion, each sub-element occupies a vector channel with a width equal to the size specified by ELTYPE. Otherwise, each sub-element occupies a vector channel twice that size.
When PROMOTE is not 00, the streaming engine fetches half as much data from memory to satisfy the same number of stream fetches.

Promotion modes 01b and 10b treat the incoming sub-elements as unsigned and signed integers, respectively. For unsigned integers, the streaming engine promotes by filling the new bits with zeros. For signed integers, the streaming engine promotes by filling the new bits with copies of the sign bit. Positive signed integers have a most significant bit equal to 0. On promotion of positive signed integers, the new bits are filled with zeros. Negative signed integers have a most significant bit equal to 1. On promotion of negative signed integers, the new bits are filled with ones.

Promotion mode 11b treats the incoming sub-elements as floating point numbers. Floating point promotion treats each sub-element as a floating point type. The streaming engine supports two floating point promotions: short float (16-bit) to single precision float (32-bit), and single precision float (32-bit) to double precision float (64-bit).

The THROTTLE field controls how aggressively the streaming engine fetches ahead of the CPU. The coding of the two bits of this field is defined as shown in Table 13.

THROTTLE  Description
00        Minimum throttling, maximum fetch ahead
01        Less throttling, more fetch ahead
10        More throttling, less fetch ahead
11        Maximum throttling, minimum fetch ahead

Table 13

THROTTLE does not change the meaning of the stream and serves only as a hint. The streaming engine may ignore this field. Programs should not rely on specific throttle behavior for program correctness, because the architecture does not specify the precise throttle behavior. THROTTLE allows programmers to provide the hardware with hints about the program's own behavior.
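The integer promotion rules above (zero extension for mode 01b, sign extension for mode 10b) can be modeled as follows. This is an illustrative sketch of the bit-filling rules only; the function name is hypothetical.

```python
def promote(sub, from_bits, mode):
    """Promote a sub-element to twice its size per the PROMOTE field.

    mode 0b01 zero-extends (unsigned); mode 0b10 sign-extends (signed).
    Values are handled as raw bit patterns, as the hardware would.
    """
    mask = (1 << from_bits) - 1
    sub &= mask
    if mode == 0b01:
        # Unsigned promotion: new high bits filled with zeros.
        return sub
    if mode == 0b10:
        # Signed promotion: new high bits filled with copies of the sign bit.
        sign = sub >> (from_bits - 1)
        if sign:
            return sub | (mask << from_bits)
        return sub
    raise ValueError("unsupported promotion mode")

# 16-bit 0x8000 (a negative signed value) promoted to 32 bits:
# zero extension -> 0x00008000, sign extension -> 0xFFFF8000
```

Positive values produce identical results under both modes; only values with the most significant bit set differ.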
By default, the streaming engine attempts to get as far ahead of the CPU as it can to hide as much latency as possible, while providing full stream throughput to the CPU. While several key applications need this level of throughput, it can lead to bad system-level behavior for others. For example, the streaming engine discards all fetched data across a context switch. Therefore, aggressive fetch-ahead can lead to wasted bandwidth in a system with a large number of context switches. Aggressive fetch-ahead only makes sense in those systems if the CPU consumes data very quickly.

The DSP CPU exposes the streaming engine to programs through a small number of instructions and specialized registers. A STROPEN instruction opens a stream. The STROPEN instruction specifies a stream number indicating whether stream 0 or stream 1 is opened. The STROPEN instruction specifies a stream template register that stores the stream template as described above. The arguments of the STROPEN instruction are listed in Table 14.

Argument                      Description
Stream start address register Scalar register storing the stream start address
Stream number                 Stream 0 or stream 1
Stream template register      Vector register storing the stream template data

Table 14

The stream start address register is preferably a scalar register in global scalar register file 211. The STROPEN instruction specifies stream 0 or stream 1 by its opcode. The stream template register is preferably a vector register in global vector register file 221. If the specified stream is active, the STROPEN instruction closes the prior stream and replaces it with the specified stream.

A STRCLOSE instruction closes a stream. The STRCLOSE instruction specifies the stream number of the stream to be closed.

A STRSAVE instruction captures sufficient state information of a specified stream to restart that stream in the future. A STRRSTR instruction restores a previously saved stream.
The STRSAVE instruction does not save any of the data of the stream. The STRSAVE instruction saves only metadata. The stream re-fetches data in response to the STRRSTR instruction.

The streaming engine is in one of three states: inactive, active, or frozen. When inactive, the streaming engine does nothing. Any attempt to fetch data from an inactive streaming engine is an error. Until the program opens a stream, the streaming engine is inactive. After the program consumes all the elements in the stream, or after the program closes the stream, the streaming engine also becomes inactive. Programs that use streams explicitly activate and deactivate the streaming engine. The operating environment manages streams across context-switch boundaries via the streaming engine's implicit freeze behavior, coupled with its own explicit save and restore actions.

An active streaming engine has a stream associated with it. Programs can fetch new stream elements from an active streaming engine. The streaming engine remains active until one of the following occurs. When the stream fetches the last element of the stream, it becomes inactive. When the program explicitly closes the stream, it becomes inactive. When the CPU responds to an interrupt or exception, the streaming engine freezes. A frozen streaming engine captures all the state necessary to resume the stream where it was frozen. The streaming engine freezes in response to interrupts and exceptions. This combines with special instructions to save and restore the frozen stream context, so that operating environments can cleanly switch contexts. Frozen streams reactivate when the CPU returns to the interrupted context.

Programs access stream data via holding register 2718 for stream 0 and via holding register 2728 for stream 1. These registers are outside the other register files.
These registers hold the head-of-stream element of the corresponding stream 0 and stream 1. Dedicated bit codings of the src1 field 1305 and src2/cst field 1304 of the corresponding program instructions encode reads of stream data and control of stream advance. Table 15 shows an exemplary coding of source operand fields 1305 and 1304 in accordance with the preferred embodiment of the present invention.

Table 15

Bit codings 00000 to 01111 (the first subset) specify the corresponding registers in global vector register file 231. Note that only vector data path side B includes the streaming engine. For an instruction having src1 field 1305 or src2/cst field 1304 bit codings in this first subset, instruction decoder 113 supplies the input operand for the corresponding functional unit from the specified register number in global vector register file 231. Bit codings 10000 to 10111 (the second subset) specify the corresponding registers in the corresponding vector local register file. For instructions directed to L2 unit 241 or S2 unit 242, that local register file is L2/S2 local register file 232. For instructions directed to M2 unit 243, N2 unit 244, or C unit 245, that local register file is M2/N2/C local register file 233. For an instruction having src1 field 1305 or src2/cst field 1304 bit codings in this second subset, instruction decoder 113 supplies the input operand for the corresponding functional unit from the specified register number in the corresponding local register file, which in this embodiment is L2/S2 local register file 232 or M2/N2/C local register file 233.

Bit codings 11000 to 11011 are reserved in this embodiment. These codings are unused. Instruction decoder 113 may ignore these bit codings or may generate an error. The compiler will not generate these codings.

Bit codings 11100 and 11101 are directed to stream 0. Bit codings 11110 and 11111 are directed to stream 1.
Bit coding 11100 is a read of stream 0. Upon detecting an instruction having src1 field 1305 or src2/cst field 1304 bits coded as 11100, instruction decoder 113 supplies the data stored in holding register 2718 to the corresponding operand input of that instruction's functional unit. Holding register 2718 holds elements of the specified data stream 0, as disclosed above in conjunction with FIGS. 20 and 21. This supply of data is analogous to supplying data from a data register. Similarly, bit coding 11110 is a read of stream 1. Upon detecting an instruction having src1 field 1305 or src2/cst field 1304 bits coded as 11110, instruction decoder 113 supplies the data stored in holding register 2728 to the corresponding operand input of that instruction's functional unit.

The stream reads of bit codings 11100 and 11110 are treated similarly to register bit codings. Thus more than one functional unit may receive input from the same stream holding register 2718 or 2728. A single instruction may specify the same stream holding register 2718 or 2728 for both input operands. An instruction may specify one input operand from holding register 2718 and another input operand from holding register 2728, in either order.

Bit codings 11101 and 11111 trigger read/advance stream operations. Bit coding 11101 is a read/advance of stream 0. Upon detecting an instruction having src1 field 1305 or src2/cst field 1304 bits coded as 11101, instruction decoder 113 supplies the data stored in holding register 2718 to the corresponding operand input of that instruction's functional unit. Streaming engine 2700 then advances stream 0 to the next set of elements in the specified stream 0, as disclosed above in conjunction with FIGS. 20 and 21. Thus holding register 2718 will store the next elements in stream 0. Similarly, bit coding 11111 is a read/advance of stream 1.
Upon detecting an instruction having the src1 field 1305 or src2/cst field 1304 bits encoded as 11111, the instruction decoder 113 supplies the data stored in holding register 2728 to the corresponding operand input of the functional unit of that instruction and then triggers the stream engine 2700 to advance stream 1, storing the next data element in holding register 2728. The data supply operates in the same manner as for the reads of bit codes 11100 and 11110. As described above, the read/advance bit codes additionally trigger advancement to the next defined stream data element.

As before, the same stream holding register data can be supplied to more than one input of a functional unit and to more than one functional unit. It is thus possible to code instructions in the same execute packet in which one of these inputs is a read coding and another input from the same stream is a read/advance coding. In that case the corresponding stream is advanced. Thus, if any stream 0 or stream 1 operand bit code is a read/advance bit code, the stream is advanced, regardless of whether any other operand bit code for that stream is a read or a read/advance.

Consistent with the nature of streams, stream data is read-only. Therefore, the bit encodings of Table 15 cannot be used in the dst field 1303. The instruction decoder 113 may ignore these bit codes in the dst field 1303 or may generate an error. The compiler will not generate these codes.

FIG. 30 is a partial schematic diagram 3000 illustrating the above stream input operand encoding. FIG. 30 illustrates decoding of the src1 field 1305 of one instruction for the corresponding src1 input of functional unit 3020. These same circuits are duplicated for the src2/cst field 1304 and the src2 input of functional unit 3020.
These circuits are also duplicated for each instruction within an execute packet that can be dispatched simultaneously.

The instruction decoder 113 receives bits 13 through 17 of an instruction, comprising the src1 field 1305. The opcode field (bits 4 through 14 for all instructions, and additionally bits 28 through 31 for unconditional instructions) unambiguously specifies the corresponding functional unit 3020. In this embodiment, functional unit 3020 may be L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, or C unit 245. The relevant portion of the instruction decoder 113 illustrated in FIG. 30 decodes the src1 bit field 1305. Sub-decoder 3011 determines whether the src1 bit field 1305 is in the range 00000 to 01111. If so, sub-decoder 3011 supplies the corresponding register number to the global vector register file 231. In this example, the register field is the four least significant bits of src1 bit field 1305. The global vector register file 231 recalls the data stored in the register corresponding to this register number and supplies that data to the src1 input of functional unit 3020. This decoding is generally known in the art.

Sub-decoder 3012 determines whether the src1 bit field 1305 is in the range 10000 to 10111. If so, sub-decoder 3012 supplies the corresponding register number to the corresponding local vector register file. If the instruction is directed to L2 unit 241 or S2 unit 242, the corresponding local vector register file is local vector register file 232. If the instruction is directed to M2 unit 243, N2 unit 244, or C unit 245, the corresponding local vector register file is local vector register file 233. In this example, the register field is the three least significant bits of src1 bit field 1305.
The corresponding local vector register file 232/233 recalls the data stored in the register corresponding to this register number and supplies that data to the src1 input of functional unit 3020. This decoding is generally known in the art.

Sub-decoder 3013 determines whether the src1 bit field 1305 is 11100. If so, sub-decoder 3013 supplies a stream 0 read signal to the stream engine 2700. The stream engine 2700 then supplies the stream 0 data stored in holding register 2718 to the src1 input of functional unit 3020.

Sub-decoder 3014 determines whether the src1 bit field 1305 is 11101. If so, sub-decoder 3014 supplies a stream 0 read signal to the stream engine 2700. The stream engine 2700 then supplies the stream 0 data stored in holding register 2718 to the src1 input of functional unit 3020. Sub-decoder 3014 also supplies an advance signal for stream 0. As described above, the stream engine 2700 advances to store the next sequential data element of stream 0 in holding register 2718.

Sub-decoder 3015 determines whether the src1 bit field 1305 is 11110. If so, sub-decoder 3015 supplies a stream 1 read signal to the stream engine 2700. The stream engine 2700 then supplies the stream 1 data stored in holding register 2728 to the src1 input of functional unit 3020.

Sub-decoder 3016 determines whether the src1 bit field 1305 is 11111. If so, sub-decoder 3016 supplies a stream 1 read signal to the stream engine 2700. The stream engine 2700 then supplies the stream 1 data stored in holding register 2728 to the src1 input of functional unit 3020. Sub-decoder 3016 also supplies an advance signal for stream 1.
As described above, the stream engine 2700 advances to store the next sequential data element of stream 1 in holding register 2728.

Similar circuits are used to select the data supplied to the src2 input of functional unit 3020 in response to the bit coding of the src2/cst field 1304. The src2 input of functional unit 3020 may be supplied with a constant input in the manner described above.

The exact number of instruction bits devoted to operand specification and the number of data registers and streams are design choices. Those skilled in the art will recognize that selections other than those described in this application are feasible. In particular, the specification of a single global vector register file and the omission of local vector register files are feasible. This invention employs one bit coding of an input operand selection field to designate a stream read and another bit coding to designate a stream read together with an advance of the stream.

FIG. 31 illustrates an alternative stream engine 3100. Stream engine 3100 is similar to stream engine 2200 illustrated in FIG. 22. As previously described in conjunction with FIG. 22, stream engine 3100 includes address generators 2211/2221, μTLBs 2212/2222, command queues 2213/2223, storage allocation and tracking blocks 2214/2224, reference queues 2215/2225, arbiters 2231/2232, L2 interfaces 2233/2234, coarse rotators 2235/2236, data storage 2216/2226, and butterfly networks 2217 and 2227. Holding register 2218 is replaced with two holding registers 3118 and 3119, designated SE0L and SE0H. Each of holding registers 3118 and 3119 has a 512-bit vector width. Holding register 3119 stores the next vector of the stream 0 data following holding register 3118. Similarly, holding register 2228 is replaced with similarly sized holding registers 3128 and 3129, designated SE1L and SE1H.
Holding register 3129 stores the next vector of the stream 1 data following holding register 3128.

Stream engine 3100 permits different stream accesses than stream engine 2200. Stream engine 3100 permits direct access to the SE0H and SE1H data in the manner described below. Stream engine 3100 permits access to a stream 0 dual vector corresponding to the combined SE0L and SE0H data (SE0). Stream engine 3100 permits access to a stream 1 dual vector corresponding to the combined SE1L and SE1H data (SE1). This structure of holding registers 3118/3119 and 3128/3129 supports the following operations. Table 16 lists these stream access operations.

Table 16

Table 16 lists a first subset accessing the global vector register file 231 in the same manner as listed in Table 15. Table 16 lists a second subset accessing the corresponding local vector register file 232/233 in the same manner as listed in Table 15. Table 16 lists the access designated SE0. Upon decoding this source register designation, stream engine 3100 supplies the data from holding register 3118 to the corresponding functional unit and the data from holding register 3119 to the paired functional unit, supporting a dual vector operation. Table 16 lists the access designated SE0++. Upon decoding this source register designation, stream engine 3100 supplies the dual vector data from holding registers 3118 and 3119 to paired functional units supporting dual vector operation and advances stream 0 by two vectors (1024 bits). Table 16 lists similar accesses designated SE1 and SE1++, which supply corresponding dual vector data from holding registers 3128 and 3129.

Table 16 lists four different single vector accesses. The accesses designated SE0L and SE0L++ correspond to the stream 0 read and read/advance operations listed in Table 15, supplying data from holding register 3118.
The accesses designated SE1L and SE1L++ correspond to the stream 1 read and read/advance operations listed in Table 15, supplying data from holding register 3128. The access designated SE0H supplies data from holding register 3119. As noted above, holding register 3119 stores the next vector of the stream 0 data following holding register 3118. The access designated SE0H++ supplies data from holding register 3119 and advances stream 0 by two vectors (1024 bits). The access designated SE1H supplies data from holding register 3129. As noted above, holding register 3129 stores the next vector of the stream 1 data following holding register 3128. The access designated SE1H++ supplies data from holding register 3129 via output OutL and advances stream 1 by two vectors (1024 bits).

It is readily seen that the 5-bit src1 field 1305 cannot specify an operation from a set that includes: fetches from one of the 16 global vector registers; fetches from one of the 8 local vector registers; and the 12 stream codings described above. The number of selections could be accommodated by increasing the size of the src1 field 1305 to 6 bits. The number of selections could instead be reduced by limiting the number and identity of registers accessible in the global vector register file, the local vector register file, or both. In an alternative embodiment, the decoding of the src1 field 1305 for dual vector operations differs from the decoding of the src1 field 1305 for vector operations. Table 17 lists the decoding of the src1 field 1305 for vector operations in this alternative embodiment.

Table 17

Table 18 lists the decoding of the src1 field 1305 for dual vector operations.

Table 18

For both encodings of Tables 17 and 18 in this alternative embodiment, the decoding of the first subset and the second subset is the same as in Table 15. This alternative requires no change in the instruction coding of instruction 1300.
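The Table 16 stream accesses described above can be sketched as a small table-driven model: each access name maps to the holding registers it supplies and whether it advances the stream by two vectors. This is an illustrative sketch only; the dictionary layout, the function name, and the use of integer vector indices to stand in for 512-bit vector contents are all invented for this example.

```python
# Illustrative model of the Table 16 accesses. Each stream has a low
# (SEnL) and high (SEnH) 512-bit holding register; a dual vector access
# supplies both at once, and any "++" access advances the stream by two
# vectors, refilling both holding registers.

ACCESS = {
    # name: (registers supplied, advance in vectors)
    "SE0":    (("SE0L", "SE0H"), 0),
    "SE0++":  (("SE0L", "SE0H"), 2),
    "SE0L":   (("SE0L",),        0),
    "SE0L++": (("SE0L",),        2),
    "SE0H":   (("SE0H",),        0),
    "SE0H++": (("SE0H",),        2),
    "SE1":    (("SE1L", "SE1H"), 0),
    "SE1++":  (("SE1L", "SE1H"), 2),
    "SE1L":   (("SE1L",),        0),
    "SE1L++": (("SE1L",),        2),
    "SE1H":   (("SE1H",),        0),
    "SE1H++": (("SE1H",),        2),
}

def access_stream(name, regs):
    """Return the supplied data and mutate regs to model any advance.

    regs maps register names to vector indices standing in for contents.
    """
    supplied, advance = ACCESS[name]
    data = tuple(regs[r] for r in supplied)
    if advance:
        stream = name[:3]                 # "SE0" or "SE1"
        low, high = stream + "L", stream + "H"
        # Advancing two vectors refills both holding registers of the stream.
        regs[low] += advance
        regs[high] += advance
    return data
```

The model captures the point made in the text that even a single vector "++" access advances the stream past both holding registers.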
The opcode field 1306 (bits 4 to 12 for all instructions, and additionally bits 28 to 31 for unconditional instructions) must specify the instruction type and unambiguously distinguish all dual vector instructions from all vector instructions. Therefore, this alternative embodiment requires only that the instruction decoder 113 condition its decoding of the stream data access bit codings upon the vector/dual vector distinction.

FIG. 32 is a partial schematic diagram 3200, similar to FIG. 30, illustrating the stream input operand coding of this alternative embodiment. FIG. 32 illustrates decoding of the src1 field 1305 of one instruction for the corresponding src1 inputs of functional unit 3230 and paired functional unit 3240. These same circuits are duplicated for the src2/cst field 1304 and the src2 inputs of functional units 3230 and 3240. These circuits are also duplicated for each instruction within an execute packet that can be dispatched simultaneously.

The instruction decoder 113 receives bits 13 through 17 of an instruction, comprising the src1 field 1305. The opcode field (bits 4 to 14 for all instructions, and additionally bits 28 to 31 for unconditional instructions) unambiguously specifies the corresponding functional unit 3230. If the instruction is a dual vector operation, this field also unambiguously specifies the paired functional unit 3240. In this embodiment, functional units 3230 and 3240 may be L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, or C unit 245. The relevant portion of the instruction decoder 113 illustrated in FIG. 32 decodes the src1 bit field 1305. As illustrated in FIG. 32, the instruction decoder 113 generates a vector/dual vector signal indicating whether the decoded instruction is a normal vector operation or a special dual vector operation.
This vector/dual vector signal is supplied to sub-decoders 3213 through 3224 for control in the manner described below.

Sub-decoder 3211 determines whether the src1 bit field 1305 is in the first subset specifying the global vector register file 231. If so, sub-decoder 3211 supplies the corresponding register number to the global vector register file 231. The global vector register file 231 recalls the data stored in the register corresponding to this register number and supplies that data to the src1 input of functional unit 3230. This decoding is generally known in the art. Note that decoding of this first subset does not depend upon the vector/dual vector signal.

Sub-decoder 3212 determines whether the src1 bit field 1305 is in the second subset specifying the corresponding local vector register file 232 or 233. If so, sub-decoder 3212 supplies the corresponding register number to the corresponding local vector register file. If the instruction is directed to L2 unit 241 or S2 unit 242, the corresponding local vector register file is local vector register file 232. If the instruction is directed to M2 unit 243, N2 unit 244, or C unit 245, the corresponding local vector register file is local vector register file 233. The corresponding local vector register file 232/233 recalls the data stored in the register corresponding to this register number and supplies that data to the src1 input of functional unit 3230. This decoding is generally known in the art. Note that decoding of this second subset does not depend upon the vector/dual vector signal.

Sub-decoder 3213 is active if the vector/dual vector signal indicates a dual vector operation.
Sub-decoder 3213 determines whether the src1 bit field 1305 specifies SE0. As listed in Table 18, this is bit coding 11100. If so, sub-decoder 3213 supplies an SE0 read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 0 data stored in holding register 3118 to the src1 input of functional unit 3230 via output OutL, and supplies the stream 0 data stored in holding register 3119 to the src1 input of paired functional unit 3240 via output OutH. This is a dual vector operation employing paired functional units.

Sub-decoder 3214 is active if the vector/dual vector signal indicates a dual vector operation. Sub-decoder 3214 determines whether the src1 bit field 1305 specifies SE0++. As listed in Table 18, this is bit coding 11101. If so, sub-decoder 3214 supplies an SE0 read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 0 data stored in holding register 3118 to the src1 input of functional unit 3230 via output OutL, and supplies the stream 0 data stored in holding register 3119 to the src1 input of paired functional unit 3240 via output OutH. This is a dual vector operation employing paired functional units. Sub-decoder 3214 also supplies an advance signal for stream 0. The stream engine 3100 advances stream 0 by a dual vector (1024 bits) and stores the next stream 0 data in holding registers 3118 and 3119.

Sub-decoder 3215 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3215 determines whether the src1 bit field 1305 specifies SE0L. As listed in Table 17, this is bit coding 11000. If so, sub-decoder 3215 supplies an SE0L read signal to the stream engine 3100.
The stream engine 3100 then supplies the stream 0 data stored in holding register 3118 to the src1 input of functional unit 3230 via output OutL.

Sub-decoder 3216 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3216 determines whether the src1 bit field 1305 specifies SE0L++. As listed in Table 17, this is bit coding 11001. If so, sub-decoder 3216 supplies an SE0L read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 0 data stored in holding register 3118 to the src1 input of functional unit 3230 via output OutL. Sub-decoder 3216 also supplies an advance signal for stream 0. The stream engine 3100 advances stream 0 by a dual vector (1024 bits) and stores the next stream 0 data in holding registers 3118 and 3119. In the preferred embodiment, all stream advances are in dual vector (1024-bit) increments, even for single vector reads.

Sub-decoder 3217 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3217 determines whether the src1 bit field 1305 specifies SE0H. As listed in Table 17, this is bit coding 11010. If so, sub-decoder 3217 supplies an SE0H read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 0 data stored in holding register 3119 to the src1 input of functional unit 3230 via output OutH.

Sub-decoder 3218 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3218 determines whether the src1 bit field 1305 specifies SE0H++. As listed in Table 17, this is bit coding 11011. If so, sub-decoder 3218 supplies an SE0H read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 0 data stored in holding register 3119 to the src1 input of functional unit 3230 via output OutL.
Sub-decoder 3218 also supplies an advance signal for stream 0. The stream engine 3100 advances stream 0 by a dual vector (1024 bits) and stores the next stream 0 data in holding registers 3118 and 3119. In the preferred embodiment, all stream advances are in dual vector (1024-bit) increments, even for single vector reads.

Sub-decoder 3219 is active if the vector/dual vector signal indicates a dual vector operation. Sub-decoder 3219 determines whether the src1 bit field 1305 specifies SE1. As listed in Table 18, this is bit coding 11110. If so, sub-decoder 3219 supplies an SE1 read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 1 data stored in holding register 3128 to the src1 input of functional unit 3230 via output OutL, and supplies the stream 1 data stored in holding register 3129 to the src1 input of paired functional unit 3240 via output OutH. This is a dual vector operation employing paired functional units.

Sub-decoder 3220 is active if the vector/dual vector signal indicates a dual vector operation. As listed in Table 18, SE1++ is bit coding 11111. Sub-decoder 3220 determines whether the src1 bit field 1305 specifies SE1++. If so, sub-decoder 3220 supplies an SE1 read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 1 data stored in holding register 3128 to the src1 input of functional unit 3230 via output OutL, and supplies the stream 1 data stored in holding register 3129 to the src1 input of paired functional unit 3240 via output OutH. This is a dual vector operation employing paired functional units. Sub-decoder 3220 also supplies an advance signal for stream 1. The stream engine 3100 advances stream 1 by a dual vector (1024 bits) and stores the next stream 1 data in holding registers 3128 and 3129.
In the preferred embodiment, all stream advances are in dual vector (1024-bit) increments, even for single vector reads.

Sub-decoder 3221 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3221 determines whether the src1 bit field 1305 specifies SE1L. As listed in Table 17, this is bit coding 11100. If so, sub-decoder 3221 supplies an SE1L read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 1 data stored in holding register 3128 to the src1 input of functional unit 3230 via output OutL.

Sub-decoder 3222 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3222 determines whether the src1 bit field 1305 specifies SE1L++. As listed in Table 17, this is bit coding 11101. If so, sub-decoder 3222 supplies an SE1L read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 1 data stored in holding register 3128 to the src1 input of functional unit 3230 via output OutL. Sub-decoder 3222 also supplies an advance signal for stream 1. The stream engine 3100 advances stream 1 by a dual vector (1024 bits) and stores the next stream 1 data in holding registers 3128 and 3129. In the preferred embodiment, all stream advances are in dual vector (1024-bit) increments, even for single vector reads.

Sub-decoder 3223 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3223 determines whether the src1 bit field 1305 specifies SE1H. As listed in Table 17, this is bit coding 11110. If so, sub-decoder 3223 supplies an SE1H read signal to the stream engine 3100.
The stream engine 3100 then supplies the stream 1 data stored in holding register 3129 to the src1 input of functional unit 3230 via output OutH.

Sub-decoder 3224 is active if the vector/dual vector signal indicates a vector operation. Sub-decoder 3224 determines whether the src1 bit field 1305 specifies SE1H++. As listed in Table 17, this is bit coding 11111. If so, sub-decoder 3224 supplies an SE1H read signal to the stream engine 3100. The stream engine 3100 then supplies the stream 1 data stored in holding register 3129 to the src1 input of functional unit 3230 via output OutL. Sub-decoder 3224 also supplies an advance signal for stream 1. The stream engine 3100 advances stream 1 by a dual vector (1024 bits) and stores the next stream 1 data in holding registers 3128 and 3129. In the preferred embodiment, all stream advances are in dual vector (1024-bit) increments, even for single vector reads.

FIG. 32 further illustrates the supply of non-stream operands for dual vector operations. Upon decoding a dual vector operation, the instruction decoder 113 supplies the register number corresponding to the src1 field 1305 to the appropriate register file, which is either the global vector register file 231 or the corresponding local vector register file 232 or 233. That register file supplies the data stored at this register number to the src1 input of the primary functional unit 3230. The instruction decoder 113 supplies a corresponding register number to the appropriate register file to recall the other vector of the dual vector operation. It is known in the art to limit the valid encodings of the src1 field 1305 in dual vector operations to even-numbered registers. The data in the even-numbered register is supplied to the primary functional unit 3230.
The data in the next higher register number (the next odd register number after the specified even register number) is supplied to the src1 input of the paired functional unit 3240. This is known in the art and is illustrated only for completeness.

Similar circuits are used to select the data supplied to the src2 inputs of functional units 3230 and 3240 in response to the bit coding of the src2/cst field 1304. The src2 inputs of functional units 3230 and 3240 may be supplied with constant inputs in the manner described above.

Thus, for vector instructions, sub-decoders 3215, 3216, 3217, 3218, 3221, 3222, 3223, and 3224 are enabled (decoding as listed in Table 17). For dual vector instructions, sub-decoders 3213, 3214, 3219, and 3220 are enabled (decoding as listed in Table 18). As listed in Tables 17 and 18, the encoding of the global vector register file 231 and the corresponding local vector register file 232 or 233 is the same for dual vector instructions as for vector instructions.

The exact number of instruction bits devoted to operand specification and the number of data registers and streams are design choices. Those skilled in the art will recognize that selections other than those described in this application are feasible. In particular, the specification of a single global vector register file and the omission of local vector register files are feasible. This invention employs bit codings of the input operand selection field to designate low stream and high stream reads and double-width reads, and other bit codings to designate stream reads that also advance the stream.
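The conditional decode of this alternative embodiment, in which the same src1 bit pattern selects different stream accesses depending on the vector/dual vector signal, can be sketched as two lookup tables. The table contents below follow the bit codings recited for Tables 17 and 18; the function name is invented for illustration.

```python
# Sketch of the alternative embodiment's conditional decode: the opcode
# field determines whether an instruction is a vector or dual vector
# operation, and the same five src1 bits are then decoded per Table 17
# or Table 18 respectively.

VECTOR_CODES = {          # Table 17: single vector instructions
    0b11000: "SE0L",  0b11001: "SE0L++",
    0b11010: "SE0H",  0b11011: "SE0H++",
    0b11100: "SE1L",  0b11101: "SE1L++",
    0b11110: "SE1H",  0b11111: "SE1H++",
}

DUAL_VECTOR_CODES = {     # Table 18: dual vector instructions
    0b11100: "SE0",   0b11101: "SE0++",
    0b11110: "SE1",   0b11111: "SE1++",
}

def decode_stream_access(src1_bits, dual_vector):
    """Map an src1 bit code to a stream access name, conditioned on the
    vector/dual vector signal derived from the opcode field."""
    table = DUAL_VECTOR_CODES if dual_vector else VECTOR_CODES
    if src1_bits not in table:
        raise ValueError("not a stream access for this instruction type")
    return table[src1_bits]
```

This makes concrete why no change to the instruction coding is needed: the overload is resolved entirely by which table the decoder consults, exactly as the enabled sub-decoder sets differ between vector and dual vector instructions.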
Methods, systems, and devices for prefetch signaling in a memory system or sub-system are described. A memory device (e.g., a local memory controller of a memory device) of a main memory may transmit a prefetch indicator indicating a size of prefetch data associated with a first set of data requested by an interface controller. The size of the prefetch data may be equal to or different from the size of the first set of data. The main memory may, in some examples, store the size of the prefetch data along with the first set of data. The memory device may transmit the prefetch indicator (e.g., an indicator signal) to the interface controller using a pin compatible with an industry standard or specification and/or a separate pin configured for transmitting command or control information. The memory device may transmit the prefetch indicator while the first set of data is being transmitted.
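As a rough illustration of the scheme summarized above, the sketch below models a read that returns the requested data together with a stored prefetch-size indicator. All names and the dictionary layout are invented for illustration and do not reflect any actual device interface.

```python
# Hypothetical sketch: a memory device stores, alongside each cached
# region, an indicator of how much prefetch data accompanies a request,
# and returns that indicator with the requested data.

def read_with_prefetch_indicator(memory, address, request_size):
    """Return (data, prefetch_size) for a read request.

    memory["data"] holds the backing bytes; memory["prefetch_size"] maps
    addresses to stored indicators. Absent an indicator, the prefetch
    size defaults to the requested size.
    """
    data = memory["data"][address:address + request_size]
    # The indicator is stored with the data and may be equal to or
    # different from the requested size.
    prefetch_size = memory["prefetch_size"].get(address, request_size)
    return data, prefetch_size
```

In the described system the indicator travels on a command/control pin concurrently with the data burst; here it is simply returned alongside the data.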
CLAIMSWhat is claimed is:1. A method, comprising:receiving, from a controller, a read command for a first set of data;identifying, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data; andtransmitting, to the controller, the indicator with a portion of the second set of data.2. The method of claim 1, wherein transmitting the indicator with the portion of the second set of data comprises:transmitting the indicator concurrently with at least a subset of the portion of the second set of data.3. The method of claim 1, wherein the indicator comprises at least one bit in a memory array that stores the second set of data, the memory array comprising non volatile memory cells.4. The method of claim 1, wherein the indicator comprises a dynamic counter that indicates the size of the second set of data.5. The method of claim 1, further comprising:determining that the second set of data is available in an open page of a memory array comprising non-volatile memory cells; andtransmitting, to the controller, a remainder of the second set of data after transmitting the portion of the second set of data.6. The method of claim 1, further comprising:determining that at least a subset of the second set of data is unavailable in an open page of a memory array comprising non-volatile memory cells; andtransmitting, to the controller, a second indicator indicating a time delay for at least the subset of the second set of data.7. The method of claim 1, wherein transmitting the indicator with the portion of the second set of data comprises:transmitting the indicator via a first pin coupled with a memory array and designated for command or control information, the memory array storing the indicator and the second set of data; andtransmitting the portion of the second set of data via a second pin coupled with the memory array.8. 
The method of claim 7, wherein the first pin is configured for transmitting at least one of data mask/inversion (DMI) information, link error correction code (ECC) parity information, or status information regarding the memory array, or any combination thereof.9. The method of claim 1, wherein identifying the indicator comprises: reading a first set of memory cells in a memory array, the first set of memory cells having a faster nominal access speed than a second set of memory cells in the memory array, the second set of memory cells storing the first set of data; andidentifying a value of at least one bit in the first set of memory cells.10. The method of claim 1, further comprising:receiving, from the controller, an instruction to update the indicator, the instruction being based at least in part on an access pattern associated with the first set of data; andupdating, in a memory array that stores the indicator and the first set of data, a value of the indicator based at least in part on the instruction.11. A method, comprising:transmitting, to a memory device, a read command for a first set of data; receiving, from the memory device, a portion of a second set of data and an indicator of a size of the second set of data, the second set of data including the first set of data; anddetermining the size of the second set of data based at least in part on the indicator; and receiving a remainder of the second set of data based at least in part on determining the size of the second set of data.12. The method of claim 11, wherein receiving the portion of the second set of data with the indicator comprises:receiving the indicator concurrently with at least one bit included in the portion of the second set of data.13. The method of claim 11, further comprising:transmitting at least the first set of data to a buffer based at least in part on determining the size of the second set of data.14. 
The method of claim 11, further comprising:receiving, from the memory device, a second indicator that indicates a latency for at least a subset of the remainder of the second set of data.15. The method of claim 14, further comprising:transmitting, to the memory device after a time duration associated with the latency, a subsequent read command for at least the subset of the remainder of the second set of data.16. A method, comprising:identifying a first set of data for eviction from a buffer;determining a size of a second set of data to be read in response to a subsequent read command for the first set of data, the second set of data including the first set of data; andtransmitting, to a memory device, a write command for a value of an indicator of the size of the second set of data.17. The method of claim 16, further comprising:determining an access pattern for the first set of data based at least in part on previous access operations performed by a system on a chip (SoC) or processor, wherein a first page size is associated with the SoC or processor and a second page size is associated with the memory device; and determining the size of the second set of data is based at least in part on the access pattern.18. The method of claim 16, further comprising:identifying a portion of the first set of data that has been modified relative to corresponding data stored in the memory device; andtransmitting, to the memory device, the portion of the first set of data that has been modified.19. The method of claim 16, wherein the write command for the indicator specifies a location within the memory device for storing the indicator.20. 
The method of claim 16, further comprising: determining that the first set of data is unmodified relative to corresponding data stored in the memory device; and transmitting the write command for the indicator independent of transmitting a write command for the first set of data based at least in part on determining that the first set of data is unmodified compared to the corresponding data.
PREFETCH SIGNALING IN MEMORY SYSTEM OR SUB-SYSTEM

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 15/975,617 by Hasbun et al., entitled "Prefetch Signaling in Memory System or Sub-System," filed May 9, 2018, and U.S. Patent Application No. 16/116,533 by Hasbun et al., entitled "Prefetch Signaling in Memory System or Sub-System," filed August 29, 2018, each of which is assigned to the assignee hereof and each of which is expressly incorporated by reference in its entirety herein.

BACKGROUND

[0002] The following relates generally to memory systems or sub-systems and more specifically to prefetch signaling in a memory system or sub-system.

[0003] A memory system may include various kinds of memory devices and controllers, which may be coupled via one or more buses to manage information in numerous electronic devices such as computers, wireless communication devices, internet of things devices, cameras, digital displays, and the like. Memory devices are widely used to store information in such electronic devices. Information may be stored in a memory device by programming different states of one or more memory cells within the memory device. For example, a binary memory cell may store one of two states, often denoted as a logic "1" or a logic "0." Some memory cells may be able to store more than two states.

[0004] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory cells may maintain their stored logic state for extended periods of time even in the absence of an external power source.
Volatile memory cells, e.g., DRAM cells, may lose their stored logic state over time unless they are periodically refreshed by an external power source.

[0005] Improving memory systems, generally, may include reducing system power consumption, increasing memory system capacity, improving read/write speeds, providing non-volatility by use of persistent main memory, or reducing manufacturing costs at a certain performance point, among other metrics.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 shows a diagram of a system including a memory system or sub-system that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

[0007] FIG. 2 illustrates an exemplary memory system or sub-system that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

[0008] FIG. 3 illustrates an exemplary data structure and state diagram that support prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

[0009] FIGs. 4A and 4B illustrate examples of timing diagrams that support prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

[0010] FIGs. 5 through 6 show block diagrams of a device that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

[0011] FIGs. 7 through 9 show flowcharts illustrating a method or methods for prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure.

DETAILED DESCRIPTION

[0012] A memory system may include one or more memory devices as a main memory for a host (e.g., a system on chip (SoC) or processor). In some examples, a memory device may include an array of non-volatile memory cells (e.g., FeRAM cells).
The non-volatile memory array, when included as a main memory in a memory system, may provide benefits (e.g., relative to a volatile memory array) such as non-volatility, higher capacity, less power consumption, or variable page size. In the context of a memory device, a page size may refer to a size of data handled at various interfaces. Different memory device types may have different page sizes, and the page size of an individual memory device may be variable or non-variable.

[0013] In some cases, one or more aspects of the non-volatile memory array may lack direct compatibility with corresponding aspects of the host, e.g., different latencies associated with access operations (e.g., read or write operations) or different page sizes. As such, the memory system may further include an interface controller to perform or manage various interactions between the host and the memory device. The memory system may also include additional memory elements (e.g., a buffer, a virtual memory bank) that further facilitate interactions between the host and the memory device. In some cases, the memory device may have a local memory controller (e.g., local to the memory device) that may, in conjunction with the interface controller, perform various operations associated with the array of non-volatile memory cells.

[0014] An interface controller of a memory system, while operating with an SoC/processor, may prefetch a set of data from a memory device (e.g., a main memory). In some cases, the interface controller may anticipate that the SoC/processor is likely to access a certain set of data. For example, the interface controller may determine to prefetch a set of data based on characteristics of a currently on-going operation (e.g., the SoC/processor accessing a stream of data for a graphics application).
In other examples, the interface controller may determine to prefetch a set of data based on the speed of a bus on which the SoC/processor operates (e.g., a high bus speed processing a large volume of data with a low latency). Prefetching data from the memory device (and storing the data in a buffer or a virtual memory bank) may facilitate interactions between the SoC/processor and the memory device despite one or more incompatible aspects of the SoC/processor and the memory device (e.g., different access speeds, different page sizes).

[0015] For example, by prefetching data from the memory device and making the prefetched data available in the buffer or the virtual memory bank of the memory system, the interface controller may provide the prefetched data to the SoC/processor while mitigating the impact of one or more incompatible aspects of the SoC/processor and the memory device.

[0016] In some cases, the interface controller may prefetch data from the memory device in order to satisfy overall performance requirements (e.g., power consumption, read latency) of the memory system. For example, a size of data to be prefetched from the memory device by the interface controller may depend on an operation mode of the memory system. The size of data to be prefetched may be referred to as a prefetch size. For example, in a power conservation mode, the interface controller may prefetch a minimum size of data from the memory device.

[0017] Prefetching a minimum size of data may minimize power consumption for the memory system but may result in additional time delay (e.g., read latency) from the perspective of the SoC/processor. In a high performance mode, on the other hand, the interface controller may prefetch a maximum size of data from the memory device although only a portion of the prefetched data may be useful for the SoC/processor.
Prefetching a maximum size of data may minimize time delay (e.g., read latency) from the perspective of the SoC/processor but may result in increased power consumption for the memory system.

[0018] The interface controller may determine and preconfigure the prefetch size associated with a set of data based at least in part on an access pattern for the set of data, which may be based on access operations by the SoC/processor while the set of data is in a buffer. For example, the SoC/processor may access a first set of data (e.g., 64 bytes) as a part of a second set of data (e.g., 256 bytes). In other examples, the SoC/processor may access a first set of data (e.g., 64 bytes) immediately after or before accessing a second set of data (e.g., 192 bytes).

[0019] In some cases, the interface controller may determine a prefetch size associated with a set of data based on various criteria including an access pattern by the SoC/processor in a most recent access operation, a history of access patterns by the SoC/processor in a number of past access operations, a specific prefetch size specified by the SoC/processor, an operation mode of the memory system, a bus configuration between the SoC/processor and the memory system, or any combination thereof. When the interface controller evicts the set of data from the buffer, the interface controller may determine the prefetch size for the set of data and store the determined prefetch size in association with the set of data in the memory device that stores the set of data.
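The eviction-time determination described above can be sketched as follows. This is an illustrative model only, not the claimed implementation: the function name, the mode names, and the set of supported prefetch sizes are hypothetical (the 64-, 192-, and 256-byte figures echo the examples in the text; 128 bytes is an added assumption).

```python
# Illustrative sketch: choosing a prefetch size for a set of data at
# buffer-eviction time, based on an operation mode and a history of
# recent access sizes by the SoC/processor.

def choose_prefetch_size(access_history, mode="balanced"):
    """Pick a prefetch size (bytes) for data being evicted from the buffer.

    access_history: byte counts the host actually consumed in recent
    accesses to this data while it was in the buffer.
    """
    if mode == "power_saving":          # minimize power: smallest prefetch
        return 64
    if mode == "high_performance":      # minimize latency: largest prefetch
        return 256
    if not access_history:
        return 64                       # no history: default to the minimum
    # Balanced mode: prefetch enough to cover the typical recent access,
    # rounded up to a supported size.
    typical = max(access_history[-4:])  # look at the last few accesses
    for size in (64, 128, 192, 256):
        if typical <= size:
            return size
    return 256

assert choose_prefetch_size([64, 192, 192]) == 192
assert choose_prefetch_size([], mode="power_saving") == 64
```

The chosen value would then be written to the memory device alongside the evicted data, per claim 16 above.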
In some cases, the interface controller may store the prefetch size as an indicator associated with the set of data.

[0020] The memory device (e.g., a local memory controller of the memory device), upon receiving a read command from the interface controller requesting a set of data (e.g., when the interface controller prefetches the set of data), may identify a prefetch size based on the indicator that has been stored in the memory device in association with the set of data. The indicator may be referred to as a prefetch (PF) counter. The PF counter may comprise one or more memory bits preconfigured to indicate a size of data to be prefetched (e.g., a prefetch size). The memory device may identify the prefetch size for the set of data by reading the PF counter and may transmit to the interface controller an amount of data pursuant to the indicated prefetch size (e.g., the set of data subject to the read command received by the memory device from the interface controller plus any additional data necessary to satisfy the prefetch size). The memory device may also transmit a signal indicating the prefetch size (e.g., a signal indicative of the total amount of data being transmitted to the interface controller) to the interface controller, which may be referred to as a prefetch indicator signal.

[0021] When storing a value of a PF counter (e.g., a prefetch size of data) in the memory device (e.g., in the non-volatile memory array), the interface controller may also designate a group of memory cells in the memory device for storing the value of the PF counter. For example, the interface controller may designate a group of memory cells that exhibit a faster access speed than other memory cells of the memory device (e.g., other memory cells in the memory device that store the data associated with the PF counter), which may increase the speed with which the memory device may identify a prefetch size for an associated set of data.
In turn, increasing the speed with which the memory device may identify a prefetch size may facilitate the memory device determining the prefetch size and transmitting a signal related to the prefetch size (e.g., a prefetch indicator signal) to the interface controller while (e.g., concurrently with) transmitting the requested data to the interface controller.

[0022] The interface controller may dynamically update the value of the PF counter stored in the memory device when, for example, the interface controller determines that the SoC/processor has established a different access pattern to the data. In some cases, where the interface controller determines that the data has not been modified by the SoC/processor while present in a buffer, the interface controller may update the value of the PF counter without writing the data back to the memory device, e.g., when evicting data from the buffer, the interface controller may write to the memory device an updated value of the PF counter along with modified aspects of the associated set of data, if any.

[0023] When the interface controller prefetches data from the memory device (e.g., main memory), the interface controller may transmit a read command for a first size of data (e.g., 64 bytes). The memory device, upon receiving the read command, may identify a prefetch size for the requested data by accessing the PF counter associated with the requested data. In some cases, the prefetch size indicated by the PF counter (e.g., 64 bytes) may be identical to the first size of data. In other cases, the prefetch size indicated by the PF counter (e.g., 192 bytes) may be different from the first size of data.

[0024] The memory device (e.g., a local memory controller of the memory device) may transmit an amount of data pursuant to the prefetch size (e.g., 64 bytes or 192 bytes as identified from the PF counter) to the interface controller.
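The device-side read path just described can be modeled with a short sketch. All names here are hypothetical, and the two-bit PF-counter encoding (00 → 64 bytes, 01 → 128, 10 → 192, 11 → 256) is an assumption chosen to match the 64- and 192-byte examples in the text, not an encoding the text specifies.

```python
# Hypothetical model of the read path: on a read command, the device reads
# the PF counter stored with the data, returns data per the indicated
# prefetch size, and reports that size back (the prefetch indicator signal).

PF_ENCODING = {0b00: 64, 0b01: 128, 0b10: 192, 0b11: 256}

class MemoryDeviceModel:
    def __init__(self):
        self.array = {}        # address -> stored bytes
        self.pf_counters = {}  # address -> 2-bit PF counter value

    def write(self, addr, data, pf_counter):
        self.array[addr] = data
        self.pf_counters[addr] = pf_counter

    def read(self, addr, requested_bytes):
        """Return (data, prefetch_indicator): data sized per the PF counter,
        plus an indicator of the total size being transmitted."""
        prefetch_size = PF_ENCODING[self.pf_counters.get(addr, 0b00)]
        total = max(prefetch_size, requested_bytes)
        return self.array[addr][:total], total

dev = MemoryDeviceModel()
dev.write(0x1000, bytes(256), pf_counter=0b10)  # PF counter says 192 bytes
data, indicator = dev.read(0x1000, requested_bytes=64)
assert indicator == 192 and len(data) == 192
```

In this sketch the controller requests 64 bytes but receives 192, and the indicator tells it to keep monitoring the data pins for the extra bytes.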
The memory device may also transmit to the interface controller a signal indicating the prefetch size (e.g., a prefetch indicator signal). In some cases, the memory device may transmit the requested data to the interface controller via one or more data pins and transmit the prefetch indicator signal via one or more other pins. For example, the memory device may transmit the requested data to the interface controller via one or more data pins while concurrently transmitting the prefetch indicator signal via the one or more other pins.

[0025] A prefetch indicator signal may inform the interface controller whether the size of data being transmitted to the interface controller by the memory device (e.g., a local memory controller of the memory device) in response to a read command (e.g., via data pins) is equal to or greater than the size of data requested by the read command. The interface controller, based on receiving the prefetch indicator signal, may determine a next operation (e.g., continuing to monitor the data pins to receive more data than requested) based on the prefetch size information included in the prefetch indicator signal. In this manner, the management of prefetch operations may be simplified from the interface controller's perspective, as the memory device may identify a prefetch size associated with the requested data based on the PF counter (e.g., the prefetch size previously determined by the interface controller) and inform the interface controller while sending the requested data.

[0026] For example, by determining the prefetch size upon evicting a set of data from a buffer and causing the memory device to store the prefetch size in association with the set of data, the interface controller may not have to determine the prefetch size when subsequently initiating a prefetch operation for the set of data; the interface controller may instead be informed by the memory device of the prefetch size previously determined by the interface controller.
This may provide benefits, such as latency or efficiency benefits, at the time of a prefetch operation by the interface controller, which may in some cases be a latency-sensitive time relative to other times (e.g., when eviction from the buffer may occur).

[0027] The memory device (e.g., a local memory controller of the memory device) may transmit the prefetch indicator signal to the interface controller using a pin that is compatible with a low power double data rate (LPDDR) specification in some cases. For example, the memory device may use a data mask/inversion (DMI) pin or a link error correction code (ECC) parity pin to transmit the prefetch indicator signal to the interface controller. A separate pin of the memory device (e.g., a pin different than data pins or LPDDR-specified pins) may be configured for transmitting command or control information to the interface controller in order to transmit the prefetch indicator signal to the interface controller. In some cases, the separate pin may be referred to as a response (RSP) pin.

[0028] The memory device (e.g., a local memory controller of the memory device) may determine that retrieving the data in accordance with the prefetch size indicated by the PF counter requires activating an additional set of memory cells beyond those necessary to retrieve only the data requested by the interface controller. For example, the memory device may determine that retrieving the data in accordance with the prefetch size indicated by the PF counter requires activating one or more subpages of the non-volatile memory array beyond the subpage(s) that include the requested data.

[0029] As such, the memory device (e.g., a local memory controller of the memory device) may determine that an additional amount of time will be required for the memory device to transmit the prefetched data in its entirety to the interface controller.
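The subpage-activation determination above amounts to simple arithmetic, sketched below under assumed numbers: the 64-byte subpage size and the function names are illustrative and not taken from the text.

```python
# Illustrative arithmetic: given a requested size, a prefetch size from the
# PF counter, and a subpage size, work out how many subpages must be
# activated beyond those covering the requested data alone. A nonzero
# result corresponds to the case in which additional time is needed to
# transmit the prefetched data in full.
import math

def subpages_needed(size_bytes, subpage_bytes=64):
    # Number of subpages that must be activated to cover size_bytes.
    return math.ceil(size_bytes / subpage_bytes)

def extra_subpages_for_prefetch(requested_bytes, prefetch_bytes,
                                subpage_bytes=64):
    """Subpages to activate beyond those covering the requested data."""
    return max(0, subpages_needed(prefetch_bytes, subpage_bytes)
                  - subpages_needed(requested_bytes, subpage_bytes))

# A request for 64 bytes with a PF counter indicating 192 bytes: two extra
# 64-byte subpages must be activated.
assert extra_subpages_for_prefetch(64, 192) == 2
assert extra_subpages_for_prefetch(64, 64) == 0
```

When the result is nonzero, the device would signal the delay to the interface controller, as described next.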
The memory device may transmit to the interface controller a wait signal indicating a time delay such that the interface controller may be informed of the additional time associated with transmitting the prefetched data in its entirety (e.g., with activating the one or more subpages). In some cases, the interface controller may transmit a second read command after the indicated time delay based at least in part on receiving the wait signal, the second read command for any unreceived prefetch data associated with the set of data subject to the initial read command by the interface controller.

[0030] Features of the disclosure introduced above are further described below at an exemplary system level in the context of FIG. 1. Specific examples of memory systems and operations are then described in the context of FIGs. 2 through 4. These and other features of the disclosure are further illustrated by and described with reference to the apparatus diagrams of FIGs. 5 and 6, which describe various components related to controllers, as well as the flowcharts of FIGs. 7 through 9, which relate to operations of prefetch signaling in a memory system or sub-system.

[0031] FIG. 1 shows a diagram of a system 100 including a memory system or sub-system that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. System 100 may include a device 105. The device 105 may include an interface controller 120, an SoC or processor 130, and various memory devices 170, 175, and 180. Device 105 may also include an input/output controller 135, a basic input/output system (BIOS) component 140, a board support package (BSP) 145, peripheral component(s) 150, and a direct memory access controller (DMAC) 155. The components of device 105 may be in electronic communication with one another through a bus 110.

[0032] Device 105 may be a computing device, electronic device, mobile computing device, or wireless device.
Device 105 may be a portable electronic device. For example, device 105 may be a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, or the like. In some examples, device 105 may be configured for bi-directional wireless communication via a base station or access point. Device 105 may be capable of machine-type communication (MTC), machine-to-machine (M2M) communication, or device-to-device (D2D) communication. Device 105 may be referred to as a user equipment (UE), station (STA), mobile terminal, or the like.

[0033] Interface controller 120 may be configured to interface with SoC/processor 130. Interface controller 120 may also be configured to interface with various memory devices 170, 175, 180, or any combination thereof. In some cases, interface controller 120 may be configured to perform or to cause memory devices 170, 175, 180 to perform one or more functions ascribed herein to memory devices 170, 175, 180 (e.g., ascribed to a local memory controller of memory device 175 or 180).

[0034] SoC/processor 130 may be configured to operate with various memory devices 170, 175, 180, or any combination thereof, either directly or via interface controller 120. SoC/processor 130 may also be referred to as a host and may include a host controller. A host may refer to a computing device coupled with other devices through any means of electronic communication (e.g., a bus, a link, a channel, or a wireless network). In the context of a memory system or sub-system, a host may be a computing device (e.g., central processing unit, graphics processing unit, microprocessor, application processor, baseband processor) coupled with one or more memory devices that collectively function as a main memory for the host.
In some cases, SoC/processor 130 may perform some or all of the functions of interface controller 120 described herein.

[0035] SoC/processor 130 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or it may be a combination of these types of components. In some cases, SoC/processor 130 may include a baseband processor that manages radio functions of device 105 in a wireless network environment. In some examples, a separate chip (e.g., a separate chip other than the chip including SoC/processor 130) may include the baseband processor and be coupled with bus 110. The baseband processor may adjust its operational mode as a part of an overall operational scheme of device 105. For example, the baseband processor may change its data transfer rate (e.g., data rate for transmitting or receiving a stream of data over a wireless network) when a memory component (e.g., memory device 180) transmits an indication of a time delay associated with an access command from SoC/processor 130.

[0036] Memory devices 170 may each include an array or arrays of memory cells to store digital information. Memory devices 170 may be configured to each operate with SoC/processor 130 and/or interface controller 120. In some examples, memory devices 170 may be configured to provide buffer memory for a memory bank for SoC/processor 130 or interface controller 120. In some cases, memory devices 170 may include an array of non-volatile memory cells. Device 105 may include any number of memory devices 170.

[0037] Memory device 175 may include an array of memory cells and a local memory controller configured to operate with the array of memory cells. In some cases, memory devices 175 may include an array of non-volatile memory cells.
The array of memory cells included in memory device 175 may be structured in two or more tiers each having different performance capabilities. The local memory controller of memory device 175 may also be configured to operate with SoC/processor 130 or interface controller 120. First-tier memory cells may be 3D XPoint™ memory, which may provide a high number of input/output operations per second (IOPS) with a short response time to handle various workloads.

[0038] Second-tier memory cells may be three-dimensional Not-AND (NAND) memory, which may provide high capacity for data storage at a relatively lower cost than the first-tier memory cells. The local memory controller of memory device 175 may be configured to facilitate the efficient operation of memory cells within memory device 175, which may have different characteristics among memory cells in the two or more tiers, with SoC/processor 130. Memory device 175 may include other types or combinations of memory arrays. In some examples, one or more memory devices 175 may be present in device 105.

[0039] Memory devices 180 may include one or more arrays of memory cells and a local memory controller configured to operate with the one or more arrays of memory cells. The local memory controller of memory device 180 may also be configured to operate with SoC/processor 130 or interface controller 120. A memory device 180 may include non-volatile memory cells, volatile memory cells, or a combination of both non-volatile and volatile memory cells. A non-volatile memory cell (e.g., an FeRAM memory cell) may maintain its stored logic state for an extended period of time in the absence of an external power source, thereby reducing or eliminating requirements to perform refresh operations (e.g., refresh operations such as those associated with DRAM cells).
In some examples, one or more memory devices 180 may be present in device 105.

[0040] In some examples, a memory device (e.g., a local memory controller of memory device 175 or 180) may transmit an indicator of a size of prefetch data associated with data requested by interface controller 120, which may be referred to as a prefetch indicator signal. The size of prefetch data may be equal to or different than the size of the requested data subject to a read command by the interface controller 120. The memory device may transmit a prefetch indicator signal using a pin compatible with an industry standard or specification (e.g., a JEDEC LPDDR specification). In some cases, a separate pin (e.g., RSP pin) of the memory device may be configured for transmitting command or control information to the interface controller 120, and the memory device may use the separate pin to transmit the prefetch indicator signal to the interface controller 120. In other words, the memory device may identify an indicator of a size of prefetch data associated with the requested data and inform the interface controller 120 of the size of prefetch data while (e.g., concurrently with) transmitting the requested data to the interface controller 120.

[0041] The inclusion of an array of non-volatile memory cells (e.g., FeRAM memory cells) in a memory device (e.g., memory devices 170, 175, or 180) may provide various benefits (e.g., efficiency benefits) for device 105. Such benefits may include near-zero standby power (which may increase battery life), instant-on operation following a standby or un-powered (e.g., "off") state, and/or high areal memory density with low system power consumption relative to an array of volatile memory cells. Such features of a non-volatile memory system or sub-system may, for example, support the use of computationally intensive (e.g., desktop applications) operations or software in mobile environments.
In some cases, device 105 may include multiple kinds of non-volatile memory arrays employing different non-volatile memory technologies, such as one or more FeRAM arrays along with one or more non-volatile memory arrays using other memory technologies. Further, the benefits described herein are merely exemplary, and one of ordinary skill in the art may appreciate further benefits.

[0042] In some cases, a memory device (e.g., memory devices 170, 175, or 180) may use a different page size than SoC/processor 130. In the context of a memory device, a page size may refer to a size of data handled at various interfaces, and different memory device types may have different page sizes. In some examples, SoC/processor 130 may use a DRAM page size (e.g., a page size in accord with one or more JEDEC low power double data rate (LPDDR) specifications), and a memory device within device 105 may include an array of non-volatile memory cells that are configured to provide a different page size (e.g., a page size smaller than a typical DRAM page size). In some examples, a memory device may support a variable page size, e.g., a memory device may include an array of non-volatile memory cells (e.g., an FeRAM array) that supports multiple page sizes, and the page size used may vary from one access operation to another. In some examples, the local memory controller of a memory device (e.g., memory device 175 or 180) may be configured to handle a variable page size for a memory array within the memory device. For example, in some cases, a subset of non-volatile memory cells connected to an activated word line may be sensed simultaneously without having to sense all non-volatile memory cells connected to the activated word line, thereby supporting variable page-size operations within a memory device.
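A minimal sketch of the variable page-size behavior just described, assuming (hypothetically) a 2048-byte fixed page and 64-byte sensing granularity; neither number comes from the text, and the function name is invented for illustration.

```python
# Illustrative comparison: with a fixed page size, every access senses all
# cells on the activated word line; with a variable page size, only the
# subset of cells the access touches is sensed.

def bytes_sensed(access_bytes, page_bytes=2048, subpage_bytes=64,
                 variable_page=False):
    """Bytes' worth of cells sensed on the activated word line per access."""
    if variable_page:
        # Sense only the subset of cells the access touches, rounded up to
        # the sensing granularity.
        subpages = -(-access_bytes // subpage_bytes)  # ceiling division
        return min(subpages * subpage_bytes, page_bytes)
    return page_bytes  # fixed page: all cells on the line are sensed

# A 100-byte access: a variable-page device senses two 64-byte subpages
# (128 bytes) instead of the full 2048-byte page.
assert bytes_sensed(100, variable_page=True) == 128
assert bytes_sensed(100, variable_page=False) == 2048
```

The gap between the two results is a rough proxy for the energy saving the text attributes to smaller, variable page sizes.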
In some cases, the page size for an array of non-volatile memory cells may vary dynamically depending on the nature of an access command and a characteristic (e.g., size or associated latency) of the associated data (e.g., data subject to the access command). A smaller page size may provide benefits (e.g., efficiency benefits) as a smaller number of memory cells may be activated in connection with a given access operation. The use of variable page size may provide further benefits to device 105, such as configurable and efficient energy usage: reducing the page size when an operation is associated with a small change in information, while increasing the page size to support a high-performance operation when desired.

[0043] DMAC 155 may support direct memory access (e.g., read or write) operations by SoC/processor 130 with respect to memory devices 170, 175, or 180. For example, DMAC 155 may support access by SoC/processor 130 of a memory device 170, 175, or 180 without the involvement or operation of interface controller 120.

[0044] Peripheral component(s) 150 may include any input or output device, or an interface for any such device, that may be integrated into device 105. Examples of such peripheral component(s) 150 may include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or peripheral card slots, such as peripheral component interconnect (PCI) or accelerated graphics port (AGP) slots. In some cases, peripheral component(s) 150 may include a component (e.g., a control component) that determines an operational mode of device 105 (e.g., a power usage mode, a clock frequency mode). In some cases, the component may include a power-management integrated circuit (PMIC) that provides power to device 105.
For example, the component may be an operation mode manager for the device 105 that determines a level of power usage associated with some aspects of the device 105 operations. For example, the operation mode manager may change a power usage level for the device 105 (e.g., by activating or deactivating, or adjusting an operation mode of, one or more aspects of device 105) when a memory component (e.g., memory device 180) transmits an indication of a time delay associated with an access command from SoC/processor 130. In some cases, a PMIC may increase or decrease voltage or current supply levels to device 105 (e.g., to interface controller 120, memory devices 170, 175, or 180) to support an increase or decrease in a bandwidth requirement of device 105. In some cases, the component may receive signals associated with a change in operating clock frequency of interface controller 120. Peripheral component(s) 150 may also include other components or interfaces for other components understood by those skilled in the art as peripherals.

[0045] BIOS component 140 or board support package (BSP) 145 may be software components that include a basic input/output system (BIOS) operated as firmware, which may initialize and run various hardware components of system 100. BIOS component 140 or BSP 145 may also manage data flow between SoC/processor 130 and the various components, e.g., peripheral component(s) 150, input/output controller 135, etc. BIOS component 140 or BSP 145 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory. In some cases, BIOS component 140 and BSP 145 may be combined as a single component.

[0046] Input/output controller 135 may manage data communication between SoC/processor 130 and other devices, including peripheral component(s) 150, input devices 160, or output devices 165. Input/output controller 135 may also manage peripherals that are not integrated into device 105.
In some cases, input/output controller 135 may include a physical connection or port to the external peripheral.

[0047] Input device 160 may represent a device or signal external to device 105 that provides input to device 105 or its components. Input device 160 may include a user interface or an interface with or between other devices (not shown in FIG. 1). In some cases, input device 160 may be a peripheral that interfaces with device 105 via peripheral component(s) 150 or is managed by input/output controller 135.

[0048] Output device 165 may represent a device or signal external to device 105 that is configured to receive output from device 105 or any of its components. For example, output device 165 may include a display, audio speakers, a printing device, or another processor on a printed circuit board, etc. In some cases, output device 165 may be a peripheral that interfaces with device 105 via peripheral component(s) 150 or is managed by input/output controller 135.

[0049] The components of device 105 may be made up of general purpose or specialized circuitry designed to carry out their respective functions. This may include various circuit elements, for example, conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements configured to carry out the functions described herein.

[0050] FIG. 2 illustrates an exemplary system that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. System 200 may include aspects of system 100 as described with reference to FIG. 1 and may include a device 210. Device 210 may include aspects of device 105 as described with reference to FIG. 1. Device 210 may include memory system or sub-system 220, SoC/processor 250, and storage 260. SoC/processor 250 may be an example of an SoC/processor 130 as described with reference to FIG. 1.
Memory sub-system 220 may include aspects of a memory device 180 as described with reference to FIG. 1 as well as other aspects of a device 105 as described with reference to FIG. 1. Storage 260 may be an example of a memory device 175 as described with reference to FIG. 1.

[0051] SoC/processor 250 (e.g., a host) may be configured to operate with storage 260 via a bus 280 and with memory sub-system 220 via buses 270 and 275. In some examples, bus 280 may be configured to support peripheral component interconnect express (PCIe) signaling. Bus 270 may be configured to support LPDDR command and address (CA) signaling, and bus 275 may be configured to support LPDDR input/output (I/O) signaling. In some examples, a local memory array may be disposed on a same substrate as SoC/processor 250 and may be configured to function as a cache memory 255 for SoC/processor 250.

[0052] Memory sub-system 220 may include non-volatile memory 225 and interface controller 230. Memory sub-system 220 and non-volatile memory 225 may each be referred to as a memory device or memory devices. Non-volatile memory 225 may be an example of a memory device (e.g., memory devices 170, 175, or 180) as described with reference to FIG. 1. Interface controller 230 may be an example of an interface controller 120 as described with reference to FIG. 1. Interface controller 230 may be configured to operate with SoC/processor 250 via buses 270 and 275 pursuant to one or more LPDDR specifications (e.g., page size, timing requirements). Interface controller 230 may include virtual memory bank 235, which may be an example of a memory device 170 as described with reference to FIG. 1. In some examples, virtual memory bank 235 may include DRAM memory cells and may be configured to operate pursuant to an LPDDR specification. Virtual memory bank 235 may be disposed on a same substrate as interface controller 230.
In addition, interface controller 230 may be configured to operate with non-volatile memory 225 via buses 271 and 276. In some cases, interface controller 230 may be configured to perform some functions ascribed herein to non-volatile memory 225 (e.g., to a local memory controller of non-volatile memory 225).

[0053] In some examples, memory sub-system 220 may further include buffer 240. Buffer 240 may include DRAM memory cells. Buffer 240 may be an example of a memory device 170 or a memory device 180 as described with reference to FIG. 1. In addition, interface controller 230 may be configured to operate with buffer 240 via buses 272 and 277. In some examples, bus 272 may be a buffer CA bus. Bus 277 may be an interface (IF) buffer I/O bus. Interface controller 230 and buses 272 and 277 may be compatible with DRAM protocols. For example, interface controller 230 and buses 272 and 277 may utilize LPDDR page sizes and timings. SoC/processor 250 may be configured to directly operate with buffer 240 via bus 275. In some examples, buffer 240 may be configured to have a page size compatible with bus 275, which may support direct access of buffer 240 by SoC/processor 250.

[0054] Buffer 240 may be configured to operate as a logical augmentation of cache memory 255 within SoC/processor 250. The capacity of buffer 240 may be on the order of 256 megabytes. The capacity of buffer 240 may be based at least in part on the size of cache memory 255 in SoC/processor 250. For example, the capacity of buffer 240 may be relatively large when the size of cache memory 255 is relatively small, or vice versa. In some cases, buffer 240 may have a relatively small capacity, which may facilitate improved (e.g., faster) performance of memory sub-system 220 relative to a DRAM device of a larger capacity due to potentially smaller parasitic components, e.g., inductance associated with metal lines.
A smaller capacity of buffer 240 may also provide benefits in terms of reducing system power consumption associated with periodic refreshing operations.

[0055] Memory sub-system 220 may be implemented in various configurations, including one-chip versions and multi-chip versions. A one-chip version may include interface controller 230, virtual memory bank 235, and non-volatile memory 225 on a single chip. In some examples, buffer 240 may also be included in the single chip. In contrast, a multi-chip version may include one or more constituents of memory sub-system 220, including interface controller 230, virtual memory bank 235, non-volatile memory 225, and buffer 240, in a chip that is separate from a chip that includes one or more other constituents of memory sub-system 220. For example, in one multi-chip version, respective separate chips may include each of interface controller 230, virtual memory bank 235, and non-volatile memory 225. As another example, a multi-chip version may include one chip that includes both virtual memory bank 235 and interface controller 230 and a separate chip that includes buffer 240. Additionally, a separate chip may include non-volatile memory 225.

[0056] Another example of a multi-chip version may include one chip that includes both buffer 240 and virtual memory bank 235. Additionally, a separate chip may include both interface controller 230 and non-volatile memory 225, or respective separate chips may include each of interface controller 230 and non-volatile memory 225. In yet another example of a multi-chip version, a single chip may include non-volatile memory 225 and buffer 240. Additionally, a separate chip may include both interface controller 230 and virtual memory bank 235, or respective separate chips may include each of interface controller 230 and virtual memory bank 235. Non-volatile memory 225 may include both an array of non-volatile memory cells and an array of DRAM cells.
In some cases of a multi-chip version, interface controller 230, virtual memory bank 235, and buffer 240 may be disposed on a single chip and non-volatile memory 225 on a separate chip.

[0057] In some examples, non-volatile memory 225 may include an array of non-volatile memory cells (e.g., FeRAM memory cells). The non-volatile array included in non-volatile memory 225 may be configured to support variable page sizes, which may in some cases differ from a page size associated with SoC/processor 250. Further, non-volatile memory 225 may be configured to determine a variable page size for non-volatile memory 225. Non-volatile memory 225 may be referred to as a non-volatile near memory to SoC/processor 250 (e.g., in comparison to storage 260). In the context of a memory system, a near memory may refer to a memory component placed near SoC/processor 250, logically and/or physically, to provide a faster access speed than other memory components. Configuring non-volatile memory 225 as a near memory for SoC/processor 250 may, for example, limit or avoid overhead that may be associated with SoC/processor 250 retrieving data from storage 260. SoC/processor 250 may store critical information in non-volatile memory 225 upon occurrence of an unexpected power interruption (e.g., instead of accessing storage 260, as accessing storage 260 may be associated with an undesired delay). In some cases, non-volatile memory 225 may include a local memory controller (not shown), which may facilitate various operations in conjunction with interface controller 230 or perform some functions ascribed herein to non-volatile memory 225.

[0058] Interface controller 230 may be configured to operate with non-volatile memory 225 via buses 271 and 276. In some examples, bus 271 may be an FeRAM CA bus, and bus 276 may be an FeRAM interface (IF) bus. Interface controller 230 and buses 271 and 276 may be compatible with the page size of non-volatile memory 225.
In some examples, bus 280 may be configured to facilitate data transfer between buffer 240 and non-volatile memory 225. In some examples, bus 290 may be configured to facilitate data transfer between non-volatile memory 225 and virtual memory bank 235.

[0059] Interface controller 230 may support low latency or reduced power operation (e.g., from the perspective of SoC/processor 250) by leveraging virtual memory bank 235 or buffer 240. For example, upon receiving a read command from SoC/processor 250, interface controller 230 may attempt to retrieve requested data from virtual memory bank 235 or buffer 240 for transmission to SoC/processor 250. If data subject to the read command is not present in virtual memory bank 235 or buffer 240, interface controller 230 may retrieve data from non-volatile memory 225 to store the data in virtual memory bank 235 and also (e.g., concurrently) send the data to SoC/processor 250.

[0060] Interface controller 230 may manage operations of virtual memory bank 235. For example, interface controller 230 may use a set of flags located in virtual memory bank 235 to identify portions of virtual memory bank 235 storing valid data from non-volatile memory 225. As another example, upon receiving a write command from SoC/processor 250, interface controller 230 may store data at virtual memory bank 235.

[0061] Another set of flags located in virtual memory bank 235 may indicate which portions of virtual memory bank 235 store valid data that are modified from corresponding contents of non-volatile memory 225. Valid data stored at virtual memory bank 235 may include data that has been retrieved from non-volatile memory 225 pursuant to a read command from SoC/processor 250 or data that has been received from SoC/processor 250 as a part of a write command. In some cases, invalid data present at virtual memory bank 235 may include a set of filler data (e.g., a sequence of "0" or "1" without representing meaningful information).
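The read path and validity tracking described above can be sketched as follows. This is a simplified, hypothetical illustration: the class and method names do not appear in the specification, and an actual device would implement this logic in controller circuitry rather than software.

```python
# Illustrative sketch of the virtual-bank read/write path and the two
# sets of flags (valid, modified). All names are hypothetical.

class InterfaceController:
    def __init__(self, non_volatile_memory):
        self.nvm = non_volatile_memory     # address -> data (stands in for NVM)
        self.virtual_bank = {}             # address -> data
        self.valid = set()                 # flags: entries holding valid data
        self.modified = set()              # flags: entries changed vs. NVM contents

    def read(self, address):
        """Serve a read from the virtual bank if possible, else fetch from NVM."""
        if address in self.valid:
            return self.virtual_bank[address]   # hit: no NVM access needed
        data = self.nvm[address]                # miss: retrieve from NVM,
        self.virtual_bank[address] = data       # store in the virtual bank,
        self.valid.add(address)                 # mark the entry valid,
        return data                             # and send the data onward

    def write(self, address, data):
        """Store write data in the virtual bank and flag it as modified."""
        self.virtual_bank[address] = data
        self.valid.add(address)
        self.modified.add(address)


ctrl = InterfaceController({0x10: b"hello"})
assert ctrl.read(0x10) == b"hello"   # first read fetches from NVM
assert 0x10 in ctrl.valid            # now cached in the virtual bank
ctrl.write(0x20, b"new")
assert 0x20 in ctrl.modified         # write data flagged as modified
```

The `valid` and `modified` sets correspond to the two sets of flags described above: the first marks portions of the virtual bank holding valid data from non-volatile memory, and the second marks portions the SoC/processor has modified relative to the non-volatile contents.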
Flags indicating which portions of virtual memory bank 235 store valid data or modified data may support interface controller 230 in saving only the data that has been modified from the corresponding contents in non-volatile memory 225. Furthermore, interface controller 230 may determine where to store data upon removal of the data from virtual memory bank 235 (e.g., when SoC/processor 250 no longer needs the data). Interface controller 230 may monitor and identify the contents of virtual memory bank 235.

[0062] In some cases, interface controller 230 may include a counter that records a number of access attempts by SoC/processor 250 to the contents of virtual memory bank 235 during a certain time interval. By way of example, if the counter shows that the number of access attempts by SoC/processor 250 during the time interval is less than a pre-determined threshold value, then upon removal of the data from virtual memory bank 235, interface controller 230 may store modified data (that is, data that was modified by the access attempts by SoC/processor 250) in non-volatile memory 225, as the interface controller 230 may anticipate, based on the relatively low number of prior access attempts, that SoC/processor 250 is not likely to access the data again for some duration of time.

[0063] Or, if the counter indicates that the number of access attempts by SoC/processor 250 during the time interval is equal to or larger than the pre-determined threshold value, then interface controller 230 may, upon removal of the data from virtual memory bank 235, store the data in buffer 240, as the interface controller 230 may anticipate that SoC/processor 250 is likely to access the data again soon. One skilled in the art may, in view of overall system requirements, devise various criteria (e.g., criteria including the threshold value of the counter, a clock, a value of the time interval, etc.)
for interface controller 230 to use in making such determinations.

[0064] In addition, interface controller 230 may set up a by-pass indicator based on the counter when the number of access attempts by SoC/processor 250 is less than the pre-determined threshold value in order to by-pass saving the contents of virtual memory bank 235 to buffer 240. Then, interface controller 230 may directly save the modified contents of virtual memory bank 235 to non-volatile memory 225 based on the by-pass indicator. In some cases, upon removal of the data from virtual memory bank 235, interface controller 230 may determine that the data has not been modified since it was last retrieved from non-volatile memory 225 and may, based on that determination, discard the data (e.g., not write the data to either buffer 240 or non-volatile memory 225).

[0065] Additionally, interface controller 230 may prefetch data from non-volatile memory 225 by transmitting a read command for a first set of data to local memory controller 226. The local memory controller 226 may, upon receiving the read command, identify a size of a second set of data to be prefetched (e.g., a prefetch size), which includes the first set of data. The local memory controller 226 may transmit an indicator of the prefetch size (e.g., a prefetch indicator signal) to the interface controller 230 while transmitting the second set of data in order to inform the interface controller 230 whether the prefetch size is greater than the first set of data (e.g., the second set of data may include the first set of data as well as an additional set of data).

[0066] In some cases, transmitting the entire prefetched data may be associated with a time delay (e.g., a read latency associated with activating a portion of the memory array to retrieve the additional set of data).
In such cases, the local memory controller 226 may signal the interface controller 230 to transmit a subsequent read command for a remaining portion of the second set of data after a predetermined delay.

[0067] FIG. 3 illustrates an example of a data structure 300-a and a state diagram 300-b that support prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. Data structure 300-a illustrates a non-volatile memory page 310, a first field 315, and a second field 320. In some examples, the non-volatile memory page 310 may be 128 or 256 bytes. In some examples, a memory device (e.g., non-volatile memory 225 as described with reference to FIG. 2, not shown in FIG. 3) may utilize data structure 300-a. In some examples, an interface controller (e.g., interface controller 120 or interface controller 230 as described with reference to FIGs. 1 and 2, not shown in FIG. 3) may perform or manage various operations (e.g., operations 360 through 380) of state diagram 300-b. In some cases, a local memory controller of a memory device (e.g., of non-volatile memory 225 as described with reference to FIG. 2, not shown in FIG. 3) may facilitate various operations in conjunction with the interface controller.

[0068] The non-volatile memory page 310 may include a plurality of subpages 312. An interface controller (or an interface controller in conjunction with a local memory controller of non-volatile memory, in some cases) may activate each of the plurality of subpages (e.g., 312-a) independent of other subpages (e.g., 312-b through 312-h) in order to facilitate an energy-efficient page size management. In some examples, the first field 315 and the second field 320 may be stored in a portion of a memory array that is physically located closer to the interface controller (or the local memory controller, in some cases) than the non-volatile memory page 310.
The physical proximity of the first field 315 and the second field 320 to the interface controller (or the local memory controller, in some cases) may reduce a delay time associated with activating the first field 315 or the second field 320 (e.g., a delay time to charge a word line associated with a group of memory cells) and retrieving the contents therefrom.

[0069] Thus, the portion of the memory array corresponding to the first field 315 or the second field 320 may exhibit an access speed faster than a nominal access speed, which may correspond to the access speed of other portions of the memory array corresponding to the non-volatile memory page 310. In some cases, an interface controller (e.g., interface controller 230 described with reference to FIG. 2) may specify the portion of the memory array having the faster access speed when storing the contents of the first field 315 and the second field 320 in the non-volatile memory. In some cases, a local memory controller may specify the portion of the memory array having the faster access speed when storing the contents of the first field 315 and the second field 320.

[0070] In some examples, the first field 315 may be configured to indicate (and may be updated to track) a number of times a corresponding non-volatile memory page 310 has been accessed (e.g., read or write) by an SoC/processor (e.g., SoC/processor 250 described with reference to FIG. 2). The first field 315 may be referred to as a saturating counter (SC). The first field 315 may include two bits of information, but it is to be understood that any number of bits may be used in accordance with the teachings herein.

[0071] In some examples, the second field 320 may be configured to indicate a size of data in a corresponding non-volatile memory page 310 to be retrieved upon receiving a read command.
An interface controller may determine the size of data based on an access pattern to the data made by an SoC/processor in one or more previous access operations; this size may be referred to as a prefetch size in some cases. A prefetch size may be an amount of data that is to be read in response to a read command for data included in the non-volatile memory page 310. For example, if data from the non-volatile memory page 310 is subject to a read command (e.g., a read command from the interface controller 230 accessing the non-volatile memory page 310, anticipating an access from an SoC/processor), the interface controller (or the interface controller in conjunction with a local memory controller, in some cases) may identify the associated second field 320 and may determine a prefetch size for the requested data based on the associated second field 320, where the prefetch size indicates a size of data (that includes and thus is at least as large as the requested data) to be read from the non-volatile memory 225 in response to the read request.

[0072] In some examples, logic states stored in the second field 320 may indicate a prefetch size of the corresponding non-volatile memory page 310. For example, "00" may correspond to 64 bytes, "01" may correspond to 128 bytes, "10" may correspond to 192 bytes, and "11" may correspond to 256 bytes. In such an example, if a read command requests 64 bytes of data from a non-volatile memory page 310, and the associated second field 320 is "10", then the interface controller (or the interface controller in conjunction with a local memory controller, in some cases) may identify the prefetch size for the requested data as 192 bytes and read 192 bytes of data from the non-volatile memory 225, where the 192 bytes include the requested 64 bytes. It is to be understood that the second field 320 may include any number of bits supporting any number of logic states and may indicate prefetch sizes of any size.
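The two-bit encoding of the second field 320 described above amounts to a small lookup; a minimal sketch follows, with the understanding that the function and table names are illustrative and not part of the specification.

```python
# Decode the two-bit PF counter (second field 320) into a prefetch size
# in bytes, per the example encoding above.
PF_SIZES = {0b00: 64, 0b01: 128, 0b10: 192, 0b11: 256}

def prefetch_size(pf_counter_bits):
    """Return the number of bytes to read for a given PF counter value."""
    return PF_SIZES[pf_counter_bits]

# A 64-byte read against a page whose second field is "10" expands to a
# 192-byte prefetch that includes the requested 64 bytes.
assert prefetch_size(0b10) == 192
assert prefetch_size(0b10) >= 64
```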
In some examples, the second field 320 may be referred to as a prefetch (PF) counter.

[0073] In some examples, an interface controller (e.g., interface controller 230 described with reference to FIG. 2, not shown in FIG. 3) may use a set of mode register bits to facilitate the SC and PF counter functionality of a non-volatile memory (e.g., non-volatile memory 225 described with reference to FIG. 2). Mode registers may establish various operation modes (e.g., different test modes, different read or write modes, different performance modes) of a memory device, and a set of bits associated with mode registers, which may be referred to as mode register bits, may be used to determine a particular mode of operation.

[0074] An interface controller may access the contents of the SC and PF counter using a data mask inversion (DMI) pin along with data during a read operation. In some examples, an interface controller may write the contents of the SC and PF counter with a special command sequence. For example, an interface controller may provide the contents of the SC and PF counter to registers associated with the SC and PF counter via column address pins during a write command issued to a non-volatile memory (e.g., non-volatile memory 225 described with reference to FIG. 2).

[0075] Diagram 300-b illustrates exemplary operational characteristics of a memory system or sub-system that support features and techniques as described herein. Diagram 300-b illustrates non-volatile memory 325, virtual page 335, and buffer 340. Non-volatile memory 325 may be an example of non-volatile memory 225 described with reference to FIG. 2. Virtual page 335 may be a page within virtual memory bank 235 described with reference to FIG. 2.

[0076] In some examples, virtual memory bank 235 may be a superset of multiple virtual pages 335. Buffer 340 may be an example of buffer 240 described with reference to FIG. 2. An interface controller (e.g., interface controller 230 described with reference to FIG.
2, not shown in FIG. 3) may perform or manage various operations (e.g., operations 360 through 380) associated with non-volatile memory 325, virtual page 335, and buffer 340. In some cases, an interface controller may manage an operation by requesting another entity (e.g., a local memory controller of a memory device) to perform the operation.

[0077] Operation 360 may include transmitting the contents of a non-volatile memory page 310 from non-volatile memory 325 to virtual page 335 and storing the contents in virtual page 335. The interface controller may carry out operation 360 when an SoC/processor requests data corresponding to the contents of non-volatile memory page 310 that is not present either in the virtual page 335 or the buffer 340.

[0078] Additionally, the interface controller may, as part of operation 360, update a value of the first field 315 (e.g., a value of SC) associated with the non-volatile memory page 310, in order to track a number of access events by the SoC/processor for the non-volatile memory page 310.

[0079] The interface controller may, as part of operation 360, prefetch data from non-volatile memory 325 by transmitting a read command for a first set of data. Non-volatile memory 325 (e.g., a local memory controller of non-volatile memory 325) may transmit the first set of data as requested by the interface controller during operation 360, where the first set of data is transmitted using a signal over a pin designated for transmitting data (e.g., signal 410 described with reference to FIG. 4). In some cases, non-volatile memory 325 (e.g., a local memory controller of non-volatile memory 325) may transmit the first set of data over bus 271 described with reference to FIG.
2 in response to the read command received from the interface controller.

[0080] In addition to transmitting the first set of data, non-volatile memory 325 (e.g., a local memory controller of non-volatile memory 325) may transmit an indicator of a prefetch size (e.g., a prefetch indicator signal) to the interface controller in order to inform the interface controller of the prefetch size before completing transmission of the first set of data. The prefetch size may be equal to or different from the size of the first set of data requested by the interface controller. In some cases, the prefetched data may include an additional set of data accompanying the first set of data. Non-volatile memory 325 may transmit the prefetch indicator signal (e.g., signal 415 described with reference to FIG. 4) over a pin that is compatible with an LPDDR specification (e.g., a DMI pin, a link ECC parity pin). Non-volatile memory 325 may transmit the prefetch indicator signal (e.g., signal 420 described with reference to FIG. 4) over a separate pin configured for transmitting command or control information. In some cases, the separate pin may be referred to as a response (RSP) pin. Non-volatile memory 325 may transmit such a prefetch indicator signal over bus 276 described with reference to FIG. 2 in response to a read command received from the interface controller.

[0081] The interface controller may perform operation 365 when data requested by an SoC/processor (e.g., subject to a read command sent to the interface controller by the SoC/processor) is found in virtual page 335.
As part of operation 365, the interface controller may retrieve the requested data from the virtual page 335 and provide the requested data to the SoC/processor without accessing either non-volatile memory 325 or buffer 340. Additionally, the interface controller may update a value of the first field 315 (e.g., a value of SC) associated with the data, in order to track a number of access events by the SoC/processor for the non-volatile memory page 310.

[0082] The interface controller may perform operation 370 when a page in virtual page 335 is closed and a value of the first field 315 (e.g., a value of SC) associated with the closed page does not satisfy a threshold value. Virtual page 335 may include one or more pages within virtual memory bank 235 described with reference to FIG. 2. The interface controller may determine to close a page in virtual page 335 when the SoC/processor no longer needs the data associated with the page. Upon determining to close a page in virtual page 335, the interface controller may remove the data to make the memory space corresponding to the page available for the SoC/processor.

[0083] In some cases, the interface controller may use a threshold value to determine how to dispose of data from a closed page of virtual page 335. In some examples, when a value corresponding to the first field 315 (e.g., a value of SC) is less than the threshold value, the interface controller may bypass saving data from a closed page to buffer 340. Instead, the interface controller may store any modified data from the closed page in non-volatile memory 325 and discard any unmodified data from the closed page.
In such cases, the interface controller may determine whether data from a closed page includes a portion that the SoC/processor has modified relative to corresponding data stored in non-volatile memory 325.

[0084] During operation 370, the interface controller may store any modified portion of the data of the closed page in non-volatile memory 325 from virtual page 335. Further, the interface controller may discard any unmodified data from a closed page after determining that the data has not been modified (that is, the interface controller may bypass storing an unmodified portion of the data in non-volatile memory 325). The interface controller may, in view of overall system requirements, determine the threshold value based on various criteria (e.g., a pre-determined value associated with a number of accesses to the page, a value of a time interval associated with lack of access to the page).

[0085] The interface controller may perform operation 375 when the interface controller determines to close a page in virtual page 335 and determines that a value of the first field 315 (e.g., a value of SC) associated with the closed page satisfies the threshold value described above. In some examples, when a value of the first field 315 (e.g., a value of SC) is equal to or greater than the threshold value, the interface controller may save data from a closed page to buffer 340, as the interface controller may determine that the SoC/processor is likely to access the data soon. As such, as a part of operation 375, the interface controller may store data from the closed page in buffer 340.

[0086] The interface controller may perform operation 380 when it evicts a page from buffer 340. The interface controller may determine to evict a page from buffer 340 when the page is not accessed by the SoC/processor for a predetermined duration.
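The page-close disposal decision of operations 370 and 375 can be sketched as follows. The threshold value and all names here are hypothetical examples chosen for illustration; the specification leaves the criteria to the system designer.

```python
# Sketch of the page-close decision (operations 370 and 375): keep hot
# pages in the buffer, write back modified cold pages, discard the rest.
SC_THRESHOLD = 2  # example saturating-counter (SC) threshold

def close_page(sc_value, data, modified, buffer, nvm, address):
    """Dispose of a closed virtual page based on its SC value."""
    if sc_value >= SC_THRESHOLD:
        # Frequently accessed: the SoC/processor is likely to need the
        # data again soon, so keep it nearby in the buffer (operation 375).
        buffer[address] = data
    elif modified:
        # Rarely accessed but changed: persist to non-volatile memory
        # (operation 370).
        nvm[address] = data
    # Rarely accessed and unmodified: discard; NVM already holds a copy.

buffer, nvm = {}, {}
close_page(sc_value=3, data=b"hot", modified=False, buffer=buffer, nvm=nvm, address=1)
close_page(sc_value=0, data=b"cold", modified=True, buffer=buffer, nvm=nvm, address=2)
close_page(sc_value=0, data=b"same", modified=False, buffer=buffer, nvm=nvm, address=3)
assert buffer == {1: b"hot"}             # hot page kept in the buffer
assert nvm == {2: b"cold"}               # modified cold page written back
assert 3 not in buffer and 3 not in nvm  # unmodified cold page discarded
```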
In some cases, data from an evicted page may include a portion that has been modified by the SoC/processor relative to corresponding data stored in non-volatile memory 325. In such cases, as a part of operation 380, the interface controller may store only a modified portion of the evicted data in non-volatile memory 325. Additionally, as part of operation 380, the interface controller may update (e.g., reset to zero) a value of the first field 315 (e.g., a value of the SC) associated with the evicted page. Further, the interface controller may discard data after determining that the data has not been modified (that is, the interface controller may bypass storing an unmodified portion of the evicted data in non-volatile memory 325).

[0087] The interface controller may also, as a part of operation 380, determine a prefetch size to associate with the evicted data and store the prefetch size along with the evicted data. The interface controller may determine the prefetch size of the data based at least in part on an access pattern (e.g., an amount of accessed data) to the data made by the SoC/processor while the data is present in buffer 340. In some cases, the interface controller may determine the prefetch size based on a history of an access pattern (e.g., a size of data) by the SoC/processor, an operation mode (e.g., a power conservation mode, a high performance mode), a bus speed of a memory system or sub-system, or any combination thereof.

[0088] When storing a value of the PF counter (e.g., a prefetch size) in non-volatile memory 325, the interface controller may also designate a portion of memory cells in non-volatile memory 325 (e.g., memory cells corresponding to the second field 320) for storing the value of the PF counter.
For example, the interface controller may designate a portion of memory cells that exhibits a faster access speed than other portions of memory cells of the non-volatile memory 325, which may increase the speed with which the non-volatile memory 325 (e.g., a local memory controller of non-volatile memory 325) may determine the prefetch size of data associated with the PF counter. In turn, increasing the speed with which the non-volatile memory 325 may determine the prefetch size may facilitate the non-volatile memory 325 transmitting a signal related to the prefetch size (e.g., a prefetch indicator signal) to the interface controller in a timely manner (e.g., while the requested data is being transmitted to the interface controller).

[0089] In some cases, the interface controller may dynamically update the value of the PF counter stored in the memory device based upon determining that a different access pattern to the data by the SoC/processor is established while the data is present in buffer 340. If the evicted data is not modified compared to the corresponding data stored in non-volatile memory 325, the interface controller may update the PF counter independent of storing the evicted data to non-volatile memory 325. In some cases, the interface controller may, as a part of operation 380, write an updated value of the PF counter in a register associated with the PF counter without activating a group of memory cells corresponding to the second field 320.

[0090] FIG. 4A illustrates an example of timing diagram 400-a that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The timing diagram 400-a illustrates prefetch signaling procedures during a prefetch operation. The prefetch operation may include signals 410, 415, and 420, which a non-volatile memory (e.g., non-volatile memory 225 described with reference to FIG.
2) may transmit to an interface controller (e.g., interface controller 230 described with reference to FIG. 2). Although additional signals (e.g., clock signals, command signals) between the non-volatile memory and the interface controller may accompany the signals 410, 415, and 420 during the prefetch operation, they are omitted in FIG. 4A in an effort to increase visibility and clarity of the depicted features of prefetch signaling.[0091] The non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may transmit the signal 410 to the interface controller over a pin designated for transmitting data. The signal 410 may be an example of a signal transmitting data associated with operation 360 described with reference to FIG. 3. The non-volatile memory may transmit the signal 410 (e.g., data) over bus 271 described with reference to FIG. 2 in response to a read command received from the interface controller.[0092] The non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may transmit the signal 415 to the interface controller over a pin (e.g., a DMI pin, a link ECC parity pin) that is compatible with an LPDDR specification. In some examples, the non-volatile memory may transmit only one of the signal 415 or the signal 420. The signal 415 may include a prefetch indicator signal (e.g., an indicator of a prefetch size). The non-volatile memory may transmit the signal 415 as a part of operation 360 described with reference to FIG. 3. The non-volatile memory may transmit the signal 415 over bus 276 described with reference to FIG. 2 in response to a read command received from the interface controller. The signal 415 may include the contents of the PF counter related to a prefetch size in some cases. The signal 415 may also include the contents of the SC.
The non-volatile memory may transmit the signal 415 to inform the interface controller whether there exists an additional set of data to be transmitted beyond the data currently being transmitted on the data pin (e.g., signal 410).[0093] Additionally or alternatively to the signal 415, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may transmit the signal 420 to the interface controller over a separate pin configured for transmitting command or control information. In some cases, the separate pin may be referred to as a response (RSP) pin. The non-volatile memory may transmit the signal 420 in lieu of transmitting the signal 415. The signal 420 may include a prefetch indicator signal (e.g., an indicator of a prefetch size). The non-volatile memory may transmit the signal 420 as a part of operation 360 described with reference to FIG. 3. The non-volatile memory may transmit the signal 420 over bus 276 described with reference to FIG. 2 in response to a read command received from the interface controller. The signal 420 may include one or more pulses, and a number, a duration, or a pattern of the pulses may be indicative of the contents of the PF counter related to a prefetch size in some cases. The non-volatile memory may transmit the signal 420 to inform the interface controller whether there exists an additional set of data to be transmitted beyond the data currently being transmitted on the data pin (e.g., signal 410).[0094] During duration 425 (e.g., time t0 through t2), the non-volatile memory may transmit data 430 using the signal 410 to the interface controller in response to receiving a read command for the data 430. For example, the data 430 may correspond to 64 bytes. Upon receiving the read command, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may access a PF counter (e.g., second field 320 described with reference to FIG.
3) associated with the data 430 and determine a logic state stored in the PF counter. For example, the logic state of the PF counter may correspond to "00," which may indicate the prefetch size for the requested data as 64 bytes. At time t0, the non-volatile memory may transmit the data 430 (e.g., 64 bytes) using the signal 410. At time t1, the non-volatile memory may transmit an indicator of prefetch size (e.g., 64 bytes) using the signal 415. For example, the signal 415 may include two (2) bits (e.g., PF bits 435) corresponding to the logic state of the PF counter "00," which may indicate that the data 430 being transmitted using the signal 410 is the same size of data (e.g., 64 bytes) requested by the interface controller.[0095] Based on the signal 415 received during duration 425, the interface controller may complete receiving the data 430 at time t2 and move on to a next operation without further monitoring the signal 410. It should be appreciated that the non-volatile memory may transmit the indicator (e.g., PF bits 435) using the signal 415 at time t1 such that the interface controller may receive the indicator of prefetch size before the transmission of the data 430 (e.g., using the signal 410) completes at time t2. In this manner, the interface controller may determine a next operation before completing reception of the data 430.[0096] Additionally or alternatively to the signal 415, the non-volatile memory may, during duration 425, transmit to the interface controller an indicator of a prefetch size using the signal 420. For example, after determining that the data 430 being transmitted is the same size of data requested by the interface controller (that is, based on accessing the PF counter associated with data 430 indicating a prefetch size of 64 bytes), the non-volatile memory may maintain the signal 420 in a particular state (e.g., "low") during duration 425.
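The two-bit PF encoding in this example ("00" indicating 64 bytes, "01" indicating 128 bytes) may be sketched as follows; continuing the same 64-byte steps for the remaining two-bit codes is an assumption for illustration only:

```python
# Hypothetical decode of the PF bits carried on signal 415 (e.g., a DMI pin).
# "00" -> 64 bytes and "01" -> 128 bytes match the examples in FIG. 4A;
# the remaining codes are assumed to continue in 64-byte steps.

REQUEST_SIZE = 64  # bytes per read command, per the example in FIG. 4A

def decode_pf_bits(pf_bits: int) -> int:
    """Map a 2-bit PF counter value to a total prefetch size in bytes."""
    return REQUEST_SIZE * (pf_bits + 1)

def remaining_bytes(pf_bits: int, received: int) -> int:
    """Bytes the interface controller should keep monitoring signal 410 for."""
    return max(decode_pf_bits(pf_bits) - received, 0)
```

When `remaining_bytes` returns zero, the interface controller may move on to a next operation without further monitoring the data pin, as in the duration 425 example above.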
The interface controller, by monitoring the signal 420, may identify that the non-volatile memory has not asserted the signal 420 (e.g., based on the signal 420 remaining "low") and thus determine that the data 430 being transmitted using the signal 410 is the same size of data originally requested. Thus, the interface controller may complete receiving the data 430 at time t2 and move on to a next operation without further monitoring the signal 410.[0097] During duration 440 (e.g., time t2 through t5), the non-volatile memory may transmit data 445 using the signal 410 to the interface controller in response to receiving a read command from the interface controller. As an example depicted in FIG. 4A, the data 445 may include two sets of data 450-a and 450-b. For example, the data 450-a and data 450-b may correspond to 64 bytes each. The read command from the interface controller may have requested data 450-a or 450-b. Upon receiving the read command, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may access the PF counter (e.g., second field 320 described with reference to FIG. 3) associated with the requested data 450-a (or data 450-b) and determine a logic state stored in the PF counter. For example, the logic state of the PF counter may correspond to "01," which may indicate the prefetch size for the requested data as 128 bytes.[0098] At time t2, the non-volatile memory may transmit the data 445 that includes the data 450-a accompanied by the data 450-b using the signal 410. In some cases, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may be configured to first transmit data specifically requested by the read command, and thus data 450-a may be a set of data specifically requested by the read command, and data 450-b may be additional data included in the prefetch set of data 445 that includes data 450-a.
In other cases, a particular sequence of requested data (e.g., the requested data may be the data 450-a or 450-b) with respect to the other data in the data 445 may be of no consequence so long as the data 445 is a superset of data that includes the requested data, and the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may be configured to transmit the requested data and any additional data included in the prefetch set of data 445 on a first-available basis.[0099] At time t3, the non-volatile memory may transmit, using the signal 415, an indicator (e.g., a prefetch indicator signal) of the prefetch size (e.g., 128 bytes) associated with the requested data. For example, the signal 415 may include the two (2) bits (e.g., PF bits 455) corresponding to the logic state of the PF counter "01," which may indicate that the data 445 being transmitted using the signal 410 includes a total of 128 bytes of data. Thus, the interface controller may complete receiving the data 450-a at time t4 (e.g., 64 bytes of data) and, based on receiving the prefetch indicator signal (e.g., the signal 415 indicating the prefetch size of 128 bytes), may continue to monitor the signal 410 such that the interface controller may complete receiving the data 450-b (e.g., another 64 bytes of data) at time t5 pursuant to the prefetch size (e.g., 128 bytes) indicated by the PF counter.[0100] It is to be understood that the non-volatile memory may transmit the data 450-a and data 450-b in a sequence without a significant delay in-between. This may correspond to a situation where the data 450-a and data 450-b are available in one or more activated subpages (e.g., subpages 312 described with reference to FIG. 3) for retrieving data.
As such, the non-volatile memory may retrieve and send both data 450-a and data 450-b during the duration 440 without an additional time delay (e.g., may send data 450-b immediately subsequent to sending data 450-a).[0101] Additionally or alternatively to the signal 415, the non-volatile memory may, during duration 440, transmit to the interface controller an indicator of a prefetch size (e.g., a prefetch indicator signal) using the signal 420. For example, after determining that the data 445 being transmitted corresponds to 128 bytes (that is, based on accessing the PF counter associated with the requested data 450-a or 450-b indicating a prefetch size of 128 bytes), the non-volatile memory may assert the signal 420 to a particular logic state for a certain duration (e.g., "high" during duration 460), or may otherwise indicate the prefetch size using a number, duration, or pattern of pulses on the signal 420. The interface controller, by monitoring the signal 420, may identify that the non-volatile memory has asserted the signal 420 (e.g., "high" during duration 460) and thus determine that the data 445 being transmitted using the signal 410 corresponds to 128 bytes.[0102] In some cases, the interface controller may make such a determination based on a length of the duration asserted by the non-volatile memory (e.g., the duration 460). For example, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may double duration 460 to indicate that the size of the data being transmitted is 256 bytes, instead of 128 bytes. In some cases, the non-volatile memory may make another assertion (e.g., a second pulse) following the duration 460 before time t4 (e.g., before completing transmission of data 450-a) to indicate a different size of data being transmitted (e.g., 192 bytes instead of 128 bytes).
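The duration- and pulse-count-based indications on the RSP pin described above may be combined, for illustration, under one assumed scheme in which doubling the first pulse's duration doubles its contribution and each additional pulse adds another 64 bytes; the specific combination is an assumption, not defined by this disclosure:

```python
# Sketch of one possible interpretation of the RSP-pin signal 420.
# Consistent with the examples above: no assertion -> 64 bytes; one pulse of
# one duration unit -> 128 bytes; doubled duration -> 256 bytes; a second
# pulse after the first -> 192 bytes. The exact scheme is assumed.

BASE = 64  # bytes per read command

def size_from_rsp(pulse_count: int, first_pulse_units: int) -> int:
    """Infer the indicated transfer size from pulses observed on signal 420."""
    if pulse_count == 0:
        return BASE                          # signal 420 stayed "low": 64 bytes
    size = BASE * (2 ** first_pulse_units)   # duration doubling: 128, 256, ...
    size += BASE * (pulse_count - 1)         # each extra pulse adds 64 bytes
    return size
```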
In yet other cases, the non-volatile memory may make an assertion using the signal 420 (e.g., bringing the signal 420 "high") while a first set of data (e.g., a first 64 bytes of data) is being transmitted so long as there exists a second set of data (e.g., a second 64 bytes of data) to follow the first set of data. The non-volatile memory may make various indications (e.g., a prefetch indicator signal) using a pulse duration, a pulse count, a pulse pattern, or any combination thereof.[0103] FIG. 4B illustrates an example of timing diagram 400-b that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The timing diagram 400-b illustrates prefetch signaling procedures during a prefetch operation. The prefetch operation may include signals 410-a, 415-a, and 420-a, which correspond to the signals 410, 415, and 420 described with reference to FIG. 4A. Timing diagram 400-b illustrates a prefetch signaling procedure in which some portions of the data to be prefetched may be unavailable in one or more activated subpages for retrieving data in response to a read command from an interface controller. Although additional signals (e.g., clock signals, command signals) between the non-volatile memory and the interface controller may accompany the signals 410-a, 415-a, and 420-a during the prefetch operation, they are omitted in FIG. 4B in an effort to increase visibility and clarity of the depicted features of prefetch signaling.[0104] Upon receiving the read command from the interface controller, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may access the PF counter (e.g., second field 320 described with reference to FIG. 3) associated with the requested data (e.g., data 470-a of 64 bytes) and determine a logic state stored in the PF counter.
For example, the logic state of the PF counter may correspond to "01," which may indicate the prefetch size for the requested data as 128 bytes (e.g., the size of prefetch data including both data 470-a and data 470-b of 64 bytes each). Subsequently, the non-volatile memory may determine that accessing data 470-b requires activating a subpage that stores data 470-b. Activating a subpage to retrieve a set of data therefrom may be associated with an additional delay (e.g., a greater read latency). For example, duration 475 may correspond to a time delay associated with retrieving data 470-b by activating the subpage that stores data 470-b.[0105] At time t6, the non-volatile memory may start transmitting data 470-a using the signal 410-a to the interface controller. At time t7, the non-volatile memory may transmit, using the signal 415-a, an indicator (e.g., a prefetch indicator signal) of the prefetch size (e.g., 128 bytes) associated with the requested data. For example, the signal 415-a may include the two (2) bits (e.g., PF bits 480) corresponding to the logic state of the PF counter "01," which may indicate that the prefetch data size corresponds to a total of 128 bytes of data. In addition, the non-volatile memory may include a second indicator as a part of the signal 415-a indicating that a remainder of the prefetch data is associated with a time delay (e.g., duration 475). In some cases, the non-volatile memory may use an additional number of bits in the signal 415-a (e.g., the bits following the PF bits 480) for the second indicator.
The second indicator may indicate a specific duration of the time delay (e.g., a dynamic duration), or may indicate the existence of the time delay, and the duration may be preconfigured (e.g., a static duration).[0106] In this manner, the interface controller may complete receiving the data 470-a at time t8 (e.g., 64 bytes of data) and, based on receiving the prefetch indicator signal (e.g., indicating the prefetch size of 128 bytes) and the second indicator (e.g., indicating duration 475 associated with data 470-b) using the signal 415-a, may transmit a subsequent read command for at least a subset of the remainder of the prefetch data (e.g., data 470-b) after a time duration. In some cases, the interface controller may transmit the subsequent read command any time after the time delay (e.g., duration 475) has expired. At time t9, the non-volatile memory may transmit data 470-b using the signal 410-a to the interface controller in response to receiving the subsequent read command. At time t10, the interface controller may complete receiving data 470-b and thus the prefetched data of 128 bytes as indicated by the PF counter.[0107] Additionally or alternatively to the signal 415-a, the non-volatile memory (e.g., a local memory controller of non-volatile memory 225) may transmit to the interface controller an indicator of a prefetch size (e.g., a prefetch indicator signal) using the signal 420-a.
For example, after determining that the size of the prefetch data corresponds to 128 bytes (that is, based on accessing the PF counter associated with the requested data 470-a indicating a prefetch size of 128 bytes), the non-volatile memory may assert the signal 420-a to a particular logic state for a certain duration (e.g., "high" during duration 485).[0108] The interface controller, by monitoring the signal 420-a, may identify that the non-volatile memory has asserted the signal 420-a (e.g., "high" during duration 485) (or has otherwise indicated the prefetch size using a number, duration, or pattern of pulses on the signal 420-a) and thus determine that the incoming prefetch data using the signal 410-a corresponds to 128 bytes. In addition, the signal 420-a may include a second indicator (e.g., pulse 490) to indicate that a remainder of the prefetch data is associated with a time delay (e.g., duration 475). The second indicator may indicate a specific duration of the time delay (e.g., a dynamic duration), or may indicate the existence of the time delay, and the duration may be preconfigured (e.g., a static duration). In some cases, the non-volatile memory may make various indications (e.g., a prefetch indicator signal, a second indicator associated with a time delay) using a pulse duration, a pulse count, a pulse pattern, or any combination thereof.[0109] In this manner, the interface controller may complete receiving the data 470-a at time t8 (e.g., 64 bytes of data) and, based on receiving the prefetch indicator signal (e.g., indicating the prefetch size of 128 bytes) and the second indicator (e.g., indicating duration 475 associated with transmitting data 470-b) using the signal 420-a, may transmit a subsequent read command for at least a subset of the remainder of the prefetched data (e.g., data 470-b) after a time duration. In some cases, the interface controller may transmit the subsequent read command any time after the time delay (e.g., duration 475) has expired.
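An interface-controller-side sketch of this FIG. 4B flow (receive data 470-a, note the prefetch size and the delay indicator, wait out duration 475, then issue the subsequent read for data 470-b) might look like the following; the controller API and the `FakeController` stand-in are entirely hypothetical:

```python
import time

# Hypothetical interface-controller handling of a prefetch whose remainder
# requires a subpage activation, per the FIG. 4B description. The controller
# API below is an assumption for illustration only.

class FakeController:
    """Minimal stand-in so the flow can be exercised; not part of the spec."""
    def __init__(self):
        self.reads = []
    def read(self, address):
        self.reads.append(address)
        return bytes(64)            # each read returns 64 bytes
    def last_prefetch_size(self):
        return 128                  # from PF bits 480 on signal 415-a
    def remainder_delayed(self):
        return True                 # second indicator (e.g., pulse 490)

def read_with_prefetch(ctrl, address, delay_seconds):
    first = ctrl.read(address)                 # data 470-a on signal 410-a
    pf_size = ctrl.last_prefetch_size()        # prefetch indicator signal
    if pf_size > len(first) and ctrl.remainder_delayed():
        time.sleep(delay_seconds)              # wait out duration 475
        first += ctrl.read(address + len(first))  # subsequent read: data 470-b
    return first
```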
At time t9, the non-volatile memory may transmit data 470-b using the signal 410-a to the interface controller in response to receiving the subsequent read command. At time t10, the interface controller may complete receiving data 470-b and thus the prefetched data of 128 bytes as indicated by the PF counter.[0110] FIG. 5 shows a block diagram 500 of a local memory controller 515 that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The local memory controller 515 may be an example of aspects of a local memory controller 226 described with reference to FIG. 2. The local memory controller 515 may include biasing component 520, timing component 525, interface component 530, and prefetch component 535. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0111] Interface component 530 may receive, from a controller, a read command for a first set of data. Prefetch component 535 may identify, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data. Interface component 530 may transmit, to the controller, the indicator with a portion of the second set of data.[0112] In some cases, interface component 530 may also transmit, to the controller, a remainder of the second set of data after transmitting the portion of the second set of data. Interface component 530 may also transmit, to the controller, a second indicator indicating a time delay for at least the subset of the second set of data. Interface component 530 may also transmit the portion of the second set of data via a second pin coupled with the memory array.
In some examples, interface component 530 may receive, from the controller, an instruction to update the indicator, the instruction being based on an access pattern associated with the first set of data.[0113] Transmitting the indicator with the portion of the second set of data includes transmitting the indicator concurrently with at least a subset of the portion of the second set of data, in some cases. Transmitting the indicator with the portion of the second set of data includes transmitting the indicator via a first pin coupled with a memory array and designated for command or control information, the memory array storing the indicator and the second set of data, in some cases. In some cases, the first pin is configured for transmitting at least one of data mask/inversion (DMI) information, link error correction code (ECC) parity information, or status information regarding the memory array, or any combination thereof.[0114] Prefetch component 535 may identify, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data. Prefetch component 535 may also determine that the second set of data is available in an open page of a memory array including non-volatile memory cells. Prefetch component 535 may also determine that at least a subset of the second set of data is unavailable in an open page of a memory array including non-volatile memory cells. In some examples, prefetch component 535 may identify a value of at least one bit in the first set of memory cells. In some examples, prefetch component 535 may update, in a memory array that stores the indicator and the first set of data, a value of the indicator based on the instruction.[0115] The indicator includes at least one bit in a memory array that stores the second set of data, the memory array including non-volatile memory cells. 
In some cases, the indicator includes a dynamic counter that indicates the size of the second set of data. In some cases, identifying the indicator includes reading a first set of memory cells in a memory array, the first set of memory cells having a faster nominal access speed than a second set of memory cells in the memory array, the second set of memory cells storing the first set of data.[0116] FIG. 6 shows a block diagram 600 of an interface controller 615 that supports prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The interface controller 615 may be an example of aspects of an interface controller 120 or 230 described with reference to FIGs. 1 and 2. The interface controller 615 may include memory interface component 640, prefetch data component 645, and data management component 650. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0117] Memory interface component 640 may transmit, to a memory device, a read command for a first set of data, receive, from the memory device, a portion of a second set of data and an indicator of a size of the second set of data, the second set of data including the first set of data, and receive a remainder of the second set of data based on determining the size of the second set of data. Memory interface component 640 may also receive, from the memory device, a second indicator that indicates a latency for at least a subset of the remainder of the second set of data.[0118] Memory interface component 640 may transmit, to the memory device after a time duration associated with the latency, a subsequent read command for at least the subset of the remainder of the second set of data. Memory interface component 640 may transmit, to a memory device, a write command for a value of an indicator of the size of the second set of data.
In some examples, memory interface component 640 may transmit, to the memory device, the portion of the first set of data that has been modified. Memory interface component 640 may transmit the write command for the indicator independent of transmitting a write command for the first set of data based on determining that the first set of data is unmodified compared to the corresponding data.[0119] In some cases, receiving the portion of the second set of data with the indicator includes: receiving the indicator concurrently with at least one bit included in the portion of the second set of data. In some cases, the write command for the indicator specifies a location within the memory device for storing the indicator.[0120] Prefetch data component 645 may determine the size of the second set of data based on the indicator. Prefetch data component 645 may transmit at least the first set of data to a buffer based on determining the size of the second set of data. In some examples, prefetch data component 645 may determine an access pattern for the first set of data based on previous access operations performed by a system on a chip (SoC) or processor, where a first page size is associated with the SoC or processor and a second page size is associated with the memory device, and determine a size of a second set of data to be read in response to a subsequent read command for the first set of data, the second set of data including the first set of data. In some examples, prefetch data component 645 may determine the size of the second set of data based on the access pattern.[0121] Data management component 650 may identify a first set of data for eviction from a buffer, identify a portion of the first set of data that has been modified relative to corresponding data stored in the memory device, and determine that the first set of data is unmodified relative to corresponding data stored in the memory device.[0122] FIG.
7 shows a flowchart illustrating a method 700 for prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The operations of method 700 may be implemented by a memory system, a memory sub-system, or its components as described herein. For example, the operations of method 700 may be performed by a non-volatile memory 225 (e.g., a local memory controller of non-volatile memory 225) as described with reference to FIG. 2. In some examples, a non-volatile memory 225 may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the non-volatile memory 225 may perform aspects of the functions described below using special-purpose hardware.[0123] At 705 the non-volatile memory 225 may receive, from a controller, a read command for a first set of data. The operations of 705 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 705 may be performed by an interface component 530 as described with reference to FIG. 5.[0124] At 710 the non-volatile memory 225 may identify, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data. The operations of 710 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 710 may be performed by a prefetch component 535 as described with reference to FIG. 5.[0125] At 715 the non-volatile memory 225 may transmit, to the controller, the indicator with a portion of the second set of data. The operations of 715 may be performed according to the methods described with reference to FIGs. 1 through 4.
In certain examples, aspects of the operations of 715 may be performed by an interface component 530 as described with reference to FIG. 5.[0126] An apparatus for performing the method 700 is described. The apparatus may include means for receiving, from a controller, a read command for a first set of data, means for identifying, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data, and means for transmitting, to the controller, the indicator with a portion of the second set of data.[0127] Another apparatus for performing the method 700 is described. The apparatus may include a memory cell and a local memory controller in electronic communication with the memory cell and a controller, wherein the local memory controller is operable to receive, from the controller, a read command for a first set of data, identify, in response to the read command, an indicator associated with the first set of data that indicates a size of a second set of data to be transmitted in response to the read command for the first set of data, and transmit, to the controller, the indicator with a portion of the second set of data.[0128] In some examples of the method 700 and apparatus described above, transmitting the indicator with the portion of the second set of data comprises: transmitting the indicator concurrently with at least a subset of the portion of the second set of data. In some examples of the method 700 and apparatus described above, the indicator comprises at least one bit in a memory array that stores the second set of data, the memory array comprising non-volatile memory cells. 
In some examples of the method 700 and apparatus described above, the indicator comprises a dynamic counter that indicates the size of the second set of data.[0129] Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for determining that the second set of data may be available in an open page of a memory array comprising non-volatile memory cells. Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for transmitting, to the controller, a remainder of the second set of data after transmitting the portion of the second set of data.[0130] Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for determining that at least a subset of the second set of data may be unavailable in an open page of a memory array comprising non-volatile memory cells. Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for transmitting, to the controller, a second indicator indicating a time delay for at least the subset of the second set of data.[0131] In some examples of the method 700 and apparatus described above, transmitting the indicator with the portion of the second set of data comprises: transmitting the indicator via a first pin coupled with a memory array and designated for command or control information, the memory array storing the indicator and the second set of data. Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for transmitting the portion of the second set of data via a second pin coupled with the memory array.
In some examples of the method 700 and apparatus described above, the first pin may be configured for transmitting at least one of data mask/inversion (DMI) information, link error correction code (ECC) parity information, or status information regarding the memory array, or any combination thereof.[0132] In some examples of the method 700 and apparatus described above, identifying the indicator comprises: reading a first set of memory cells in a memory array, the first set of memory cells having a faster nominal access speed than a second set of memory cells in the memory array, the second set of memory cells storing the first set of data. Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for identifying a value of at least one bit in the first set of memory cells.[0133] Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for receiving, from the controller, an instruction to update the indicator, the instruction being based at least in part on an access pattern associated with the first set of data. Some examples of the method 700 and apparatus described above may further include processes, features, means, or instructions for updating, in a memory array that stores the indicator and the first set of data, a value of the indicator based at least in part on the instruction.[0134] FIG. 8 shows a flowchart illustrating a method 800 for prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The operations of method 800 may be implemented by a memory system, a memory sub-system, or its components as described herein. For example, the operations of method 800 may be performed by an interface controller 230 as described with reference to FIG. 2.
In some examples, the interface controller 230 may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the interface controller 230 may perform aspects of the functions described below using special-purpose hardware.[0135] At 805 the interface controller 230 may transmit, to a memory device, a read command for a first set of data. The operations of 805 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 805 may be performed by a memory interface component 640 as described with reference to FIG. 6.[0136] At 810 the interface controller 230 may receive, from the memory device, a portion of a second set of data and an indicator of a size of the second set of data, the second set of data including the first set of data. The operations of 810 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 810 may be performed by a memory interface component 640 as described with reference to FIG. 6.[0137] At 815 the interface controller 230 may determine the size of the second set of data based at least in part on the indicator. The operations of 815 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 815 may be performed by a prefetch data component 645 as described with reference to FIG. 6.[0138] At 820 the interface controller 230 may receive a remainder of the second set of data based at least in part on determining the size of the second set of data. The operations of 820 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 820 may be performed by a memory interface component 640 as described with reference to FIG. 6. 
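The controller-side flow at 805 through 820 can be sketched as a short behavioral model: issue a read for the first set of data, receive an initial portion together with a size indicator for the larger second set, then fetch the remainder. The `PrefetchMemoryDevice` class and its method names below are illustrative assumptions, not the claimed interface.

```python
# Behavioral sketch of method 800 (steps 805-820). Class and method names
# are hypothetical stand-ins for the memory device interface.

class PrefetchMemoryDevice:
    """Hypothetical device that answers a read with (portion, total size)."""
    def __init__(self, prefetch_data, portion_len):
        self._data = prefetch_data
        self._portion_len = portion_len

    def read(self, addr):
        # Steps 805/810: return an initial portion plus the size indicator.
        return self._data[:self._portion_len], len(self._data)

    def read_remainder(self, addr, already_received):
        return self._data[already_received:]

def controller_read(dev, addr):
    portion, total_size = dev.read(addr)   # 810: portion arrives with indicator
    remaining = total_size - len(portion)  # 815: size determined from indicator
    if remaining:                          # 820: receive the remainder
        return portion + dev.read_remainder(addr, len(portion))
    return portion

dev = PrefetchMemoryDevice(b"ABCDEFGH", portion_len=3)
assert controller_read(dev, 0x0) == b"ABCDEFGH"
```

The point of the indicator is visible in `controller_read`: the controller learns how much more to expect without a second round trip to ask for the size.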
[0139] An apparatus for performing the method 800 is described. The apparatus may include means for transmitting, to a memory device, a read command for a first set of data, means for receiving, from the memory device, a portion of a second set of data and an indicator of a size of the second set of data, the second set of data including the first set of data, means for determining the size of the second set of data based at least in part on the indicator, and means for receiving a remainder of the second set of data based at least in part on determining the size of the second set of data.[0140] Another apparatus for performing the method 800 is described. The apparatus may include a memory device and an interface controller in electronic communication with the memory device, wherein the interface controller is operable to transmit, to the memory device, a read command for a first set of data, receive, from the memory device, a portion of a second set of data and an indicator of a size of the second set of data, the second set of data including the first set of data, determine the size of the second set of data based at least in part on the indicator, and receive a remainder of the second set of data based at least in part on determining the size of the second set of data.[0141] In some examples of the method 800 and apparatus described above, receiving the portion of the second set of data with the indicator comprises: receiving the indicator concurrently with at least one bit included in the portion of the second set of data. 
Some examples of the method 800 and apparatus described above may further include processes, features, means, or instructions for transmitting at least the first set of data to a buffer based at least in part on determining the size of the second set of data.[0142] Some examples of the method 800 and apparatus described above may further include processes, features, means, or instructions for receiving, from the memory device, a second indicator that indicates a latency for at least a subset of the remainder of the second set of data. Some examples of the method 800 and apparatus described above may further include processes, features, means, or instructions for transmitting, to the memory device after a time duration associated with the latency, a subsequent read command for at least the subset of the remainder of the second set of data.[0143] FIG. 9 shows a flowchart illustrating a method 900 for prefetch signaling in a memory system or sub-system in accordance with examples of the present disclosure. The operations of method 900 may be implemented by a memory system, sub-system, or its components as described herein. For example, the operations of method 900 may be performed by an interface controller 230 as described with reference to FIG. 2. In some examples, the interface controller 230 may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the interface controller 230 may perform aspects of the functions described below using special-purpose hardware.[0144] At 905 the interface controller 230 may identify a first set of data for eviction from a buffer. The operations of 905 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 905 may be performed by a data management component 650 as described with reference to FIG. 
6. [0145] At 910 the interface controller 230 may determine a size of a second set of data to be read in response to a subsequent read command for the first set of data, the second set of data including the first set of data. The operations of 910 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 910 may be performed by a prefetch data component 645 as described with reference to FIG. 6.[0146] At 915 the interface controller 230 may transmit, to a memory device, a write command for a value of an indicator of the size of the second set of data. The operations of 915 may be performed according to the methods described with reference to FIGs. 1 through 4. In certain examples, aspects of the operations of 915 may be performed by a memory interface component 640 as described with reference to FIG. 6.[0147] An apparatus for performing the method 900 is described. The apparatus may include means for identifying a first set of data for eviction from a buffer, means for determining a size of a second set of data to be read in response to a subsequent read command for the first set of data, the second set of data including the first set of data, and means for transmitting, to a memory device, a write command for a value of an indicator of the size of the second set of data.[0148] In some examples, the apparatus may include means for determining an access pattern for the first set of data based at least in part on previous access operations performed by a system on a chip (SoC) or processor, wherein a first page size is associated with the SoC or processor and a second page size is associated with the memory device, and means for determining the size of the second set of data is based at least in part on the access pattern.[0149] In some examples, the apparatus may include means for identifying a portion of the first set of data that has been modified relative to corresponding data stored 
in the memory device, and means for transmitting, to the memory device, the portion of the first set of data that has been modified.[0150] In some examples, the write command for the indicator specifies a location within the memory device for storing the indicator.[0151] In some examples, the apparatus may include means for determining that the first set of data is unmodified relative to corresponding data stored in the memory device, and means for transmitting the write command for the indicator independent of transmitting a write command for the first set of data based at least in part on determining that the first set of data is unmodified compared to the corresponding data.[0152] Another apparatus for performing the method 900 is described. The apparatus may include a memory device and an interface controller in electronic communication with the memory device, wherein the interface controller is operable to identify a first set of data for eviction from a buffer, determine a size of a second set of data to be read in response to a subsequent read command for the first set of data, the second set of data including the first set of data, and transmit, to the memory device, a write command for a value of an indicator of the size of the second set of data.[0153] Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for determining an access pattern for the first set of data based at least in part on previous access operations performed by a system on a chip (SoC) or processor, wherein a first page size may be associated with the SoC or processor and a second page size may be associated with the memory device. 
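The eviction-time flow of method 900 (905 through 915), including the access-pattern-based size choice just described, can be sketched as follows. All class and function names, and the sizing heuristic itself, are illustrative assumptions rather than the claimed method.

```python
# Sketch of method 900: on evicting a first set of data from the buffer, the
# controller derives a prefetch size from the observed access pattern (small
# SoC pages vs. a larger memory-device page) and writes that value to the
# memory device as an indicator. Names and heuristic are hypothetical.

def prefetch_size_from_pattern(accesses, soc_page, mem_page):
    """Cover the span of recently touched SoC-page offsets, capped at one memory page."""
    span = max(accesses) - min(accesses) + soc_page
    return min(span, mem_page)

class IndicatorMemory:
    """Hypothetical memory device that stores a per-address size indicator."""
    def __init__(self):
        self.indicators = {}

    def write_indicator(self, addr, value):  # step 915: write command for the indicator
        self.indicators[addr] = value

def evict(dev, addr, accesses, dirty_bytes):
    # Step 910: determine the size of the second set of data to be read later.
    size = prefetch_size_from_pattern(accesses, soc_page=64, mem_page=2048)
    dev.write_indicator(addr, size)
    if dirty_bytes:  # only modified data needs a data write-back
        return ("write_data", dirty_bytes)
    return ("indicator_only", None)  # unmodified data: indicator write alone suffices

mem = IndicatorMemory()
assert evict(mem, 0x100, [0, 64, 128], dirty_bytes=None) == ("indicator_only", None)
assert mem.indicators[0x100] == 192  # span of 128 plus one 64-byte SoC page
```

Note how the unmodified case mirrors paragraph [0151]: the indicator write can be issued independently of any data write command.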
Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for determining the size of the second set of data may be based at least in part on the access pattern.[0154] Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for identifying a portion of the first set of data that may have been modified relative to corresponding data stored in the memory device. Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for transmitting, to the memory device, the portion of the first set of data that may have been modified.[0155] Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for determining that the first set of data may be unmodified relative to corresponding data stored in the memory device. Some examples of the method 900 and apparatus described above may further include processes, features, means, or instructions for transmitting the write command for the indicator independent of transmitting a write command for the first set of data based at least in part on determining that the first set of data may be unmodified compared to the corresponding data.[0156] In some examples of the method 900 and apparatus described above, the write command for the indicator specifies a location within the memory device for storing the indicator.[0157] An apparatus is described. In some examples, the apparatus may include a memory array to store an indicator, a first set of data, and a second set of data and a first controller coupled with a second controller. 
In some examples, the first controller may be operable to receive, from the second controller, a read command for the first set of data, identify a size of the second set of data to be transmitted for the read command based at least in part on the indicator, and transmit, to the second controller, the indicator with a portion of the second set of data based at least in part on the size of the second set of data.[0158] In some examples, transmitting the indicator with the portion of the second set of data may include transmitting the indicator concurrently with the portion of the second set of data. In some examples, the indicator may include at least one bit in the memory array, the memory array comprising non-volatile memory cells. In some examples, the indicator may include a dynamic counter that indicates the size of the second set of data.[0159] In some examples, the first controller may be operable to determine that the second set of data is available in an open page of the memory array comprising non-volatile memory cells and transmit, to the second controller, a remainder of the second set of data after transmitting the portion of the second set of data based at least in part on the second set of data being available in the open page. In some examples, the first controller may be operable to determine that at least a subset of the second set of data is unavailable in an open page of the memory array comprising non-volatile memory cells and transmit, to the second controller, a second indicator indicating a time delay for at least the subset of the second set of data based at least in part on the subset of the second set of data being unavailable in the open page. 
In some examples, the first controller may be operable to transmit the indicator via a first pin coupled with the memory array, the first pin designated for command or control information and transmit the portion of the second set of data via a second pin coupled with the memory array.[0160] In some examples, the first pin may be configured for transmitting at least one of data mask/inversion (DMI) information, link error correction code (ECC) parity information, or status information regarding the memory array, or any combination thereof. In some examples, the first controller may be operable to read a first set of memory cells in the memory array, the first set of memory cells having a faster nominal access speed than a second set of memory cells in the memory array, the second set of memory cells storing the first set of data and identify a value of at least one bit in the first set of memory cells, wherein identifying the indicator is based at least in part on identifying the value of the at least one bit. In some examples, the first controller may be operable to receive, from the second controller, an indication to update the indicator based at least in part on an access pattern associated with the first set of data and update, in the memory array, a value of the indicator based at least in part on the indication.[0161] An apparatus is described. In some examples, the apparatus may include a memory device comprising a memory array configured to store a first set of data and a second set of data and a controller coupled with the memory device. 
In some examples, the controller may be operable to transmit, to the memory device, a read command for the first set of data, receive, from the memory device, a portion of the second set of data and an indicator of a size of the second set of data that includes the first set of data, determine the size of the second set of data based at least in part on the indicator, and receive a remainder of the second set of data based at least in part on determining the size of the second set of data.[0162] In some examples, receiving the portion of the second set of data with the indicator may include receiving the indicator concurrently with at least one bit included in the portion of the second set of data. In some examples, the controller may be operable to transmit the first set of data to a buffer based at least in part on determining the size of the second set of data. In some examples, the controller may be operable to receive, from the memory device, a second indicator that indicates a latency for at least a subset of the remainder of the second set of data. In some examples, the controller may be operable to transmit, to the memory device and based at least in part on the latency, a subsequent read command for at least the subset of the remainder of the second set of data.[0163] An apparatus is described. In some examples, the apparatus may include a memory device comprising a memory array to store a first set of data and a second set of data and a controller coupled with the memory device and operable to interface with a system on a chip (SoC) or processor. 
In some examples, the controller may be operable to identify the first set of data for eviction from a buffer, determine a size of the second set of data to be read based at least in part on a subsequent read command for the first set of data, the second set of data comprising the first set of data, and transmit, to the memory device, a write command for a value of an indicator of the size of the second set of data.[0164] In some examples, the controller may be operable to determine an access pattern for the first set of data based at least in part on previous access operations performed by the SoC or processor, wherein a first page size is associated with the SoC or processor, and wherein a second page size is associated with the memory device and determine the size of the second set of data based at least in part on the access pattern. In some examples, the controller may be operable to identify a portion of the first set of data as different from corresponding data stored in the memory device and transmit, to the memory device, the portion of the first set of data based at least in part on identifying the portion of the first set of data as different.[0165] In some examples, the write command for the indicator specifies a location within the memory device for storing the indicator. In some examples, the controller may be operable to compare the first set of data to corresponding data stored in the memory device and transmit the write command for the indicator independent of transmitting a write command for the first set of data based at least in part on comparing the first set of data to the corresponding data.[0166] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. 
Furthermore, features from two or more of the methods may be combined.[0167] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0168] The terms "electronic communication" and "coupled" refer to a relationship between components that support electron flow between the components. This may include a direct connection between components or may include intermediate components. Components in electronic communication or coupled to one another may be actively exchanging electrons or signals (e.g., in an energized circuit) or may not be actively exchanging electrons or signals (e.g., in a de-energized circuit) but may be configured and operable to exchange electrons or signals upon a circuit being energized. By way of example, two components physically connected via a switch (e.g., a transistor) are in electronic communication or may be coupled regardless of the state of the switch (i.e., open or closed).[0169] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over other examples." The detailed description includes specific details for the purpose of providing an understanding of the described techniques. 
These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.[0170] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0171] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0172] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices (e.g., a combination of a digital signal processor (DSP) and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0173] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."[0174] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0175] The description herein is provided to enable a person skilled in the art to make or use the disclosure. 
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
Embodiments include apparatuses, methods, and systems for voltage level shifting a data signal between a low voltage domain and a high voltage domain. In embodiments, a voltage level shifter circuit may include adaptive keeper circuitry, enhanced interruptible supply circuitry, and/or capacitive boosting circuitry to reduce a minimum voltage of the low voltage domain that is supported by the voltage level shifter circuit. Other embodiments may be described and claimed. |
1. A voltage level shifter circuit comprising: an input node for receiving an input signal in a first voltage domain; a data node for maintaining a logic state of the input signal to generate an output signal, the output signal corresponding to the input signal and in a second voltage domain; an inverted data node for maintaining a logic state of an inverted input signal, the inverted input signal being an inverse of the input signal; and a keeper transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverted data node, and a drain terminal for receiving the inverted input signal.

2. The circuit of claim 1, wherein the keeper transistor is a first keeper transistor, and wherein the circuit further comprises a second keeper transistor having a source terminal coupled to the inverted data node, a gate terminal coupled to the data node, and a drain terminal for receiving a delayed version of the input signal.

3. The circuit of claim 2, further comprising: a first firewall transistor coupled between the first keeper transistor and a ground terminal; and a second firewall transistor coupled between the second keeper transistor and the ground terminal, wherein a gate terminal of the second firewall transistor is coupled to a gate terminal of the first firewall transistor, and wherein the gate terminals of the first firewall transistor and the second firewall transistor are to receive a firewall signal to selectively drive the data node and the inverted data node to 0 volts when the first voltage domain is power gated.

4. The circuit of any one of claims 1 to 3, further comprising: a pull-down transistor coupled between the data node and a ground terminal; an interrupt transistor coupled to the data node; and a pull-up transistor coupled between the interrupt transistor and a power rail for receiving a power supply voltage.

5. The circuit of claim 4, wherein gate terminals of the interrupt transistor and the pull-down transistor are to receive the input signal, and wherein a gate terminal of the pull-up transistor is coupled to the inverted data node.

6. The circuit of claim 5, wherein the pull-down transistor is a first pull-down transistor, and wherein the circuit further comprises a second pull-down transistor coupled between the ground terminal and an intermediate node, the intermediate node between the pull-up transistor and the interrupt transistor, wherein a gate terminal of the second pull-down transistor is to receive the input signal.

7. The circuit of claim 6, wherein the interrupt transistor is a first interrupt transistor, and wherein the circuit further comprises: a second interrupt transistor coupled between the first interrupt transistor and the pull-up transistor; and a third pull-down transistor coupled between the ground terminal and a second intermediate node, the second intermediate node between the pull-up transistor and the second interrupt transistor, wherein a gate terminal of the third pull-down transistor is to receive the input signal.

8. The circuit of claim 4, further comprising a capacitive boosting circuit coupled to the input node to deliver a boosted input signal to the interrupt transistor and the pull-down transistor.

9. The circuit of claim 1, wherein the input node, the data node, the inverted data node, and the keeper transistor are included in a first stage of the voltage level shifter circuit, and wherein the voltage level shifter circuit further comprises a second stage for receiving an output signal of the first stage and generating an output signal of the second stage, the output signal of the second stage being in a third voltage domain.

10. A voltage level shifter circuit comprising: an input node for receiving an input data signal associated with a first voltage domain; a data node for maintaining a logic state of the input data signal to generate an output signal, the output signal corresponding to the input data signal and in a second voltage domain that is higher than the first voltage domain; a first pull-down transistor coupled between the data node and a ground terminal, a gate terminal of the first pull-down transistor for receiving the input data signal; an interrupt transistor coupled to the data node, a gate terminal of the interrupt transistor for receiving the input data signal; a pull-up transistor coupled between the interrupt transistor and a power rail for receiving a supply voltage associated with the second voltage domain; and a second pull-down transistor coupled between the ground terminal and an intermediate node, the intermediate node between the pull-up transistor and the interrupt transistor, wherein a gate terminal of the second pull-down transistor is to receive the input data signal.

11. The circuit of claim 10, wherein the interrupt transistor is a first interrupt transistor, and wherein the circuit further comprises: a second interrupt transistor coupled between the first interrupt transistor and the pull-up transistor; and a third pull-down transistor coupled between the ground terminal and a second intermediate node, the second intermediate node between the pull-up transistor and the second interrupt transistor, wherein a gate terminal of the third pull-down transistor is to receive the input data signal.

12. The circuit of claim 10, further comprising an inverted data node for maintaining a logic state of an inverted input signal, the inverted input signal being an inverse of the input data signal, wherein the gate terminal of the pull-up transistor is coupled to the inverted data node.

13. The circuit of claim 12, further comprising a keeper transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverted data node, and a drain terminal for receiving the inverted input signal.

14. The circuit of claim 13, wherein the keeper transistor is a first keeper transistor, and wherein the circuit further comprises a second keeper transistor having a source terminal coupled to the inverted data node, a gate terminal coupled to the data node, and a drain terminal for receiving a delayed version of the input data signal.

15. The circuit of any one of claims 10 to 14, further comprising a capacitive boosting circuit coupled to the input node to raise a voltage of the input data signal at the data node above a power supply voltage of the first voltage domain.

16. The circuit of claim 15, wherein the input node is a first input node, and wherein the capacitive boosting circuit comprises: a p-type transistor coupled between a second input node and the first input node, wherein the second input node is to receive the input data signal in the first voltage domain, and wherein a gate terminal of the p-type transistor is to receive a delayed version of the input data signal; an n-type transistor coupled between the first input node and the second input node, a gate terminal of the n-type transistor for receiving a power supply voltage associated with the first voltage domain; and a capacitively coupled transistor coupled between the p-type transistor and the first input node, the capacitively coupled transistor for charging the first input node to generate a boosted data signal at the first input node.

17. The circuit of claim 11, further comprising an enable transistor coupled between the second pull-down transistor and the ground terminal, a gate terminal of the enable transistor for receiving an enable signal to selectively enable an enhanced interruptible supply mode of the circuit.

18. A system comprising: a first input node for receiving an input signal in a low voltage domain; a capacitive boosting circuit coupled between the first input node and a second input node, the capacitive boosting circuit comprising: a p-type transistor coupled between the first input node and the second input node, a gate terminal of the p-type transistor for receiving a delayed version of the input signal; an n-type transistor coupled between the first input node and the second input node, a gate terminal of the n-type transistor for receiving a low supply voltage associated with the low voltage domain; and a capacitively coupled transistor coupled between the p-type transistor and the second input node, the capacitively coupled transistor for charging the second input node to a voltage level above the low supply voltage to generate a boosted input signal; and a level shifting circuit for receiving the boosted input signal at the second input node and for generating an output signal, the output signal corresponding to the input signal and in a high voltage domain, the high voltage domain having a higher voltage level relative to the low voltage domain.

19. The system of claim 18, wherein the p-type transistor and the n-type transistor are coupled in parallel with each other.

20. The system of claim 18, wherein a gate terminal of the capacitively coupled transistor is to receive a delayed version of the input signal.

21. The system of claim 18, wherein the level shifting circuit comprises: an interrupt transistor coupled to a data node, the data node for maintaining a logic state of the input signal, wherein a gate terminal of the interrupt transistor is coupled to the second input node; a pull-up transistor coupled between the interrupt transistor and a power rail for receiving a high supply voltage associated with the high voltage domain; and a pull-down transistor coupled between a ground terminal and an intermediate node, the intermediate node between the pull-up transistor and the interrupt transistor, wherein a gate terminal of the pull-down transistor is coupled to the second input node.

22. The system of claim 21, wherein the level shifting circuit further comprises: an inverted data node for maintaining a logic state of an inverted input signal, the inverted input signal being an inverse of the input signal; and a keeper transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverted data node, and a drain terminal for receiving the inverted input signal.

23. The system of claim 18, further comprising an enable transistor coupled to the second input node, the enable transistor to selectively transmit the input signal to the second input node when the capacitive boosting circuit is disabled.

24. The system of any one of claims 18 to 23, further comprising a processor coupled to the level shifting circuit, the processor including a first circuit block to operate in the low voltage domain and a second circuit block to operate in the high voltage domain.
Voltage Level Shifter Circuit

Technical Field

Embodiments of the present disclosure relate generally to the field of electronic circuits and, in particular, to voltage level shifter circuits.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this Background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. The material described in this section is not prior art to the claims in this disclosure and is not admitted to be prior art by inclusion in this section.

In an integrated circuit, different circuit blocks may operate at different supply voltages. A voltage level shifter circuit is used to convert digital input/output (I/O) signals between such blocks (e.g., to convert an I/O signal from a low supply voltage domain to a high supply voltage domain, or vice versa).

Brief Description of the Drawings

Embodiments will be readily understood from the following detailed description in conjunction with the accompanying drawings. For purposes of this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

FIG. 1 illustrates a voltage level shifter circuit including an adaptive keeper circuit, in accordance with various embodiments.

FIG. 2 illustrates a voltage level shifter circuit including an enhanced interruptible power supply circuit, in accordance with various embodiments.

FIG. 3 illustrates a voltage level shifter circuit including a stacked enhanced interruptible power supply circuit, in accordance with various embodiments.

FIG. 4 illustrates a voltage level shifter circuit including a capacitance boost circuit, in accordance with various embodiments.

FIG. 5 illustrates a voltage level shifter circuit including an adaptive keeper circuit and an enhanced interruptible power supply circuit, in accordance with various embodiments.

FIG. 6 illustrates a voltage level shifter circuit including an adaptive keeper circuit and a capacitance boost circuit, in accordance with various embodiments.

FIG. 7 illustrates a voltage level shifter circuit including an enhanced interruptible power supply circuit and a capacitance boost circuit, in accordance with various embodiments.

FIG. 8 illustrates a voltage level shifter circuit including an adaptive keeper circuit, an enhanced interruptible power supply circuit, and a capacitance boost circuit, in accordance with various embodiments.

FIG. 9 illustrates a voltage level shifter circuit including an adaptive keeper circuit, a selectively enabled enhanced interruptible power supply circuit, and a selectively enabled capacitance boost circuit, in accordance with various embodiments.

FIG. 10 illustrates a voltage level shifter circuit including two level shifter stages, in accordance with various embodiments.

FIG. 11 illustrates an example system configured to use the apparatus and methods described herein, in accordance with various embodiments.

Detailed Description

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. It will be appreciated that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.

Various operations are described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation.
The operations described may be performed in a different order than the described embodiments. Various additional operations may be performed, and/or described operations may be omitted, in additional embodiments.

For the purposes of the present disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description uses the phrases "in one embodiment" or "in an embodiment," each of which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

As used herein, the term "circuitry" may refer to, be part of, or include an application-specific integrated circuit (ASIC), an electronic circuit, a system on a chip (SoC), a processor (shared, dedicated, or group), combinational logic, and/or other suitable hardware components that provide the described functionality. As used herein, the term "computer-implemented method" may refer to any method executed by one or more processors, by a computer system having one or more processors, or by a mobile device such as a smartphone (which may include one or more processors), a tablet, a laptop computer, a set-top box, a game console, and so forth.

The description and the drawings may denote a transistor as an MPx transistor to indicate that the transistor is a p-type transistor, or as an MNx transistor to indicate that the transistor is an n-type transistor. The transistor types are presented illustratively; other embodiments may use other types of transistors to achieve similar functionality.

Various embodiments may include a voltage level shifter circuit to convert a data signal from a first voltage domain to a second voltage domain.
The data signal may be a digital data signal that transitions between a low voltage level used to represent a first logic value (e.g., a logic 0) and a high voltage level used to represent a second logic value (e.g., a logic 1). In some embodiments, the low voltage level may be a ground voltage, and the high voltage level may be a positive voltage (e.g., with a value based on the supply voltage used by the voltage domain). The voltage difference between the low voltage level and the high voltage level of the data signal may be greater in the second voltage domain than in the first voltage domain. Additionally, the high supply voltage VDD-high (VDDH) used by the second voltage domain may be greater than the low supply voltage VDD-low (VDDL) used by the first voltage domain.

In various embodiments, the voltage level shifter circuits described herein may include one or more components to reduce the minimum voltage Vmin of the low supply voltage VDDL at which the voltage level shifter circuit can operate (e.g., across process, voltage, and temperature conditions). For example, the voltage level shifter circuit may include an adaptive keeper circuit, an enhanced interruptible power supply circuit, and/or a capacitance boost circuit to reduce the Vmin of the low supply voltage. The reduced Vmin may allow circuit blocks that operate in the first voltage domain to operate at a lower supply voltage, thereby reducing power consumption.

FIG. 1 schematically illustrates a voltage level shifter circuit 100 (hereinafter "circuit 100") including an adaptive keeper circuit, in accordance with various embodiments. Circuit 100 receives an input data signal DIN at input terminal 102 and provides an output data signal DOUT at output terminal 104. Circuit 100 includes an input circuit 106 coupled to a level shifter circuit 108.
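The overall function described above, reproducing the logic value of DIN at the larger VDDH swing, can be summarized with a small behavioral sketch. This is an illustrative model only, not circuitry from the disclosure; the supply voltage values are assumptions.

```python
# Behavioral sketch of an ideal level shifter: the output reproduces the
# input's logic value, but swings between 0 V and VDDH instead of VDDL.
VDDL = 0.55  # assumed low-domain supply (volts)
VDDH = 1.00  # assumed high-domain supply (volts)

def level_shift(din_volts, vddl=VDDL, vddh=VDDH):
    """Map a low-domain digital signal to the high-domain swing."""
    logic_one = din_volts > vddl / 2  # decide the logic value at mid-swing
    return vddh if logic_one else 0.0

print(level_shift(0.55))  # logic 1 in, re-driven at VDDH
print(level_shift(0.0))   # logic 0 in, stays at ground
```

The transistor-level circuits that follow implement this mapping while minimizing the VDDL value (Vmin) at which it still works reliably.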
Input circuit 106 may include three inverters 110a-c coupled in series with input terminal 102 to generate an input signal IN (an inverted signal of the input data signal DIN), an inverted input signal INB (an inverted signal of the input signal IN), and a delayed input signal INd (a delayed signal of the input signal IN), as shown. The input signal IN, the inverted input signal INB, and the delayed input signal INd may be passed to correspondingly labeled nodes of level shifter circuit 108, as shown in FIG. 1. Input circuit 106 and level shifter circuit 108 are shown as separate circuits for ease of illustration.

In various embodiments, the input data signal DIN may be received at input terminal 102 in the low voltage domain. Inverters 110a-c may be coupled to a low supply rail 112 to receive the low supply voltage VDDL, and inverters 110a-c may operate at the low supply voltage VDDL.

In various embodiments, level shifter circuit 108 may include a data node (Q) 114 that is driven to the current value of the input signal IN, and an inverted data node (QB) 116 that is driven to the inverse of the current value of the input signal IN. Inverted data node (QB) 116 is coupled via an inverter 118 to output terminal 104 to provide the output data signal DOUT. In other embodiments, output terminal 104 may be coupled to data node 114 to receive the output data signal DOUT.

In various embodiments, level shifter circuit 108 may include a high supply voltage rail 120 that receives the high supply voltage VDDH. Pull-up transistors MP1 and MP2 may be coupled (e.g., at their source terminals) to high supply voltage rail 120. Interrupt transistor MP3 may be coupled between pull-up transistor MP1 and data node 114, and interrupt transistor MP4 may be coupled between pull-up transistor MP2 and inverted data node 116.
Pull-down transistor MN1 may be coupled between data node 114 and ground voltage 122, and pull-down transistor MN2 may be coupled between inverted data node 116 and ground voltage 122. Interrupt transistor MP3 and pull-down transistor MN1 may receive the input signal IN at their respective gate terminals. Interrupt transistor MP4 and pull-down transistor MN2 may receive the inverted input signal INB at their respective gate terminals.

In various embodiments, the adaptive keeper circuit of circuit 100 may include keeper transistors MN3 and MN4 and/or firewall transistors MN5 and MN6. In an embodiment, the drain terminal of keeper transistor MN3 may be coupled to receive the inverted input signal INB. The source terminal of keeper transistor MN3 may be coupled to data node 114, and the gate terminal of keeper transistor MN3 may be coupled to inverted data node 116. In an embodiment, the drain terminal of keeper transistor MN4 may be coupled to receive the delayed input signal INd. The source terminal of keeper transistor MN4 may be coupled to inverted data node 116, and the gate terminal of keeper transistor MN4 may be coupled to data node 114.

Firewall transistor MN5 may be coupled between keeper transistor MN3 and ground voltage 122, and firewall transistor MN6 may be coupled between keeper transistor MN4 and ground voltage 122. The gate terminals of firewall transistors MN5 and MN6 may be coupled to one another at a firewall node 124. Firewall node 124 may receive a firewall signal that has a logic low value (e.g., 0 volts) when the low voltage domain is active (e.g., not power gated), and a logic high value when the low voltage domain is power gated (e.g., powered down). When the low voltage domain is power gated, the low supply voltage VDDL may be lowered and/or turned off (e.g., to 0 volts).
In various embodiments, firewall transistors MN5 and MN6 may be off (e.g., non-conducting) when the firewall signal has a logic low value.

In various embodiments, when the input signal IN transitions from a logic high level (e.g., VDDL) to a logic low level (e.g., 0 volts), pull-down transistor MN1 may turn off and keeper transistor MN3 may turn on, thereby charging data node 114. At this point, data node 114 may be in a high-impedance state, and the voltage at the gate terminal of keeper transistor MN3 (and at inverted data node 116) may have a value of VDDH. In various embodiments, the high supply voltage VDDH may be greater than the sum of the low supply voltage VDDL and the threshold voltage VTHmn3 of keeper transistor MN3. Accordingly, data node 114 may be charged to VDDL by keeper transistor MN3. Alternatively, VDDH may be less than the sum of VDDL and VTHmn3, in which case data node 114 may be charged to a value of VDDH - VTHmn3.

Charging data node 114 in this manner reduces the gate-source voltage of pull-up transistor MP2, thereby reducing the pull-up strength of pull-up transistor MP2 (e.g., the amount of current conducted by MP2). Even at a low VDDL, this mitigates the contention between pull-down transistor MN2 and pull-up transistor MP2, allowing pull-down transistor MN2 to pull inverted data node 116 to 0 volts. When inverted data node 116 completes its transition from VDDH to 0 volts, keeper transistor MN3 may turn off, and data node 114 may be charged to VDDH by pull-up transistor MP1.

Keeper transistor MN4 provides a similar contention reduction when the inverted input signal INB transitions from a logic high to a logic low.
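The two charging cases above follow from the usual NMOS source-follower limit: with its gate at VDDH and its drain supplied at VDDL, a keeper such as MN3 can pull its source node no higher than the smaller of VDDL and VDDH minus its threshold. A hedged numeric sketch (the supply and threshold values are assumptions for illustration, not parameters from the disclosure):

```python
# Voltage to which an NMOS keeper (gate at VDDH, drain at VDDL) can
# charge the data node: limited both by the drain level and by
# Vgate - Vth (the source-follower limit).
def keeper_charge_level(vddh, vddl, vth):
    return min(vddl, vddh - vth)

# Case 1: VDDH > VDDL + Vth, so the node reaches the full VDDL.
print(keeper_charge_level(vddh=1.0, vddl=0.5, vth=0.3))  # 0.5
# Case 2: VDDH < VDDL + Vth, so the node stalls near VDDH - Vth.
print(keeper_charge_level(vddh=0.7, vddl=0.5, vth=0.3))  # ~0.4
```

Either way, the keeper lifts the gate of the opposing pull-up transistor, which is what weakens the contention described next.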
Similar to the contention reduction that keeper transistor MN3 provides between pull-up transistor MP2 and pull-down transistor MN2, keeper transistor MN4 may assist the transition process by reducing the contention between pull-up transistor MP1 and pull-down transistor MN1. Because the drain terminal of keeper transistor MN4 receives the delayed input signal INd, pull-down transistor MN2 may turn off (e.g., based on the inverted input signal INB) before keeper transistor MN4 begins charging inverted data node 116.

In various embodiments, an additional contention path may exist while inverted data node 116 is being pulled down and pull-up transistor MP1 is turned on. While keeper transistor MN3 supplies VDDL to data node 114, pull-up transistor MP1 charges data node 114 toward VDDH. This contention path no longer exists once inverted data node 116 has fully transitioned to 0 and keeper transistor MN3 is turned off. However, this contention path may increase the delay of circuit 100 (e.g., the delay from the input data signal DIN to the output data signal DOUT).

Additionally, during the contention between pull-up transistor MP1 and keeper transistor MN3, a short-circuit current may flow from high supply voltage rail 120 to low supply rail 112 via pull-up transistor MP1 and keeper transistor MN3. In some embodiments, however, this short-circuit current may be used by one or more devices (e.g., logic devices) that operate in the low voltage domain, so the short-circuit current is not wasted.

In various embodiments, firewall transistors MN5 and MN6 and/or the transistors of inverter 110c may have relatively small dimensions.
Firewall transistors MN5 and MN6 may be turned on only when the low voltage domain is power gated, and therefore do not affect the delay of circuit 100.

FIG. 2 illustrates a voltage level shifter circuit 200 (hereinafter "circuit 200") including an enhanced interruptible power supply circuit, in accordance with various embodiments. Circuit 200 may include components similar to those of circuit 100, as indicated by like reference numerals. Circuit 200 may not include the adaptive keeper circuit of circuit 100. Accordingly, the drain terminals of keeper transistors MN3 and MN4 may be coupled to ground voltage 222, with no intervening firewall transistors. Additionally, input circuit 206 may include two inverters 210a-b to generate an input signal IN (an inverted signal of the input data signal DIN) and an inverted input signal INB (an inverted signal of the input signal IN).

In various embodiments, level shifter circuit 208 of circuit 200 may include an enhanced interruptible power supply circuit that includes pull-down transistors MN7 and MN8. The source terminal of pull-down transistor MN7 may be coupled to an intermediate node (N) 230 between pull-up transistor MP1 and interrupt transistor MP3. The gate terminal of pull-down transistor MN7 may receive the input signal IN (e.g., may be coupled to the gate terminal of interrupt transistor MP3 and/or the gate terminal of pull-down transistor MN1).

The source terminal of pull-down transistor MN8 may be coupled to an inverted intermediate node (NB) 232 between pull-up transistor MP2 and interrupt transistor MP4. The gate terminal of pull-down transistor MN8 may receive the inverted input signal INB (e.g., may be coupled to the gate terminal of interrupt transistor MP4 and/or the gate terminal of pull-down transistor MN2).
The drain terminals of pull-down transistors MN7 and MN8 may be coupled to a common ground voltage 222.

In various embodiments, when the input signal IN transitions from 0 to VDDL, the gate-source voltage of interrupt transistor MP3 is reduced, thereby reducing the strength of the pull-up path provided by pull-up transistor MP1 and interrupt transistor MP3. Accordingly, data node 214 may be pulled down to 0 volts by pull-down transistor MN1. However, as the value of VDDL decreases, the power supply interruption provided by interrupt transistor MP3 weakens, and a contention path may exist between pull-up transistor MP1 and pull-down transistor MN1.

In various embodiments, pull-down transistor MN7 may provide an additional power supply interruption that further weakens the pull-up path, allowing the use of a reduced VDDL value (e.g., a reduced Vmin). When the input signal IN transitions from 0 to VDDL, a resistive path is formed between pull-down transistor MN7 and pull-up transistor MP1. The resistive path reduces the voltage of intermediate node N by an amount Δ (e.g., from VDDH to VDDH - Δ). The reduced voltage of intermediate node N lowers the gate-source voltage of interrupt transistor MP3, thereby enhancing the power supply interruption provided by interrupt transistor MP3.

For example, when VDDL is relatively low (e.g., close to Vmin) and interrupt transistor MP3 is in a sub-threshold state during level shifting, even a small reduction in the voltage of intermediate node N can significantly reduce the strength of interrupt transistor MP3.
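In sub-threshold operation, a small reduction in the node-N voltage translates exponentially into a weaker pull-up path: drain current falls by roughly one decade for every subthreshold swing S of lost gate drive. The sketch below uses an assumed, illustrative S of about 100 mV/decade (a typical textbook figure, not a value stated in the disclosure):

```python
# Subthreshold estimate of how much the pull-up current shrinks when the
# source of MP3 (node N) drops by delta, reducing its gate-source drive
# by the same amount.
def subthreshold_reduction(delta_mv, swing_mv_per_decade=100.0):
    """Return the factor I_before / I_after for a gate-drive loss of delta_mv."""
    return 10 ** (delta_mv / swing_mv_per_decade)

print(subthreshold_reduction(100.0))  # 10.0 -> roughly 10x weaker pull-up
print(subthreshold_reduction(60.0))   # ~4x at a smaller delta
```

This exponential sensitivity is why even a modest Δ at node N meaningfully relieves the contention between MP1 and MN1.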
In one non-limiting example, a Δ of approximately 100 mV may provide a reduction of approximately 10x in the pull-up strength of the pull-up path provided by pull-up transistor MP1 and interrupt transistor MP3.

When the inverted input signal INB transitions from 0 to VDDL, a similar power supply interruption may be provided by interrupt transistor MP4 and pull-down transistor MN8 to reduce the strength of the pull-up path provided by pull-up transistor MP2 and interrupt transistor MP4 for inverted data node 216.

FIG. 3 illustrates a voltage level shifter circuit 300 (hereinafter "circuit 300") including a stacked enhanced interruptible power supply circuit, in accordance with various embodiments. Circuit 300 may include components similar to those of circuit 200, as indicated by like reference numerals.

Compared with circuit 200, circuit 300 may include additional interrupt transistors MP5 and MP6, as well as additional pull-down transistors MN9 and MN10. Interrupt transistor MP5 may be coupled between pull-up transistor MP1 and interrupt transistor MP3 (e.g., the drain terminal of interrupt transistor MP5 may be coupled to the source terminal of interrupt transistor MP3 at a first intermediate node (N1) 334, and the source terminal of interrupt transistor MP5 may be coupled to the drain terminal of pull-up transistor MP1 at a second intermediate node (N2) 336). The gate terminal of pull-down transistor MN9 and the gate terminal of interrupt transistor MP5 may receive the input signal IN. The source terminal of pull-down transistor MN9 may be coupled to second intermediate node 336, and the drain terminal of pull-down transistor MN9 may be coupled to ground voltage 322.

As shown in FIG. 3, interrupt transistor MP6 and pull-down transistor MN10 may be coupled in circuit 300 in a similar manner.
For example, interrupt transistor MP6 may be coupled between a first inverted intermediate node (N1B) 338 and a second inverted intermediate node (N2B) 340 (e.g., between interrupt transistor MP4 and pull-up transistor MP2). Pull-down transistor MN10 may be coupled between second inverted intermediate node 340 and ground voltage 322.

In various embodiments, interrupt transistor MP5 and pull-down transistor MN9 may provide a further power supply interruption to further weaken the pull-up strength of the pull-up path provided by pull-up transistor MP1 and interrupt transistors MP3 and MP5. Similarly, interrupt transistor MP6 and pull-down transistor MN10 may provide a further power supply interruption to further weaken the pull-up strength of the pull-up path provided by pull-up transistor MP2 and interrupt transistors MP4 and MP6.

FIG. 4 illustrates a voltage level shifter circuit 400 (hereinafter "circuit 400") including a capacitance boost circuit, in accordance with various embodiments. Circuit 400 may include components similar to those of circuits 100 and/or 200, as indicated by like reference numerals. Circuit 400 may not include the adaptive keeper circuit of circuit 100 or the enhanced interruptible power supply circuit of circuits 200 and 300.

In various embodiments, input circuit 406 may include a plurality of inverters 410a-f coupled in series with input terminal 402. Inverters 410a-f may generate an input signal IN, an inverted input signal INB, a delayed input signal IND, and a delayed inverted input signal INBD.
In various embodiments, the delayed input signal IND and the delayed inverted input signal INBD may be delayed by a longer time period relative to the delayed input signal INd described above with respect to circuit 100 of FIG. 1.

In various embodiments, the capacitance boost circuit of circuit 400 may include p-type transistors MPX1 and MPX2 and n-type transistor MNX1 coupled between an input node 444 that receives the input signal IN and a boost input node 446 that carries a boosted input signal INX. Transistors MNX1 and MPX1 may receive the input signal IN at their drain terminals. Transistor MPX2 may be capacitively configured (e.g., with its drain and source terminals coupled to each other along a conductive path between input node 444 and boost input node 446). Transistor MNX1 may receive the low supply voltage VDDL at its gate terminal, and the gate terminals of transistors MPX1 and MPX2 may receive the delayed input signal IND.

In various embodiments, the boosted input signal INX may be passed to the inputs of level shifter circuit 408 (e.g., to interrupt transistor MP3 and pull-down transistor MN1). In various embodiments, when the input signal IN has a value of VDDL, the capacitance boost circuit may generate a boosted input signal INX having a voltage higher than VDDL.

When the input signal IN transitions from 0 to VDDL, the delayed input signal IND has not yet transitioned, and thus transistors MPX1 and MPX2 are on. As long as the delayed input signal IND remains at 0 volts, boost input node 446 may be charged to VDDL through transistor MPX1. Subsequently, when the delayed input signal IND transitions from 0 to VDDL, the voltage swing of the delayed input signal IND is transferred to boost input node 446 through capacitively coupled transistor MPX2, thereby charging the boosted input signal INX to a voltage greater than VDDL.

In an embodiment, transistor MNX1 may act as a diode (e.g., when the input signal IN is at VDDL).
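The boost step can be approximated by charge sharing: node 446 is precharged to VDDL, and the delayed VDDL edge of IND then couples onto the node through the MOS capacitor MPX2 in proportion to the capacitance ratio, while the diode-connected MNX1 sets a droop floor. This is an illustrative sketch with assumed capacitance and threshold values, not circuit parameters from the disclosure:

```python
# Charge-sharing estimate of the boosted level INX at node 446.
# The node is precharged to VDDL through MPX1; the later 0->VDDL edge of
# IND couples through the MOS capacitor MPX2 against the node's
# parasitic capacitance C_par.
def boosted_level(vddl, c_boost, c_par):
    return vddl + vddl * c_boost / (c_boost + c_par)

def diode_floor(vddl, vth):
    # Level at which the diode-connected MNX1 turns on to recharge node 446.
    return vddl - vth

# Assumed, illustrative values: 10 fF coupling cap, 5 fF parasitic, 0.3 V Vth.
print(round(boosted_level(0.5, 10e-15, 5e-15), 3))  # 0.833 V, above VDDL
print(round(diode_floor(0.5, 0.3), 3))              # 0.2 V droop floor
```

The larger the coupling capacitance relative to the node parasitics, the closer INX approaches the ideal 2x VDDL boost.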
If the voltage of the boosted input signal INX falls below VDDL - VTHmnx1 (where VTHmnx1 is the threshold voltage of transistor MNX1), transistor MNX1 may turn on to charge boost input node 446.

In various embodiments, the higher voltage of the boosted input signal INX compared with the input signal IN may increase the pull-down strength of pull-down transistor MN1, thereby reducing the contention between pull-down transistor MN1 and pull-up transistor MP1.

In various embodiments, a similar capacitance boost may be provided by transistors MPX3, MPX4, and MNX2 when the inverted input signal INB transitions from 0 to VDDL. Capacitively coupled transistor MPX4 may generate a boosted inverted input signal INBX at boost inverted input node 448.

In some embodiments, a voltage level shifter circuit may include any combination of an adaptive keeper circuit (e.g., the adaptive keeper circuit of circuit 100), an enhanced interruptible power supply circuit (e.g., the enhanced interruptible power supply circuits of circuits 200 and 300), and/or a capacitance boost circuit (e.g., the capacitance boost circuit of circuit 400). The adaptive keeper circuit, the enhanced interruptible power supply circuit, and the capacitance boost circuit may each provide a reduced minimum voltage Vmin (e.g., a minimum value of the low supply voltage VDDL) for the voltage level shifter circuit. However, each of these circuits may also add delay to the voltage level shifter circuit. Accordingly, the combination and/or structure of the adaptive keeper circuit, the enhanced interruptible power supply circuit, and/or the capacitance boost circuit may be selected based on the application.

For example, FIG. 5 illustrates a voltage level shifter circuit 500 (hereinafter "circuit 500") including an adaptive keeper circuit and an enhanced interruptible power supply circuit, in accordance with various embodiments.
Similar to the adaptive keeper circuit of circuit 100, the adaptive keeper circuit may include keeper transistors MN3 and MN4 and firewall transistors MN5 and MN6. Similar to the enhanced interruptible power supply circuit of circuit 200, the enhanced interruptible power supply circuit may include pull-down transistors MN7 and MN8 and interrupt transistors MP3 and MP4.

FIG. 6 illustrates a voltage level shifter circuit 600 (hereinafter "circuit 600") including an adaptive keeper circuit and a capacitance boost circuit. Similar to the adaptive keeper circuit of circuit 100, the adaptive keeper circuit may include keeper transistors MN3 and MN4 and firewall transistors MN5 and MN6. Similar to the capacitance boost circuit of circuit 400, the capacitance boost circuit may include p-type transistors MPX1 and MPX3, n-type transistors MNX1 and MNX2, and capacitively coupled transistors MPX2 and MPX4.

In an embodiment, circuit 600 may further include an input circuit 606 that includes a plurality of inverters 610a-f. Input circuit 606 may receive the input data signal DIN from input terminal 602 and may generate the input signal IN, the inverted input signal INB, a first delayed input signal INd, a first delayed inverted input signal INBd, a second delayed input signal IND, and a second delayed inverted input signal INBD. The delay time periods of the second delayed input signal IND and the second delayed inverted input signal INBD may be longer than those of the first delayed input signal INd and the first delayed inverted input signal INBd, respectively.

The first delayed inverted input signal INBd may be passed to the drain terminal of keeper transistor MN3, and the first delayed input signal INd may be passed to the drain terminal of keeper transistor MN4. The second delayed input signal IND may be passed to the gate terminals of p-type transistor MPX1 and capacitively coupled transistor MPX2.
The second delayed inverted input signal INBD may be passed to the gate terminals of p-type transistor MPX3 and capacitively coupled transistor MPX4. In some embodiments, input circuit 606 may include additional inverters coupled between inverter 610a and inverter 610f to provide a desired delay for signals INd, INBd, IND, and/or INBD.

FIG. 7 illustrates a voltage level shifter circuit 700 (hereinafter "circuit 700") including an enhanced interruptible power supply circuit and a capacitance boost circuit. Similar to the enhanced interruptible power supply circuit of circuit 200, the enhanced interruptible power supply circuit may include pull-down transistors MN7 and MN8 and interrupt transistors MP3 and MP4. Similar to the capacitance boost circuit of circuit 400, the capacitance boost circuit may include p-type transistors MPX1 and MPX3, n-type transistors MNX1 and MNX2, and capacitively coupled transistors MPX2 and MPX4.

FIG. 8 illustrates a voltage level shifter circuit 800 (hereinafter "circuit 800") including an adaptive keeper circuit, an enhanced interruptible power supply circuit, and a capacitance boost circuit, in accordance with various embodiments. Similar to the adaptive keeper circuit of circuit 100, the adaptive keeper circuit may include keeper transistors MN3 and MN4 and firewall transistors MN5 and MN6. Similar to the enhanced interruptible power supply circuit of circuit 200, the enhanced interruptible power supply circuit may include pull-down transistors MN7 and MN8 and interrupt transistors MP3 and MP4.
Similar to the capacitance boosting circuit of circuit 400, the capacitance boosting circuit can include p-type transistors MPX1 and MPX3, n-type transistors MNX1 and MNX2, and capacitively coupled transistors MPX2 and MPX4.

FIG. 9 shows a voltage level shifter circuit 900 ("circuit 900" hereinafter) that includes an adaptive keeper circuit, an enhanced interruptible power supply circuit, and a capacitance boost circuit similar to circuit 800. Circuit 900 further includes enable transistors MNEN1 and MNEN2 to allow selective activation of the enhanced interruptible power supply circuit. The enable transistor MNEN1 can be coupled between the pull-down transistor MN7 and ground. The enable transistor MNEN2 can be coupled between the pull-down transistor MN8 and ground. The enable transistors MNEN1 and MNEN2 can receive the first enable signal EN1 at their respective gate terminals. The first enable signal EN1 can turn on the enable transistors MNEN1 and MNEN2 to enable the enhanced interruptible power supply circuit, and can turn off the enable transistors MNEN1 and MNEN2 to disable the enhanced interruptible power supply circuit.

Additionally, circuit 900 can include enable transistors MPEN1 and MPEN2 to allow selective activation of the capacitance boost circuit. The source terminal of enable transistor MPEN1 can be coupled to boost input node 946. The drain terminal of enable transistor MPEN1 can be coupled to inverter 910a of input circuit 906 to receive the input signal IN. The gate terminal of the enable transistor MPEN1 can receive the second enable signal EN2. The source terminal of enable transistor MPEN2 can be coupled to boost inverse input node 948. The drain terminal of enable transistor MPEN2 can be coupled to inverter 910b of input circuit 906 to receive the inverted input signal INB.
The gate terminal of the enable transistor MPEN2 can receive the second enable signal EN2. The second enable signal EN2 can turn off the enable transistors MPEN1 and MPEN2 to enable the capacitance boost circuit, and can turn on the enable transistors MPEN1 and MPEN2 to disable the capacitance boost circuit. When the capacitance boost circuit is disabled, the enable transistor MPEN1 can pass the input signal IN to the boost input node 946, and the enable transistor MPEN2 can pass the inverted input signal INB to the boost inverse input node 948.

In various embodiments, input circuit 906 of circuit 900 can include a plurality of inverters 910a-f coupled in series with input 902. In some embodiments, the inverter 910d of the input circuit 906 can be a tri-state inverter. When the capacitance boost circuit is disabled, the inverter 910d can receive the second enable signal EN2 at its tri-state input to selectively place the inverter 910d in tri-state mode. When the inverter 910d is in tri-state mode, the output of the inverter 910d can have a high impedance, and the outputs of the inverters 910e and 910f can be effectively cut off. Therefore, the transistors MPX1, MPX2, MPX3, and MPX4 can be turned off.

FIG. 10 illustrates a voltage level shifter circuit 1000 (hereinafter "circuit 1000") in accordance with various embodiments. Circuit 1000 can include a first level shifter stage 1050 (also referred to as "first stage 1050") and a second level shifter stage 1052 (also referred to as "second stage 1052"). The first stage 1050 can include circuitry similar to circuits 100, 200, 300, 400, 500, 600, 700, 800, and/or 900. For example, the first stage 1050 shown in FIG. 10 includes circuitry similar to circuit 100 (with adaptive keeper circuitry). The first stage 1050 can include a diode-connected transistor 1054 coupled between the high power rail 1020 and the node 1056 between the pull-up transistors MP1 and MP2.
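The enable behavior of circuit 900 reduces to simple gating logic: EN1 is effectively active-high for the enhanced interruptible power supply circuit (n-type MNEN1/MNEN2), while EN2 is effectively active-low for the capacitance boost circuit (p-type MPEN1/MPEN2). A minimal logic-level sketch of this mapping (the function itself is illustrative; the signal names are the text's):

```python
def circuit_900_modes(en1: bool, en2: bool) -> dict:
    """Map circuit 900's enable signals to the active optional circuits.

    EN1 high turns on MNEN1/MNEN2, enabling the enhanced interruptible
    power supply circuit. EN2 low turns off p-type MPEN1/MPEN2, enabling
    the capacitance boost circuit; EN2 high turns them on, bypassing the
    boost so IN/INB pass straight to nodes 946/948.
    """
    return {
        "enhanced_power_supply": en1,
        "capacitance_boost": not en2,
        "boost_nodes_bypassed": en2,  # MPEN1/MPEN2 conduct IN/INB directly
    }
```

Note the opposite polarities follow from the device types: the EN1 enables are n-type (conduct when high), the EN2 enables are p-type (conduct when low).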
Diode-connected transistor 1054 can reduce the voltage at node 1056 to an intermediate voltage VDDHI that is lower than the high supply voltage VDDH (e.g., lower by the threshold voltage of diode-connected transistor 1054). Thus, the first stage 1050 can generate a data signal Q1 at the data node 1014 and an inverted data signal Q1B at the inverted data node 1016, in an intermediate voltage domain between the low voltage domain and the high voltage domain. The data signal Q1 and the inverted data signal Q1B can be transferred to the second stage 1052.

In various embodiments, the second stage 1052 can level shift the data signal Q1 and/or the inverted data signal Q1B to produce an output data signal in the high voltage domain (e.g., which swings between 0 volts and VDDH). The second stage 1052 may or may not include an adaptive keeper circuit, an enhanced interruptible power supply circuit, and/or a capacitance boost circuit.

It will be apparent that embodiments of circuit 1000 can include any suitable number of diode-connected transistors 1054 to generate the intermediate voltage VDDHI. Additionally or alternatively, in some embodiments, circuit 1000 can include more than two shifter stages.

FIG. 11 illustrates an exemplary computing device 1100 that may use the devices and/or methods described herein (e.g., circuits 100, 200, 300, 400, 500, 600, 700, 800, 900, and/or 1000) in accordance with various embodiments. As shown, computing device 1100 can include multiple components, such as one or more processors 1104 (one shown) and at least one communication chip 1106. In various embodiments, one or more processors 1104 can each include one or more processor cores. In various embodiments, at least one communication chip 1106 can be physically and electrically coupled to one or more processors 1104. In a further implementation, the communication chip 1106 can be a component of one or more processors 1104.
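Numerically, each diode-connected transistor 1054 drops roughly one threshold voltage from the high rail, so with n such devices the first-stage swing is about VDDH − n·Vth while the second stage restores the full high-domain swing. The sketch below uses illustrative numbers; neither the function nor the values are from the text.

```python
def stage_swings(vddh: float, vth: float, n_diodes: int = 1) -> dict:
    """Approximate output swing of each stage in a two-stage shifter
    like circuit 1000 (first-order model: one Vth drop per diode)."""
    vddhi = vddh - n_diodes * vth      # intermediate rail seen by stage 1
    return {
        "stage1_swing": (0.0, vddhi),  # Q1 / Q1B in the intermediate domain
        "stage2_swing": (0.0, vddh),   # final output in the high domain
    }
```

Splitting the shift this way means no single stage has to bridge the full gap between the low and high domains, which is the motivation for the intermediate voltage VDDHI.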
In various embodiments, computing device 1100 can include a printed circuit board (PCB) 1102, on which one or more processors 1104 and the communication chip 1106 are disposed. In an alternative embodiment, the various components may be coupled without the use of PCB 1102.

Depending on its application, computing device 1100 can include other components that may or may not be physically and electrically coupled to PCB 1102. These other components include, but are not limited to, a memory controller 1105, volatile memory (e.g., dynamic random access memory (DRAM) 1108), non-volatile memory such as read only memory (ROM) 1110, flash memory 1112, a storage device 1111 (e.g., a hard disk drive (HDD)), an I/O controller 1114, a digital signal processor (not shown), an encryption processor (not shown), a graphics processor 1116, one or more antennas 1118, a display (not shown), a touch screen display 1120, a touch screen controller 1122, a battery 1124, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 1128, a compass 1130, an accelerometer (not shown), a gyroscope (not shown), a speaker 1132, a mass storage device (e.g., a hard disk drive, a solid state drive, a compact disk (CD), or a digital versatile disk (DVD)) (not shown), and so forth. In various embodiments, processor 1104 can be integrated with other components on the same die to form a system on a chip (SoC).

In some embodiments, one or more processors 1104, flash memory 1112, and/or storage device 1111 can include associated firmware (not shown) storing programming instructions configured such that execution of the programming instructions by one or more processors 1104 enables computing device 1100 to practice all or selected aspects of the methods described herein.
In various embodiments, these aspects may be additionally or alternatively implemented using hardware separate from one or more processors 1104, flash memory 1112, or storage device 1111.

In various embodiments, one or more components of computing device 1100 can include the circuits 100, 200, 300, 400, 500, 600, 700, 800, 900, and/or 1000 described herein. For example, circuits 100, 200, 300, 400, 500, 600, 700, 800, 900, and/or 1000 may be included in I/O controller 1114, processor 1104, memory controller 1105, and/or another component of computing device 1100. In some embodiments, circuits 100, 200, 300, 400, 500, 600, 700, 800, 900, and/or 1000 may be included in processor 1104 to allow circuits that operate in a relatively low voltage domain to connect with circuits that operate in a relatively high voltage domain. In an embodiment, processor 1104 can include multiple circuits 100, 200, 300, 400, 500, 600, 700, 800, 900, and/or 1000.

Communication chip 1106 can implement wired and/or wireless communication for communicating data to and from computing device 1100. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, and the like that can communicate data through the use of modulated electromagnetic radiation over a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not. The communication chip 1106 can implement any of a number of wireless standards or protocols including, but not limited to, IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized
(Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data Rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 1100 can include a plurality of communication chips 1106. For example, a first communication chip 1106 can be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip 1106 can be dedicated to longer-range wireless communications, such as GPS, EDGE, GPRS, CDMA, LTE, Ev-DO, and others.

In various implementations, computing device 1100 can be a laptop, a netbook, a notebook, an ultrabook, a smart phone, a computing tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, a home appliance, a portable music player, or a digital video camera.
In a further implementation, the computing device 1100 can be any other electronic device that processes data.

Some non-limiting examples are provided below.

Example 1 is a voltage level shifter circuit comprising: an input node for receiving an input signal in a first voltage domain; a data node for maintaining a logic state of the input signal for generating an output signal, the output signal corresponding to the input signal and in a second voltage domain; an inverse data node for maintaining a logic state of an inverted input signal, the inverted input signal being an inversion of the input signal; and a keeper transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverse data node, and a drain terminal receiving the inverted input signal.

Example 2 is the circuit of example 1, wherein the keeper transistor is a first keeper transistor, and wherein the circuit further comprises a second keeper transistor having a source terminal coupled to the inverse data node, a gate terminal coupled to the data node, and a drain terminal that receives a delayed signal of the input signal.

Example 3 is the circuit of example 2, further comprising: a first firewall transistor coupled between the first keeper transistor and a ground terminal; and a second firewall transistor coupled between the second keeper transistor and the ground terminal, wherein a gate terminal of the second firewall transistor is coupled to a gate terminal of the first firewall transistor, and wherein the gate terminals of the first firewall transistor and the second firewall transistor receive a firewall signal for selectively driving the data node and the inverse data node to 0 volts when the first voltage domain is power-gated.

Example 4 is the circuit of any one of examples 1 to 3, further comprising: a pull-down transistor coupled between the data node and the ground terminal; an interrupt transistor coupled to the data
node; and a pull-up transistor coupled between the interrupt transistor and a power rail, the power rail receiving a supply voltage.

Example 5 is the circuit of example 4, wherein the gate terminals of the interrupt transistor and the pull-down transistor receive the input signal, and wherein the gate terminal of the pull-up transistor is coupled to the inverse data node.

Example 6 is the circuit of example 5, wherein the pull-down transistor is a first pull-down transistor, and wherein the circuit further comprises a second pull-down transistor coupled between the ground terminal and an intermediate node, the intermediate node being between the pull-up transistor and the interrupt transistor, wherein the gate terminal of the second pull-down transistor receives the input signal.

Example 7 is the circuit of example 6, wherein the interrupt transistor is a first interrupt transistor, and wherein the circuit further comprises: a second interrupt transistor coupled between the first interrupt transistor and the pull-up transistor; and a third pull-down transistor coupled between the ground terminal and a second intermediate node, the second intermediate node being between the pull-up transistor and the second interrupt transistor, wherein the gate terminal of the third pull-down transistor receives the input signal.

Example 8 is the circuit of example 4, further comprising a capacitance boosting circuit coupled to the input node for transmitting a boosted input signal to the interrupt transistor and the pull-down transistor.

Example 9 is the circuit of example 1, wherein the input node, the data node, the inverse data node, and the keeper transistor are included in a first stage of the voltage level shifter circuit, and wherein the voltage level shifter circuit further includes a second stage for receiving an output signal of the first stage and generating an output signal of the second stage in a third voltage domain.

Example 10 is a voltage level
shifter circuit comprising: an input node for receiving an input data signal associated with a first voltage domain; a data node for maintaining a logic state of the input data signal for generating an output signal, the output signal corresponding to the input signal and in a second voltage domain that is higher than the first voltage domain; a first pull-down transistor coupled between the data node and a ground terminal, a gate terminal of the first pull-down transistor receiving the input signal; an interrupt transistor coupled to the data node, the gate terminal of the interrupt transistor receiving the input signal; a pull-up transistor coupled between the interrupt transistor and a power rail, the power rail receiving a supply voltage associated with the second voltage domain; and a second pull-down transistor coupled between the ground terminal and an intermediate node, the intermediate node being between the pull-up transistor and the interrupt transistor, wherein the gate terminal of the second pull-down transistor receives the input signal.

Example 11 is the circuit of example 10, wherein the interrupt transistor is a first interrupt transistor, and wherein the circuit further comprises: a second interrupt transistor coupled between the first interrupt transistor and the pull-up transistor; and a third pull-down transistor coupled between the ground terminal and a second intermediate node, the second intermediate node being between the pull-up transistor and the second interrupt transistor, wherein the gate terminal of the third pull-down transistor receives the input signal.

Example 12 is the circuit of example 10, further comprising: an inverse data node to maintain a logic state of an inverted input signal, the inverted input signal being an inversion of the input signal, wherein the gate terminal of the pull-up transistor is coupled to the inverse data node.

Example 13 is the circuit of example 12, further comprising: a keeper
transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverse data node, and a drain terminal receiving the inverted input signal.

Example 14 is the circuit of example 13, wherein the keeper transistor is a first keeper transistor, and wherein the circuit further comprises a second keeper transistor having a source terminal coupled to the inverse data node, a gate terminal coupled to the data node, and a drain terminal that receives a delayed signal of the input signal.

Example 15 is the circuit of any one of examples 10 to 14, further comprising a capacitance boosting circuit coupled to the input node for increasing a voltage of the input data signal at the data node above a supply voltage of the first voltage domain.

Example 16 is the circuit of example 15, wherein the input node is a first input node, and wherein the capacitance boosting circuit comprises: a p-type transistor coupled between a second input node and the first input node, wherein the second input node receives the data signal in the first voltage domain, and wherein the gate terminal of the p-type transistor receives a delayed signal of the data signal; an n-type transistor coupled between the first input node and the second input node, the gate terminal of the n-type transistor receiving a supply voltage associated with the first voltage domain; and a capacitively coupled transistor coupled between the p-type transistor and the first input node, the capacitively coupled transistor charging the first input node to generate a boosted data signal at the first input node.

Example 17 is the circuit of example 11, further comprising an enable transistor coupled between the second pull-down transistor and the ground terminal, the gate terminal of the enable transistor receiving an enable signal for selectively enabling an enhanced interruptible power mode of the circuit.

Example 18 is a system comprising: a first input node for receiving an input
signal in a low voltage domain; a capacitance boosting circuit coupled between the first input node and a second input node, the capacitance boosting circuit including: a p-type transistor coupled between the first input node and the second input node, a gate terminal of the p-type transistor receiving a delayed signal of the input signal; an n-type transistor coupled between the first input node and the second input node, the gate terminal of the n-type transistor receiving a low supply voltage associated with the low voltage domain; and a capacitively coupled transistor coupled between the p-type transistor and the second input node, the capacitively coupled transistor charging the second input node to a voltage level greater than the low supply voltage to produce a boosted input signal. The system of example 18 further includes a level shifting circuit for receiving the boosted input signal at the second input node and generating an output signal, the output signal corresponding to the input signal and in a high voltage domain, the high voltage domain having a higher voltage level than the low voltage domain.

Example 19 is the system of example 18, wherein the p-type transistor and the n-type transistor are coupled in parallel with each other.

Example 20 is the system of example 18, wherein the gate terminal of the capacitively coupled transistor receives a delayed signal of the input signal.

Example 21 is the system of example 18, wherein the level shifting circuit comprises: an interrupt transistor coupled to a data node, the data node maintaining a logic state of the input signal, wherein a gate terminal of the interrupt transistor is coupled to the second input node; a pull-up transistor coupled between the interrupt transistor and a power rail, the power rail receiving a high supply voltage associated with the high voltage domain; and a pull-down transistor coupled between the ground terminal and an intermediate
node, the intermediate node being between the pull-up transistor and the interrupt transistor, wherein the gate terminal of the pull-down transistor is coupled to the second input node.

Example 22 is the system of example 21, wherein the level shifting circuit further comprises: a data node to maintain a logic state of the input signal; an inverse data node to maintain a logic state of an inverted input signal, the inverted input signal being an inversion of the input signal; and a keeper transistor having a source terminal coupled to the data node, a gate terminal coupled to the inverse data node, and a drain terminal receiving the inverted input signal.

Example 23 is the system of example 18, further comprising an enable transistor coupled to the second input node, the enable transistor selectively transmitting the input signal to the second input node when the capacitance boost circuit is disabled.

Example 24 is the system of any one of examples 18 to 23, further comprising: a processor coupled to the level shifting circuit, the processor comprising a first circuit block operating in the low voltage domain and a second circuit block operating in the high voltage domain.

Although certain embodiments are shown and described herein for purposes of illustration, the present disclosure is intended to cover any adaptations or variations of the embodiments described herein. Therefore, it is manifestly intended that the embodiments described herein be limited only by the claims.

Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements, unless otherwise specifically stated. |
The present invention relates generally to photolithographic systems and methods, and more particularly to systems and methodologies that facilitate improved critical dimension (CD) control and the reduction of line-edge roughness (LER) during pattern line formation in an imprint mask. One aspect of the invention provides for forming features having CDs that are larger than ultimately desired in a mask resist. Upon application of a non-lithographic shrink technique, LER is mitigated and CD is reduced to within a desired target tolerance. |
What is claimed is: 1. A system that improves critical dimension control and mitigates line edge roughness in imprint mask manufacture, comprising: a non-lithographic shrink component that reduces critical dimension(s) and/or mitigates line edge roughness on mask feature(s) at least by performing a non-lithographic shrink technique on the mask feature(s); a monitoring component that measures critical dimension and line edge roughness on a mask during fabrication; and a processor that analyzes data associated with at least one of critical dimension, line edge roughness, and non-lithographic shrink technique(s). 2. The system of claim 1, the monitoring component comprises at least one of a scatterometry system and a Scanning Electron Microscopy system. 3. The system of claim 1, the processor comprises an artificial intelligence component that makes inferences regarding at least one of reducing mask critical dimension(s) to a target critical dimension and mitigating line edge roughness. 4. The system of claim 3, the artificial intelligence component comprises at least one of a support vector machine, a neural network, an expert system, a Bayesian belief network, fuzzy logic, and a data fusion engine. 5. The system of claim 1, the non-lithographic shrink component comprises at least one of a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) component, a Shrink Assist Film for Enhanced Resolution (SAFIER) component, and a thermal reflow component. 6. The system of claim 1, further comprising at least one sensor that gathers data associated with at least one parameter of a physical condition of the mask. |
TECHNICAL FIELD

The present invention relates generally to photolithographic systems and methods, and more particularly to systems and methodologies that mitigate line edge roughness and improve critical dimension control of imprint mask features.

BACKGROUND OF THE INVENTION

As semiconductor trends continue toward decreased size and increased packaging density, every aspect of semiconductor fabrication processes is scrutinized in an attempt to maximize efficiency in semiconductor fabrication and throughput. Many factors contribute to fabrication of a semiconductor. For example, at least one photolithographic process can be used during fabrication of a semiconductor. This particular factor in the fabrication process is highly scrutinized by the semiconductor industry in order to improve packaging density and precision in semiconductor structure.

Lithography is a process in semiconductor fabrication that generally relates to transfer of patterns between media. More specifically, lithography refers to transfer of patterns onto a thin film that has been deposited onto a substrate. The transferred patterns then act as a blueprint for desired circuit components. Typically, various patterns are transferred to a photoresist (e.g., radiation-sensitive film), which is the thin film that overlies the substrate during an imaging process described as "exposure" of the photoresist layer. During exposure, the photoresist is subjected to an illumination source (e.g., UV light, electron beam, X-ray), which passes through a pattern template, or reticle, to print the desired pattern in the photoresist. Upon exposure to the illumination source, radiation-sensitive qualities of the photoresist permit a chemical transformation in exposed areas of the photoresist, which in turn alters the solubility of the photoresist in exposed areas relative to that of unexposed areas.
When a particular solvent developer is applied, exposed areas of the photoresist are dissolved and removed, resulting in a three-dimensional pattern in the photoresist layer. This pattern is at least a portion of the semiconductor device that contributes to the final function and structure of the device, or wafer.

Techniques, equipment and monitoring systems have concentrated on preventing and/or decreasing defect occurrence within lithography processes. For example, aspects of resist processes that are typically monitored can comprise: whether the correct mask has been used; whether resist film qualities are acceptable (e.g., whether resist is free from contamination, scratches, bubbles, striations, . . . ); whether image quality is adequate (e.g., good edge definition, line-width uniformity, and/or indications of bridging); whether critical dimensions are within specified tolerances; whether defect types and densities are recorded; and/or whether registration is within specified limits; etc. Such defect inspection task(s) have progressed into automated system(s) based on both automatic image processing and electrical signal processing.

Imprint lithography uses a patterned mask to "imprint" a pattern on a resist at a 1:1 feature size ratio. Thus, imprint mask integrity must be maintained throughout the lithography process because any flaw or structural defect present on a patterned imprint mask can be indelibly transferred to underlying layers during imprinting of a photoresist. One example of an undesirable structural defect is line-edge roughness (LER). LER refers to variations on sidewalls of features, which can result in variations of LER in the patterned photoresist and increased critical dimensions (CDs).
Many factors can contribute to LER in an imprint mask pattern, such as LER on chrome patterns residing on the reticle, image contrast in a system that generates the mask pattern, an etch process that can be used to pattern the mask, inherent properties and/or weaknesses of the mask materials, and/or the mask processing method. Additionally, LER appearing in fabricated structures can result from damage to the patterned resist during an etch process.

Current methods of pattern line formation on an imprint mask typically produce LER as an undesirable side effect. As lithographic techniques are pushed to their limits, smaller and smaller CDs are desired to maximize chip performance. Thus, chip manufacture is governed largely by wafer CD, which is defined as the smallest allowable width of, or space between, lines of circuitry in a semiconductor device. As methods of wafer manufacture are improved, wafer CD is decreased, which in turn requires finer and finer line edges to be produced. Line edges having a roughness that was acceptable just a year ago can detrimentally affect an imprint mask exhibiting today's critical dimension standards, which in turn can cause chip performance to deteriorate. Furthermore, as CD decreases, LER becomes increasingly difficult to avoid.

SUMMARY OF THE INVENTION

The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

The present invention provides for systems and methods that facilitate improved critical dimension (CD) control and mitigate line-edge roughness (LER) on pattern lines formed on an imprint mask during mask manufacture.
More specifically, the systems and methods of the invention can mitigate LER and reduce CD to a desired target tolerance in order to improve imprint mask performance.

According to an aspect of the invention, an imprint mask can be fabricated having larger-than-target CDs. By intentionally manufacturing a mask with CDs above a target tolerance, LER can be mitigated. According to this aspect, a non-lithographic shrink technique can be applied to the imprint mask to reduce CD to within the target tolerance. Additionally, application of the non-lithographic shrink technique will mitigate any LER that might be present despite the use of relatively large CDs during mask fabrication. In this manner, the present invention can provide improved CD control while minimizing LER episodes.

Another aspect of the present invention provides for techniques that can be employed to selectively mitigate LER on pattern lines on an imprint mask. For example, a monitoring component can determine whether LER exists on pattern lines on an imprint mask and/or whether CD is within a target tolerance. If it is determined that LER is present, a non-lithographic shrink technique can be performed on the pattern feature(s) to mitigate LER. If CD is determined to be within a target tolerance, the shrink technique can be employed utilizing a minimum functional temperature, at which undesirable topography is mitigated while target CD is retained. Additionally, if CD is determined to be above a target tolerance, a non-lithographic shrink technique can be employed in the absence of LER to reduce CD.

According to another aspect, the non-lithographic shrink technique can be a thermal flow technique, whereby a patterned mask resist is heated to a predetermined minimum temperature, such as, for example, the glass transition temperature of the resist, so that the resist begins to exhibit fluid properties and begins to flow.
By causing the resist to just enter a liquid phase, LER is mitigated because the solid physical state of the mask resist is compromised. The temperature to which the resist is heated can be high enough to mitigate LER but low enough to avoid a decrease in CD. In this manner, the invention advantageously mitigates LER while maintaining CD within a desired tolerance.

According to another aspect, the invention can employ a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) technique. For example, imprint mask features can be shrunk to facilitate achieving Deep UV and/or Extreme UV dimensions. According to yet another aspect of the invention, a Shrink Assist Film for Enhanced Resolution (SAFIER) technique can be employed to facilitate a controlled shrink of, for example, an imprint mask negative of a contact opening or a gate channel. This technique is capable of shrinking a feature down to about 50 nm.

To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention can be employed, and the present invention is intended to comprise all such aspects and their equivalents. Other advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of a mask CD control system in accordance with an aspect of the present invention.

FIG. 2 is an illustration of a mask CD control system in accordance with an aspect of the present invention comprising a processor and a memory.

FIG. 3 is an illustration of a mask CD control system in accordance with the present invention wherein a processor comprises an artificial intelligence component.

FIG.
4 is a cross-sectional illustration of a typical imprint mask substrate with an unpatterned resist overlay.

FIG. 5 illustrates a cross-sectional and top-down view of a mask with pattern lines exhibiting line-edge roughness (LER) and a CD, d1, greater than a desired target tolerance.

FIG. 6 is an illustration of a system in accordance with an aspect of the present invention wherein a non-lithographic shrink technique is applied to an imprint mask to mitigate LER and reduce CD to within a target tolerance.

FIG. 7 illustrates cross-sectional and top-down views of an imprint mask after a non-lithographic shrink technique has been performed.

FIG. 8 illustrates a perspective view of a grid-mapped mask according to one or more aspects of the present invention.

FIG. 9 illustrates plots of measurements taken at grid-mapped locations on a mask in accordance with one or more aspects of the present invention.

FIG. 10 illustrates a table containing entries corresponding to measurements taken at respective grid-mapped locations on a mask in accordance with one or more aspects of the present invention.

FIG. 11 is an illustration of a flow diagram of a methodology in accordance with an aspect of the present invention.

FIG. 12 is an illustration of a flow diagram of a methodology in accordance with an aspect of the present invention.

FIG. 13 is an illustration of a flow diagram of a methodology in accordance with an aspect of the present invention.

FIGS. 14 and 15 are illustrations of exemplary computing systems and/or environments in connection with facilitating employment of the subject invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. The present invention will be described with reference to systems and methods for mitigating line-edge roughness (LER) during pattern line formation on an imprint mask while reducing critical dimension (CD) to within a target tolerance.
It should be understood that the description of these exemplary aspects is merely illustrative and should not be taken in a limiting sense.

The term "component" refers to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be components. A component can reside in one physical location (e.g., in one computer) and/or can be distributed between two or more cooperating locations (e.g., parallel processing computer, computer network).

It is to be appreciated that various aspects of the present invention can employ technologies associated with facilitating unconstrained optimization and/or minimization of error costs. Thus, non-linear training systems/methodologies (e.g., back propagation, Bayesian, fuzzy sets, non-linear regression, or other neural networking paradigms including mixture of experts, cerebella model arithmetic computer (CMACS), radial basis functions, directed search networks, and function link networks) can be employed.

FIG. 1 is an illustration of an imprint mask CD control system 100 according to an aspect of the present invention. The mask CD control system 100 comprises a non-lithographic shrink component 102 that is operatively coupled to a monitoring component 104. According to this aspect of the invention, lines and/or features having a larger CD than is ultimately desired are formed in a mask resist via conventional methods. The monitoring component 104 can analyze and determine whether threshold LER exists on the pattern lines, and/or whether CD is within a target tolerance.
The monitoring component 104 can employ scatterometry techniques to perform the preceding analysis.

Upon determining that a threshold amount of LER is present, the system 100 can mitigate any extant LER and reduce CD to within a target tolerance by employing the non-lithographic shrink component 102. According to an aspect of the invention, the non-lithographic shrink component 102 can be a thermal flow component that is capable of heating a resist (not shown) in which pattern lines have been formed to a temperature at which the resist will begin to flow. Such a temperature is often referred to as the "glass transition temperature" of a resist, which describes a temperature near the resist softening point and at which the resist begins to flow. By causing the resist to begin to flow, jagged edges associated with LER can be smoothed (e.g., mitigated). Additionally, the non-lithographic shrink component 102 can be a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) component. For example, features can be manipulated to facilitate achieving Deep UV and/or Extreme UV dimensions. According to another example, the non-lithographic shrink component 102 can be a Shrink Assist Film for Enhanced Resolution (SAFIER) component that can facilitate a controlled shrink of, for example, a line or feature on a mask resist. Via employing a SAFIER technique, a SAFIER component can shrink a contact opening down to about 50 nm. Thus, the instant invention can advantageously mitigate LER associated with pattern line formation in a mask resist while reducing CD to within a desired tolerance.

It is to be appreciated that the monitoring component 104 can be, for example, a scatterometry component. The present invention contemplates any suitable scatterometry component and/or system, and such systems are intended to fall within the scope of the hereto-appended claims.
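The coupling between the monitoring component and the shrink component described above can be pictured as a simple feedback loop. The following is a hedged sketch, not the patented implementation: the function names, units, and threshold values are illustrative assumptions, and the `measure` and `apply_shrink` callables stand in for a real monitoring component (e.g., scatterometry or CD-SEM) and a real shrink step (thermal reflow, RELACS(TM), or SAFIER).

```python
# Hypothetical feedback loop for a mask CD control system such as system 100:
# measure CD and LER, then apply a non-lithographic shrink step until CD is
# within the target tolerance and LER falls below a threshold.

def control_loop(measure, apply_shrink, target_cd_nm=50.0,
                 cd_tolerance_nm=2.0, ler_threshold_nm=1.5, max_passes=5):
    """Iteratively shrink mask features; return the final (cd, ler)."""
    for _ in range(max_passes):
        cd, ler = measure()                    # monitoring component reading
        cd_ok = abs(cd - target_cd_nm) <= cd_tolerance_nm
        ler_ok = ler <= ler_threshold_nm
        if cd_ok and ler_ok:
            return cd, ler                     # mask within tolerance
        apply_shrink(reduce_cd=not cd_ok)      # thermal reflow / RELACS / SAFIER
    raise RuntimeError("CD/LER not within tolerance after maximum passes")
```

In use, `measure` would wrap a scatterometry or SEM reading and `apply_shrink` would trigger the shrink component; when CD is already within tolerance but LER persists, `reduce_cd=False` corresponds to running the shrink technique at the minimum effective temperature so that LER is smoothed without further reducing CD.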
It is further to be appreciated that the monitoring component 104 utilized by the present invention can be, for example, a Scanning Electron Microscope (SEM), a Critical Dimension Scanning Electron Microscope (CD-SEM), a Field Effect Scanning Electron Microscope (FESEM), an In-Lens FESEM, or a Semi-In-Lens FESEM, depending on the desired magnification and precision. For example, FESEM permits greater levels of magnification and resolution at high or low energy levels by rastering a narrower electron beam over the sample area. FESEM thus permits quality resolution at approximately 1.5 nm. Because FESEM can produce high-quality images at a wide range of accelerating voltages (typically 0.5 kV to 30 kV), it is able to do so without inducing extensive electrical charge in the sample. Furthermore, conventional SEM cannot accurately image an insulating material unless the material is first coated with an electrically conductive material. FESEM mitigates the need to deposit an electrically conductive coating prior to scanning. According to another example, the monitoring component 104 of the present invention can be an In-Lens FESEM, which is capable of 0.5 nm resolution at an accelerating voltage of 30 kV, or any other suitable type of scanner, such as Transmission Electron Microscopy (TEM), Atomic Force Microscopy (AFM), Scanning Probe Microscopy (SPM), etc.

It is further to be appreciated that information gathered by the monitoring component 104 can be utilized for generating feedback and/or feed-forward data that can facilitate maintaining critical dimensions that are within acceptable tolerances. The mask CD control system 100 can additionally employ such data to control components and/or operating parameters associated therewith. For instance, feedback/feed-forward information can be generated from sequence analysis to maintain, increase and/or decrease a rate at which fabrication processes (e.g., thermal reflow, etching, . . . ) progress.
Additionally, one or a plurality of sensors can be associated with the mask CD control system 100 to permit data to be gathered regarding the state of the mask (e.g., temperature, density, viscosity, material composition, and/or any other suitable information related to the condition of the mask).

FIG. 2 illustrates a mask CD control system 200 in accordance with an aspect of the present invention. The mask CD control system 200 comprises a non-lithographic shrink component 202 that is operably coupled to a monitoring component 204. According to this aspect, the monitoring component 204 is further operably coupled to a processor 206, which is in turn operably coupled to a memory 208. It is to be understood that the processor 206 can be a processor dedicated to determining whether CD is within a desired target tolerance and whether LER exists, a processor used to control one or more of the components of the present system(s), or, alternatively, a processor that is used both to make such determinations and to control one or more of the components of the mask CD control system.

The memory component 208 can be employed to retain control programs, semiconductor fabrication data, etc. Furthermore, the memory 208 can be either volatile memory or nonvolatile memory, or can comprise both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can comprise read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can comprise random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
The memory 208 of the present systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory.

FIG. 3 is an illustration of a mask CD control system 300 in accordance with an aspect of the present invention. The mask CD control system 300 can employ various inference schemes and/or techniques in connection with mitigating LER and/or reducing CD to within a target tolerance. As used herein, the term "inference" refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic; that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines . . . ) can be employed in connection with performing automatic and/or inferred action in connection with the subject invention.

Still referring to FIG. 3, the mask CD control system 300 comprises a non-lithographic shrink component 302 that is operably coupled to a monitoring component 304. A processor 306 is operably coupled to both a memory 308 and the monitoring component 304.
According to this aspect of the invention, the processor 306 is further associated with an artificial intelligence (AI) component 310 that can make inferences regarding system operation. For example, the AI component 310 can determine an optimal duration for employing the non-lithographic shrink component 302. Additionally, the AI component can make inferences regarding an optimal temperature at which to expose the resist features to facilitate reducing CD while mitigating LER. According to another example, the AI component 310 can make inferences regarding whether target CD has been achieved. These examples are given by way of illustration only and are not in any way intended to limit the scope of the present invention or the number of, or manner in which the AI component makes, inferences.

FIG. 4 is a cross-sectional illustration of an unpatterned mask 400. The mask 400 comprises a mask substrate 402 and a mask resist 404. The mask substrate can be, for example, fused silica, as is commonly used in the art. The mask resist 404 can be, for example, any suitable resist material into which desired patterns can be placed for imprinting into a wafer (e.g., poly(1-butene sulfone) (PBS), poly(glycidyl methacrylate) (PGMA), etc.). Such resist materials are given by way of example, and not limitation.

FIG. 5 illustrates cross-sectional and top-down views of a mask 500 comprising a substrate 502 and a resist 504. Pattern lines have been formed in the mask resist 504 that exhibit LER 506 and have a CD, d1, which is greater than a desired target CD. CD, in this example, is defined as the width of a trench, rather than the width of the space between trenches. This definition of CD is offered merely for illustrative purposes, and it is to be understood that CD could alternatively be defined as the width of the space between trenches, in which scenario a shrink technique would increase CD to a target tolerance.

FIG. 6 is an illustration of a mask 612 as described in FIG.
5 undergoing a non-lithographic shrink technique via a mask CD control system 600. This aspect of the invention contemplates thermal reflow techniques, SAFIER techniques, and/or RELACS(TM) techniques. However, the invention is not limited to the above-mentioned techniques, and can employ any suitable non-lithographic shrink technique. The mask CD control system 600 comprises a non-lithographic shrink technique component 602 operably coupled to a monitoring component 604. A processor 606 is operably coupled to a memory 608 and to the monitoring component 604. The processor 606 is associated with an AI component 610 that can make inferences regarding various aspects of CD control and/or LER mitigation.

Still referring to FIG. 6, the mask CD control system 600 directs the shrink component 602 to perform a shrink technique on the mask 612. The performance of the technique is illustrated via solid arrows. The mask 612 comprises a mask substrate 614 (e.g., fused silica, etc.) and a resist layer 616 (e.g., PBS, PGMA, etc.), into which pattern lines exhibiting LER 618 have been formed. A distance d1 is shown, which is the CD measurement of the width of trenches formed in the resist. For purposes of this discussion, d1 represents a CD greater than the desired target CD. It should also be noted that line-edge roughness as illustrated in FIG. 6 occurs in both the x-plane and the y-plane. Indeed, line-edge roughness can occur in any plane, depending on the particular topography of a mask feature.

FIG. 7 illustrates cross-sectional and top-down views of a mask 700 after the mask CD control system has performed a non-lithographic shrink technique. The mask 700 comprises a substrate 702 and a resist layer 704. According to this illustration, LER has been mitigated on pattern lines 706. It should be noted that the original CD defined by d1 has been reduced to the desired target CD, d2.

Turning now to FIGS.
8-10, in accordance with one or more aspects of the present invention, a mask 802 (or one or more die located thereon) situated on a stage 804 can be logically partitioned into grid blocks to facilitate concurrent measurements of critical dimensions and overlay as the mask matriculates through a semiconductor fabrication process. This can facilitate selectively determining to what extent, if any, fabrication adjustments are necessary. Obtaining such information can also assist in determining problem areas associated with fabrication processes.

FIG. 8 illustrates a perspective view of the steppable stage 804 supporting the mask 802. The mask 802 can be divided into a grid pattern as shown in FIG. 8. Each grid block (XY) of the grid pattern corresponds to a particular portion of the mask 802 (e.g., a die or a portion of a die). The grid blocks are individually monitored for fabrication progress by concurrently measuring critical dimensions and overlay with either scatterometry or scanning electron microscope (SEM) techniques.

This approach can also be applied to assess mask-to-mask and lot-to-lot variations. For example, a portion P (not shown) of a first mask (not shown) can be compared to the corresponding portion P (not shown) of a second mask. Thus, deviations between masks and lots can be determined in order to calculate the adjustments to the fabrication components that are necessary to accommodate the mask-to-mask and/or lot-to-lot variations.

In FIG. 9, one or more respective portions of the mask 802 (X1Y1 . . . X12Y12) are concurrently monitored for critical dimensions and overlay utilizing either scatterometry or scanning electron microscope techniques. Exemplary measurements produced during fabrication for each grid block are illustrated as respective plots. The plots can, for example, be composite valuations of signatures of critical dimensions and overlay.
Alternatively, critical dimensions and overlay values can be compared separately to their respective tolerance limits.

As can be seen, the measurement at coordinate X7 Y6 yields a plot that is substantially higher than the measurements of the other portions XY. This can be indicative of overlay, overlay error, and/or one or more critical dimension(s) outside of acceptable tolerances. As such, fabrication components and/or operating parameters associated therewith can be adjusted accordingly to mitigate repetition of this aberrational measurement. It is to be appreciated that the mask 802 and/or one or more die located thereon can be mapped into any suitable number and/or arrangement of grid blocks to effectuate desired monitoring and control.

FIG. 10 is a representative table of concurrently measured critical dimensions and overlay taken at various portions of the mask 802 mapped to respective grid blocks. The measurements in the table can, for example, be amalgams of respective critical dimension and overlay signatures. As can be seen, all the grid blocks, except grid block X7 Y6, have measurement values corresponding to an acceptable value (VA) (e.g., no overlay error is indicated and/or overlay measurements and critical dimensions are within acceptable tolerances), while grid block X7 Y6 has an undesired value (Vu) (e.g., overlay and critical dimensions are not within acceptable tolerances; thus at least an overlay or CD error exists). Thus, it has been determined that an undesirable fabrication condition exists at the portion of the mask 802 mapped by grid block X7 Y6. Accordingly, fabrication process components and parameters can be adjusted as described herein to adapt the fabrication process and thereby mitigate the re-occurrence or exaggeration of this unacceptable condition.

Alternatively, a sufficient number of grid blocks can have desirable measurements so that the single offensive grid block does not warrant scrapping the entire mask.
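The grid-block screening of FIGS. 8-10 amounts to comparing a per-block composite value against the acceptable value VA. The sketch below is illustrative only: the grid size, the composite values, and the tolerance are invented for this example, and a real system would populate the map from scatterometry or SEM measurements.

```python
# Illustrative grid-block monitor: partition the mask into grid blocks,
# record a composite CD/overlay value per block, and flag any block whose
# value deviates from the acceptable value beyond a tolerance.

def flag_grid_blocks(measurements, acceptable, tolerance):
    """Return sorted grid coordinates whose composite value is undesired."""
    return sorted((x, y) for (x, y), value in measurements.items()
                  if abs(value - acceptable) > tolerance)

# 12x12 grid with a single aberrational block at X7 Y6, as in FIG. 9
grid = {(x, y): 1.00 for x in range(1, 13) for y in range(1, 13)}
grid[(7, 6)] = 1.85                    # undesired value Vu
flagged = flag_grid_blocks(grid, acceptable=1.00, tolerance=0.10)
```

A downstream controller could then adjust fabrication parameters for the flagged coordinates, or scrap the mask only if the fraction of flagged blocks exceeds a pre-determined threshold.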
It is to be appreciated that fabrication process parameters can be adapted so as to maintain, increase, decrease and/or qualitatively change the fabrication of the respective portions of the mask 802 as desired. For example, when the fabrication process has reached a pre-determined threshold level (e.g., X % of grid blocks have acceptable CDs and no overlay error exists), a fabrication step can be terminated.

Now turning to FIGS. 11-13, methodologies that can be implemented in accordance with the present invention are illustrated. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the present invention is not limited by the order of the blocks, as some blocks can, in accordance with the present invention, occur in different orders and/or concurrently with other blocks from that shown and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies in accordance with the present invention.

FIG. 11 is an illustration of a methodology 1100 in accordance with an aspect of the present invention. At 1102, pattern lines are formed on an imprint mask at a larger-than-target CD, which permits formation of lines with less LER than would occur if the lines were formed at the smaller, target CD. At 1104, a non-lithographic shrink technique is applied to the imprint mask. According to one example, shrink material(s) can be applied to the resist via spin-coat technique(s). The non-lithographic shrink technique can be, for example, a thermal reflow technique, a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) technique, and/or a Shrink Assist Film for Enhanced Resolution (SAFIER) technique. Typically, after a series of bakes and rinses, the features of the resist will be successfully shrunk.
At 1106, a determination is made regarding whether the shrink was successful (e.g., the features and/or lines on the mask have been shrunk to within the desired target CD tolerance). Such determination can be made utilizing a monitoring component such as, for example, a scatterometry component, an SEM, etc., as has been described herein with regard to various other aspects of the invention. If CD is not within the desired tolerance, then the method can revert to 1104 for further shrinking. If the CD is within the target tolerance, then a determination is made at 1108 regarding the presence of any remaining LER. As stated above, LER is initially mitigated via fabrication of the mask at a larger-than-target CD. However, some LER may still occur at the larger CD, which can be mitigated by the application of the shrink technique. In the event that LER is still present at 1108, the method can revert to 1104, where the shrink technique can be reapplied to further reduce LER. Additionally, if LER is detected at 1108 but is determined to be acceptably minimal, the method can proceed to 1110, where the imprint mask is approved.

FIG. 12 is an illustration of a methodology 1200 in accordance with an aspect of the present invention. Utilizing conventional methods, features are formed in a resist layer on a mask at 1202. It is to be appreciated that the lines can delineate negatives of, for example, trenches, gates, or any other suitable structure that can be patterned onto or into a mask resist. The formation of lines conforms to a specific initial CD tolerance. At 1204, a non-lithographic shrink technique can be applied to reduce the CD of mask features. The non-lithographic shrink technique can be, for example, a thermal reflow technique, a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) technique, and/or a Shrink Assist Film for Enhanced Resolution (SAFIER) technique.
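The verify-and-reshrink loop of methodology 1100 (shrink at 1104, check CD at 1106, check residual LER at 1108, approve at 1110) can be sketched as follows. This is a hedged outline under stated assumptions: the callables standing in for the shrink step and the monitoring component are hypothetical placeholders, and the iteration cap is an invented safeguard rather than part of the described methodology.

```python
# Hedged sketch of methodology 1100; helper callables are hypothetical.

def methodology_1100(shrink, cd_within_tolerance, ler_acceptable,
                     max_iterations=10):
    for _ in range(max_iterations):
        shrink()                       # 1104: thermal reflow / RELACS / SAFIER
        if not cd_within_tolerance():  # 1106: e.g. scatterometry or SEM check
            continue                   # revert to 1104 for further shrinking
        if ler_acceptable():           # 1108: residual LER acceptably minimal?
            return "approved"          # 1110
        # LER still present at 1108: fall through and reapply the shrink
    return "rejected"
```

The flow of FIG. 12 follows the same shape, with the added refinement that once CD is within tolerance the reapplied shrink runs at a minimum effective temperature so that residual LER is smoothed without further reducing CD.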
A determination is made at 1206 as to whether the new CD of the mask features is within a predetermined target tolerance. This determination can be made via employing, for example, a monitoring component such as a scanning electron microscope (SEM), a critical dimension SEM (CD-SEM), a scatterometry component, or any other suitable means for detecting, measuring, and/or monitoring CD and/or LER. If the new CD is not within the desired tolerance, the method can revert to 1204 to further reduce feature CD. If the new CD is within the desired tolerance, then a determination is made regarding the presence of any residual LER at 1208. If LER has been sufficiently mitigated, the method can proceed to 1210, where the mask is approved (e.g., for use, further fabrication, etc.). If it is determined at 1208 that an unacceptable amount of LER still exists on the resist features, then the method can proceed to 1212.

At 1212, a non-lithographic shrink technique can be applied at a minimum temperature. By applying the non-lithographic shrink technique at a minimum effective temperature, any further decrease in CD can be avoided during the shrink technique, while LER can be further mitigated. For example, the temperature to which the resist is heated can be high enough to reduce LER, but low enough to preclude any decrease in CD. Such a temperature is commonly referred to as the glass transition temperature of a resist. Additionally, the duration of exposure to the minimum reaction temperature can be accounted for in order to ensure that CD remains within a target tolerance. In this manner, the present invention can minimize the detrimental effects of the presence of LER on mask structure(s) while retaining a desired CD, in the event that no further CD reduction is desired.

FIG.
13 illustrates a methodology 1300 in accordance with an aspect of the invention wherein artificial intelligence (AI) is employed to facilitate CD control and/or LER mitigation during imprint mask fabrication. At 1302, features are formed in a mask resist at larger-than-target CDs. At 1304, a non-lithographic shrink technique is employed to reduce CD to a desired tolerance, while mitigating any LER that may have occurred during formation of the features at the larger CD. The non-lithographic shrink technique can be, for example, a thermal reflow technique, a Resolution Enhancement Lithography Assisted by Chemical Shrink (RELACS(TM)) technique, and/or a Shrink Assist Film for Enhanced Resolution (SAFIER) technique. According to one example, AI techniques can be employed to determine a most suitable shrink technique, duration of the shrink technique, temperature at which the shrink technique is applied, etc. At 1306, a determination is made as to whether the original CD has been successfully reduced to the desired target CD. At 1308, a determination is made as to whether LER has been sufficiently mitigated (e.g., LER is completely absent, or present, but having a magnitude below a detrimental threshold level). The determinations at 1306 and/or 1308 can be facilitated via employing, for example, a monitoring component such as a scanning electron microscope (SEM), a critical dimension SEM (CD-SEM), a scatterometry component, or any other suitable means for detecting, measuring, and/or monitoring CD and/or LER expression. If both determinations are positive (e.g., target CD has been achieved and LER has been sufficiently mitigated), then the method can proceed to 1310, where the mask is approved. 
If either or both of the determinations at 1306 and 1308 are negative (e.g., either CD has not been reduced to target or LER is present in an amount above an acceptable threshold, or both), then the method can proceed to 1312, where AI techniques can be employed to determine a most appropriate course of action. Such potentially appropriate courses of action are given by way of example in FIG. 13, and are denoted by "exclusive 'or'" symbols. However, such courses of action are exemplary in nature, and other appropriate courses of action are intended to be within the scope of the present invention.

For example, inferences made by an AI component can determine that further application of a non-lithographic shrink technique can successfully reduce an unacceptable CD to within a desired target CD tolerance. If so, then the method can proceed from 1312 to 1304 for further shrink technique application. According to another example, if an unacceptable amount of LER has been detected at 1308, then reversion to 1304 for further shrink technique application can be inferred to be a most appropriate course of action at 1312. Additionally, if an unacceptable LER presence is detected at 1308 but the determination at 1306 indicates that target CD has been achieved, then an inference can be made that further non-lithographic shrink technique application is desirable at a minimum effective temperature (e.g., the glass transition temperature of the resist), so that LER can be further mitigated while target CD is retained.

According to a related example, inferences made at 1312 can indicate that approval of the mask is an appropriate course of action, in which case the method proceeds to 1310.
In this example, approval of the mask despite a non-target CD or an undesirable amount of LER can be based at least in part on information indicating, for example, that the detected LER presence is in a non-crucial region of the mask, that the post-shrink CD is within a predetermined sub-tolerance of the desired CD target tolerance, etc.

Additionally, a most appropriate course of action can be simply to reject the mask altogether. For example, if the new CD of mask feature(s) is substantially below a target tolerance, then correction can be cost-prohibitive. In such a case, rejection and discarding of the mask can be a most appropriate course of action, and the method can proceed to 1314 in order to avoid any further production costs associated with fabrication of a particular mask.

In order to provide a context for the various aspects of the invention, FIGS. 14 and 15 as well as the following discussion are intended to provide a brief, general description of a suitable computing environment in which the various aspects of the present invention can be implemented. While the invention has been described above in the general context of computer-executable instructions of a computer program that runs on a computer and/or computers, those skilled in the art will recognize that the invention also can be implemented in combination with other program modules. Generally, program modules include routines, programs, components, data structures, etc. that perform particular tasks and/or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, mini-computing devices, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like.
The illustrated aspects of the invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all, aspects of the invention can be practiced on stand-alone computers. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.

With reference to FIG. 14, an exemplary environment 1410 for implementing various aspects of the invention includes a computer 1412. The computer 1412 includes a processing unit 1414, a system memory 1416, and a system bus 1418. The system bus 1418 couples system components including, but not limited to, the system memory 1416 to the processing unit 1414. The processing unit 1414 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1414.

The system bus 1418 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus utilizing any variety of available bus architectures including, but not limited to, 8-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).

The system memory 1416 includes volatile memory 1420 and nonvolatile memory 1422. The basic input/output system (BIOS), comprising the basic routines to transfer information between elements within the computer 1412, such as during start-up, is stored in nonvolatile memory 1422.
By way of illustration, and not limitation, nonvolatile memory 1422 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 1420 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).

Computer 1412 also includes removable/non-removable, volatile/non-volatile computer storage media. FIG. 14 illustrates, for example, a disk storage 1424. Disk storage 1424 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 1424 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1424 to the system bus 1418, a removable or non-removable interface is typically used such as interface 1426.

It is to be appreciated that FIG. 14 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 1410. Such software includes an operating system 1428. Operating system 1428, which can be stored on disk storage 1424, acts to control and allocate resources of the computer system 1412. System applications 1430 take advantage of the management of resources by operating system 1428 through program modules 1432 and program data 1434 stored either in system memory 1416 or on disk storage 1424.
It is to be appreciated that the present invention can be implemented with various operating systems or combinations of operating systems.

A user enters commands or information into the computer 1412 through input device(s) 1436. Input devices 1436 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1414 through the system bus 1418 via interface port(s) 1438. Interface port(s) 1438 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1440 use some of the same type of ports as input device(s) 1436. Thus, for example, a USB port can be used to provide input to computer 1412, and to output information from computer 1412 to an output device 1440. Output adapter 1442 is provided to illustrate that there are some output devices 1440 like monitors, speakers, and printers, among other output devices 1440, which require special adapters. The output adapters 1442 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1440 and the system bus 1418. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1444.

Computer 1412 can operate in a networked environment utilizing logical connections to one or more remote computers, such as remote computer(s) 1444. The remote computer(s) 1444 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1412.
For purposes of brevity, only a memory storage device 1446 is illustrated with remote computer(s) 1444. Remote computer(s) 1444 is logically connected to computer 1412 through a network interface 1448 and then physically connected via communication connection 1450. Network interface 1448 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).

Communication connection(s) 1450 refers to the hardware/software employed to connect the network interface 1448 to the bus 1418. While communication connection 1450 is shown for illustrative clarity inside computer 1412, it can also be external to computer 1412. The hardware/software necessary for connection to the network interface 1448 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.

FIG. 15 is a schematic block diagram of a sample computing environment 1500 with which the present invention can interact. The system 1500 includes one or more client(s) 1510. The client(s) 1510 can be hardware and/or software (e.g., threads, processes, computing devices). The system 1500 also includes one or more server(s) 1530. The server(s) 1530 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1530 can house threads to perform transformations by employing the present invention, for example.
One possible communication between a client 1510 and a server 1530 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 1500 includes a communication framework 1550 that can be employed to facilitate communications between the client(s) 1510 and the server(s) 1530. The client(s) 1510 are operably connected to one or more client data store(s) 1560 that can be employed to store information local to the client(s) 1510. Similarly, the server(s) 1530 are operably connected to one or more server data store(s) 1540 that can be employed to store information local to the servers 1530.

What has been described above comprises examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art can recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term "comprises" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
Methods, systems, and devices for purging data from a memory device are described. A memory system may receive, from a host system, a command to write data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information (e.g., a Replay Protected Memory Block). The encryption key may be configured to encrypt data associated with the host system that is stored in a second portion of the memory system. The memory system may then receive an indication of a purge command from the host system. The memory system may execute the purge command by transferring data from the first portion of the memory system to a third portion of the memory system configured to store secure information and erasing the data from the first portion of the memory system. |
CLAIMS

What is claimed is:

1. A non-transitory computer-readable medium storing code at a memory system, the code comprising instructions executable by a processor to: receive, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system; receive, from the host system after receiving the first command, an indication of a second command to purge the first portion of the memory system; transfer, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from the first portion of the memory system to a third portion of the memory system configured to store secure information; erase the second data from the first portion of the memory system based at least in part on the transferring; and transmit, to the host system, an indication that the second command is complete based at least in part on the erasing.

2. The non-transitory computer-readable medium of claim 1, wherein the instructions are further executable by the processor to: write the first data to the address of the first portion of the memory system to overwrite the encryption key based at least in part on receiving the first command, wherein transferring the second data to the third portion of the memory system is based at least in part on writing the first data.

3. The non-transitory computer-readable medium of claim 1, wherein the instructions to receive the indication of the second command are executable by the processor to: identify that a register of the memory system stores a value that indicates the second command.

4.
The non-transitory computer-readable medium of claim 1, wherein the instructions to receive the indication of the second command are executable by the processor to: receive, from the host system, the second command to purge the encryption key from the first portion of the memory system.

5. The non-transitory computer-readable medium of claim 1, wherein the instructions are further executable by the processor to: determine that the one or more additional encryption keys are valid based at least in part on third data stored at the memory system indicating whether encryption keys stored at the first portion of the memory system are valid, wherein transferring the second data including the one or more additional encryption keys is based at least in part on the determining.

6. The non-transitory computer-readable medium of claim 1, wherein the instructions are further executable by the processor to: receive, from the host system, a third command to store the encryption key in the first portion of the memory system, wherein receiving the first command is based at least in part on receiving the third command.

7. The non-transitory computer-readable medium of claim 1, wherein the instructions are further executable by the processor to: receive, from the host system, an indication to interrupt purging the first portion of the memory system after receiving the indication of the second command; perform one or more access operations at the first portion of the memory system based at least in part on receiving the indication to interrupt purging; and receive, from the host system, an indication to resume purging the first portion of the memory system after performing the one or more access operations, wherein transmitting the indication that the second command is complete is based at least in part on receiving the indication to resume purging.

8.
The non-transitory computer-readable medium of claim 7, wherein: receiving the indication to interrupt purging comprises identifying that a register of the memory system stores a first value that indicates interrupting purging; and
receiving the indication to resume purging comprises identifying that the register of the memory system stores a second value, different from the first value, that indicates resuming purging.

9. The non-transitory computer-readable medium of claim 1, wherein the first portion of the memory system and the third portion of the memory system comprise a Replay Protected Memory Block (RPMB).

10. The non-transitory computer-readable medium of claim 1, wherein the instructions are further executable by the processor to: initiate a garbage collection operation for the first portion of the memory system based at least in part on transferring the second data, wherein erasing the second data is performed as part of the garbage collection operation.

11. A non-transitory computer-readable medium storing code at a host system, the code comprising instructions executable by a processor to: transmit, to a memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system; transmit, to the memory system, a second command to write first data to the address storing the encryption key; indicate, to the memory system, a third command to purge the first portion of the memory system based at least in part on transmitting the second command; and receive, from the memory system, an indication that the third command to purge the first portion of the memory system is complete.

12. The non-transitory computer-readable medium of claim 11, wherein the instructions to indicate the third command are executable by the processor to: set a register of the memory system to a value that indicates, to the memory system, the third command.

13.
The non-transitory computer-readable medium of claim 11, wherein the instructions to indicate the third command are executable by the processor to: transmit the third command to the memory system.

14. The non-transitory computer-readable medium of claim 11, wherein the instructions are further executable by the processor to: indicate, to the memory system, to interrupt purging the first portion of the memory system after indicating the third command; access the first portion of the memory system based at least in part on indicating to interrupt purging; and indicate, to the memory system, to resume purging the first portion of the memory system based at least in part on accessing the first portion of the memory system, wherein receiving the indication that the third command is complete is based at least in part on indicating to resume purging.

15. The non-transitory computer-readable medium of claim 14, wherein: indicating to interrupt purging comprises setting a register of the memory system to a first value that indicates interrupting purging; and indicating to resume purging comprises setting the register of the memory system to a second value, different from the first value, that indicates resuming purging.

16. The non-transitory computer-readable medium of claim 11, wherein the first portion of the memory system comprises a Replay Protected Memory Block (RPMB).

17.
A memory system, comprising: a memory device; a controller coupled with the memory device and configured to cause the memory system to: receive, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system; receive, from the host system after receiving the first command, an indication of a second command to purge the first portion of the memory system; transfer, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from
the first portion of the memory system to a third portion of the memory system configured to store secure information; erase the second data from the first portion of the memory system based at least in part on the transferring; and transmit, to the host system, an indication that the second command is complete based at least in part on the erasing.

18. The memory system of claim 17, wherein: the controller is further configured to write the first data to the address of the first portion of the memory system to overwrite the encryption key based at least in part on receiving the first command; and transferring the second data to the third portion of the memory system is based at least in part on writing the first data.

19. The memory system of claim 17, further comprising: a register coupled with the controller and configured to store a value that indicates the second command, wherein receiving the indication of the second command is based at least in part on the register storing the value that indicates the second command.

20. The memory system of claim 17, wherein: the controller is further configured to receive, from the host system, the second command; and receiving the indication of the second command is based at least in part on receiving the second command from the host system.

21. The memory system of claim 17, wherein: the controller is further configured to determine that the one or more additional encryption keys are valid based at least in part on third data stored at the memory system indicating whether encryption keys stored at the first portion of the memory system are valid; and transferring the second data including the one or more additional encryption keys is based at least in part on the determining.

22. An apparatus, comprising: a controller configured to couple with a memory system, wherein the controller is configured to cause the apparatus to:
transmit, to the memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the apparatus; transmit, to the memory system, a second command to write first data to the address storing the encryption key; indicate, to the memory system, a third command to purge the first portion of the memory system based at least in part on transmitting the second command; and receive, from the memory system, an indication that the third command to purge the first portion of the memory system is complete.

23. The apparatus of claim 22, wherein the controller is further configured to cause the apparatus to set a register of the memory system to a value that indicates the third command to the memory system.

24. The apparatus of claim 22, wherein the controller is further configured to cause the apparatus to transmit the third command to the memory system.

25. The apparatus of claim 22, wherein the controller is further configured to cause the apparatus to: indicate, to the memory system, to interrupt purging the first portion of the memory system after indicating the third command; access the first portion of the memory system based at least in part on indicating to interrupt purging; and indicate, to the memory system, to resume purging the first portion of the memory system based at least in part on accessing the first portion of the memory system, wherein receiving the indication that the third command is complete is based at least in part on indicating to resume purging.
PURGING DATA FROM A MEMORY DEVICE

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/524,471 by GYLLENSKOG et al., entitled “PURGING DATA AT A MEMORY DEVICE,” filed November 11, 2021, and U.S. Provisional Patent Application No. 63/118,387 by GYLLENSKOG et al., entitled “PURGING DATA AT A MEMORY DEVICE,” filed November 25, 2020; each of which is assigned to the assignee hereof, and each of which is expressly incorporated by reference herein.

FIELD OF TECHNOLOGY

[0002] The following relates generally to one or more systems for memory and more specifically to purging data from a memory device.

BACKGROUND

[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often corresponding to a logic 1 or a logic 0. In some examples, a single memory cell may support more than two possible states, any one of which may be stored by the memory cell. To access information stored by a memory device, a component may read, or sense, the state of one or more memory cells within the memory device. To store information, a component may write, or program, one or more memory cells within the memory device to corresponding states.

[0004] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), 3-dimensional cross-point memory (3D cross point), not-or (NOR), and not-and (NAND) memory devices, and others. Memory devices may be volatile or non-volatile.
Volatile memory cells (e.g., DRAM cells) may lose their programmed states over time unless they are periodically refreshed by an external power source. Non-volatile memory cells (e.g., NAND memory cells) may maintain
their programmed states for extended periods of time even in the absence of an external power source.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 illustrates an example of a system that supports purging data from a memory device in accordance with examples as disclosed herein.

[0006] FIG. 2 illustrates an example of a process flow that supports purging data from a memory device in accordance with examples as disclosed herein.

[0007] FIG. 3 shows a block diagram of a memory system that supports purging data from a memory device in accordance with examples as disclosed herein.

[0008] FIG. 4 shows a block diagram of a host system that supports purging data from a memory device in accordance with examples as disclosed herein.

[0009] FIGs. 5 and 6 show flowcharts illustrating a method or methods that support purging data from a memory device in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0010] A memory system may include one or more portions configured to store data securely (e.g., more securely than other portions of the memory system configured to store data). For example, the memory system may include a Replay Protected Memory Block (RPMB) configured to store data securely. In order to access data stored in the RPMB, the memory system may first perform an authentication (e.g., provide a key to access the RPMB). For example, in order to write data to the RPMB, the memory system may perform the authentication procedure (e.g., using an RPMB key) prior to performing an authenticated write operation on the RPMB. In some instances, it may be desirable to erase data stored within the RPMB. For example, the RPMB may store one or more encryption keys for encrypting data stored in another portion of the memory system (e.g., associated with an application). In some cases, access to the encrypted data may be withdrawn. For example, an application may restrict access to a user of the memory system, thus withdrawing the user’s access to the encrypted data.
Here, it may be desirable to erase the encryption key from the RPMB to prevent the memory system from using the encryption key stored in the RPMB to decrypt the associated data. But in some cases, an RPMB may not be configured to execute an erase command.
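The authentication gate described in [0010] — an RPMB honoring writes only when the caller proves knowledge of the shared authentication key — can be modeled in a few lines. This sketch is an assumption-laden simplification: a real RPMB computes an HMAC over a full request frame with counters and nonces, whereas here the `Rpmb` class and its MAC check cover only the payload.

```python
# Illustrative model of RPMB-style authenticated write access.
# The class, method names, and MAC scheme are assumptions for this
# sketch, not the actual RPMB frame protocol.
import hashlib
import hmac


class Rpmb:
    def __init__(self, auth_key: bytes):
        self._auth_key = auth_key  # shared key provisioned once
        self._data = {}            # address -> stored payload

    def authenticated_write(self, addr: int, payload: bytes, mac: bytes):
        # Honor the write only if the caller's MAC matches one computed
        # over the payload with the shared authentication key.
        expected = hmac.new(self._auth_key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, mac):
            raise PermissionError("RPMB authentication failed")
        self._data[addr] = payload

    def read(self, addr: int) -> bytes:
        return self._data[addr]
```

A caller holding the key can overwrite an address (e.g., one storing an encryption key); a caller without it cannot, which is what makes the overwrite-then-purge flow below trustworthy.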
[0011] Systems, devices, and techniques may be described for a memory system to overwrite the data stored in the RPMB and then to execute a purge operation on that portion of the RPMB that formerly stored the data. Such techniques may be utilized by the memory system to ‘erase’ data from the RPMB without relying on an erase command. The system may execute a write operation (e.g., an authenticated write operation) to write data to an address of the RPMB that is storing the encryption key (e.g., the data that is to be removed). That is, the memory system may overwrite the encryption key (e.g., write other data to an address of the RPMB storing the encryption key) to ensure that the RPMB no longer stores a copy of the encryption key. The memory system may then perform a purge operation (e.g., an authenticated purge operation) at the RPMB. That is, the memory system may transfer the valid data stored within the RPMB from a first portion of the memory system to a second portion of the memory system (e.g., thus moving the RPMB to another portion of the memory system). The memory system may then erase the data from the first portion of the memory system. By first overwriting an encryption key stored within the RPMB and then performing a purge operation at the RPMB, the memory system may erase the encryption key from the RPMB.

[0012] Features of the disclosure are initially described in the context of a system and a process flow as described with reference to FIGs. 1 and 2. These and other features of the disclosure are further illustrated by and described with reference to apparatus diagrams and flowcharts that relate to purging data from a memory device as described with reference to FIGs. 3 through 6.

[0013] FIG. 1 is an example of a system 100 that supports purging data from a memory device in accordance with examples as disclosed herein.
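The overwrite-then-purge flow of [0011] has three steps: overwrite the key in place, transfer the region's remaining valid data to a second secure region, and erase the first region. A minimal sketch follows, assuming hypothetical region names and dictionary-backed storage; it is not the controller's actual implementation.

```python
# Minimal sketch of the overwrite-then-purge flow described above.
# Region names, the dict-of-dicts layout, and the zero-fill pattern
# are illustrative assumptions.

def erase_key_via_purge(regions, active, spare, key_addr):
    """Remove an encryption key from a secure region without an erase
    command: overwrite it, copy the region to a spare secure region,
    then erase the original region. Returns the new active region."""
    # Step 1: an (authenticated) write overwrites the key in place, so
    # the active region no longer holds a copy of it.
    regions[active][key_addr] = b"\x00" * len(regions[active][key_addr])
    # Step 2: transfer the now key-free valid data to the spare secure
    # region, which takes over the RPMB role.
    regions[spare] = dict(regions[active])
    # Step 3: erase the first region so no stale data remains.
    regions[active] = {}
    return spare
```

After the call, the original region holds nothing, and the spare region holds every entry except that the key's address now stores filler data rather than the key.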
The system 100 includes a host system 105 coupled with a memory system 110.

[0014] A memory system 110 may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system 110 may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities.
[0015] The system 100 may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device.

[0016] The system 100 may include a host system 105, which may be coupled with the memory system 110. In some examples, this coupling may include an interface with a host system controller 106, which may be an example of a control component configured to cause the host system 105 to perform various operations in accordance with examples as described herein. The host system 105 may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system 105 may include an application configured for communicating with the memory system 110 or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system 105), a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 105 may use the memory system 110, for example, to write data to the memory system 110 and read data from the memory system 110. Although one memory system 110 is shown in FIG. 1, the host system 105 may be coupled with any quantity of memory systems 110.

[0017] The host system 105 may be coupled with the memory system 110 via at least one physical host interface.
The host system 105 and the memory system 110 may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system 110 and the host system 105). Examples of a physical host interface may include, but are not limited to, a serial advanced technology attachment (SATA) interface, a UFS interface, an eMMC interface, a peripheral component interconnect express (PCIe) interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller 106 of the host system 105 and a memory system controller 115 of the memory
system 110. In some examples, the host system 105 may be coupled with the memory system 110 (e.g., the host system controller 106 may be coupled with the memory system controller 115) via a respective physical host interface for each memory device 130 included in the memory system 110, or via a respective physical host interface for each type of memory device 130 included in the memory system 110.

[0018] Memory system 110 may include a memory system controller 115 and one or more memory devices 130. A memory device 130 may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices 130-a and 130-b are shown in the example of FIG. 1, the memory system 110 may include any quantity of memory devices 130. Further, where memory system 110 includes more than one memory device 130, different memory devices 130 within memory system 110 may include the same or different types of memory cells.

[0019] The memory system controller 115 may be coupled with and communicate with the host system 105 (e.g., via the physical host interface), and may be an example of a control component configured to cause the memory system 110 to perform various operations in accordance with examples as described herein. The memory system controller 115 may also be coupled with and communicate with memory devices 130 to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device 130, and other such operations, which may generically be referred to as access operations. In some cases, the memory system controller 115 may receive commands from the host system 105 and communicate with one or more memory devices 130 to execute such commands (e.g., at memory arrays within the one or more memory devices 130).
For example, the memory system controller 115 may receive commands or operations from the host system 105 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices 130. And in some cases, the memory system controller 115 may exchange data with the host system 105 and with one or more memory devices 130 (e.g., in response to or otherwise in association with commands from the host system 105). For example, the memory system controller 115 may convert responses (e.g., data packets or other signals) associated with the memory devices 130 into corresponding signals for the host system 105.
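The controller's role described above (translating host logical-address commands into device-level operations and returning corresponding responses) can be sketched as follows. This is a hypothetical simplification for illustration only: the class names, the in-memory L2P map, and the append-only page allocator are assumptions, not an actual memory system controller implementation.

```python
class MemoryDevice:
    """Simplified NAND-like device addressed by physical page number (ppn)."""
    def __init__(self):
        self.pages = {}

    def program(self, ppn, data):
        self.pages[ppn] = data

    def read(self, ppn):
        return self.pages.get(ppn)


class MemorySystemController:
    """Converts host logical-address commands into device operations."""
    def __init__(self, device):
        self.device = device
        self.l2p = {}          # logical block address -> physical page number
        self.next_free = 0     # naive append-only allocator

    def write(self, lba, data):
        # Out-of-place write: program a fresh physical page, then remap the
        # logical address; the previously mapped page becomes stale.
        ppn = self.next_free
        self.next_free += 1
        self.device.program(ppn, data)
        self.l2p[lba] = ppn

    def read(self, lba):
        ppn = self.l2p.get(lba)
        return None if ppn is None else self.device.read(ppn)


ctrl = MemorySystemController(MemoryDevice())
ctrl.write(7, b"hello")
ctrl.write(7, b"world")   # update remaps LBA 7 to a new physical page
assert ctrl.read(7) == b"world"
```

The out-of-place update here is what later makes stale copies of data (such as overwritten encryption keys) linger in physical media until a purge or garbage collection erases them.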
[0020] The memory system controller 115 may be configured for other operations associated with the memory devices 130. For example, the memory system controller 115 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system 105 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 130.[0021] The memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller 115. The memory system controller 115 may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry.[0022] The memory system controller 115 may also include a local memory 120. In some cases, the local memory 120 may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller 115 to perform functions ascribed herein to the memory system controller 115. 
In some cases, the local memory 120 may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by the memory system controller 115 for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller 115.[0023] A memory device 130 may include one or more arrays of non-volatile memory cells. For example, a memory device 130 may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric RAM (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), and electrically erasable programmable ROM (EEPROM). Additionally or alternatively, a memory device 130 may include one or more arrays of volatile memory cells. For example, a memory device
130 may include random access memory (RAM) memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells.[0024] In some examples, a memory device 130 may include (e.g., on a same die or within a same package) a local controller 135, which may execute operations on one or more memory cells of the memory device 130. A local controller 135 may operate in conjunction with a memory system controller 115 or may perform one or more functions ascribed herein to the memory system controller 115.[0025] In some cases, a memory device 130 may be or include a NAND device (e.g., NAND flash device). A memory device 130 may be or include a memory die 160. For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.[0026] In some cases, a NAND memory device 130 may include memory cells configured to each store one bit of information, which may be referred to as single-level cells (SLCs). Additionally or alternatively, a NAND memory device 130 may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells.
Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry.[0027] In some cases, planes 165 may refer to groups of blocks 170, and in some cases, concurrent operations may take place within different planes 165. For example, concurrent operations may be performed on memory cells within different blocks 170 so long as the different blocks 170 are in different planes 165. In some cases, performing concurrent operations in different planes 165 may be subject to one or more restrictions, such as identical operations being performed on memory cells within different pages 175 that have the same
page address within their respective planes 165 (e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes 165).[0028] In some cases, a block 170 may include memory cells organized into rows (pages 175) and columns (e.g., strings, not shown). For example, memory cells in a same page 175 may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line).[0029] One or more of the memory devices 130 may include an RPMB 140. The RPMB 140 may be associated with one or more of the blocks 170. For example, the RPMB 140 may include four regions (e.g., four blocks 170, four pages 175, four other partitions of memory cells of a memory device 130), each configured to store data. In some cases, the RPMB 140 may be configured to store data more securely when compared to one or more additional blocks 170 at the memory system 110. For example, the blocks 170 associated with the RPMB 140 may include SLCs, which may be more reliable when compared to other types of memory cells. Additionally, prior to accessing the RPMB 140, the memory system 110 may perform an authentication. For example, the memory system 110 may utilize a key to access data stored at the RPMB 140. In some cases, the RPMB 140 (or RPMB partition) may not be capable of being accessed using the standard command protocol, but rather may be accessed using a unique command protocol that enhances the security of the RPMB 140. The RPMB 140 may provide authenticated and replay protected access to sensitive information stored therein. In some examples, the protocol associated with RPMB 140 (e.g., the key used to write and read the RPMB partition) may mitigate risks associated with replay attacks, as compared with standard memory blocks.[0030] In some cases, RPMB 140 may be configured to store data such as encryption keys. 
For example, the RPMB 140 may store encryption keys associated with other data stored at the memory system 110. That is, the RPMB 140 may store an encryption key for encrypting and decrypting data stored on another block 170 of the memory device 130-a. In some cases, the encryption keys may be derived from a hardware unique key (e.g., associated with the memory system 110). Additionally, the data may be associated with an application that uses encrypted data. Here, a user of the application may have user credentials and the encryption keys may be associated with the user credentials.
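The derivation described above (encryption keys derived from a hardware unique key and associated with user credentials) might be sketched with an HKDF-style extract-and-expand construction. The choice of HKDF over SHA-256 and the context label are assumptions for illustration; the text does not specify a particular key-derivation function.

```python
import hashlib
import hmac

def derive_key(hardware_unique_key: bytes, user_credentials: bytes) -> bytes:
    """HKDF-style (RFC 5869) derivation over SHA-256 -- an illustrative
    choice, not the KDF mandated by the description."""
    # Extract: mix the hardware unique key with the user credentials.
    prk = hmac.new(user_credentials, hardware_unique_key, hashlib.sha256).digest()
    # Expand: bind the output key to an application-specific context label.
    return hmac.new(prk, b"rpmb-data-key" + b"\x01", hashlib.sha256).digest()

key = derive_key(b"\x00" * 32, b"alice:pw-hash")
assert len(key) == 32
# Deterministic per (hardware key, credentials); distinct credentials
# yield a distinct key.
assert key == derive_key(b"\x00" * 32, b"alice:pw-hash")
assert key != derive_key(b"\x00" * 32, b"bob:pw-hash")
```

Because the key is re-derivable only from the hardware unique key and the credentials, destroying the stored copy in the RPMB (by overwrite and purge, as described below) renders the associated ciphertext unrecoverable once those inputs are no longer available.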
[0031] In some instances, the memory system 110 may erase one or more encryption keys stored at the RPMB 140. For example, the memory system may erase one or more encryption keys stored at the block 170-a. To erase the one or more encryption keys, the host system 105 may transmit a write command to the RPMB 140 (e.g., via the memory system controller 115) to write data to the one or more addresses (e.g., logical block addresses) currently storing the one or more encryption keys. The RPMB 140 may write the data to the one or more addresses, thus overwriting the one or more encryption keys with the data indicated by the write command.[0032] The host system 105 may then indicate, to the memory system 110, to purge the RPMB 140. That is, the host system 105 may transmit a purge command to the memory system 110. Additionally or alternatively, the host system 105 may set a value in the register 125 to a value indicating a purge command. In order to execute the purge command at the RPMB 140, the memory system 110 may first perform an authentication procedure at the RPMB. That is, the memory system 110 may perform the authentication procedure (e.g., using an RPMB key) prior to performing the authenticated purge operation. To execute the purge command of the RPMB 140, the memory system 110 may transfer a latest version of the data stored by the addresses (e.g., the logical block addresses) of the RPMB 140 to other addresses (e.g., other logical block addresses). For example, the memory system 110 may transfer the data stored by the block 170-a to another block 170-b. In some cases, the memory device 130-a may store data (e.g., free lists and garbage lists, a logical-to-physical (L2P) mapping table) indicating whether the addresses of the block 170-a are storing valid data, storing invalid data (e.g., garbage data), or not storing any data (e.g., free addresses).
Based on the stored data indicating which addresses of the block 170-a are storing valid data, the memory device 130-a may transfer the valid data to another block 170-b associated with the RPMB 140. The memory device 130-a may then erase the block 170-a. For example, the memory device 130-a may execute a garbage collection operation at the block 170-a. After purging the RPMB 140, the RPMB 140 may store data at a different block 170-b. Additionally, some or all copies of data stored by the RPMB 140 (e.g., at the block 170-a) may be erased. By overwriting one or more encryption keys stored by the RPMB 140 and then purging the RPMB 140, the memory system 110 may erase one or more encryption keys from the RPMB 140.[0033] For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased
at a second level of granularity (e.g., at the block level of granularity). That is, a page 175 may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programmed or read concurrently as part of a single program or read operation), and a block 170 may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be rewritten with new data. Thus, for example, a used page 175 may in some cases not be updated until the entire block 170 that includes the page 175 has been erased.[0034] In some cases, to update some data within a block 170 while retaining other data within the block 170, the memory device 130 may copy the data to be retained to a new block 170 and write the updated data to one or more remaining pages of the new block 170. The memory device 130 (e.g., the local controller 135) or the memory system controller 115 may mark or otherwise designate the data that remains in the old block 170 as invalid or obsolete, and update an L2P mapping table to associate the logical address (e.g., LBA) for the data with the new, valid block 170 rather than the old, invalid block 170. In some cases, such copying and remapping may be preferable to erasing and rewriting the entire old block 170, due to latency or wearout considerations, for example. In some cases, one or more copies of an L2P mapping table may be stored within the memory cells of the memory device 130 (e.g., within one or more blocks 170 or planes 165) for use (e.g., reference and updating) by the local controller 135 or memory system controller 115.[0035] In some cases, L2P tables may be maintained and data may be marked as valid or invalid at the page level of granularity, and a page 175 may contain valid data, invalid data, or no data.
Invalid data may be data that is outdated due to a more recent or updated version of the data being stored in a different page 175 of the memory device 130. Invalid data may have been previously programmed to the invalid page 175 but may no longer be associated with a valid logical address, such as a logical address referenced by the host system 105. Valid data may be the most recent version of such data being stored on the memory device 130. A page 175 that includes no data may be a page 175 that has never been written to or that has been erased.[0036] In some cases, a memory system controller 115 or a local controller 135 may perform operations (e.g., as part of one or more media management algorithms) for a memory device 130, such as wear leveling, background refresh, garbage collection, scrub, block scans, health monitoring, or others, or any combination thereof. For example, within a memory
device 130, a block 170 may have some pages 175 containing valid data and some pages 175 containing invalid data. To avoid waiting for some or all of the pages 175 in the block 170 to have invalid data in order to erase and reuse the block 170, an algorithm referred to as “garbage collection” may be invoked to allow the block 170 to be erased and released as a free block for subsequent write operations. Garbage collection may refer to a set of media management operations that include, for example, selecting a block 170 that contains valid and invalid data, selecting pages 175 in the block that contain valid data, copying the valid data from the selected pages 175 to new locations (e.g., free pages 175 in another block 170), marking the data in the previously selected pages 175 as invalid, and erasing the selected block 170. As a result, the number of blocks 170 that have been erased may be increased such that more blocks 170 are available to store subsequent data (e.g., data subsequently received from the host system 105).[0037] The system 100 may include any quantity of non-transitory computer readable media that support purging data from a memory device. For example, the host system 105, the memory system controller 115, or a memory device 130 may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system 105, memory system controller 115, or memory device 130. For example, such instructions, when executed by the host system 105 (e.g., by the host system controller 106), by the memory system controller 115, or by a memory device 130 (e.g., by a local controller 135), may cause the host system 105, memory system controller 115, or memory device 130 to perform one or more associated functions as described herein.[0038] FIG. 
2 illustrates an example of a process flow 200 that supports purging data from a memory device in accordance with examples as disclosed herein. The process flow 200 may implement aspects of the systems as described with reference to FIG. 1. For example, operations described by the process flow 200 may be performed by a host system 205 and a memory system 210, which may be examples of the host system 105 and the memory system 110, respectively, as described with reference to FIG. 1. That is, the memory system 210 may include a memory system controller and an RPMB as described with reference to FIG. 1. The process flow 200 may be implemented to erase one or more encryption keys stored at the RPMB. In the following description of the process flow 200, the operations may be performed in different orders or at different times. Some operations may
also be omitted from the process flow 200, and other operations may be added to the process flow 200.[0039] At 215, a write command may be transmitted by the host system 205 to the memory system 210. In some cases, the write command may indicate, to the memory system 210, to write encrypted data to one or more addresses of the memory system 210. In one example, the host system 205 may indicate unencrypted data within the write command. Here, the host system 205 may additionally indicate, to the memory system 210, to encrypt the data prior to storing the data at the memory system 210.[0040] At 220, an encryption key may be optionally generated by the memory system 210. In some cases, the host system 205 may indicate the encryption key for the memory system 210 to utilize for encrypting and decrypting the data associated with the write command (e.g., at 215). Here, the memory system 210 may not generate the encryption key at 220. In some other cases, the memory system 210 may generate the encryption key (e.g., based on user credentials, a unique hardware key associated with the memory system).[0041] In either case, the memory system 210 may encrypt the data (e.g., indicated by the write command received at 215) using the encryption key and then store the encrypted data at a memory device of the memory system. For example, the memory system 210 may store the encrypted data indicated by the write command received at 215 at a portion of the memory system different from the RPMB. In some cases, the portion of the memory system different from the RPMB may be configured to store data associated with the host system 205 (e.g., host data).[0042] At 225, the encryption key may be stored at an address (e.g., a logical block address) of the RPMB by the memory system 210. In some cases, in order to access the RPMB to store the encryption key, the memory system 210 may perform an authentication procedure.
For example, the memory system 210 may provide a key to the RPMB prior to the RPMB storing the encryption key.[0043] The RPMB may store a set of encryption keys, among other information that may benefit from the added security of the RPMB. That is, the RPMB may store the encryption key associated with the encrypted data indicated by the write command received from the host system 205 at 215 in addition to one or more other encryption keys. In some cases, the memory system may erase one or more of the encryption keys from the RPMB. For example, the memory system may erase encryption keys for encrypting or decrypting data stored at the
memory system to which a user (e.g., of the memory system, of an application associated with the data) no longer has access. The operations performed by the host system 205 and the memory system 210 at 230, 235, 245, 250, and 270 may result in an encryption key (or other types of data) being erased from the RPMB.[0044] At 230, a command may be transmitted by the host system 205 to the memory system 210 to write data to the RPMB at the address storing the encryption key (or other types of data). That is, the host system 205 may transmit a write command to overwrite the encryption key with other data. The write command may be an authenticated write command. That is, prior to executing the write command, the memory system 210 may perform an authentication procedure (e.g., using the RPMB key). In some cases (e.g., to erase more than one encryption key), the host system 205 may transmit a command to the memory system 210 to write data to addresses storing more than one encryption key.[0045] At 235, the data (e.g., indicated by the command received from the host system 205 at 230) may be written by the memory system 210 to the address of the RPMB to overwrite the encryption key based on receiving the command from the host system 205 at 230. In cases where the host system 205 indicated to write data to addresses storing more than one encryption key, the memory system 210 may write the data to the addresses of the RPMB to overwrite the more than one encryption key.[0046] At 245, a purge command for the RPMB may be indicated by the host system 205 to the memory system 210. For example, the host system 205 may transmit a purge command to the memory system 210 indicating a purge operation at the RPMB. In another example, the host system 205 may set a value of a register of the memory system 210 to a value that indicates the purge command. The purge command may be an authenticated purge command.
That is, prior to executing the purge command, the memory system 210 may perform an authentication procedure (e.g., using the RPMB key). In some cases, the purge command may be an example of a memory management command, such as a garbage collection command.[0047] At 250, data stored by the RPMB from a first portion of the memory system 210 (e.g., a first block, or a first set of blocks) may be transferred by the memory system 210, based on receiving the indication of the purge command from the host system 205 at 245, to a second portion of the memory system 210 (e.g., to a different block or set of blocks in the RPMB partition). For example, the memory system 210 may transfer the data in response to receiving the indication of the purge command. In another example, the indication of the
purge command may cause the memory system 210 to transfer the data. In either example, both the first and second portions of the memory system 210 may be configured to store secure data associated with the RPMB. For example, both the first and second portions of the memory system 210 may include SLCs, which may be configured to store data more securely than other types of memory cells. The data stored by the RPMB that is transferred to the second portion of the memory system 210 may include the data indicated by the host system 205 for overwriting the encryption key and one or more additional encryption keys stored at the RPMB. The one or more additional encryption keys stored at the RPMB may be for encrypting and decrypting other data (e.g., associated with the host system 205) stored at portions of the memory system 210 different from the RPMB.[0048] In some cases, the memory system 210 may store data (e.g., free lists and garbage lists, an L2P mapping table) indicating whether the addresses of the first portion of the memory system 210 (e.g., associated with the RPMB) are storing valid data, storing invalid data (e.g., garbage data), or not storing any data (e.g., free addresses). Based on the stored data indicating which addresses of the first portion of the memory system 210 are storing valid data, the RPMB may transfer the valid data to the second portion of the memory system 210.[0049] At 255, an interrupt to the purge operation may optionally be indicated to the memory system 210 by the host system 205. For example, the host system 205 may set a value of a register at the memory system 210 to a value that indicates to interrupt the purge operation. The host system 205 may optionally indicate to interrupt the purge operation at any point during the purge operation. 
For example, the host system 205 may indicate to interrupt the purge operation while the memory system 210 is transferring data stored by the RPMB from the first portion of the memory system 210 to the second portion of the memory system 210. In another example, the host system 205 may indicate to interrupt the purge operation while the memory system 210 erases data at 270.[0050] Based on receiving the indication to interrupt the purge operation, the memory system 210 may stop executing the purge operation. For example, the memory system 210 may stop transferring data stored by the RPMB from the first portion of the memory system 210 to the second portion of the memory system 210. In another example, the memory system 210 may stop erasing data (e.g., at 270).
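The interruptible purge behavior described above, in which the memory system checks a register value between units of work and holds its progress while paused, can be sketched as follows. The register values, class, and page layout are hypothetical simplifications, not the disclosed implementation.

```python
# Hypothetical register values indicating run vs. pause of the purge.
PURGE_RUN, PURGE_PAUSE = 0x1, 0x2

class PurgeEngine:
    def __init__(self, src_block, dst_block):
        self.src = src_block          # list of (valid_flag, data) pages
        self.dst = dst_block          # destination block for valid data
        self.register = PURGE_RUN
        self.cursor = 0               # progress survives an interruption

    def step(self):
        """Do one unit of purge work; return True while work remains."""
        if self.register == PURGE_PAUSE:
            return True               # paused: keep state, make no progress
        while self.cursor < len(self.src):
            valid, data = self.src[self.cursor]
            self.cursor += 1
            if valid:
                self.dst.append(data)
                return True           # one valid page transferred
        self.src.clear()              # all pages visited: erase the source
        return False

src = [(True, b"key1"), (False, b"old"), (True, b"key2")]
dst = []
engine = PurgeEngine(src, dst)
engine.step()                         # transfers b"key1"
engine.register = PURGE_PAUSE         # host interrupts the purge
engine.step()                         # no progress while paused
assert dst == [b"key1"]
engine.register = PURGE_RUN           # host resumes the purge
while engine.step():
    pass
assert dst == [b"key1", b"key2"] and src == []
```

While the engine is paused, the host could service access operations (as at 260 in the process flow) before setting the register back to the run value.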
[0051] In cases where the host system 205 indicates the interruption, an access operation (e.g., a read operation, a write operation) may optionally be performed by the memory system 210 on the RPMB at 260. That is, the memory system 210 may not perform access operations at the RPMB during a purge operation. Thus, the host system 205 may indicate to interrupt the purge operation prior to the memory system 210 performing any access operations at the RPMB. In some cases, the memory system 210 may perform the access operation at 260 at the RPMB in response to receiving an access command from the host system 205. For example, the host system 205 may transmit an access command to the memory system 210.[0052] In cases where the host system 205 indicates the interruption, an indication to resume the purge operation at 270 may optionally be provided to the memory system 210 by the host system 205. For example, the host system 205 may indicate to resume the purge operation after the memory system 210 executes the access operation at 260. The host system 205 may indicate to resume the purge operation by setting a value of the register at the memory system to a value that indicates to resume the purge operation. In response to receiving the indication to resume the purge operation, the memory system 210 may resume the purge operation.[0053] At 270, data stored by the first portion of the memory system 210 may be optionally erased by the memory system 210. For example, the memory system 210 may initiate a garbage collection operation for the first portion of the memory system 210 after transferring the data associated with the RPMB to the second portion of the memory system 210. The erase operation may be an authenticated erase operation.
That is, prior to executing the erase operation, the memory system 210 may perform an authentication procedure (e.g., using the RPMB key).[0054] At 275, an indication that the purge command is complete (e.g., that an execution of the purge command is complete) may be transmitted to the host system 205 by the memory system 210 based on the erasing.[0055] At 280, a purge counter may optionally be incremented by the memory system 210. The purge counter may indicate a quantity of purge operations performed at the RPMB.[0056] FIG. 3 shows a block diagram 300 of a memory system 320 that supports purging data from a memory device in accordance with examples as disclosed herein. The memory system 320 may be an example of aspects of a memory system as described with reference to FIGs. 1 and 2. The memory system 320, or various components thereof, may be an example of means for performing various aspects of purging data from a memory device as described
herein. For example, the memory system 320 may include a data writing manager 325, a purge indication manager 330, a data transfer component 335, an erasing manager 340, an indication transmitter 345, an interruption manager 350, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0057] The data writing manager 325 may be configured as or otherwise support a means for receiving, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system. The purge indication manager 330 may be configured as or otherwise support a means for receiving, from the host system after receiving the first command, an indication of a second command to purge the first portion of the memory system. The data transfer component 335 may be configured as or otherwise support a means for transferring, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from the first portion of the memory system to a third portion of the memory system configured to store secure information. The erasing manager 340 may be configured as or otherwise support a means for erasing the second data from the first portion of the memory system based at least in part on the transferring.
The indication transmitter 345 may be configured as or otherwise support a means for transmitting, to the host system, an indication that the second command is complete based at least in part on the erasing.[0058] In some examples, the data writing manager 325 may be configured as or otherwise support a means for writing the first data to the address of the first portion of the memory system to overwrite the encryption key based at least in part on receiving the first command, where transferring the second data to the third portion of the memory system is based at least in part on writing the first data.[0059] In some examples, to support receiving the indication of the second command, the purge indication manager 330 may be configured as or otherwise support a means for identifying that a register of the memory system stores a value that indicates the second command.[0060] In some examples, to support receiving the indication of the second command, the purge indication manager 330 may be configured as or otherwise support a means for
receiving, from the host system, the second command to purge the encryption key from the first portion of the memory system.[0061] In some examples, the data transfer component 335 may be configured as or otherwise support a means for determining that the one or more additional encryption keys are valid based at least in part on third data stored at the memory system indicating whether encryption keys stored at the first portion of the memory system are valid, where transferring the second data including the one or more additional encryption keys is based at least in part on the determining.[0062] In some examples, the data writing manager 325 may be configured as or otherwise support a means for receiving, from the host system, a third command to store the encryption key in the first portion of the memory system, where receiving the first command is based at least in part on receiving the third command.[0063] In some examples, the interruption manager 350 may be configured as or otherwise support a means for receiving, from the host system, an indication to interrupt purging the first portion of the memory system after receiving the indication of the second command.
In some examples, the interruption manager 350 may be configured as or otherwise support a means for performing one or more access operations at the first portion of the memory system based at least in part on receiving the indication to interrupt purging. In some examples, the interruption manager 350 may be configured as or otherwise support a means for receiving, from the host system, an indication to resume purging the first portion of the memory system after performing the one or more access operations, where transmitting the indication that the second command is complete is based at least in part on receiving the indication to resume purging.[0064] In some examples, receiving the indication to interrupt purging includes identifying that a register of the memory system stores a first value that indicates interrupting purging. In some examples, receiving the indication to resume purging includes identifying that the register of the memory system stores a second value, different from the first value, that indicates resuming purging.[0065] In some examples, the first portion of the memory system and the third portion of the memory system include an RPMB.
[0066] In some examples, the erasing manager 340 may be configured as or otherwise support a means for initiating a garbage collection operation for the first portion of the memory system based at least in part on transferring the second data, where erasing the second data is performed as part of the garbage collection operation.[0067] FIG. 4 shows a block diagram 400 of a host system 420 that supports purging data from a memory device in accordance with examples as disclosed herein. The host system 420 may be an example of aspects of a host system as described with reference to FIGs. 1 and 2. The host system 420, or various components thereof, may be an example of means for performing various aspects of purging data from a memory device as described herein. For example, the host system 420 may include an encryption key manager 425, a writing manager 430, a purge indicator 435, an indication manager 440, an interruption manager 445, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0068] The encryption key manager 425 may be configured as or otherwise support a means for transmitting, to a memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system. The writing manager 430 may be configured as or otherwise support a means for transmitting, to the memory system, a second command to write first data to the address storing the encryption key. The purge indicator 435 may be configured as or otherwise support a means for indicating, to the memory system, a third command to purge the first portion of the memory system based at least in part on transmitting the second command. 
The indication manager 440 may be configured as or otherwise support a means for receiving, from the memory system, an indication that the third command to purge the first portion of the memory system is complete.[0069] In some examples, to support indicating the third command, the purge indicator 435 may be configured as or otherwise support a means for setting a register of the memory system to a value that indicates, to the memory system, the third command.[0070] In some examples, to support indicating the third command, the purge indicator 435 may be configured as or otherwise support a means for transmitting the third command to the memory system.
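Paragraphs [0069] and [0070] describe two alternative ways for the host to indicate the third command: setting a register of the memory system, or transmitting the command itself. Both alternatives can be illustrated as follows; the register value and all names are hypothetical:

```python
# Hypothetical illustration of the two purge-indication mechanisms.
PURGE_REQUEST = 0xA5  # assumed register value that indicates the third command

class MemRegisters:
    def __init__(self):
        self.purge_reg = 0
        self.commands = []

    def set_register(self, value):
        self.purge_reg = value      # option 1: host writes a register value

    def transmit(self, command):
        self.commands.append(command)  # option 2: host transmits the command

    def purge_indicated(self):
        return self.purge_reg == PURGE_REQUEST or "PURGE" in self.commands

regs = MemRegisters()
regs.set_register(PURGE_REQUEST)   # register-based indication
assert regs.purge_indicated()

regs2 = MemRegisters()
regs2.transmit("PURGE")            # command-based indication
assert regs2.purge_indicated()
```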
[0071] In some examples, the interruption manager 445 may be configured as or otherwise support a means for indicating, to the memory system, to interrupt purging the first portion of the memory system after indicating the third command. In some examples, the interruption manager 445 may be configured as or otherwise support a means for accessing the first portion of the memory system based at least in part on indicating to interrupt purging. In some examples, the interruption manager 445 may be configured as or otherwise support a means for indicating, to the memory system, to resume purging the first portion of the memory system based at least in part on accessing the first portion of the memory system, where receiving the indication that the third command is complete is based at least in part on indicating to resume purging.[0072] In some examples, indicating to interrupt purging includes setting a register of the memory system to a first value that indicates interrupting purging. In some examples, indicating to resume purging includes setting the register of the memory system to a second value, different from the first value, that indicates resuming purging. In some examples, the first portion of the memory system includes an RPMB.[0073] FIG. 5 shows a flowchart illustrating a method or methods 500 that supports purging data from a memory device in accordance with aspects of the present disclosure. The operations of method 500 may be implemented by a memory system or its components as described herein. For example, the operations of method 500 may be performed by a memory system as described with reference to FIG. 3. In some examples, a memory system may execute a set of instructions to control the functional elements of the memory system to perform the described functions. 
Additionally or alternatively, a memory system may perform aspects of the described functions using special-purpose hardware.[0074] At 505, the memory system may receive, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system. The operations of 505 may be performed according to the methods described herein. In some examples, aspects of the operations of 505 may be performed by a data writing manager as described with reference to FIG. 3.[0075] At 510, the memory system may receive, from the host system after receiving the first command, an indication of a second command to purge the first portion of the memory
system. The operations of 510 may be performed according to the methods described herein. In some examples, aspects of the operations of 510 may be performed by a purge indication manager as described with reference to FIG. 3.[0076] At 515, the memory system may transfer, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from the first portion of the memory system to a third portion of the memory system configured to store secure information. The operations of 515 may be performed according to the methods described herein. In some examples, aspects of the operations of 515 may be performed by a data transfer component as described with reference to FIG. 3.[0077] At 520, the memory system may erase the second data from the first portion of the memory system based on the transferring. The operations of 520 may be performed according to the methods described herein. In some examples, aspects of the operations of 520 may be performed by an erasing manager as described with reference to FIG. 3.[0078] At 525, the memory system may transmit, to the host system, an indication that the second command is complete based on the erasing. The operations of 525 may be performed according to the methods described herein. In some examples, aspects of the operations of 525 may be performed by an indication transmitter as described with reference to FIG. 3.[0079] In some examples, an apparatus as described herein may perform a method or methods, such as the method 500.
The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system, receiving, from the host system after receiving the first command, a second command to purge the first portion of the memory system, transferring, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from the first portion of the memory system to a third portion of the memory system configured to store secure information, erasing the second data from the first portion of the memory system based on the transferring, and transmitting, to the host system, an indication that the second command is complete based on the erasing.
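A minimal sketch of operations 515 through 525 of method 500, assuming the portions of the memory system can be modeled as key-value maps (the data structures are illustrative only, not the actual memory layout):

```python
# Sketch of the memory-system side of the purge (operations 515-525).
def handle_purge(first_portion, third_portion):
    """Transfer the second data to the third portion, erase it, and report."""
    second_data = dict(first_portion)      # first data plus additional keys
    third_portion.update(second_data)      # 515: transfer to third portion
    first_portion.clear()                  # 520: erase from first portion
    return "second command complete"       # 525: indication to the host

first = {0x10: b"\x00\x00", 0x20: b"key-b"}   # 0x10 already overwritten (505)
third = {}
assert handle_purge(first, third) == "second command complete"
assert first == {} and third == {0x10: b"\x00\x00", 0x20: b"key-b"}
```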
[0080] Some examples of the method 500 and the apparatus described herein may further include operations, features, means, or instructions for writing the first data to the address of the first portion of the memory system to overwrite the encryption key based on receiving the first command, where transferring the second data to the third portion of the memory system may be based on writing the first data.[0081] In some examples of the method 500 and the apparatus described herein, receiving the indication of the second command may include operations, features, means, or instructions for identifying that a register of the memory system stores a value that indicates the second command.[0082] In some examples of the method 500 and the apparatus described herein, receiving the indication of the second command may include operations, features, means, or instructions for receiving, from the host system, the second command to purge the encryption key from the first portion of the memory system.[0083] Some examples of the method 500 and the apparatus described herein may further include operations, features, means, or instructions for determining that the one or more additional encryption keys may be valid based on third data stored at the memory system indicating whether encryption keys stored at the first portion of the memory system may be valid, where transferring the second data including the one or more additional encryption keys may be based on the determining.[0084] Some examples of the method 500 and the apparatus described herein may further include operations, features, means, or instructions for receiving, from the host system, a third command to store the encryption key in the first portion of the memory system, where receiving the first command may be based on receiving the third command.[0085] Some examples of the method 500 and the apparatus described herein may further include operations, features, means, or instructions for receiving, from the host
system, an indication to interrupt purging the first portion of the memory system after receiving the indication of the second command, performing one or more access operations at the first portion of the memory system based on receiving the indication to interrupt purging, and receiving, from the host system, an indication to resume purging the first portion of the memory system after performing the one or more access operations, where transmitting the indication that the second command may be complete may be based on receiving the indication to resume purging.
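One plausible reading of the validity determination of paragraph [0083] is that the third data records which key slots hold valid encryption keys, so that only valid keys are carried to the third portion. The map-style representation below is an assumption for illustration:

```python
# Illustrative validity filter: "third data" marks which slots in the first
# portion hold valid encryption keys; only those are transferred.
def select_valid_keys(first_portion, validity):
    return {addr: key for addr, key in first_portion.items()
            if validity.get(addr, False)}

keys = {0x10: b"k1", 0x20: b"k2", 0x30: b"k3"}
validity = {0x10: True, 0x20: False, 0x30: True}   # the "third data"
assert select_valid_keys(keys, validity) == {0x10: b"k1", 0x30: b"k3"}
```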
[0086] In some examples of the method 500 and the apparatus described herein, receiving the indication to interrupt purging may include operations, features, means, or instructions for identifying that a register of the memory system stores a first value that indicates interrupting purging, and receiving the indication to resume purging may include operations, features, means, or instructions for identifying that the register of the memory system stores a second value, different from the first value, that indicates resuming purging.[0087] In some examples of the method 500 and the apparatus described herein, the first portion of the memory system and the third portion of the memory system include an RPMB.[0088] Some examples of the method 500 and the apparatus described herein may further include operations, features, means, or instructions for initiating a garbage collection operation for the first portion of the memory system based on transferring the second data, where erasing the second data may be performed as part of the garbage collection operation.[0089] FIG. 6 shows a flowchart illustrating a method or methods 600 that supports purging data from a memory device in accordance with aspects of the present disclosure. The operations of method 600 may be implemented by a host device or its components as described herein. For example, the operations of method 600 may be performed by a host device as described with reference to FIG. 4. In some examples, a host device may execute a set of instructions to control the functional elements of the host device to perform the described functions. 
Additionally or alternatively, a host device may perform aspects of the described functions using special-purpose hardware.[0090] At 605, the host device may transmit, to a memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system. The operations of 605 may be performed according to the methods described herein. In some examples, aspects of the operations of 605 may be performed by an encryption key manager as described with reference to FIG. 4.[0091] At 610, the host device may transmit, to the memory system, a second command to write first data to the address storing the encryption key. The operations of 610 may be performed according to the methods described herein. In some examples, aspects of the operations of 610 may be performed by a writing manager as described with reference to FIG. 4.
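The host-side sequence of operations 605 through 620 can be sketched as follows, assuming a hypothetical MemorySystemStub and illustrative command names:

```python
# Hypothetical host-side trace of method 600 (operations 605-620).
class MemorySystemStub:
    def __init__(self):
        self.secure = {}   # first portion (e.g., an RPMB)

    def handle(self, command, addr=None, payload=None):
        if command == "STORE_KEY":              # 605: store encryption key
            self.secure[addr] = payload
        elif command == "WRITE":                # 610: overwrite the key
            self.secure[addr] = payload
        elif command == "PURGE":                # 615: purge first portion
            self.secure.clear()
            return "purge complete"             # 620: completion indication

mem = MemorySystemStub()
mem.handle("STORE_KEY", 0x10, b"key")
mem.handle("WRITE", 0x10, b"\x00\x00\x00")
assert mem.handle("PURGE") == "purge complete"
assert mem.secure == {}
```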
[0092] At 615, the host device may indicate, to the memory system, a third command to purge the first portion of the memory system based on transmitting the second command. The operations of 615 may be performed according to the methods described herein. In some examples, aspects of the operations of 615 may be performed by a purge indicator as described with reference to FIG. 4.[0093] At 620, the host device may receive, from the memory system, an indication that the third command to purge the first portion of the memory system is complete. The operations of 620 may be performed according to the methods described herein. In some examples, aspects of the operations of 620 may be performed by an indication manager as described with reference to FIG. 4.[0094] In some examples, an apparatus as described herein may perform a method or methods, such as the method 600. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for transmitting, to a memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system, transmitting, to the memory system, a second command to write first data to the address storing the encryption key, indicating, to the memory system, a third command to purge the first portion of the memory system based on transmitting the second command, and receiving, from the memory system, an indication that the third command to purge the first portion of the memory system is complete.[0095] In some examples of the method 600 and the apparatus described herein, indicating the third command may include operations, features, means, or instructions for setting a register of the memory system to a value that
indicates, to the memory system, the third command.[0096] In some examples of the method 600 and the apparatus described herein, indicating the third command may include operations, features, means, or instructions for transmitting the third command to the memory system.[0097] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for indicating, to the memory system, to interrupt purging the first portion of the memory system after indicating the third command,
accessing the first portion of the memory system based on indicating to interrupt purging, and indicating, to the memory system, to resume purging the first portion of the memory system based on accessing the first portion of the memory system, where receiving the indication that the third command may be complete may be based on indicating to resume purging.[0098] In some examples of the method 600 and the apparatus described herein, indicating to interrupt purging may include operations, features, means, or instructions for setting a register of the memory system to a first value that indicates interrupting purging, and indicating to resume purging may include operations, features, means, or instructions for setting the register of the memory system to a second value, different from the first value, that indicates resuming purging.[0099] In some examples of the method 600 and the apparatus described herein, the first portion of the memory system includes an RPMB.[0100] It should be noted that the methods described herein are possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.[0102] Another apparatus is described. 
The apparatus may include a memory device and a controller coupled with the memory device and configured to cause the memory system to receive, from a host system, a first command to write first data to an address storing an encryption key in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the host system; receive, from the host system after receiving the first command, an indication of a second command to purge the first portion of the memory system; transfer, based at least in part on receiving the second command, second data including the first data and one or more additional encryption keys from the first portion of the memory system to a third portion of the memory system configured to store secure information; erase the second data from the first portion of the memory system based at least in part on the transferring; and transmit, to
the host system, an indication that the second command is complete based at least in part on the erasing.[0103] In some examples of the apparatus, the controller may be further configured to write the first data to the address of the first portion of the memory system to overwrite the encryption key based at least in part on receiving the first command, and transferring the second data to the third portion of the memory system may be based at least in part on writing the first data.[0104] In some examples of the apparatus, the apparatus may include a register coupled with the controller and configured to store a value that indicates the second command, where receiving the indication of the second command may be based at least in part on the register storing the value that indicates the second command.[0105] In some examples of the apparatus, the controller may be further configured to receive, from the host system, the second command, and receiving the indication of the second command may be based at least in part on receiving the second command from the host system.[0106] In some examples of the apparatus, the controller may be further configured to determine that the one or more additional encryption keys may be valid based at least in part on third data stored at the memory system indicating whether encryption keys stored at the first portion of the memory system may be valid, and transferring the second data including the one or more additional encryption keys may be based at least in part on the determining.[0107] Another apparatus is described. 
The apparatus may include a controller configured to couple with a memory system, where the controller is configured to cause the apparatus to transmit, to the memory system, a first command to store an encryption key at an address in a first portion of the memory system that is configured to store secure information, the encryption key configured to encrypt data stored in a second portion of the memory system that is configured to store information associated with the apparatus; transmit, to the memory system, a second command to write first data to the address storing the encryption key; indicate, to the memory system, a third command to purge the first portion of the memory system based at least in part on transmitting the second command; and receive, from the memory system, an indication that the third command to purge the first portion of the memory system is complete.
[0108] In some examples of the apparatus, the controller may be further configured to cause the apparatus to set a register of the memory system to a value that indicates the third command to the memory system.[0109] In some examples of the apparatus, the controller may be further configured to cause the apparatus to transmit the third command to the memory system.[0110] In some examples of the apparatus, the controller may be further configured to cause the apparatus to indicate, to the memory system, to interrupt purging the first portion of the memory system after indicating the third command, access the first portion of the memory system based at least in part on indicating to interrupt purging, and indicate, to the memory system, to resume purging the first portion of the memory system based at least in part on accessing the first portion of the memory system, where receiving the indication that the third command may be complete may be based at least in part on indicating to resume purging.[0111] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0112] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. 
Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some
examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0113] The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0114] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0115] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. 
If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.
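The threshold behavior described above for an n-type FET can be stated as a simple predicate; the voltage values below are illustrative:

```python
# Minimal illustration of the threshold rule: an n-type FET is "on" when the
# gate voltage meets or exceeds the threshold voltage, and "off" otherwise.
def n_fet_is_on(gate_voltage, threshold_voltage):
    return gate_voltage >= threshold_voltage

assert n_fet_is_on(1.2, 0.7) is True    # activated
assert n_fet_is_on(0.5, 0.7) is False   # deactivated
```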
[0116] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0117] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0118] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. 
Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.[0119] For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
[0120] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0121] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0122] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples
and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
The invention relates to wear leveling based on sub-group write counts in a memory sub-system. In an embodiment, a system includes a plurality of memory components that each include a plurality of management groups. Each management group includes a plurality of sub-groups. The system also includes a processing device that is operatively coupled with the plurality of memory components to perform wear-leveling operations that include maintaining a sub-group-level delta write count (DWC) for each of the sub-groups of each of the management groups of a memory component in the plurality of memory components. The wear-leveling operations also include determining, in connection with a write operation to a first sub-group of a first management group of the memory component, that a sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively triggering a management-group-move operation from the first management group to a second management group of the memory component.
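The sub-group write-count scheme summarized above can be sketched as follows; the threshold value, group sizes, and class name are illustrative assumptions, not taken from the disclosure:

```python
# Sketch of per-sub-group delta write counts (DWCs): a management-group move
# is triggered when a sub-group's DWC reaches the move threshold, and the
# group's DWCs are reset as part of the rollover.
MOVE_THRESHOLD = 4  # illustrative value

class ManagementGroup:
    def __init__(self, num_subgroups):
        self.dwc = [0] * num_subgroups  # sub-group-level delta write counts

    def record_write(self, subgroup):
        """Count a write; report whether a management-group move is due."""
        self.dwc[subgroup] += 1
        if self.dwc[subgroup] == MOVE_THRESHOLD:
            self.dwc = [0] * len(self.dwc)  # reset DWCs on rollover
            return True                     # trigger management-group move
        return False

group = ManagementGroup(num_subgroups=2)
moves = [group.record_write(0) for _ in range(4)]
assert moves == [False, False, False, True]
assert group.dwc == [0, 0]   # counters reset after the move is triggered
```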
1. A system comprising: a plurality of memory components, each memory component comprising a plurality of management groups, each management group comprising a plurality of sub-groups; and a processing device operatively coupled with the plurality of memory components to perform wear-leveling operations, the wear-leveling operations comprising: maintaining a sub-group-level delta write counter (DWC) for each of the sub-groups of each of the management groups of a memory component in the plurality of memory components; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, the set of management-group-rollover operations comprising triggering a management-group-move operation from the first management group to a second management group of the memory component.

2. The system of claim 1, wherein the set of management-group-rollover operations further comprises resetting the sub-group-level DWC for each of the sub-groups of the first management group.

3. The system of claim 1, wherein the wear-leveling operations further comprise: maintaining a management-group-level DWC for each of the management groups of the memory component, wherein determining that the first sub-group-level DWC equals the management-group-move threshold comprises determining that: the first sub-group-level DWC equals a maximum sub-group-level-DWC value; and a first management-group-level DWC for the first management group equals a maximum management-group-level-DWC value.

4. The system of claim 3, wherein the management-group-level DWCs and the sub-group-level DWCs are maintained in a management table stored on the processing device.

5. The system of claim 3, wherein the set of management-group-rollover operations further comprises: resetting the first management-group-level DWC; and resetting the sub-group-level DWC for each of the sub-groups of the first management group.

6. The system of claim 3, wherein the wear-leveling operations further comprise: determining, in connection with a second write operation to the first sub-group, that the first sub-group-level DWC is less than the maximum sub-group-level-DWC value, and responsively incrementing the first sub-group-level DWC.

7. The system of claim 3, wherein the wear-leveling operations further comprise: determining, in connection with a third write operation to the first sub-group, that: the first sub-group-level DWC equals the maximum sub-group-level-DWC value; and the first management-group-level DWC is less than the maximum management-group-level-DWC value, and responsively: resetting the sub-group-level DWC for each of the sub-groups of the first management group; and incrementing the first management-group-level DWC.

8. The system of claim 1, the wear-leveling operations further comprising maintaining a lifetime write counter (LWC) for each of the management groups of the memory component.

9. The system of claim 8, wherein: each LWC has a most-significant-bit (MSB) portion and a least-significant-bit (LSB) portion; a single shared LWC base represents the MSB portions of all of the LWCs of the management groups of the memory component; separate management-group-specific LWC offsets represent the LSB portions of the LWCs of the management groups of the memory component; and the set of management-group-rollover operations further comprises incrementing a first LWC of the first management group.

10. The system of claim 9, wherein: the LWC offsets are stored in a management table on the processing device; and the shared LWC base is stored outside of the management table on the processing device.

11. The system of claim 9, wherein incrementing the first LWC comprises: determining whether a first LWC offset of the first management group is less than or equal to a maximum LWC-offset value; if the first LWC offset is less than the maximum LWC-offset value: incrementing the first LWC offset; and if the first LWC offset equals the maximum LWC-offset value: reducing the LWC offset of each of the management groups of the memory component by an LWC-base increment size; and increasing the shared LWC base of the memory component by the LWC-base increment size.

12. The system of claim 11, wherein incrementing the first LWC further comprises: if the first LWC offset is less than the maximum LWC-offset value: determining whether the first LWC offset equals an LWC-offset check threshold, and if so: determining whether at least one of the LWC offsets of the memory component other than the first LWC offset is less than a difference between an LWC-offset range and the LWC-base increment size, and if so, triggering an LWC-offset-imbalance event to firmware of the processing device.

13. A method comprising: maintaining a sub-group-level delta write counter (DWC) for each of a plurality of sub-groups of each of a plurality of management groups of a memory component; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, the set of management-group-rollover operations comprising triggering a management-group-move operation from the first management group to a second management group of the memory component.

14. The method of claim 13, further comprising: maintaining a management-group-level DWC for each of the management groups of the memory component, wherein determining that the first sub-group-level DWC equals the management-group-move threshold comprises determining that: the first sub-group-level DWC equals a maximum sub-group-level-DWC value; and a first management-group-level DWC for the first management group equals a maximum management-group-level-DWC value.

15. The method of claim 14, further comprising: determining, in connection with a second write operation to the first sub-group, that the first sub-group-level DWC is less than the maximum sub-group-level-DWC value, and responsively incrementing the first sub-group-level DWC.

16. The method of claim 14, further comprising: determining, in connection with a third write operation to the first sub-group, that: the first sub-group-level DWC equals the maximum sub-group-level-DWC value; and the first management-group-level DWC is less than the maximum management-group-level-DWC value, and responsively: resetting the sub-group-level DWC for each of the sub-groups of the first management group; and incrementing the first management-group-level DWC.

17. The method of claim 13, further comprising maintaining a lifetime write counter (LWC) for each of the management groups of the memory component.

18. The method of claim 17, wherein: each LWC has a most-significant-bit (MSB) portion and a least-significant-bit (LSB) portion; a single shared LWC base represents the MSB portions of all of the LWCs of the management groups of the memory component; separate management-group-specific LWC offsets represent the LSB portions of the LWCs of the management groups of the memory component; and the set of management-group-rollover operations further comprises incrementing a first LWC of the first management group.

19. The method of claim 18, wherein incrementing the first LWC comprises: determining whether a first LWC offset of the first management group is less than or equal to a maximum LWC-offset value; if the first LWC offset is less than the maximum LWC-offset value: incrementing the first LWC offset; and if the first LWC offset equals the maximum LWC-offset value: reducing the LWC offset of each of the management groups of the memory component by an LWC-base increment size; and increasing the shared LWC base of the memory component by the LWC-base increment size.

20. A non-transitory machine-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising: maintaining a sub-group-level delta write counter (DWC) for each of a plurality of sub-groups of each of a plurality of management groups of a memory component; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, the set of management-group-rollover operations comprising triggering a management-group-move operation from the first management group to a second management group of the memory component. |
Wear Leveling Based on Sub-Group Write Counts in a Memory Sub-System

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. provisional patent application Serial No. 62/874,294, entitled "Wear Leveling Based on Sub-Group Write Counts in a Memory Sub-System" and filed on July 15, 2019, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

Embodiments of the present disclosure relate generally to memory subsystems and, more specifically, to wear leveling based on sub-group write counts in a memory subsystem.

BACKGROUND

A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and a memory module. The memory subsystem can contain one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can use the memory subsystem to store data in, and retrieve data from, the memory components.

SUMMARY

In one aspect, the present application provides a system comprising: a plurality of memory components, each memory component comprising a plurality of management groups, each management group comprising a plurality of sub-groups; and a processing device operatively coupled with the plurality of memory components to perform wear-leveling operations, the wear-leveling operations comprising: maintaining a sub-group-level delta write counter (DWC) for each of the sub-groups of each of the management groups of a memory component in the plurality of memory components; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, wherein the set of management-group-rollover operations comprises triggering a management-group-move operation from the first management group to a second management group of the memory component.

In another aspect, the present application provides a method comprising: maintaining a sub-group-level delta write counter (DWC) for each of a plurality of sub-groups of each of a plurality of management groups of a memory component; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, wherein the set of management-group-rollover operations comprises triggering a management-group-move operation from the first management group to a second management group of the memory component.

In yet another aspect, the present application provides a non-transitory machine-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising: maintaining a sub-group-level delta write counter (DWC) for each of a plurality of sub-groups of each of a plurality of management groups of a memory component; and determining, in connection with a first write operation to a first sub-group of a first management group of the memory component, that a first sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively performing a set of one or more management-group-rollover operations, wherein the set of management-group-rollover operations comprises triggering a management-group-move operation from the first management group to a second management group of the memory component.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will be more fully understood through the
detailed description given below and from the accompanying drawings of various embodiments of the present disclosure. The drawings, however, should not be taken to limit the present disclosure to the specific embodiments; they are for explanation and understanding only.

FIG. 1 illustrates an example computing environment, including a memory subsystem that includes a memory subsystem controller, in accordance with some embodiments of the present disclosure.

FIG. 2 is a block diagram of an example memory device of the memory subsystem of FIG. 1, in accordance with some embodiments of the present disclosure.

FIG. 3 is a block diagram of an example division of the read/write storage portion of the memory device of FIG. 2 into multiple management groups, each having multiple sub-groups, in accordance with some embodiments of the present disclosure.

FIG. 4 is a block diagram of an example management table that may be used by a memory subsystem controller in performing operations in accordance with some embodiments of the present disclosure.

FIG. 5 is a block diagram of a first example section of the management table of FIG. 4, in accordance with some embodiments of the present disclosure.

FIG. 6 is a block diagram of a second example section of the management table of FIG. 4, in accordance with some embodiments of the present disclosure.

FIG. 7 is a flowchart of an example method for wear leveling based on sub-group write counts in a memory subsystem, which method may be performed by the memory subsystem controller of FIG. 1, in accordance with some embodiments of the present disclosure.

FIG. 8 is a flowchart depicting some example operations performed by the memory subsystem controller of FIG. 1, in accordance with some embodiments of the present disclosure.

FIG. 9 is a flowchart depicting some further example operations performed by the memory subsystem controller of FIG. 1, in accordance with some embodiments of the present disclosure.

FIG. 10 is a message-flow diagram depicting communications between various functional components of the memory subsystem controller of FIG. 1 and the example memory device, in accordance with some embodiments of the present disclosure.

FIG. 11 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure are directed to wear leveling based on sub-group write counts in a memory subsystem. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and a memory module; examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more memory components (also referred to herein as "memory devices"). The host system can provide data to be stored in the memory subsystem and can request data to be retrieved from the memory subsystem, sending access requests such as requests to store data in, and to read data from, the memory subsystem. The data to be read and written are referred to herein as "user data." A host request can contain a logical address (e.g., a logical block address (LBA)) for the user data, which is the location that the host system associates with the user data. The logical address (e.g., LBA) can be part of the metadata for the user data.

The memory components can include non-volatile and volatile memory devices. A non-volatile memory device is a package of one or more dies, and the dies in the package can be assigned to one or more channels for communicating with the memory subsystem controller.
Non-volatile memory devices contain cells (i.e., electronic circuits that store information) that are grouped into pages for storing bits of data. The memory subsystem can perform internal management operations, such as media management operations (e.g., defect scanning, wear leveling, refresh), on the non-volatile memory devices in order to manage them. Storing user data in a memory device increases the wear of the memory device, and after a threshold number of write operations that wear can render the memory device unreliable, such that user data can no longer be reliably stored in and retrieved from it. At that point, the failure of any one of the memory devices can cause the memory subsystem to fail.

Some memory components, such as non-volatile memory components, have limited endurance. One aspect of this limited endurance is that the underlying hardware elements that store the user data can be written only a finite number of times before they wear out and cease to function reliably.

One technique for managing endurance in a memory subsystem whose physical components have limited life cycles (e.g., a limited number of write and/or read cycles before an expected failure) is wear leveling. Wear leveling is a process that helps reduce premature wear of memory devices by distributing write operations across the memory devices. Wear leveling includes a set of operations that determine which physical media (e.g., which set of memory cells) to use each time user data is programmed, to help ensure that certain sets of physical memory cells are not written and erased more often than others. Wear-leveling operations attempt to spread wear-inducing operations (e.g., write, read, and erase operations), and the corresponding physical wear, evenly across a data-storage device or across portions of the storage device, thereby limiting the probability that certain portions of the memory subsystem fail before others.

Non-volatile memory devices can include, for example, three-dimensional cross-point ("3D cross-point") memory devices, which are cross-point arrays of non-volatile memory cells that can perform bit storage based on a change of bulk resistance, in conjunction with a stackable, cross-gridded data-access array. Such non-volatile memory devices can group pages across dies and channels to form management units (MUs). An MU can contain user data and corresponding metadata, and the memory subsystem controller sends and receives user data and corresponding metadata to and from the memory device in units of MUs. A super management unit (SMU) is a group of one or more MUs that are managed together; for example, the memory subsystem controller can perform media management operations (e.g., wear-leveling operations, refresh operations, etc.) on SMUs. A physical super management unit (PSMU) is a group of one or more management units (MUs), while a logical super management unit (LSMU) is a logical group of one or more logical addresses (e.g., logical block addresses (LBAs)). An LSMU can be mapped to different PSMUs at different points in time.

In addition, the memory subsystem controller can maintain a write count for each SMU in the memory subsystem. The write counts can be stored in a static RAM (SRAM) data table, referred to herein as an SMU management table (or simply a management table). The write count for each SMU may actually comprise multiple (e.g., two) counts: a delta write count (DWC) and a lifetime write count (LWC).
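The MU/SMU hierarchy described above can be sketched with a few illustrative data structures. The names and sizes here are invented for illustration and are not taken from the disclosure; real MU/SMU geometry is device-specific.

```python
from dataclasses import dataclass, field
from typing import List

MUS_PER_SMU = 4        # illustrative only
SMUS_PER_COMPONENT = 8  # illustrative only

@dataclass
class ManagementUnit:
    """A management unit (MU): pages grouped across dies and channels,
    holding user data and corresponding metadata."""
    user_data: bytes = b""
    metadata: dict = field(default_factory=dict)

@dataclass
class SuperManagementUnit:
    """A super management unit (SMU): a group of MUs managed together."""
    mus: List[ManagementUnit] = field(
        default_factory=lambda: [ManagementUnit() for _ in range(MUS_PER_SMU)])

@dataclass
class MemoryComponent:
    """A memory component containing several SMUs (management groups)."""
    smus: List[SuperManagementUnit] = field(
        default_factory=lambda: [SuperManagementUnit()
                                 for _ in range(SMUS_PER_COMPONENT)])

component = MemoryComponent()
assert len(component.smus) == SMUS_PER_COMPONENT
assert len(component.smus[0].mus) == MUS_PER_SMU
```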
Whenever the memory subsystem controller directs a write operation to one or more MUs within a given SMU, the controller increments that SMU's DWC. When the DWC reaches a threshold, referred to herein as the SMU-move threshold, the memory subsystem controller resets the SMU's DWC, increments the SMU's LWC, and performs an SMU-move operation: the controller selects an available SMU elsewhere in the memory subsystem (referred to here as the second SMU) and moves the user data currently stored in the original SMU to the second SMU. In connection with the SMU-move operation, the memory subsystem controller also reallocates to the second SMU the logical addresses (e.g., LBAs) that had previously been allocated to the original SMU.

The memory subsystem controller can then track subsequent write operations to the second SMU by incrementing its DWC, which begins a cycle in the reset state (e.g., equal to zero), consistent with the fact that an SMU's DWC counts writes since the more recent of (i) initialization (or re-initialization) of the memory component and (ii) the most recent SMU-move operation for that SMU. When the second SMU's DWC reaches the SMU-move threshold, the memory subsystem controller resets that DWC, increments the second SMU's LWC, and performs an SMU-move operation from the second SMU to yet another SMU. This approach belongs to the category of operations known as wear leveling in memory subsystems managed by memory subsystem controllers.

Once the LWC of a given SMU reaches an LWC threshold, the memory subsystem controller can take one or more responsive actions, such as discontinuing use of that particular SMU, discontinuing use of the entire memory component, disabling the entire memory subsystem, alerting the host system, and/or one or more other responsive operations.

Each SMU-move operation consumes processing power and time on both the memory subsystem controller and the affected memory component; all else being equal, therefore, fewer such operations is better. In current implementations, however, a write operation to any part of a given SMU is treated as a write operation to the entire SMU: the write causes the SMU's DWC to be incremented, hastening the next SMU-move operation and thereby increasing the number of SMU-move operations that occur, on average, in any given period of time. From the explanation of the DWC and LWC above, it can be appreciated that this shortens the life of the SMU and of the memory component, and in some cases of the entire memory subsystem. This is so despite the fact that write operations spread evenly across different portions of a given SMU largely do not cause repeated wear on the same physical storage elements, even though current memory-subsystem implementations treat them as if they did. Moreover, different types of memory components use different SMU sizes, and the problem is more acute in memory components that use larger SMUs.

Aspects of the present disclosure address the above and other deficiencies through wear leveling based on sub-group write counts in the memory subsystem. In accordance with at least one embodiment, the memory subsystem controller includes a media management component that maintains DWCs at a granularity finer than the SMU level.
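The SMU-level counting scheme just described can be sketched as follows. The threshold values are invented for illustration (real thresholds are far larger), and the SMU-move operation itself is reduced to a counter.

```python
SMU_MOVE_THRESHOLD = 4   # illustrative; not a value from the disclosure
LWC_THRESHOLD = 3        # illustrative end-of-life limit

class SmuCounters:
    """Per-SMU wear-leveling counters: one DWC and one LWC per SMU."""

    def __init__(self, num_smus: int) -> None:
        self.dwc = [0] * num_smus   # delta write counts (reset on each move)
        self.lwc = [0] * num_smus   # lifetime write counts
        self.moves = 0              # SMU-move operations triggered so far

    def record_write(self, smu: int) -> bool:
        """A write to any MU of `smu` increments that SMU's DWC.

        Returns True when the SMU's LWC has reached the LWC threshold,
        signalling that a responsive action should be taken."""
        self.dwc[smu] += 1
        if self.dwc[smu] == SMU_MOVE_THRESHOLD:
            self.dwc[smu] = 0    # reset the DWC,
            self.lwc[smu] += 1   # increment the LWC,
            self.moves += 1      # and perform an SMU-move operation
        return self.lwc[smu] >= LWC_THRESHOLD

ctr = SmuCounters(num_smus=2)
for _ in range(4):
    ctr.record_write(0)      # four writes anywhere within SMU 0
assert ctr.moves == 1        # already enough to trigger one SMU move
assert ctr.lwc[0] == 1
```

Note that the four writes above count against the whole SMU regardless of which of its MUs they touch; this is precisely the behavior the disclosure identifies as wasteful when writes are actually spread across the SMU.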
In at least one embodiment, the media management component treats each SMU in the memory component as being divided into a number of what are referred to herein as sub-groups, each of which is a defined subset of the physical storage elements of the SMU. For each SMU, the media management component maintains a sub-group-level DWC for each of the sub-groups in the SMU. The media management component still maintains the LWC at the SMU level, but in at least one embodiment it does so in a manner, described below, that at least partially offsets the additional space in the SMU management table occupied by the sub-group-level DWCs (additional, that is, as compared with current implementations).

In at least one embodiment of the present disclosure, a write operation to a location within a given sub-group of a given SMU triggers the media management component to increment the associated sub-group-level DWC. Furthermore, whenever any one of an SMU's sub-group-level DWCs reaches the SMU-move threshold, that event triggers the media management component to reset each of that SMU's sub-group-level DWCs, increment the SMU's LWC, and perform an SMU-move operation from that SMU to a different SMU. In some embodiments, in addition to maintaining a sub-group-level DWC for each sub-group in each SMU, the media management component also maintains an SMU-level DWC for each SMU. Each SMU-level DWC can be treated as an independent value, but it can also be viewed as supplying the most significant bits (MSBs) of a combined DWC, with the associated sub-group-level DWCs providing the least significant bits (LSBs).

With respect to the LWC, in at least one embodiment of the present disclosure, the media management component maintains, for each memory component, a value referred to herein as the LWC base.
The media management component maintains these component-level values in registers referred to herein as LWC-base registers, which reside in a storage area outside the SMU management table that is referred to herein as the LWC-base register file. The media management component also maintains, in the SMU management table, a value referred to herein as the LWC offset for each SMU of each memory component. At any given time, in at least one embodiment of the present disclosure, the LWC of an SMU of a memory component is represented by the concatenation of the memory component's LWC base and the SMU's LWC offset: the LWC base is the MSB portion of the LWC, and the LWC offset is the LSB portion. Some example operations performed by the media management component in connection with these values are described below.

A benefit of embodiments of the present disclosure is that SMU-move operations are triggered less frequently than in current implementations, because, compared with current memory-subsystem approaches, a larger number of write operations is needed to reach the SMU-move threshold. As a result, less processing power and time are demanded of both the media management component (and, more generally, the memory subsystem controller) and the individual memory components.

Only in a rare corner case does an embodiment of the present disclosure trigger an SMU-move operation after the same number of write operations as a current implementation would: namely, when every single write operation directed to the SMU, from one SMU-move operation until the next is triggered, targets the same one of the SMU's sub-groups.
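A minimal sketch of the sub-group-level scheme, together with the base/offset representation of the LWC, might look as follows; all names, sizes, and thresholds are invented for illustration and are not taken from the disclosure.

```python
SMU_MOVE_THRESHOLD = 4   # illustrative
SUBGROUPS_PER_SMU = 4    # illustrative

class SubGroupCounters:
    """Sub-group-level DWCs, plus an LWC kept as a shared base + per-SMU offset."""

    def __init__(self, num_smus: int) -> None:
        self.sub_dwc = [[0] * SUBGROUPS_PER_SMU for _ in range(num_smus)]
        self.lwc_base = 0                 # shared MSB portion (register file)
        self.lwc_offset = [0] * num_smus  # per-SMU LSB portion (management table)
        self.moves = 0

    def lwc(self, smu: int) -> int:
        """An SMU's LWC is the concatenation of the base and its offset."""
        return self.lwc_base + self.lwc_offset[smu]

    def record_write(self, smu: int, subgroup: int) -> None:
        self.sub_dwc[smu][subgroup] += 1
        if self.sub_dwc[smu][subgroup] == SMU_MOVE_THRESHOLD:
            # Rollover: reset every sub-group DWC of this SMU, increment
            # its LWC, and trigger a move to a different SMU.
            self.sub_dwc[smu] = [0] * SUBGROUPS_PER_SMU
            self.lwc_offset[smu] += 1
            self.moves += 1

ctr = SubGroupCounters(num_smus=1)
for sg in range(SUBGROUPS_PER_SMU):
    ctr.record_write(0, sg)   # four writes spread over four sub-groups
assert ctr.moves == 0         # no single sub-group reached the threshold
for _ in range(4):
    ctr.record_write(0, 0)    # four writes concentrated on one sub-group
assert ctr.moves == 1 and ctr.lwc(0) == 1
```

With the same threshold, the four spread-out writes that trigger no move here would already have triggered an SMU move under the SMU-level scheme; this is the reduction in move frequency that the disclosure describes.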
In all other cases, SMU-move operations are triggered less frequently in connection with embodiments of the present disclosure than with current implementations. This comparison of course assumes that the SMU-move threshold applied to the sub-groups in embodiments of the present disclosure is the same as the SMU-move threshold applied to whole SMUs in current implementations. Other benefits of embodiments of the present disclosure will be apparent to those of skill in the art.

FIG. 1 shows an example computing environment 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such media.

The memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small-outline DIMMs (SO-DIMMs), and non-volatile dual in-line memory modules (NVDIMMs).

The computing environment 100 can include a host system 120 that is coupled to a memory system. The memory system can include one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows one example of a host system 120 coupled to one memory subsystem 110. The host system 120 uses the memory subsystem 110, for example, to write user data to the memory subsystem 110 and to read user data from the memory subsystem 110.
As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections that are electrical, optical, magnetic, and the like.

The host system 120 can be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any computing device that includes a memory and a processing device. The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of physical host interfaces include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, and serial attached SCSI (SAS). The physical host interface can be used to transmit data (e.g., user data) between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 over a PCIe interface, the host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components (e.g., the memory device 130). The physical host interface can provide an interface for passing control, address, user-data, and other signals between the memory subsystem 110 and the host system 120.

The memory devices can include any combination of different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

An example of a non-volatile memory device (e.g., memory device 130) is three-dimensional cross-point ("3D cross-point") memory, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable, cross-gridded data-access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.

Although non-volatile memory components such as 3D cross-point memory are described, the memory device 130 can be based on any other type of non-volatile memory, such as NAND flash memory, read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), NOR flash memory, and electrically erasable programmable read-only memory (EEPROM).

In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells, such as single-level cells (SLCs), multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), or a combination of such cells. In some embodiments, a particular memory component can include an SLC portion as well as an MLC portion, a TLC portion, or a QLC portion of memory cells. Each of the memory cells can store one or more bits of data used by the host system 120. Furthermore, the memory cells of the memory devices 130 can be grouped into memory pages or memory blocks, which can refer to units of the memory component used to store data. Memory pages can be grouped across dies and channels to form management units (MUs).
An MU can contain user data and corresponding metadata, and the memory subsystem controller can send user data and corresponding metadata to, and receive them from, the memory device in units of MUs. A super management unit (SMU) is a group of one or more MUs that are managed together. For example, the memory subsystem controller can perform media management operations (e.g., wear-leveling operations, refresh operations, and the like) on SMUs.

The memory subsystem controller 115 can communicate with the memory device 130 to perform operations such as reading data, writing data, or erasing data at the memory device 130, and other such operations. The memory subsystem controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory subsystem controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

The memory subsystem controller 115 includes a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communication between the memory subsystem 110 and the host system 120.

In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, and the like. The local memory 119 can also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 is shown as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a memory subsystem controller 115, and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem 110).

In general, the memory subsystem controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130. The memory subsystem controller 115 can be responsible for other operations such as wear-leveling operations, garbage-collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translation between a logical address (e.g., a logical block address (LBA)) and a physical address associated with the memory device 130. The memory subsystem controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert commands received from the host system 120 into command instructions to access the memory device 130, and can convert responses associated with the memory device 130 into information for the host system 120.

The memory subsystem controller 115 includes a media management component 121 that performs media management operations to manage the memory device 130. In at least one embodiment, operations such as wear-leveling operations are performed by the media management component 121. As depicted in FIG. 1, in at least one embodiment, the media management component 121 includes a sub-group-based wear leveler 113, which may take the form of (or include) circuitry, dedicated logic, programmable logic, firmware, software, or the like suitable for performing at least the sub-group-based wear-leveler functions described herein. In some embodiments, the memory subsystem controller 115 includes at least a portion of the sub-group-based wear leveler 113; for example, the processor 117 may be configured to execute instructions stored in the local memory 119 for performing the operations of the sub-group-based wear leveler described herein. In some embodiments, the sub-group-based wear leveler 113 is part of the host system 120, an application, or an operating system.

The memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory subsystem controller 115 and decode the address to access the memory device 130.

In some embodiments, the memory device 130 includes a local media controller 135 that operates in conjunction with the memory subsystem controller 115 to execute operations on one or more memory cells of the memory device 130. An external controller (e.g., the memory subsystem controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, the memory device 130 may be a managed memory device, which is a raw memory device combined with a local media controller 135 that performs memory management operations for the memory device 130 within the same memory device package.

In the depicted embodiment, the local memory 119 contains the SMU management table 160 and the LWC base register file 170, both of which are described herein as being used by the sub-group-based wear leveler 113 in performing operations in accordance with the present disclosure.
In at least one embodiment, the sub-group-based wear leveler 113 is configured to maintain records of sub-group-level DWCs, SMU-level DWCs, and LWC offsets in the SMU management table 160, and is further configured to maintain an LWC base in a register in the LWC base register file 170. Furthermore, according to some embodiments of the present disclosure, the sub-group-based wear leveler 113 is configured to instruct the memory device 130 to perform an SMU move operation of data from an SMU whose SMU-level DWC has reached an SMU-move threshold to another, available SMU. Various functions that are performed in different combinations in different embodiments by the sub-group-based wear leveler 113 are described throughout this disclosure.

Moreover, it should be noted that the arrangement depicted in FIG. 1, in which the SMU management table 160 and the LWC base register file 170 are contained in the local memory 119, is exemplary and not restrictive. In some embodiments, one or both of the SMU management table 160 and the LWC base register file 170 may be contained in the media management component 121 or in another data store in the memory subsystem controller 115. Alternatively, one or both of the SMU management table 160 and the LWC base register file 170 may be located in the memory device 130 or in the memory device 140. Furthermore, one or both may be stored on the host system 120.

FIG. 2 is a block diagram of an example structure 200 of the memory device 130. Each of the memory devices 130 may have an internal structure similar to the structure that is depicted and described by way of example in connection with FIG. 2. The memory devices 140 may also have a similar internal structure. Moreover, the memory subsystem 110 may include any suitable number of memory devices 130 and memory devices 140.

As mentioned, in at least one embodiment, the memory device 130 includes a local media controller 135.
As further depicted in FIG. 2, in at least one embodiment, the memory device 130 also includes read/write storage 202 and read-only storage 204. This structure for the memory device 130 is presented by way of example and not limitation, as different structures may be used in different embodiments. In at least one embodiment, the read/write storage 202 includes non-volatile data-storage elements to which the local media controller 135 can write data and from which the local media controller 135 can read data. As shown generally in FIG. 2 as the sub-group-delineated SMUs 203, the read/write storage 202 may include multiple SMUs, each of which is divided into multiple sub-groups. In one example, the memory device 130 is a NAND-type flash memory component. In another example, the memory device 130 is a 3D cross-point memory device. In at least one embodiment, the read-only storage 204 includes non-volatile data-storage elements from which the local media controller 135 can read data. The read-only storage 204 may contain data that was programmed at the time of manufacture of the memory device 130.

FIG. 3 depicts an example architecture 300 of the sub-group-delineated SMUs 203 of the read/write storage 202. In particular, the sub-group-delineated SMUs 203 are depicted as including four SMUs: SMU 310, SMU 320, SMU 330, and SMU 340. Each of the SMUs 310 to 340 is shown as having four sub-groups: SMU 310 has sub-groups 311 to 314, SMU 320 has sub-groups 321 to 324, SMU 330 has sub-groups 331 to 334, and SMU 340 has sub-groups 341 to 344. It is by way of example and not limitation that the sub-group-delineated SMUs 203 are depicted as including four SMUs 310 to 340, and that each of those SMUs 310 to 340 is depicted as having four sub-groups 311 to 314, 321 to 324, 331 to 334, and 341 to 344, respectively.
In any given context, the sub-group-delineated SMUs 203 can contain any number of SMUs, and each SMU can contain any number of sub-groups. Moreover, in some cases, different SMUs may have different numbers of sub-groups. Furthermore, for a given embodiment, the SMUs and the sub-groups can be of any sizes deemed suitable by those of skill in the art.

As described above, the sub-group-based wear leveler 113 can use the SMU management table 160 to maintain records of the SMU-level DWCs of the SMUs 310 to 340 and of the sub-group-level DWCs of their sub-groups 311 to 314, 321 to 324, 331 to 334, and 341 to 344, as well as records of the LWC offsets of the various SMUs 310 to 340. As shown in FIG. 4, in an example architecture 400 of the SMU management table 160, the SMU management table 160 may include a number of different sections, such as a DWC section 402 and an LWC section 404. Example ways in which the DWC section 402 and the LWC section 404 can be organized are discussed herein.

One example way in which the DWC section 402 can be organized is shown in FIG. 5, which depicts an example architecture 500 of the DWC section 402. As indicated in the header row, the DWC section 402 can be organized as a table with six columns: a first column that identifies the SMU 310 to 340 to which a given row pertains; a second column that contains the SMU-level DWC 510 to 540 of the particular SMU 310 to 340; and third through sixth columns that contain the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544 of the sub-groups 311 to 314, 321 to 324, 331 to 334, and 341 to 344 of the particular SMU.
Moreover, it should be noted that the first column is displayed primarily as an aid to the reader; that is, to reduce the amount of storage occupied by the SMU management table 160, the DWC section 402 need not actually include the first column, as the sub-group-based wear leveler 113 can be programmed to know which row corresponds to which SMU 310 to 340.

The first row of the DWC section 402 pertains to the SMU 310 and contains the SMU-level DWC 510, the sub-group-level DWC 511, the sub-group-level DWC 512, the sub-group-level DWC 513, and the sub-group-level DWC 514. The second row pertains to the SMU 320 and contains the SMU-level DWC 520 and the sub-group-level DWCs 521 to 524. The third row pertains to the SMU 330 and contains the SMU-level DWC 530 and the sub-group-level DWCs 531 to 534. Finally, the fourth row pertains to the SMU 340 and contains the SMU-level DWC 540 and the sub-group-level DWCs 541 to 544. The sub-group-level DWC 511 corresponds to the sub-group 311 of the SMU 310, the sub-group-level DWC 512 corresponds to the sub-group 312, and so on.

Any number of bits may be reserved for each of the SMU-level DWCs 510 to 540 and for each of the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544. It is helpful to consider a hypothetical example in which, in a prior implementation, an SMU-level DWC contains 17 bits, referred to as bits [16:0] when ordered from MSB to LSB, left to right.
In that type of example, each SMU-level DWC can have, at any given time, any value in the range 0 to 131,071 (decimal), inclusive, and every 131,072 write operations to a given SMU result in the LWC of that SMU being incremented and in an SMU move operation being performed to move the data and the logical-address assignment from that SMU to another SMU.

In at least one embodiment of the present disclosure, the same DWC range (0 to 131,071) is realized by allocating 13 bits to each SMU-level DWC 510 to 540, corresponding to the 13 MSBs, and by further allocating 4 bits to each sub-group-level DWC 511 to 514, 521 to 524, 531 to 534, and 541 to 544, corresponding to the 4 LSBs. In such an embodiment, each of the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544 can take on a value in the range 0 to 15 (decimal), inclusive. In this example, as compared with the hypothetical prior implementation, an additional 12 bits of DWC-related storage are occupied in the SMU management table 160 for each SMU of the memory subsystem 110 (i.e., 13 + 4 × 4 = 29 bits rather than 17 bits per SMU).

The LWC section 404 can also be organized in many different ways, one example of which is shown in the example architecture 600 of the LWC section 404 depicted in FIG. 6. As indicated in the header row, the LWC section 404 can be organized as a table with two columns: a first column that identifies the SMU 310 to 340 to which a given row pertains, and a second column that lists the LWC offset 610 to 640 of the particular SMU 310 to 340.
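By way of illustration, the 13-bit/4-bit DWC allocation described above can be sketched in Python as a simple bit concatenation; the names below are illustrative and are not part of the disclosure:

```python
SMU_DWC_BITS = 13       # MSBs, shared by all sub-groups of an SMU
SUBGROUP_DWC_BITS = 4   # LSBs, kept separately for each sub-group

SMU_DWC_MAX = (1 << SMU_DWC_BITS) - 1             # 8,191
SUBGROUP_DWC_MAX = (1 << SUBGROUP_DWC_BITS) - 1   # 15

def effective_dwc(smu_level_dwc: int, subgroup_level_dwc: int) -> int:
    """Concatenate the shared MSBs and the per-sub-group LSBs into one count."""
    return (smu_level_dwc << SUBGROUP_DWC_BITS) | subgroup_level_dwc

# The same 0-to-131,071 range as the hypothetical 17-bit SMU-level DWC:
SMU_MOVE_THRESHOLD = effective_dwc(SMU_DWC_MAX, SUBGROUP_DWC_MAX)  # 131,071
```

Only the 4 LSBs are stored per sub-group, which is why the per-SMU storage cost grows by 12 bits (four 4-bit counters plus one 13-bit counter, versus one 17-bit counter).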
As was the case with FIG. 5, the first column in FIG. 6 is displayed primarily as an aid to the reader; to reduce the amount of storage occupied by the SMU management table 160, the LWC section 404 need not contain the first column, and the sub-group-based wear leveler 113 can be programmed to know which row (and therefore which LWC offset 610 to 640) is associated with which SMU 310 to 340.

Any suitable number of bits may be reserved in the LWC section 404 for each of the LWC offsets 610 to 640. In an embodiment, 3 bits are reserved for each of the LWC offsets 610 to 640, and each of the LWC offsets 610 to 640 can therefore take on a value in the range 0 to 7 (decimal), inclusive. As discussed herein, with respect to LWC-related data, some embodiments of the present disclosure involve reserving in the SMU management table 160 only a certain number of the LSBs of the LWC of each SMU (in the form of the LWC offsets 610 to 640), and storing, outside of the SMU management table 160 in the LWC base register file 170, a single value that represents the MSBs of the LWCs of all of the SMUs of a given memory device 130. Thus, as compared with prior implementations, some embodiments of the present disclosure involve more DWC-related storage, but less LWC-related storage, in the SMU management table 160.

The example methods 700, 800, and 900 are described below in connection with FIGS. 7, 8, and 9, respectively. With respect to those figures and the accompanying descriptions, it should be understood that, although shown in a particular sequence, the order of the depicted and described operations may be modified unless otherwise specified. Thus, the illustrated embodiments should be understood only as examples; the illustrated operations may be performed in a different order, and some operations may be performed in parallel. In addition, one or more operations may be omitted in various embodiments.
Thus, not all operations are performed in connection with every implementation, and other process flows are possible. Moreover, the method 700, the method 800, and/or the method 900 may be performed by processing logic that can include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run or executed on a processing device), firmware, or a combination thereof.

FIG. 7 depicts a flowchart of an example method 700 for sub-group-level-DWC-based wear leveling, in accordance with some embodiments of the present disclosure. The method 700 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, the hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 700 is performed by the sub-group-based wear leveler 113 of FIG. 1. Although shown in a particular sequence or order, the order of the processes can be modified unless otherwise specified. Thus, the illustrated embodiments should be understood only as examples; the illustrated processes can be performed in a different order, and some processes can be performed in parallel. In addition, one or more operations can be omitted in various embodiments. Thus, not all operations are required in every embodiment, and other process flows are possible.

At operation 702, the processing device maintains a sub-group-level DWC 511 to 514, 521 to 524, 531 to 534, and 541 to 544 for each of the sub-groups 311 to 314, 321 to 324, 331 to 334, and 341 to 344 of the management groups (e.g., the SMUs 310 to 340) of the memory device 130.
At operation 704, the processing device determines, in connection with a write operation to the sub-group 323 of a management group (e.g., the SMU 320) of the memory device 130, that the sub-group-level DWC 523 is equal to a management-group-move threshold, also referred to herein as the SMU-move threshold. In response to making that determination, at operation 706, the processing device performs a set of one or more operations that are referred to herein as management-group-rollover (or SMU-rollover) operations. The set of one or more SMU-rollover operations includes triggering a management-group move operation (e.g., an SMU move operation) from the SMU 320 to a second management group (e.g., the SMU 340) of the memory device 130. In this example, the SMU 340 is unused and is therefore available to serve as the target SMU of the mentioned SMU move operation.

It should be noted that the phrase "in connection with a write operation" is used in the preceding paragraph, and that similar variants (e.g., "in connection with a first write operation," "in connection with a second write operation," "in connection with a third write operation," and so on) are used in various places in this disclosure. In these instances, the processing device is described as performing one or more operations "in connection with" a given write operation. That language is intended to broadly cover associated operations that the processing device performs before, during, and/or after the mentioned write operation, and is used primarily to help the reader avoid confusing different examples with one another.

Moreover, "a set of one or more SMU-rollover operations" refers to the set of one or more operations that the processing device performs, in various embodiments of the present disclosure, upon determining that an SMU move operation is to be performed from one SMU (e.g., the SMU 320) of a memory unit (e.g., the memory device 130) to another SMU (e.g., the SMU 340).
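A minimal sketch of the per-write check at operation 704 and the rollover triggering at operation 706, assuming the 131,071 threshold from the running example, is given below; the class and function names are illustrative, not from the disclosure:

```python
SMU_MOVE_THRESHOLD = 131_071  # from the 17-bit DWC example described above

class Smu:
    """Illustrative management group with one DWC per sub-group (operation 702)."""
    def __init__(self, num_subgroups: int = 4):
        self.subgroup_dwcs = [0] * num_subgroups
        self.lwc_offset = 0

def on_write(smu: Smu, subgroup_index: int) -> bool:
    """Record one write; returns True if SMU-rollover operations were performed."""
    smu.subgroup_dwcs[subgroup_index] += 1
    # Operation 704: any single sub-group-level DWC reaching the threshold
    # triggers the rollover, regardless of the other sub-groups' counts.
    if smu.subgroup_dwcs[subgroup_index] == SMU_MOVE_THRESHOLD:
        # Operation 706: trigger the SMU move to an available SMU (not modeled
        # here), reset the sub-group-level DWCs, and increment the LWC offset.
        smu.subgroup_dwcs = [0] * len(smu.subgroup_dwcs)
        smu.lwc_offset += 1
        return True
    return False
```

This sketch models the variant that maintains independent sub-group-level DWCs without SMU-level DWCs; the split-counter variant is the subject of FIG. 8.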
Thus, operation 706 lists the triggering of the SMU move operation from the SMU 320 to the SMU 340 as a first described example of an SMU-rollover operation in the set of one or more SMU-rollover operations. Continuing with the present example for purposes of illustration, the set of one or more SMU-rollover operations may also include resetting all of the sub-group-level DWCs 521 to 524 of the sub-groups 321 to 324 of the SMU 320. Another example, in embodiments that include SMU-level DWCs, is resetting the SMU-level DWC 520 of the SMU 320. Yet another example is incrementing the LWC offset 620 associated with the SMU 320 in the LWC section 404.

In embodiments of the present disclosure, an SMU move operation from a given SMU to a target SMU is triggered when any one of the sub-group-level DWCs of the sub-groups of the given SMU reaches the SMU-move threshold. Thus, in this example, it is possible that, when the sub-group-level DWC 523 reaches the SMU-move threshold, the sub-group-level DWC 521, the sub-group-level DWC 522, and the sub-group-level DWC 524 have various (possibly equal, possibly different) values that are greater than or equal to 0 and less than the SMU-move threshold.

Moreover, the description of a sub-group-level DWC reaching the SMU-move threshold is consistent with implementations that maintain independent sub-group-level DWCs but do not maintain SMU-level DWCs. As described below in connection with at least FIG. 8, the description of a sub-group-level DWC reaching the SMU-move threshold is also consistent with implementations that maintain both sub-group-level DWCs and SMU-level DWCs.

FIG. 8 depicts a flowchart of an example method 800 for sub-group-level-DWC-based wear leveling, in accordance with some embodiments of the present disclosure.
The method 800 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, the hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 800 is performed by the sub-group-based wear leveler 113 of FIG. 1. Although shown in a particular sequence or order, the order of the processes can be modified unless otherwise specified. Thus, the illustrated embodiments should be understood only as examples; the illustrated processes can be performed in a different order, and some processes can be performed in parallel. In addition, one or more operations can be omitted in various embodiments. Thus, not all operations are required in every embodiment, and other process flows are possible.

The method 800, depicted and described below in connection with FIG. 8, is presented in the context of the latter type of embodiment, and divides "a sub-group-level DWC reaching the SMU-move threshold" into two parts: (i) the sub-group-level DWC reaching the maximum permitted sub-group-level-DWC value, and (ii) the associated SMU-level DWC reaching the maximum permitted SMU-level-DWC value.

In at least one embodiment of the present disclosure, the method 800 is performed by the processing device upon determining that a write operation (referred to herein as a "first" write operation) to the sub-group 323 of the SMU 320 is to be performed. It should be noted that this write operation is referred to as a "first" write operation, and that other write operations described below are referred to as "second," "third," and so on, without regard to the time sequence in which those write operations are performed.
These labels are simply aids to help the reader distinguish among different examples involving different write operations.

In this example, the processing device performs the first write operation on the sub-group 323 of the SMU 320 (see operation 802). Next, at operation 804, the processing device evaluates whether the sub-group-level DWC 523 of the sub-group 323 is equal to the maximum permitted sub-group-level-DWC value (referred to herein as the sub-group-level DWC max). In this example, because 4 bits are allocated to each of the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544, the sub-group-level DWC max is 15. It should be noted that, at this point in the example method 800, the sub-group-level DWC 523 has not yet been incremented.

If the processing device determines at operation 804 that the sub-group-level DWC 523 is not equal to the sub-group-level DWC max, control proceeds to operation 806, at which the processing device increments the sub-group-level DWC 523. Control then proceeds to operation 808, at which the method 800 returns (to its calling function, process, or the like).

If, however, the processing device determines at operation 804 that the sub-group-level DWC 523 is equal to the sub-group-level DWC max, then, at operation 810, the processing device resets each of the sub-group-level DWCs 521 to 524 of the SMU 320. Control then proceeds to operation 812, at which the processing device evaluates whether the SMU-level DWC 520 is equal to the maximum permitted SMU-level-DWC value (referred to herein as the SMU-level DWC max). In this example, because 13 bits are allocated to each of the SMU-level DWCs 510 to 540, the SMU-level DWC max is 8,191.
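The branching just described (operations 802 to 814), together with the rollover operations 816 to 820 that are described next, might be sketched as follows; the dictionary layout and names are illustrative only:

```python
SUBGROUP_DWC_MAX = 15  # 4-bit sub-group-level DWCs
SMU_DWC_MAX = 8_191    # 13-bit SMU-level DWCs

def record_write(smu: dict, subgroup_index: int) -> bool:
    """Update the DWCs for one write; returns True when a set of SMU-rollover
    operations (operations 816 to 820) is performed."""
    if smu["subgroup_dwcs"][subgroup_index] != SUBGROUP_DWC_MAX:  # operation 804
        smu["subgroup_dwcs"][subgroup_index] += 1                 # operation 806
        return False
    # Operation 810: reset every sub-group-level DWC of this SMU.
    smu["subgroup_dwcs"] = [0] * len(smu["subgroup_dwcs"])
    if smu["smu_dwc"] != SMU_DWC_MAX:                             # operation 812
        smu["smu_dwc"] += 1                                       # operation 814
        return False
    # Operations 816 to 820: reset the SMU-level DWC, increment the LWC
    # offset, and trigger the SMU move operation (not modeled here).
    smu["smu_dwc"] = 0
    smu["lwc_offset"] += 1
    return True
```

Concentrating writes on a single sub-group thus produces a rollover after (SMU_DWC_MAX + 1) × (SUBGROUP_DWC_MAX + 1) = 131,072 writes, the same period as a single 17-bit counter.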
When the SMU-level DWC and the sub-group-level DWC are regarded as the MSBs and the LSBs, respectively, of a combined DWC, it can be seen that the SMU-move threshold in this example is 131,071.

Continuing the example, if the processing device determines at operation 812 that the SMU-level DWC 520 is not equal to (and is therefore less than) the SMU-level DWC max, the processing device increments the SMU-level DWC 520 (see operation 814), and the method 800 then returns at operation 808. If, however, the processing device determines at operation 812 that the SMU-level DWC 520 is equal to the SMU-level DWC max, the processing device performs a set of SMU-rollover operations: the processing device resets the SMU-level DWC 520 (see operation 816), increments the LWC offset 620 (see operation 818), and triggers an SMU move operation from the SMU 320 to the SMU 340, which serves as an example available SMU (see operation 820). The method 800 then returns at operation 808.

Some example operations that are performed in some embodiments of the present disclosure as part of incrementing the LWC offset 620 at operation 818 are described herein in connection with the example method 900 depicted in FIG. 9.

FIG. 9 depicts a flowchart of an example method 900 for sub-group-level-DWC-based wear leveling, in accordance with some embodiments of the present disclosure. The method 900 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, the hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 900 is performed by the sub-group-based wear leveler 113 of FIG. 1. Although shown in a particular sequence or order, the order of the operations can be modified unless otherwise specified.
Thus, the illustrated embodiments should be understood only as examples; the illustrated operations can be performed in a different order, and some operations can be performed in parallel. In addition, one or more operations can be omitted in various embodiments. Thus, not all operations are required in every embodiment, and other operational flows are possible.

In some embodiments, moreover, performance of the method 800 does not involve performance of the method 900 in its entirety, but rather involves performance of one or more, but not all, of the operations of the method 900. Some embodiments of the present disclosure involve performing one of the method 800 and the method 900 without performing the other. As is the case with the method 800, the method 900 is described herein, by way of example and not limitation, as being performed by the memory subsystem controller 115 with respect to the memory device 130. In some cases, the method 900 is performed in particular by the sub-group-based wear leveler 113.

As described above, in embodiments of the present disclosure, the processing device maintains an LWC for each of the SMUs 310 to 340 of the memory device 130. In at least some embodiments, this involves maintaining a single shared LWC base, which represents the MSBs of the LWCs of all of the SMUs 310 to 340 of the memory device 130, as well as individual SMU-specific LWC offsets 610 to 640, which represent the LSBs of the LWCs of the various SMUs 310 to 340 of the memory device 130. Embodiments of that type provide the context for this description of the method 900 of FIG. 9.

The method 900 includes operations that, in at least one embodiment, are performed by the processing device as part of performing operation 818 of the method 800 of FIG. 8, which involves the processing device incrementing the LWC offset 620 and, more generally, involves the processing device incrementing the LWC that the processing device maintains for the SMU 320.
In at least one embodiment of the present disclosure, the MSBs of that LWC are stored as the shared LWC base in a register in the LWC base register file 170, outside of the SMU management table 160, and the LSBs of that LWC are stored as the LWC offset 620, among the LWC offsets 610 to 640, in the LWC section 404 of the SMU management table 160.

Moreover, in this example, 3 bits are reserved for each of the LWC offsets 610 to 640, and each of the LWC offsets 610 to 640 can accordingly take on a value in the range 0 to 7 (decimal), inclusive. Correspondingly, the maximum permitted value of an LWC offset (referred to herein as the LWC offset max) is 7. The processing device begins performing the method 900 at operation 902, at which the processing device evaluates whether the LWC offset 620 is less than the quantity (LWC offset max − 1). If so, the processing device simply increments the LWC offset 620 (see operation 904), and the method 900 then returns to the method 800 (see operation 906).

If, however, the processing device determines at operation 902 that the LWC offset 620 is not less than the quantity (LWC offset max − 1), the processing device proceeds, at operation 908, to evaluate whether the LWC offset 620 is instead equal to that same quantity. It should be noted that the mentioned quantity (LWC offset max − 1) is an example expression of a parameter referred to herein as the LWC-offset check threshold, which is used in some embodiments to verify that write operations are being spread among the SMUs 310 to 340 in a manner that, in general, keeps all of the SMUs 310 to 340 within a certain lifetime range of one another.
Moreover, in some embodiments the LWC-offset check threshold is not used, while in other embodiments the LWC-offset check threshold is equal to a value other than (LWC offset max − 1), as that value is used herein by way of example and not limitation.

Returning to the operations of the method 900: if the processing device determines at operation 908 that the LWC offset 620 is equal to the quantity (LWC offset max − 1), control proceeds to operation 910, at which the processing device checks for an error condition related to the above-noted consideration of whether the lifetimes of the SMUs 310 to 340 are within a tolerance of one another. In particular, as an example, at operation 910, the processing device evaluates whether any of the LWC offsets other than the LWC offset 620 (in this case, the LWC offset 610, the LWC offset 630, and the LWC offset 640) has a current value that is less than the difference between (i) a value referred to herein as the LWC-offset range size and (ii) a value referred to herein as the LWC-base increment size.

As mentioned, in this example, any of the LWC offsets 610 to 640 can have, at any given time, any value in the range 0 to 7 (decimal), inclusive; accordingly, because there are 8 different values in that range, the LWC-offset range size in this example is 8. Moreover, in this example, the LWC-base increment size is 4. In this case, the LWC-base increment size is calculated as half of the LWC-offset range size, although this is not a requirement.
The LWC base increment size can take other values smaller than the LWC offset range size. If the processing device determines at operation 910 that at least one of the LWC offset 610, the LWC offset 630, and the LWC offset 640 is smaller than the difference, which can be expressed as (LWC offset range size - LWC base increment size), then the error condition is true, and the processing device responsively triggers an LWC offset imbalance event (e.g., an alarm or error message) to be stored in, for example, the firmware of the processing device (see operation 912). In other embodiments, the processing device may also or instead take one or more other responsive actions, possibly up to and including disabling further use of the memory device 130. In the depicted embodiment, after performing operation 912, the processing device increments the LWC offset 620 (see operation 904), and then the method 900 returns to the method 800 (see operation 906). It can be seen from FIG. 9 that if the processing device determines at operation 910 that none of the LWC offset 610, the LWC offset 630, and the LWC offset 640 is less than the difference between the LWC offset range size and the LWC base increment size, the processing device performs those same two operations (904 and 906) without triggering the event. Now returning to operation 908, if the processing device determines that the LWC offset 620 is not equal to the LWC offset check threshold, which in this example is equal to the amount (LWC offset max - 1), control proceeds to operation 914. It should be noted that, overall, the negative determinations at both operation 902 and operation 908 are equivalent to determining that the LWC offset 620 is not less than, and is in fact equal to, the LWC offset max.
In an embodiment that does not involve using the LWC offset check threshold to check the above-mentioned error condition, operations 902 and 908 may be combined into a single operation that evaluates whether the LWC offset 620 is less than the LWC offset max. At operation 914, in response to determining that the LWC offset 620 is equal to the LWC offset max, the processing device reduces all of the LWC offsets 610 to 640 of the SMUs 310 to 340 in the memory device 130 by the aforementioned LWC base increment size. The processing device also increases the shared LWC base stored in the register in the LWC base register file 170 by the LWC base increment size (see operation 916). Through the combination of these two operations 914 and 916, no LWC information is lost, and the current LWC of any given SMU 310 to 340 can be determined at any time by using the shared LWC base as the MSB of the current LWC and using the current LWC offset 610 to 640 as the LSB of the current LWC. It should be noted that this is due to the aforementioned strategy of the processing device acting to keep the various SMUs 310 to 340 within a threshold amount of service life of one another, a condition verified by checking for (the absence of) the aforementioned error condition. After performing operations 914 and 916, method 900 returns to method 800 (see operation 906). Finally, it should be noted that, although not depicted, in some embodiments the method 900 may also involve the processing device assessing whether the LWC of the SMU 320 (or any one of the SMUs 310 to 340) has reached the lifetime LWC max. FIG. 10 is a message flow diagram depicting a message flow 1000 showing some example communications that may occur between the memory subsystem controller 115 and the various functional components of the memory device 130 in some embodiments of the present disclosure.
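A compact sketch of sub-method 900 (operations 902 through 916) might look as follows. Function and variable names are illustrative only, and the constants follow the example values in the text (3-bit offsets, offset range size 8, base increment size 4):

```python
# Hypothetical sketch of sub-method 900 (operations 902-916).  Constants
# follow the example in the text: 3-bit offsets, offset range size 8,
# base increment size 4, check threshold (offset max - 1) = 6.

LWC_OFFSET_MAX = 7
LWC_OFFSET_RANGE_SIZE = 8
LWC_BASE_INCREMENT_SIZE = 4
LWC_OFFSET_CHECK_THRESHOLD = LWC_OFFSET_MAX - 1  # 6

def increment_lwc(offsets, smu, shared_base, events):
    """Record one more lifetime write for SMU `smu`.

    `offsets` is the list of per-SMU LWC offsets (mutated in place),
    `shared_base` is the shared LWC base (returned, possibly increased),
    and `events` collects any LWC offset imbalance events.
    """
    off = offsets[smu]
    if off < LWC_OFFSET_CHECK_THRESHOLD:              # operation 902
        offsets[smu] += 1                             # operation 904
    elif off == LWC_OFFSET_CHECK_THRESHOLD:           # operation 908
        # Operation 910: is any *other* SMU's offset lagging too far?
        lag_limit = LWC_OFFSET_RANGE_SIZE - LWC_BASE_INCREMENT_SIZE
        if any(o < lag_limit for i, o in enumerate(offsets) if i != smu):
            events.append("LWC offset imbalance")     # operation 912
        offsets[smu] += 1                             # operation 904
    else:  # off == LWC offset max: rollover, operations 914 and 916
        for i in range(len(offsets)):
            offsets[i] -= LWC_BASE_INCREMENT_SIZE     # operation 914
        shared_base += LWC_BASE_INCREMENT_SIZE        # operation 916
    return shared_base
```

As in FIG. 9, the rollover branch returns without a further increment, and the sum of the shared base and any offset is preserved across the rollover, so no LWC information is lost.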
The functional components of the memory subsystem controller 115 depicted in the message flow 1000 are the sub-group-based wear leveler 113, the DWC section 402 and the LWC section 404 that are part of the SMU management table 160, and the LWC base register file 170. Furthermore, it should be noted that time is not depicted to scale in the message flow 1000. As shown in the message flow 1000, the sub-group-based wear leveler 113 may transmit read commands to the memory device 130 and receive corresponding read data from the memory device, as indicated at 1002. In addition, the sub-group-based wear leveler 113 may transmit write commands and corresponding data to be written to the memory device 130, as indicated at 1004. The transmission of these write commands and write data at 1004 may correspond to the above-mentioned write operations directed by the memory subsystem controller 115, and in this depiction by the sub-group-based wear leveler 113 in particular, to the memory device 130. Each write command may be directed to a specific subgroup 311 to 314, 321 to 324, 331 to 334, 341 to 344 of a specific SMU 310 to 340 of the memory device 130. Furthermore, as indicated at 1006, in various embodiments, the sub-group-based wear leveler 113 may send a DWC read request to the DWC section 402 to request the values of one or more of the SMU-level DWCs 510 to 540 and/or one or more of the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544 stored in the DWC section 402. The sub-group-based wear leveler 113 may receive the requested DWC values from the DWC section 402, and may further send to the DWC section 402 DWC updates (for example, increment values, increment commands, reset values, reset commands) for one or more of the SMU-level DWCs 510 to 540 and/or the sub-group-level DWCs 511 to 514, 521 to 524, 531 to 534, and 541 to 544. As indicated at 1008, the sub-group-based wear leveler 113 may also send an LWC offset read request to the LWC section 404 to request the values of one or more of the LWC offsets 610 to 640 stored in the LWC section 404. The sub-group-based wear leveler 113 may receive the requested LWC offset values from the LWC section 404, and may further send to the LWC section 404 LWC offset updates (for example, increment values, increment commands) for one or more of the LWC offsets 610 to 640. As indicated at 1010, the sub-group-based wear leveler 113 can engage in similar operations in connection with the shared LWC base: sending read requests to the LWC base register file 170, receiving values from the LWC base register file, and sending updates to the LWC base register file. At any given time, the sub-group-based wear leveler 113 can use the values received from the DWC section 402 to calculate the current SMU-level DWC for any given SMU 310 to 340 and/or the current subgroup-level DWC for any given subgroup 311 to 314, 321 to 324, 331 to 334, 341 to 344, or the current combined DWC as described herein.
Alternatively or in addition, at any given time, the sub-group-based wear leveler 113 may use the values received from the LWC section 404 and the LWC base register file 170 to calculate the current LWC for any given SMU 310 to 340. As also depicted in the message flow 1000, the sub-group-based wear leveler 113 may from time to time transmit SMU move commands to the memory device 130, as indicated at 1012, thereby instructing the memory device 130 to execute an SMU move operation from one SMU (e.g., SMU 320) to another SMU (e.g., SMU 340), which may be part of an SMU rollover operation performed in response to determining that the subgroup-level DWC (for example, the subgroup-level DWC 523) of a subgroup (for example, the subgroup 323) has reached the SMU move threshold. Finally, also possibly as part of an SMU rollover operation, and as indicated at 1014, the sub-group-based wear leveler 113 may instruct the memory device 130 to erase certain data, such as the data in the SMU 320, possibly after the SMU move operation to the SMU 340 has been performed, so that the SMU 320 can serve as the target SMU for future SMU move operations. FIG. 11 shows an example machine of a computer system 1100 within which a set of instructions for causing the machine to perform any one or more of the methods discussed herein can be executed. In some embodiments, the computer system 1100 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 1), or it can be used to perform the operations of a controller (for example, to execute an operating system to perform operations corresponding to the sub-group-based wear leveler 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet.
The machine can operate as a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a web appliance, a server, a network router, a switch, a bridge, or any machine capable of executing (sequentially or otherwise) a set of instructions that specify actions to be taken by that machine. Further, although a single machine is illustrated, the term "machine" should also be understood to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein. The example computer system 1100 includes a processing device 1102, a main memory 1104 (e.g., ROM, flash memory, DRAM such as SDRAM or RDRAM, etc.), a static memory 1106 (e.g., flash memory, SRAM, etc.), and a data storage system 1118, which communicate with each other via a bus 1130. The processing device 1102 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), and the like. More specifically, the processing device 1102 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 1102 may also be one or more special-purpose processing devices, such as an ASIC, an FPGA, a digital signal processor (DSP), a network processor, and so on. In at least one embodiment, the processing device 1102 is configured to execute instructions 1126 to perform the operations discussed herein.
The computer system 1100 may further include a network interface device 1108 to communicate over a network 1120. The data storage system 1118 may include a machine-readable storage medium 1124 (also referred to as a computer-readable medium) on which are stored one or more sets of instructions 1126 or software embodying any one or more of the methods or functions described herein. The instructions 1126 may also reside, completely or at least partially, within the main memory 1104 and/or the processing device 1102 during execution of the instructions by the computer system 1100, the main memory 1104 and the processing device 1102 also constituting machine-readable storage media. The machine-readable storage medium 1124, the data storage system 1118, and/or the main memory 1104 may correspond to the memory subsystem 110 of FIG. 1. In an embodiment, the instructions 1126 include instructions for implementing functionality corresponding to a sub-group-based wear leveler (e.g., the sub-group-based wear leveler 113 of FIG. 1). Although the machine-readable storage medium 1124 is shown as a single medium in the example embodiment, the term "machine-readable storage medium" should be considered to include a single medium or multiple media storing the one or more sets of instructions. The term "machine-readable storage medium" should also be considered to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure. Accordingly, the term "machine-readable storage medium" should be regarded as including, but not limited to, solid-state memories, optical media, and magnetic media. Some portions of the foregoing detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory.
These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and/or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, data, values, elements, symbols, characters, terms, numbers, or the like. However, it should be borne in mind that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. The present disclosure may refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's registers and memories or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations described herein. Such an apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the present disclosure. In addition, the present disclosure is not described with reference to any particular programming language. It should be understood that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, which may include a machine-readable medium having instructions stored thereon, the instructions being usable to program a computer system (or other electronic device or devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as ROM, RAM, magnetic disk storage media, optical storage media, flash memory devices, and the like. In the foregoing specification, some example embodiments of the present disclosure have been described. It will be evident that various modifications can be made thereto without departing from the broader scope and spirit of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The following is a non-exhaustive list of example embodiments of the present disclosure.

Example 1 is a system that includes: a plurality of memory components, each memory component including a plurality of management groups, and each management group including a plurality of subgroups; and a processing device, operatively coupled with the plurality of memory components to perform wear-leveling operations, the wear-leveling operations including: maintaining a subgroup-level DWC for each of the subgroups of each of the management groups of a memory component of the plurality of memory components; and determining, in conjunction with a first write operation to a first subgroup of a first management group of the memory component, that a first subgroup-level DWC of the first subgroup is equal to a management-group move threshold, and responsively performing a set of one or more management-group rollover operations, wherein the set of management-group rollover operations includes triggering a management-group move operation from the first management group to a second management group of the memory component.

Example 2 is the subject matter of Example 1, wherein the set of management-group rollover operations further includes resetting the subgroup-level DWC for each of the subgroups of the first management group.

Example 3 is the subject matter of Example 1 or Example 2, wherein the wear-leveling operations further include maintaining a management-group-level DWC for each of the management groups of the memory component, and wherein determining that the first subgroup-level DWC is equal to the management-group move threshold includes determining that: (i) the first subgroup-level DWC is equal to a subgroup-level-DWC maximum; and (ii) a first management-group-level DWC of the first management group is equal to a management-group-level-DWC maximum.

Example 4 is the subject matter of Example 3, wherein the management-group-level DWCs and the subgroup-level DWCs are maintained in a management table stored on the processing device.

Example 5 is the subject matter of Example 3 or Example 4, wherein the set of management-group rollover operations further includes: resetting the first management-group-level DWC; and resetting the subgroup-level DWC for each of the subgroups of the first management group.

Example 6 is the subject matter of any one of Examples 3 to 5, wherein the wear-leveling operations further include: determining, in conjunction with a second write operation to the first subgroup, that the first subgroup-level DWC is less than the subgroup-level-DWC maximum, and responsively incrementing the first subgroup-level DWC.

Example 7 is the subject matter of any one of Examples 3 to 6, wherein the wear-leveling operations further include: determining, in conjunction with a third write operation to the first subgroup, that: (i) the first subgroup-level DWC is equal to the subgroup-level-DWC maximum; and (ii) the first management-group-level DWC is less than the management-group-level-DWC maximum, and responsively: (i) resetting the subgroup-level DWC for each of the subgroups of the first management group; and (ii) incrementing the first management-group-level DWC.

Example 8 is the subject matter of any one of Examples 1 to 7, wherein the wear-leveling operations further include maintaining an LWC for each of the management groups of the memory component.

Example 9 is the subject matter of Example 8, wherein: each LWC has an MSB portion and an LSB portion; a single shared LWC base represents the MSB portions of all of the LWCs of the management groups of the memory component; individual management-group-specific LWC offsets represent the LSB portions of the LWCs of the management groups of the memory component; and the set of management-group rollover operations further includes incrementing a first LWC of the first management group.

Example 10 is the subject matter of Example 9, wherein the LWC offsets are stored in a management table on the processing device, and the shared LWC base is stored outside the management table on the processing device.

Example 11 is the subject matter of Example 9 or Example 10, wherein incrementing the first LWC includes: determining whether a first LWC offset of the first management group is less than or equal to an LWC-offset maximum; if the first LWC offset is less than the LWC-offset maximum, incrementing the first LWC offset; and if the first LWC offset is equal to the LWC-offset maximum, then (i) decreasing the LWC offset of each of the management groups of the memory component by an LWC-base increment size, and (ii) increasing the shared LWC base of the memory component by the LWC-base increment size.

Example 12 is the subject matter of Example 11, wherein incrementing the first LWC further includes: if the first LWC offset is less than the LWC-offset maximum, determining whether the first LWC offset is equal to an LWC-offset check threshold, and if so, determining whether at least one of the LWC offsets of the memory component other than the first LWC offset is smaller than the difference between an LWC-offset range size and the LWC-base increment size, and if so, triggering an LWC-offset imbalance event in firmware of the processing device.

Example 13 is a method comprising: maintaining a subgroup-level DWC for each of a plurality of subgroups of each of a plurality of management groups of a memory component; and determining, in conjunction with a first write operation to a first subgroup of a first management group of the memory component, that a first subgroup-level DWC of the first subgroup is equal to a management-group move threshold, and responsively performing a set of one or more management-group rollover operations, wherein the set of management-group rollover operations includes triggering a management-group move operation from the first management group to a second management group of the memory component.

Example 14 is the subject matter of Example 13, further comprising maintaining a management-group-level DWC for each of the management groups of the memory component, wherein determining that the first subgroup-level DWC is equal to the management-group move threshold includes determining that: (i) the first subgroup-level DWC is equal to a subgroup-level-DWC maximum; and (ii) a first management-group-level DWC of the first management group is equal to a management-group-level-DWC maximum.

Example 15 is the subject matter of Example 13 or Example 14, further comprising determining, in conjunction with a second write operation to the first subgroup, that the first subgroup-level DWC is less than the subgroup-level-DWC maximum, and responsively incrementing the first subgroup-level DWC.

Example 16 is the subject matter of Example 14 or Example 15, further comprising determining, in conjunction with a third write operation to the first subgroup, that: (i) the first subgroup-level DWC is equal to the subgroup-level-DWC maximum; and (ii) the first management-group-level DWC is less than the management-group-level-DWC maximum, and responsively: (i) resetting the subgroup-level DWC for each of the subgroups of the first management group; and (ii) incrementing the first management-group-level DWC.

Example 17 is the subject matter of any one of Examples 13 to 16, further comprising maintaining an LWC for each of the management groups of the memory component.

Example 18 is the subject matter of Example 17, wherein: each LWC has an MSB portion and an LSB portion; a single shared LWC base represents the MSB portions of all of the LWCs of the management groups of the memory component; individual management-group-specific LWC offsets represent the LSB portions of the LWCs of the management groups of the memory component; and the set of management-group rollover operations further includes incrementing a first LWC of the first management group.

Example 19 is the subject matter of Example 18, wherein incrementing the first LWC includes: determining whether a first LWC offset of the first management group is less than or equal to an LWC-offset maximum; if the first LWC offset is less than the LWC-offset maximum, incrementing the first LWC offset; and if the first LWC offset is equal to the LWC-offset maximum, then (i) decreasing the LWC offset of each of the management groups of the memory component by an LWC-base increment size, and (ii) increasing the shared LWC base of the memory component by the LWC-base increment size.

Example 20 is a non-transitory machine-readable storage medium containing instructions that, when executed by a processing device, cause the processing device to perform operations comprising: maintaining a subgroup-level DWC for each of a plurality of subgroups of each of a plurality of management groups of a memory component; and determining, in conjunction with a first write operation to a first subgroup of a first management group of the memory component, that a first subgroup-level DWC of the first subgroup is equal to a management-group move threshold, and responsively performing a set of one or more management-group rollover operations, wherein the set of management-group rollover operations includes triggering a management-group move operation from the first management group to a second management group of the memory component.
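Under the assumptions of Examples 1 to 7 (a two-level DWC hierarchy whose maxima are not fixed by the disclosure; the values and names below are purely illustrative), the per-write counter handling could be sketched as:

```python
# Hypothetical sketch of the two-level DWC handling recited in
# Examples 1-7.  The maxima below are illustrative only; the
# disclosure does not fix their values.

SUBGROUP_DWC_MAX = 3   # illustrative subgroup-level DWC maximum
SMU_DWC_MAX = 2        # illustrative management-group-level DWC maximum

def on_write(smu_dwc, subgroup_dwcs, sg):
    """Update DWCs for a write to subgroup `sg`.

    Returns (new management-group-level DWC, move_triggered);
    `subgroup_dwcs` is mutated in place.
    """
    if subgroup_dwcs[sg] < SUBGROUP_DWC_MAX:
        subgroup_dwcs[sg] += 1           # Example 6: simple increment
        return smu_dwc, False
    if smu_dwc < SMU_DWC_MAX:
        # Example 7: subgroup counter saturated, SMU counter is not:
        # reset every subgroup-level DWC and bump the SMU-level DWC.
        for i in range(len(subgroup_dwcs)):
            subgroup_dwcs[i] = 0
        return smu_dwc + 1, False
    # Examples 1 and 3: both counters at their maxima, so the move
    # threshold is reached; trigger a management-group move and reset
    # all counters (Examples 2 and 5).
    for i in range(len(subgroup_dwcs)):
        subgroup_dwcs[i] = 0
    return 0, True
```

The two maxima together define the management-group move threshold of Example 3: a move is triggered only when the written subgroup's counter and the management group's counter are both saturated.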
Approaches for bridging communication between first and second buses are disclosed. Address translation information and associated security indicators are stored (202) in a memory. Each access request from the first bus includes a first requester security indicator and a requested address. Each access request from the first bus and directed to the second bus is either rejected (214), or translated (210) and communicated (212) to the second bus, based on the requester security indicator and the security indicator associated with the address translation information for the requested address. Each access request from the second bus to the first bus includes the requested address, and the access request is translated (226) and communicated (228) to the first bus along with the security indicator that is associated with the address translation information for the requested address. |
1. A method for bridging communication between a first bus and a second bus, comprising:
storing address translation information and associated security indicators in a memory;
in response to each first access request received from the first bus, wherein each of the first access requests includes a first requester security indicator and a first address:
rejecting the first access request in response to the first requester security indicator indicating a non-secure requester and the security indicator associated with the address translation information for the first address indicating a secure address range; and
in response to the first requester security indicator indicating a secure requester, translating the first access request into a second access request for the second bus using the address translation information, and communicating the second access request to the second bus; and
in response to each third access request received from the second bus and including a third address:
translating the third access request into a fourth access request for the first bus using the address translation information; and
communicating the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

2. The method of claim 1, wherein the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is respectively associated with one of the address range maps.

3. The method of claim 1 or 2, further comprising: in response to the first requester security indicator indicating a non-secure requester and the security indicator associated with the address translation information for the first address indicating a non-secure address range, translating the first access request into the second access request for the second bus and communicating the second access request to the second bus.

4. The method of claim 1, wherein: the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is respectively associated with one of the address range maps; and each address range map specifies a first base address and a second base address.

5. The method of claim 4, wherein each address range map specifies a size.

6. The method of claim 5, wherein the first base address is within a first address space of the first bus and the second base address is within a second address space of the second bus.

7. The method of any one of claims 1-6, wherein the security indicators are programmable.

8. A bridge circuit for communicating between a first bus and a second bus, comprising:
a memory configured to store address translation information and associated security indicators;
an egress circuit coupled to the memory and the first and second buses, wherein the egress circuit is configured to receive first access requests from the first bus, each first access request including a first requester security indicator and a first address, and for each first access request the egress circuit is configured to:
reject the first access request in response to the first requester security indicator indicating a non-secure requester and the security indicator associated with the address translation information for the first address indicating a secure address range; and
in response to the first requester security indicator indicating a secure requester, translate the first access request into a second access request for the second bus using the address translation information, and communicate the second access request to the second bus; and
an ingress circuit coupled to the memory and the first and second buses, wherein the ingress circuit is configured to receive third access requests from the second bus, each third access request including a third address, the ingress circuit further configured to:
translate the third access request into a fourth access request for the first bus using the address translation information; and
communicate the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

9. The circuit of claim 8, wherein the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is respectively associated with one of the address range maps.

10. The circuit of claim 8 or 9, wherein the egress circuit is further configured to: in response to the first requester security indicator indicating a non-secure requester and the security indicator associated with the address translation information for the first address indicating a non-secure address range, translate the first access request into the second access request for the second bus and communicate the second access request to the second bus.

11. The circuit of claim 8, wherein: the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is respectively associated with one of the address range maps; and each address range map specifies a first base address and a second base address.

12. The circuit of claim 11, wherein each address range map specifies a size.

13. The circuit of claim 12, wherein the first base address is within a first address space of the first bus and the second base address is within a second address space of the second bus.

14. The circuit of any one of claims 8-13, wherein the security indicators are programmable.
Bridged Bus Communication

Technical field

This application generally relates to bridged communication between buses.

Background

A bus is an interconnect subsystem or circuit that transmits data between different devices within an electronic circuit. A bus defines a set of rules and connections, and each device connected to the bus must adhere to those rules and connections in order to communicate effectively over the bus. Examples of devices connected to a bus include, but are not limited to, a processor, a memory, or a bridge to an external system. Unlike point-to-point connections, a bus can connect several peripheral devices over the same set of wires.

Many electronic systems include multiple buses to facilitate the transfer of data between multiple devices in parallel. In some systems, data may be transferred from a bus implementing one protocol to a bus implementing a different protocol. Connecting buses that implement different protocols raises compatibility issues. For example, the Advanced eXtensible Interface (AXI) of the Advanced Microcontroller Bus Architecture (AMBA) supports isolation of secure and non-secure resources connected to the bus, and the isolation is software-controlled and hardware-enforced. In contrast, the Peripheral Component Interconnect Express (PCIe) bus architecture does not provide comparable security. Problems arise in connecting such buses when the buses have different characteristics.

Summary

In one embodiment, a method for bridging communication between a first bus and a second bus includes storing address translation information and associated security indicators in a memory. Each first access request received from the first bus includes a first requester security indicator and a first address.
In response to each first access request, the method rejects the first access request when the first requester security indicator indicates an insecure requester and the security indicator associated with the address translation information for the first address indicates a secure address range. In response to the first requester security indicator indicating a secure requester, the method translates the first access request into a second access request for the second bus using the address translation information and communicates the second access request to the second bus. Each third access request received from the second bus includes a third address. In response to each third access request, the method translates the third access request into a fourth access request for the first bus using the address translation information, and communicates the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

In another embodiment, a bridge circuit for communicating between a first bus and a second bus includes a memory configured to store address translation information and associated security indicators. An egress circuit is coupled to the memory and to the first and second buses. The egress circuit is configured to receive first access requests from the first bus, each first access request including a first requester security indicator and a first address. For each first access request, the egress circuit is configured to reject the first access request in response to the first requester security indicator indicating an insecure requester and the security indicator associated with the address translation information for the first address indicating a secure address range.
The egress circuit, in response to the first requester security indicator indicating a secure requester, translates the first access request into a second access request for the second bus using the address translation information and communicates the second access request to the second bus. An ingress circuit is coupled to the memory and to the first and second buses. The ingress circuit is configured to receive third access requests from the second bus, each third access request including a third address. The ingress circuit is further configured to translate the third access request into a fourth access request for the first bus using the address translation information, and to communicate the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

In another embodiment, a system is provided. The system includes a first bus, a first set of master and slave circuits coupled to the first bus, a second bus, a bridge circuit coupled between the first bus and the second bus, and a second set of master and slave circuits coupled to the second bus. The bridge circuit includes a memory configured to store address translation information and associated security indicators. An egress circuit is coupled to the memory and to the first and second buses. The egress circuit is configured to receive first access requests from the first bus, each first access request including a first requester security indicator and a first address. For each first access request, the egress circuit is configured to reject the first access request in response to the first requester security indicator indicating an insecure requester and the security indicator associated with the address translation information for the first address indicating a secure address range.
The egress circuit, in response to the first requester security indicator indicating a secure requester, translates the first access request into a second access request for the second bus using the address translation information and communicates the second access request to the second bus. An ingress circuit is coupled to the memory and to the first and second buses. The ingress circuit is configured to receive third access requests from the second bus, each third access request including a third address. The ingress circuit is further configured to translate the third access request into a fourth access request for the first bus using the address translation information, and to communicate the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

Other features will be understood upon consideration of the detailed description and the claims.

Brief description of the drawings

Various aspects and features of the methods and systems will become more apparent upon reading the detailed description below and reviewing the accompanying drawings.

Figure 1 shows a system including a bridge circuit for communicating between a first bus and a second bus, where the first bus provides hardware-enforced control of secure and insecure resources and the second bus does not recognize the security mechanism of the first bus;

Figure 2 is a flowchart of an example process for bridging communication between a first bus and a second bus; and

Figure 3 shows a programmable integrated circuit (IC) on which the bridge circuit of Figure 1 and the process of Figure 2 can be implemented.

Detailed description

In the following description, numerous specific details are set forth to describe the specific embodiments presented herein.
However, it will be apparent to one skilled in the art that one or more other embodiments and/or variations of these embodiments may be practiced without all of the specific details given below. In other instances, well-known features have not been described in detail so as not to obscure the description of the examples in this application. For ease of presentation, the same reference numbers are used in different drawings to refer to the same elements; in alternative embodiments, however, these elements may not be identical.

The disclosed bridge circuit is able to adapt the security measures implemented by the first bus when bridging communication to a second bus that does not implement the security mechanism of the first bus. According to one approach, the bridge circuit stores address translation information and associated security indicators. The address translation information is used to translate addresses between the address space of the first bus and the address space of the second bus. The security indicators associated with the address translation information are configurable, so that different address ranges may have different security designations.

An access request received by the bridge circuit from the first bus references a request address and has an associated requester security indicator. The request address refers to an address within the first address space of the first bus, and the requester security indicator indicates the requester's security setting. For example, the requester's security setting may be secure or insecure.

In response to the requester security indicator indicating that the requester is insecure and the security indicator associated with the address translation information for the request address indicating a secure address range, the bridge circuit rejects the first bus's request to access the second bus.
If the requester security indicator indicates that the requester is insecure and the range containing the request address is insecure, the bridge circuit uses the address translation information to translate the access request into an access request for the second bus. If the requester security indicator indicates that the requester is secure, the bridge circuit translates the access request into an access request for the second bus using address translation information for both secure and insecure address ranges. The translated access request is communicated to the second bus.

An access request received by the bridge circuit from the second bus references a request address but has no associated requester security indicator. The request address refers to an address in the second address space of the second bus. For requests received over the second bus, the bridge circuit uses the address translation information to translate the request into an access request for the first bus. The bridge circuit then communicates the translated access request and the security indicator associated with the address translation information to the first bus.

Figure 1 shows a system 100 that includes a bridge circuit for communicating between a first bus and a second bus, where the first bus provides hardware-enforced control of secure and insecure resources and the second bus does not recognize the security mechanism of the first bus. Although not shown, it should be understood that the bridge and buses also provide data paths in addition to the address paths shown.

The system 100 includes a first set of master and slave circuits 102 coupled to a first bus 104, a bridge circuit 106, a bus controller 108, a second bus 110, and a second set of master and slave circuits 112 coupled to the second bus.
The first set of master and slave circuits 102 includes one or more master circuits and one or more slave circuits, and the second set of master and slave circuits 112 likewise includes one or more master circuits and one or more slave circuits. Examples of master circuits include microprocessors, direct memory access (DMA) circuits, and/or digital signal processors (DSPs). Examples of slave circuits include flash memory devices, solid state drives, and hard disk drives. The bridge circuit 106 is one example of a slave circuit on the first bus 104.

The bridge circuit 106 translates requests between the first bus 104 and the second bus 110. The buses may have different address spaces, different physical configurations, and different security mechanisms. For example, bus 104 may be a parallel bus, such as an AXI bus, while bus 110 may be a serial bus, such as PCIe. The AXI bus can also isolate secure and insecure resources. On the AXI bus, each device is assigned a security profile that indicates whether the device is secure or insecure. Memory access transactions are tagged to indicate the requester's security level, and the tag is propagated throughout the interconnect system. Insecure masters or software tasks are allowed to access only insecure storage areas or slaves. Secure masters or software tasks are allowed to access both secure and insecure storage areas. Transaction security on the AXI bus is indicated by the state of the AxPROT[1] signal (where x is R for the read channel and W for the write channel), which can be compared with the security indicator 142. On the PCIe bus, no comparable security information is transmitted with a transaction.

The egress circuit 122 translates requests from requesters (master circuits) on the first bus 104 into requests suitable for slave circuits on the second bus 110. The ingress circuit 124 translates requests from requesters on the second bus 110 into requests suitable for slave circuits on the first bus 104.
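As a minimal sketch (not part of the patented circuit), the requester security indicator 142 could be derived in software from the AxPROT field, in which bit 1 clear denotes a secure transaction and bit 1 set denotes a non-secure transaction:

```c
#include <stdbool.h>

/* AMBA AXI carries the security attribute in AxPROT[1]:
 * 0 = secure transaction, 1 = non-secure transaction. */
static inline bool axi_requester_secure(unsigned axprot)
{
    return (axprot & 0x2u) == 0;
}
```

A PCIe transaction carries no equivalent attribute, which is why the bridge substitutes a per-range security indicator on the ingress path.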
The egress and ingress translation circuits translate addresses between the two address spaces and use the address range maps 126 to enforce the security mechanism of the first bus. For example, the address range maps may be implemented within one or more dual-port memories 128. In an exemplary implementation, the address range maps 126 may be configured by a configuration signal 130.

Each address range map describes a respective address range. The information within an address range map specifies the size of the address range 132, a remote base address 134, a local base address 136, and a security indicator 138. The size of the address range indicates, for example, the number of addressable words within the range. The remote base address indicates a base address within the address space of the second bus 110 at which the address range starts; the local base address indicates a base address within the address space of the first bus 104 at which the address range starts. The security indicator 138 indicates whether the address range is secure or insecure.

The egress circuit receives access requests from the first bus 104. Each access request (also referred to below as a transaction) includes a requester security indicator 142 and an address 144. The security indicator specifies the security level of the master circuit that originated the request, for example secure or insecure, and the address refers to an address in the address space of the first bus 104. In response to an access request, the egress circuit determines whether the request is translated and communicated to the second bus 110 based on the requester security indicator and the security indicator associated with the request address. If the requester security indicator indicates that the requester is secure, the egress circuit translates the access request from the first bus into an access request for the second bus.
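A plausible software model of one address range map entry follows; the field names are illustrative, and only the four fields themselves (size 132, remote base 134, local base 136, security indicator 138) come from the description. The inclusive bounds mirror the range test given below for the egress circuit.

```c
#include <stdbool.h>
#include <stdint.h>

/* One address range map (126). */
typedef struct {
    uint64_t size;        /* addressable words in the range (132)       */
    uint64_t remote_base; /* start of range in second bus space (134)   */
    uint64_t local_base;  /* start of range in first bus space (136)    */
    bool     secure;      /* security indicator for the range (138)     */
} addr_range_map;

/* A request address matches a map when it falls inside the map's
 * local range, using inclusive bounds as in the description. */
static bool matches_local(const addr_range_map *m, uint64_t addr)
{
    return addr >= m->local_base && addr <= m->local_base + m->size;
}
```

In hardware this lookup would be performed against all configured maps in the dual-port memories 128; here a single entry suffices to show the shape of the data.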
Specifically, the request address is translated using the one of the address range maps 126 whose address range contains the request address (local base address <= request address <= (local base address + size)). The address 146 output by the egress circuit is an address within the address space of the second bus 110 and is equal to: (request address - local base address) + remote base address. If the requester security indicator indicates an insecure requester and the address translation information for the request address indicates an insecure address range, the egress circuit translates the access request as described above. In response to the requester security indicator indicating that the requester is insecure and the security indicator associated with the address range indicating that the request address is within a secure address range, the egress circuit rejects the access request. The rejection can be indicated with a reject signal 148 from the egress circuit.

The ingress circuit 124 receives and processes requests from requesters on the second bus 110 that are addressed to devices on the first bus 104. A request from the second bus includes the request address 152 but, unlike a request from the first bus 104, does not include a requester security indicator. The ingress circuit uses one of the address range maps to translate an access request from the second bus 110 into an access request for the first bus 104. The applicable address range map is the one for which: (remote base address <= request address <= (remote base address + size)). The address 154 output by the ingress circuit is an address within the address space of the first bus 104 and is: (request address - remote base address) + local base address. Along with the address 154, the ingress circuit also communicates the security indicator 156 that is associated with the address range containing the request address.
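The two translation formulas and the egress security check can be sketched as a behavioral model. This is not the hardware implementation: the struct layout is assumed, a single map stands in for the full set of maps 126, and a `false` return conflates "no matching range" with the reject signal 148 for brevity. The bounds are inclusive, as written above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed layout of an address range map entry. */
typedef struct {
    uint64_t size, remote_base, local_base;
    bool secure;
} addr_range_map;

/* Egress (first bus -> second bus). On success, *out holds the
 * translated address 146; an insecure requester targeting a secure
 * range is rejected (reject signal 148). */
static bool egress_translate(const addr_range_map *m, uint64_t req,
                             bool requester_secure, uint64_t *out)
{
    if (req < m->local_base || req > m->local_base + m->size)
        return false;                  /* address not in this range    */
    if (!requester_secure && m->secure)
        return false;                  /* insecure master, secure range */
    *out = (req - m->local_base) + m->remote_base;
    return true;
}

/* Ingress (second bus -> first bus). The range's security indicator
 * is forwarded (signal 156) in place of a requester indicator. */
static bool ingress_translate(const addr_range_map *m, uint64_t req,
                              uint64_t *out, bool *secure_out)
{
    if (req < m->remote_base || req > m->remote_base + m->size)
        return false;
    *out = (req - m->remote_base) + m->local_base;
    *secure_out = m->secure;
    return true;
}
```

For example, with a secure range mapping local base 0x4000 to remote base 0x8000, a secure requester's access to 0x4010 translates to 0x8010, while the same access from an insecure requester is rejected.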
Thus, the security indicator associated with the address range indicates the requester security level for all of the master circuits 112 that submit access requests to the first bus 104. An acknowledge signal and a reject signal are provided to the bus controller 108 on signal line 158.

The bus controller provides an interface between the bridge circuit 106 and the second bus 110. In implementations where the bus 110 provides a serial link, such as Peripheral Component Interconnect Express (PCIe), the bus controller may include the transaction, link, and physical layers of the circuit.

Figure 2 is a flowchart of an exemplary process for bridging communication between a first bus and a second bus. At block 202, the address range maps are configured with address translation information for translating requests between the first and second buses. Each address range map specifies the size, remote base address, local base address, and security indicator for the respective address range.

For an access request from the first bus (the bus that enforces security), the process proceeds to block 204, where the address range containing the address of the access request is determined. Decision block 206 determines whether that range is secure. If the range is secure, decision block 208 determines whether the requester is secure, as indicated by a signal accompanying the access request. If the requester is secure, then at block 210 the process translates the request address from the address space of the first bus into the address space of the second bus. At block 212, an access request with the translated address is prepared and communicated to the second bus.
If the requester is insecure and the address range containing the request address is secure, decision block 208 directs the process to block 214, where the access request is rejected with a signal sent to the requester.

For an access request from the second bus (a bus that does not enforce the security of the first bus), at block 220 the address range containing the address of the access request is determined. At block 226, the address range map is used to translate the address from the second bus's address space to an address in the first bus's address space. At block 228, an access request with the translated address is prepared, and the access request and the security indicator from the address range map are passed to the first bus. A device that receives the access request on the first bus can determine whether to allow or deny the request.

Figure 3 shows a programmable integrated circuit (IC) on which the bridge circuit of Figure 1 and the process of Figure 2 can be implemented. The programmable IC of Figure 3 illustrates an FPGA architecture 300 that includes a large number of different programmable tiles, including multi-gigabit transceivers ("MGTs") 301, configurable logic blocks ("CLBs") 302, random access memory blocks ("BRAMs") 303, input/output blocks ("IOBs") 304, configuration and clocking logic ("CONFIG/CLOCKS") 305, digital signal processing blocks ("DSPs") 306, specialized input/output blocks ("I/O") 307 (e.g., configuration ports and clock ports), and other programmable logic 308 such as digital clock managers, analog-to-digital converters, and system monitoring logic. Some FPGAs also include dedicated processor blocks ("PROC") 310 and internal and external reconfiguration ports (not shown).

In some FPGAs, each programmable tile includes a programmable interconnect element ("INT") 311 having standardized connections to the corresponding interconnect element in each adjacent tile.
Taken together, the programmable interconnect elements implement the programmable interconnect resources for the illustrated FPGA. The programmable interconnect element 311 may also include connections to the programmable logic element within the same tile, as shown in the example included at the top of Figure 3.

For example, a CLB 302 may include a configurable logic element ("CLE") 312 that can be programmed to implement user logic, plus a single programmable interconnect element ("INT") 311. A BRAM 303 may include a BRAM logic element ("BRL") 313 in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the illustrated embodiment, a BRAM tile has the same height as five CLBs, although other numbers (e.g., four) may also be used. A DSP tile 306 may include a DSP logic element ("DSPL") 314 in addition to an appropriate number of programmable interconnect elements. An IOB 304 may include, for example, two instances of an input/output logic element ("IOL") 315 in addition to one instance of the programmable interconnect element 311. As will be clear to those skilled in the art, the actual I/O pads connected, for example, to the I/O logic element 315 are typically manufactured using metal layered above the various illustrated logic blocks and are not confined to the area of the input/output logic element 315.

In the illustrated example, a columnar area near the middle of the die (shown in Figure 3) is used for configuration logic, clock logic, and other control logic. Areas 309 extending from this column are used to distribute the clock signal and the configuration signal across the width of the FPGA.

Some FPGAs utilizing the architecture illustrated in Figure 3 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA.
The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block 310 shown in Figure 3 spans several columns of CLBs and BRAMs.

Note that Figure 3 is intended to illustrate only an exemplary FPGA architecture. For example, the number of logic blocks in a column, the relative widths of the columns, the number and order of the columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of Figure 3 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic.

The example methods described herein relate to bridging communication between first and second buses. A method of bridging communication includes storing address translation information and associated security indicators in a memory; in response to each first access request received from the first bus, wherein each first access request includes a first requester security indicator and a first address, performing the steps of: rejecting the first access request in response to the first requester security indicator indicating an insecure requester and the security indicator associated with the address translation information for the first address indicating a secure address range; and in response to the first requester security indicator indicating a secure requester, translating, using the address translation information, the first access request into a second access request for the second bus and communicating the second access request to the second bus; and in response to each third access request received from the second bus, performing the steps of: using the address translation information to translate the third access request into a fourth access request for the first bus; and communicating the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

In some such methods, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps.

Some such methods further include, in response to the first requester security indicator indicating an insecure requester and the address translation information for the first address indicating an insecure address range, translating the first access request into the second access request for the second bus and communicating the second access request to the second bus.

In some such methods, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps; and each address range map specifies a first base address and a second base address.

In some such methods, each address range map specifies a size.

In some such methods, the first base address is in a first address space of the first bus and the second base address is in a second address space of the second bus.

In some such methods, the security indicator is programmable.

The exemplary apparatus described herein generally relates to a bridge circuit.
In such an apparatus, a bridge circuit for communicating between a first bus and a second bus includes: a memory configured to store address translation information and associated security indicators; an egress circuit coupled to the memory and the first and second buses, wherein the egress circuit is configured to receive first access requests from the first bus, each first access request including a first requester security indicator and a first address, and for each first access request the egress circuit is configured to: reject the first access request in response to the first requester security indicator indicating an insecure requester and the security indicator associated with the address translation information for the first address indicating a secure address range; and in response to the first requester security indicator indicating a secure requester, translate the first access request into a second access request for the second bus using the address translation information and communicate the second access request to the second bus; and an ingress circuit coupled to the memory and the first and second buses, wherein the ingress circuit is configured to receive third access requests from the second bus, each third access request including a third address, the ingress circuit further configured to: translate the third access request into a fourth access request for the first bus using the address translation information; and communicate the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

In some such apparatus, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps.

In some such apparatus, the egress circuit is further configured to, in response to the first requester security indicator indicating an insecure requester and the address translation information for the first address indicating an insecure address range, translate the first access request into the second access request for the second bus and communicate the second access request to the second bus.

In some such apparatus, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps; each address range map specifies a first base address and a second base address.

In some such apparatus, each address range map specifies a size.

In some such apparatus, the first base address is in a first address space of the first bus and the second base address is in a second address space of the second bus.

In some such apparatus, the security indicator is programmable.

A system is provided as another embodiment.
The system includes a first bus, a first set of master and slave circuits coupled to the first bus, a second bus, a bridge circuit coupled between the first bus and the second bus, and a second set of master and slave circuits coupled to the second bus. The bridge circuit includes: a memory configured to store address translation information and associated security indicators; an egress circuit coupled to the memory and the first and second buses, wherein the egress circuit is configured to receive first access requests from the first bus, each first access request including a first requester security indicator and a first address, and for each first access request the egress circuit is configured to: reject the first access request in response to the first requester security indicator indicating an insecure requester and the security indicator associated with the address translation information for the first address indicating a secure address range; and in response to the first requester security indicator indicating a secure requester, translate the first access request into a second access request for the second bus using the address translation information and communicate the second access request to the second bus; and an ingress circuit coupled to the memory and the first and second buses, wherein the ingress circuit is configured to receive third access requests from the second bus, each third access request including a third address, the ingress circuit further configured to: translate the third access request into a fourth access request for the first bus using the address translation information; and communicate the fourth access request and the security indicator associated with the address translation information for a fourth address to the first bus.

In some such systems, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps.

In some such systems, the egress circuit is further configured to, in response to the first requester security indicator indicating an insecure requester and the address translation information for the first address indicating an insecure address range, translate the first access request into the second access request for the second bus and communicate the second access request to the second bus.

In some such systems, the address translation information includes a plurality of address range maps, each mapping a first address range of the first bus to a second address range of the second bus, and each of the security indicators is associated with a respective one of the address range maps; each address range map specifies a first base address and a second base address.

In some such systems, each address range map specifies a size.

In some such systems, the first base address is in a first address space of the first bus and the second base address is in a second address space of the second bus.

Although some aspects and features may in some cases be described with reference to a single figure, it is to be understood that features from one figure may be combined with features of another figure even if the combination is not explicitly shown or described as a combination.

These methods and systems are believed to be applicable to a variety of systems for bridging communication between different buses. Other aspects and features will be apparent to those of ordinary skill in the art upon consideration of the specification. The methods and systems may be implemented as one or more processors configured to execute software, as an application specific integrated circuit (ASIC), or as logic on a programmable logic device. The description and drawings are by way of example only; the scope of the application is defined by the claims.
A technique for thread synchronization and communication. More particularly, embodiments of the invention pertain to managing communication and synchronization among two or more threads of instructions being executed by one or more microprocessors or microprocessor cores. |
1. An apparatus comprising: a cache memory including one or more monitor bit fields to indicate whether a corresponding cache line is to be monitored for an event that enables a thread to modify data corresponding to the cache line; and detection logic to detect whether data has been written to the cache line when a monitor bit in the one or more monitor bit fields is set.

2. The apparatus of claim 1, further comprising a memory to store a plurality of instructions, the plurality of instructions comprising a first instruction to set the monitor bit, a second instruction to clear the monitor bit and a third instruction to enable the detection logic.

3. The apparatus of claim 2, wherein said first instruction and said second instruction are the same instruction.

4. The apparatus of claim 1, wherein said event is signaled by an interrupt mechanism or a user-level interrupt mechanism.

5. The apparatus of claim 4, wherein said user-level interrupt mechanism causes an instruction indicated by said thread to be executed.

6. The apparatus of claim 1, wherein said cache memory includes a coherency state field to store coherency information associated with said cache line.

7. The apparatus of claim 6, wherein said detection logic is to detect a transition of said coherency state field indicating a write to said cache line.

8. The apparatus of claim 7, wherein said transition of said coherency state field comprises a transition from a shared state to an invalid state.

9. A system comprising: a cache memory including a plurality of monitor bit fields to indicate whether a corresponding cache line is to be monitored for an event in which a sending thread modifies data corresponding to the cache line; and a first memory storing a first instruction to set a bit within the plurality of monitor bit fields and a third instruction to enable detection logic to detect whether data has been written to the cache line by the sending thread.

10. The system of claim 9, further comprising a processor, wherein if said detection logic detects that data has been written to said cache line and a monitor bit corresponding to a receiving thread has been set within said plurality of monitor bit fields, the processor executes the receiving thread to read the data from the cache line.

11. The system of claim 10, wherein said first memory comprises a second instruction to clear at least some of said plurality of monitor bit fields.

12. The system of claim 10, wherein said first memory is to store a plurality of instructions that enable the detection logic to detect whether data has been written to said cache line by said sending thread, wherein each of the plurality of instructions, including the third instruction, has an associated priority.

13. The system of claim 12, wherein said detection logic comprises a state channel into which information is programmed to detect a scenario.

14. The system of claim 9, wherein said cache memory includes a coherency state field to store coherency information associated with said cache line.

15. The system of claim 14, wherein said detection logic is to detect a transition of said coherency state field indicating that said data has been written to said cache line.

16. The system of claim 9, wherein the detection logic comprises logic to detect one of the group consisting of an exception, a fault, a trap and an interrupt in response to data being written to the cache line.

17. A method comprising: enabling a cache line to be monitored for data written to the cache line by an instruction within a sending thread; enabling the data written to the cache line to be detected; detecting an event that enables another thread to modify data corresponding to the cache line; invoking a handler in response to detecting the data written to the cache line; and delivering the data to a receiving thread.

18. The method of claim 17, wherein enabling the cache line to be monitored comprises executing an instruction to set at least one monitor bit in a monitor bit field corresponding to the cache line.

19. The method of claim 18, wherein enabling said data written to said cache line to be detected comprises executing an instruction to program state channel logic to detect a scenario corresponding to the monitored cache line.

20. The method of claim 19, wherein a signal asserted in response to said data being written to said cache line, to enable detection of said data written to said cache line, is selected from the group consisting of an exception, a fault, a trap and an interrupt.

21. The method of claim 19, wherein detecting comprises detecting a coherency state transition of said cache line from a first state to a second state.

22. The method of claim 21, wherein said first state is a shared state and said second state is an invalid state.

23. The method of claim 19, wherein the scenario comprises: detecting whether the cache line is to be monitored and, if the cache line is to be monitored, detecting whether a coherency state transition of the cache line from a shared state to an invalid state has occurred.

24. The method of claim 23, wherein the instruction to program the state channel logic is one of a plurality of instructions to program the state channel logic, each instruction corresponding to a different instance of the cache line being monitored by one or more threads.

25. The method of claim 24, wherein each of said plurality of instructions has a unique priority to control the order in which said plurality of instructions are executed.

26. A machine-readable medium having stored thereon a set of instructions which, when executed by a machine, cause the machine to perform a method comprising: storing information describing a variable corresponding to a cache line to be monitored; using a comparison rule to determine whether the variable has been set to a first value; and, if the comparison rule is satisfied, directing a pointer to at least one instruction to be executed, wherein the at least one instruction, in response to the comparison rule being satisfied, enables information to be shared between two or more threads.

27. The machine-readable medium of claim 26, wherein at least one argument is optionally passed to the at least one instruction in response to the comparison rule being satisfied.

28. The machine-readable medium of claim 27, wherein the method further comprises clearing the variable after the comparison rule is satisfied.

29. The machine-readable medium of claim 28, wherein said two or more threads continue to execute regardless of whether said comparison rule is satisfied.

30. The machine-readable medium of claim 29, wherein the comparison rule uses processor logic to determine whether the variable has been set to the first value. |
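Claims 26-30 above describe monitoring variables with comparison rules and invoking pointed-to instructions when a rule is satisfied. A minimal software sketch of that interface follows; the table layout, helper names, and example rules are invented for illustration, not drawn from the claims:

```python
# Each table entry: (read_variable, comparison_rule, action, args).
# evaluate_table() stands in for the software/firmware/hardware that
# scans the table after a monitored line may have been modified.

def evaluate_table(table):
    """Run the action (with its optional arguments) for every entry
    whose comparison rule is satisfied by the variable's current
    value; return how many actions fired."""
    fired = 0
    for read_variable, rule, action, args in table:
        if rule(read_variable()):
            action(*args)
            fired += 1
    return fired

shared = {"flag": 0, "count": 3}   # variables shared between threads
calls = []                          # records which actions were invoked

table = [
    (lambda: shared["flag"],  lambda v: v == 1, calls.append, ("flag set",)),
    (lambda: shared["count"], lambda v: v >= 5, calls.append, ("count high",)),
]

print(evaluate_table(table))  # 0: no comparison rule satisfied yet
shared["flag"] = 1            # the "sending thread" updates a variable
print(evaluate_table(table))  # 1: the flag rule now fires
print(calls)                  # ['flag set']
```

The monitoring threads continue executing regardless of whether any rule is satisfied; only the table scan runs in response to a detected update.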
Thread Communication and Synchronization Technology

Technical Field

Embodiments of the invention relate to microprocessor architectures. More specifically, embodiments of the present invention relate to the management of communication and synchronization between two or more threads executing within one or more microprocessors or microprocessor cores.

Background

Instructions within modern computer programs can be organized for execution according to various instruction streams, or "threads." In general, a thread executing within a processing resource utilizes and/or generates a set of state information that is unique to, or at least associated with, that thread. Threads may also share state information or other information, such as data to be operated upon by one or more threads, in which case information may need to be passed from one thread to another.

In a typical shared-memory microprocessor or processing system, threads exchange information by having one thread (the sending thread) store the information in a storage location that can be read by another thread (the receiving thread). Typically, the receiving thread polls the storage location at various intervals to detect when the sending thread has updated the data. In some prior-art implementations, a detection mechanism detects when the shared information is written and alerts the receiving thread in response.

In the latter case, where a detection mechanism detects when shared information is written to the storage location and alerts the receiving thread, some prior art uses special hardware detection logic that monitors, or "snoops," the interconnect between microprocessors or between a microprocessor and a memory such as DRAM. The detection logic may be configured to monitor commands transferred across the interconnect to a particular address, which can require separate detection logic for each memory location to be monitored.

Prior-art approaches such as these can be costly in terms of die area and power. Moreover, they may scale poorly when updates to multiple memory locations must be monitored, creating a software development challenge.

There is prior art covering mechanisms for reporting events directly to a user-level thread running on a microprocessor, without the traditional intervention of the operating system to deliver notification of an interrupt or exception. These user-level interrupts or user-level exceptions rely on a mechanism that saves enough information about the current state of the thread and redirects the thread to a predetermined block of "handler" code executed in response to the event. As part of the handler code, the thread can perform whatever work it wishes and then return to the execution path it was following before the event. It can also choose not to return to that execution path and instead continue executing an entirely different set of tasks.

Brief Description of the Drawings

Embodiments of the invention are illustrated by way of example and not limitation in the accompanying drawings.

FIG. 1 illustrates a portion of a processing system that may be used in conjunction with at least one embodiment of the invention.

FIG. 2 illustrates a cache entry and corresponding coherency and monitor fields that may be used in accordance with one embodiment of the invention.

FIG. 3 is a flow diagram illustrating operations involved in detecting a coherency state transition that may indicate a write to a cache line by a thread, in accordance with one embodiment.

FIG. 4 illustrates a front-side-bus (FSB) computer system in which one embodiment of the invention may be used.

FIG. 5 illustrates a computer system arranged in a point-to-point (PtP) configuration.

Detailed Description

Embodiments of the invention relate to microprocessor architectures. More specifically, embodiments of the invention relate to the management of communication and synchronization between two or more threads of instructions executed by one or more microprocessors or microprocessor cores. At least one embodiment of the invention provides a mechanism by which a thread can identify a set of storage locations for which it is to be notified if any other thread modifies the values stored there. In one embodiment, the notification may be performed by a user-level interrupt/exception mechanism within the microprocessor, or by other logic or software within the computing system.

In one embodiment, communication and synchronization between threads is accomplished by enabling a thread to be notified of particular cache coherency events concerning cache lines accessed by one or more other threads.

Unlike some prior-art inter-thread communication techniques, embodiments of the invention may use a number of resources already present in a processor or computer system rather than special detection hardware that monitors the particular memory locations written by the sending thread. In particular, at least one embodiment uses the coherency information already present in a cache line to detect when information is written to a cache line corresponding to a location in a memory such as DRAM. More specifically, a cache line that is currently in a state permitting local reads of the data in the line (such as a "shared" state) must undergo a coherency action before another thread may modify any data in the line.

Also unlike some prior-art inter-thread communication techniques, embodiments of the invention may allow many distinct updates made by other threads to a storage location to be monitored. In particular, at least one embodiment uses a common reporting mechanism to indicate whether another thread has updated one or more of the monitored storage locations.

One embodiment of the invention may use a minimal amount of detection logic, used only to detect cache line state transitions, together with a user-level interrupt mechanism to notify the receiving thread so that it can retrieve the information written to the cache line. In other embodiments, detection of a cache line state transition may be programmed as a monitored event, or "scenario," into a state channel of the processor.
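The coherency-based idea above can be roughly illustrated in software, assuming a simple MESI-style model and per-line monitor bits; the class and function names below are invented for this sketch and do not come from the patent:

```python
# Illustrative simulation: a line that is locally readable ("S" or "E")
# must undergo a coherency action (invalidation) before another agent
# can modify the underlying data; if any monitor bit is set, that
# transition is reported to the monitoring thread(s).

class CacheLine:
    def __init__(self, data, state="S"):
        self.data = data
        self.state = state          # one of "M", "E", "S", "I"
        self.monitor_bits = set()   # ids of threads monitoring this line

    def set_monitor(self, thread_id):
        self.monitor_bits.add(thread_id)

def remote_write(line, new_data, notify):
    """Another agent writes the memory behind this line: the local copy
    is invalidated, and monitoring threads are notified (standing in
    for the user-level interrupt mechanism of the text)."""
    locally_readable = line.state in ("S", "E")
    line.state = "I"                # coherency action: invalidate
    line.data = None                # local copy is no longer valid
    if locally_readable and line.monitor_bits:
        for tid in sorted(line.monitor_bits):
            notify(tid, new_data)

events = []
line = CacheLine(data=41, state="S")
line.set_monitor(thread_id=7)
remote_write(line, 42, lambda tid, v: events.append((tid, v)))
print(events)       # [(7, 42)]
print(line.state)   # I
```

The point of the sketch is that no per-address snoop hardware is modeled: the existing shared-to-invalid transition is the detection signal.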
In other embodiments, detection of a cache line state transition may occur in response to a hardware mechanism such as an interrupt, exception, trap, or fault.

In one embodiment, one or more cache lines may be "marked" as cache lines to be monitored by including one or more monitor bits in, or otherwise associating one or more monitor bits with, each such cache line. A command or instruction, or some other means, may be used to set the monitor bits. Furthermore, multiple monitor bits may be used so that portions of a user's code can configure the monitored cache lines independently of other portions of the code. After a cache line state transition is detected, the monitor bits can be cleared via a clear command or instruction or some other means.

FIG. 1 illustrates a portion of a processing system that may be used in conjunction with at least one embodiment of the invention. In particular, FIG. 1 shows a processor or processing core 101 with an associated cache memory 105, which may be shared by one or more threads of instructions executed within processor/core 101 or within some other processing resource (not shown). In one embodiment of the invention, a thread treats cache memory 105 as if it were exclusively its own, such that the thread can store information in a cache line without regard to the other threads that are using the cache.

Also shown in FIG. 1 is memory 110, which may be composed of DRAM or some other memory technology, such as SRAM, magnetic disk, or compact disc. In one embodiment, cache memory 105 contains entries that mirror a subset of the entries in memory 110. The cache memory may therefore include coherency information to notify agents accessing data from the cache when a particular cache line (e.g., cache line "A" in FIG. 1) contains invalid data ("I" state), when the cache line has been modified ("M" state) such that it does not contain the same data as the corresponding memory entry (e.g., memory entry "A" in FIG. 1), when the cache line may be shared among various agents, threads, or programs ("S" state), and when the cache line is used exclusively by a particular thread, agent, or program ("E" state).

FIG. 2 illustrates a cache entry and corresponding coherency and monitor fields that may be used in accordance with one embodiment of the invention. In particular, cache line 201 of cache 200 may store the data corresponding to the cache line in field 203, an address tag and coherency information in field 205, and monitoring information in field 207. To enable state changes of the cache line to be monitored, one or more bits are set in the monitor information field. Furthermore, if the cache is shared by multiple hardware threads (each running, for example, a separate software thread), each thread may have multiple bits in the monitor information field, depending on how many instances within the thread may monitor the cache line.

For example, in FIG. 2 the monitor bits labeled "a" correspond to a first thread in which three instances (which may be repeated) monitor the corresponding cache line. The monitor bits labeled "b" correspond to a second thread in which two instances (which may be repeated) monitor the corresponding cache line. The monitor bit labeled "d" corresponds to a third thread in which only one instance (which may be repeated) monitors the corresponding cache line. Each bit corresponding to each monitoring instance within each thread can thus be set or cleared independently.

Naturally, the more monitor bit fields a cache line has, the more threads, and the more instances within each thread, can monitor the cache line at one time. In one embodiment, the cache line contains six monitor bit fields, allowing two threads to monitor the cache line from three different instances within each thread. In other embodiments, more or fewer bit fields may be used to support more or fewer threads, or instances within threads, that can monitor the cache line.

In one embodiment, a memory update performed by one thread sharing a cache is handled, with respect to the other threads sharing that cache, as though it were a coherency event from a thread that does not share the cache. For example, if one thread updates a value stored in a cache line, the other threads that have set monitor bits can detect the update and be notified through an interrupt mechanism, such as a user-level interrupt mechanism. In other embodiments, the interrupt mechanism may be one that is invisible to the user.

In one embodiment, two separate commands or instructions may be executed within the processor, or in logic within the cache memory, to set and clear the monitor bits, respectively. For example, in one embodiment a "load monitor" instruction can be executed whose address corresponds to the cache line and whose attribute carries the data to be written to the monitor bits. Similarly, in one embodiment a "clear monitor" instruction can be executed whose address corresponds to the cache line and whose attribute carries the data used to clear the monitor bits. In one embodiment, a single instruction is used both to set and to clear the monitor bits, depending on the value of the instruction's monitor-bit attribute. In another embodiment, a single instruction clears all monitor bits having a particular attribute across every cache line.

The detection of state transitions for cache lines that have been marked for monitoring (e.g., in one embodiment, by setting the corresponding monitor bits) can be implemented in a variety of ways. For example, in one embodiment, logic that performs a Boolean OR function (such as an OR gate) can detect whether a cache line has any of its monitor bits set and, if so, the coherency bits of the cache line (labeled "c" in FIG. 2) can be examined to determine whether a state transition has occurred indicating that another thread has performed a write to that cache line. In one embodiment, a state transition from any state that permits local reads of the corresponding data to the I state may indicate that a thread has written, or intends to write, information to the corresponding cache line. In addition, a write to the cache line performed by another thread sharing the cache can also be detected as an update.

In other embodiments, the coherency state transition of a cache line may trigger an interrupt, exception, fault, trap, or other signaling mechanism within the processing hardware to indicate that a thread has written information to the cache line. In still other embodiments, other mechanisms may be used to indicate a coherency state transition showing that a thread has written data to a particular cache line.

In one embodiment, events are monitored on a per-thread basis. In this embodiment, a logical combination of events, called a "scenario," can be defined to detect the coherency state transition information of a cache line that may indicate that data has been written to the line. Here, a processor state storage area called a "channel" can be programmed to perform substantially the same logical function as the hardware and/or software described above for detecting a coherency state transition of a cache line. The occurrence of the scenario can trigger a soft yield event, such as a fault-class or trap-class event, which can invoke a yield event handler to service the scenario.

In one embodiment, the mechanism that notifies a thread of an event indicating that a monitored line has been, or will shortly be, modified may have a mask that can be programmed to monitor any subset of a group of bits. For example, in one embodiment the channel is programmed by performing a Boolean operation, such as a logical AND, between the channel mask and the programming bits to be written to the channel. In one embodiment the mechanism is a user-level interrupt mechanism, while in other embodiments it is an interrupt mechanism invisible to the user. The mechanism may also notify the software thread in response to other events, such as the software thread being context-switched onto the hardware.

Regardless of how a possible update of a monitored line is detected (indicating that a thread has written, or may write, to a particular cache line), the detection of the state transition may invoke a handler to service it. One task the handler may perform is to read one monitored address, or a set of monitored addresses, to see whether another thread has updated the storage location with a relevant value; if the storage location has been updated to a relevant value, an appropriate action can be taken, such as calling a particular software function.

FIG. 3 is a flow diagram illustrating operations involved in detecting a coherency state transition that may indicate a write to a cache line by a thread, in accordance with one embodiment. At operation 301, the cache line is enabled for monitoring by one or more threads by setting a number of monitor bits equal to the number of instances within each thread that will monitor the cache line. In one embodiment, monitoring is enabled by executing an instruction, such as a "load monitor" instruction, whose attribute corresponds to the monitor bits to be set. At operation 305, a thread writes information to the monitored cache line, causing a coherency state transition, and at operation 310 a handler is invoked to retrieve the information written to the cache line so that the information can be delivered to the monitoring (receiving) thread.

In one embodiment, the coherency state transition can be detected using logic that determines whether a monitor bit is set and, if so, whether a coherency state transition has occurred. In other embodiments, the coherency state transition may be detected by software, hardware, or some combination thereof. Moreover, in at least one embodiment, the coherency state transition is detected by programming a scenario into a processor state channel, and the transition is reported to the receiving thread by a user-level interrupt mechanism.

At operation 315, the monitor bits corresponding to the detected coherency state transition may be cleared, and optionally reset, by another thread or by another monitoring instance within the same thread. In one embodiment, the monitor bits can be cleared by executing an instruction different from the one that sets them, such as a "clear monitor" instruction whose attribute corresponds to the monitor bits to be cleared.
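The monitor-bit bookkeeping of operations 301 and 315 might look like the following sketch, assuming the six-bit embodiment above (two threads, three monitoring instances each). The "load monitor" and "clear monitor" names come from the text; the bit encoding is invented:

```python
# Six monitor bits per line: bits 0-2 for thread 0's three monitoring
# instances, bits 3-5 for thread 1's (invented encoding).

def monitor_bit(thread, instance):
    assert thread in (0, 1) and instance in (0, 1, 2)
    return 1 << (thread * 3 + instance)

def load_monitor(monitor_field, thread, instance):
    """Model of the "load monitor" instruction: set one monitor bit,
    selected by the instruction's attribute."""
    return monitor_field | monitor_bit(thread, instance)

def clear_monitor(monitor_field, thread, instance):
    """Model of the "clear monitor" instruction: clear one monitor bit."""
    return monitor_field & ~monitor_bit(thread, instance)

field = 0
field = load_monitor(field, thread=0, instance=2)   # thread 0, 3rd instance
field = load_monitor(field, thread=1, instance=0)   # thread 1, 1st instance
print(bin(field))    # 0b1100
field = clear_monitor(field, thread=0, instance=2)
print(bin(field))    # 0b1000
print(field != 0)    # True: the line is still monitored by thread 1
```

Because each instance owns its own bit, one part of the user's code can stop monitoring a line without disturbing the other monitoring instances, matching the independence described above.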
In other embodiments, the same instruction used to set the monitor bits (e.g., the "load monitor" instruction) can be used to clear them, by giving it an attribute corresponding to the monitor bits to be cleared.

In one embodiment, an interface is provided to a user's software program through which the software can specify the variables to be monitored and the actions to be taken in response. In one embodiment, the user's software program can supply a particular memory variable, a comparison rule for evaluating the value of that variable, and a function pointer, with optional arguments, to be called when the value of the variable satisfies the evaluation criterion. The software may specify this information by a means such as an instruction, or by a collection of several instructions.

In addition, the user software can specify a number of variables to monitor, each with its own unique or common response action. While these variables are being monitored, the thread can continue to perform other work. When a function is called in response to an event on a monitored variable, the function can return control to the thread so that the thread can continue executing, providing a scalable and flexible interface.

In one embodiment, an interface such as the one described above records, for each variable, information describing the variable, its comparison rule, and the action or function to be called together with its arguments. In one embodiment, this information is stored in a table within a storage area such as host computer system memory (e.g., DRAM). Software, firmware, or hardware can read the table as appropriate, read the specified variable for each entry, and apply the comparison rule to determine whether the action should be invoked.

Each cache line corresponding to a variable to be monitored can also be tagged using the previously described mechanism for marking lines to be monitored in the cache. If an event is detected on a monitored line indicating that the line may now be modified by another thread, the appropriate software, firmware, or hardware can be activated, as described above, to evaluate all of the monitored variables in the table. If no variable satisfies its criterion, the software, firmware, or hardware ensures that all the appropriate lines are still being monitored and then returns to the work that was being performed before it was invoked.

The software, firmware, or hardware that evaluates the variable table and calls the appropriate function can manipulate the thread's stack so that when it calls a function in response to a variable satisfying its criterion, the function returns directly to the previously running task. Alternatively, it can manipulate the stack so that the function returns to a particular piece of code that ensures all cache lines corresponding to the variables are properly monitored before eventually returning to the previously running task. Another alternative is a special return instruction, invoked by the function called in response to a variable satisfying its criterion, that ensures all cache lines corresponding to the variables are properly monitored before eventually returning to the previously running task.

FIG. 4 illustrates a front-side-bus (FSB) computer system in which one embodiment of the invention may be used. A processor 505 accesses data from a level-one (L1) cache memory 510 and from main memory 515. In other embodiments of the invention, the cache memory may be a level-two (L2) cache or another memory within the computer system's memory hierarchy. Furthermore, in some embodiments, the computer system of FIG. 4 may contain both an L1 cache and an L2 cache.

A storage area 506 for machine state is shown within the processor of FIG. 4. In one embodiment, the storage area may be a set of registers, while in other embodiments it may be another memory structure. A storage area 507 for save-area segments, according to one embodiment, is also shown in FIG. 4. In other embodiments, the save-area segments may reside in other devices or memory structures. The processor may have any number of processing cores. Other embodiments of the invention, however, may be implemented in other devices within the system, such as a separate bus agent, or distributed throughout the system in hardware, software, or some combination thereof.

The main memory may be implemented in various memory sources, such as dynamic random-access memory (DRAM), a hard disk drive (HDD) 520, or a memory source located remote from the computer system via network interface 530 and containing various storage devices and technologies. The cache memory may be located within the processor or in close proximity to it, such as on the processor's local bus 507.

Furthermore, the cache memory may contain relatively fast memory cells, such as six-transistor (6T) cells or other memory cells of approximately equal or faster access speed. The computer system of FIG. 4 may be a point-to-point (PtP) network of bus agents, such as microprocessors, that communicate via bus signals dedicated to each agent on the PtP network. FIG. 5 illustrates a computer system arranged in a point-to-point (PtP) configuration. In particular, FIG. 5 shows a system in which processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces.

The system of FIG. 5 may also include several processors, of which only two, processors 670 and 680, are shown for clarity. Processors 670 and 680 may each include a local memory controller hub (MCH) 672, 682 to connect with memories 22, 24. Processors 670 and 680 may exchange data via a point-to-point (PtP) interface 650 using PtP interface circuits 678, 688. Processors 670 and 680 may each exchange data with a chipset 690 via individual PtP interfaces 652, 654 using point-to-point interface circuits 676, 694, 686, 698. Chipset 690 may also exchange data with a high-performance graphics circuit 638 via a high-performance graphics interface 639. Embodiments of the invention may be located within any processor having any number of processing cores, or within each of the PtP bus agents of FIG. 5.

Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system of FIG. 5. Furthermore, other embodiments of the invention may be distributed among several of the circuits, logic units, or devices illustrated in FIG. 5.

Aspects of embodiments of the invention may be implemented using complementary metal-oxide-semiconductor (CMOS) circuits and logic devices (hardware), while other aspects may be implemented using instructions (software) stored on a machine-readable medium which, when executed by a processor, cause the processor to perform a method implementing an embodiment of the invention. Moreover, some embodiments of the invention may be performed solely in hardware, while others may be performed solely in software.

Although the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments apparent to those skilled in the art, are deemed to lie within the spirit and scope of the invention. |
A method of forming a semiconductor-on-insulator (SOI) device. The method includes providing an SOI wafer having an active layer, a substrate and a buried insulator layer therebetween; defining an active region in the active layer; forming a source, a drain and body in the active region, the source and the drain forming respective hyperabrupt junctions with the body, the hyperabrupt junctions being formed by an SPE process which includes amorphizing at least one of the source and the drain, implanting dopant ion species and recrystallizing at a temperature of less than 700° C.; forming a gate disposed on the body such that the source, drain, body and gate are operatively arranged to form a transistor; and forming a silicide region in each of the source and the drain, the silicide regions being spaced from the respective hyperabrupt junctions by a lateral distance of less than about 100 Å. |
What is claimed is: 1. A method of forming a semiconductor-on-insulator (SOI) device, comprising the steps of: providing an SOI wafer having a semiconductor active layer, a semiconductor substrate and a buried insulator layer disposed therebetween; defining an active region in the active layer; forming a source, a drain and a body in the active region, at least one of the source and the drain forming a hyperabrupt junction with the body; forming a gate disposed on the body such that the source, drain, body and gate are operatively arranged to form a transistor; and forming a silicide region in the at least one of the source and drain forming the hyperabrupt junction with the body, the silicide region having a generally vertical interface, the generally vertical interface being laterally spaced apart from the hyperabrupt junction by about 60 Å to about 150 Å. 2. A method of forming an SOI device as set forth in claim 1, wherein the vertical interface is laterally spaced apart from the hyperabrupt junction by a distance less than about 100 Å. 3. A method of forming an SOI device as set forth in claim 1, wherein the generally vertical interface extends adjacent the hyperabrupt junction along a distance of about 70 Å to about 130 Å. 4. A method of forming an SOI device as set forth in claim 1, further comprising the step of forming a second silicide region in the other of the at least one of the source and the drain, the other of the at least one of the source and the drain forming a hyperabrupt junction with the body region and the second silicide region having a generally vertical interface being laterally spaced apart from the respective hyperabrupt junction by about 60 Å to about 150 Å. 5. A method of forming an SOI device as set forth in claim 4, wherein the source silicide region and drain silicide region are substantially symmetric with one another about the gate. 6. 
A method of forming an SOI device as set forth in claim 4, wherein the generally vertical interfaces of each of the silicide regions extend adjacent the respective hyperabrupt junctions along a distance of about 70 Å to about 130 Å. 7. A method of forming an SOI device as set forth in claim 1, wherein the hyperabrupt junction is formed by a solid phase epitaxy (SPE) process, the SPE process including amorphizing the at least one of the source and the drain, implanting dopant ion species and recrystallizing at a temperature of less than 700° C. 8. A method of forming an SOI device as set forth in claim 7, further comprising the step of forming source and drain extensions, the extensions being formed by a source/drain extension SPE process. 9. A method of forming a semiconductor-on-insulator (SOI) device, comprising the steps of: providing an SOI wafer having a semiconductor active layer, a semiconductor substrate and a buried insulator layer disposed therebetween; defining an active region in the active layer; forming a source, a drain and a body in the active region, the source and the drain forming respective hyperabrupt junctions with the body, the hyperabrupt junctions being formed by a solid phase epitaxy (SPE) process, the SPE process including amorphizing at least one of the source and the drain, implanting dopant ion species and recrystallizing at a temperature of less than 700° C.; forming a gate disposed on the body such that the source, drain, body and gate are operatively arranged to form a transistor; and forming a silicide region in each of the source and the drain, the silicide regions being spaced from the respective hyperabrupt junctions by a lateral distance of less than about 100 Å. 10. 
A method of forming an SOI device as set forth in claim 9, wherein the silicide regions each have a generally vertical interface, the generally vertical interfaces extending adjacent the respective hyperabrupt junctions along a distance of about 70 Å to about 130 Å. 11. A method of forming an SOI device as set forth in claim 9, wherein the source silicide region and drain silicide region are substantially symmetric with one another about the gate. 12. A method of forming an SOI device as set forth in claim 9, further comprising the step of forming source and drain extensions using a source/drain extension SPE process. |
TECHNICAL FIELD

The invention relates generally to semiconductor-on-insulator (SOI) devices and methods for forming the same and, more particularly, to controlling floating body effects and contact resistance within an SOI device.

BACKGROUND ART

Traditional semiconductor-on-insulator (SOI) integrated circuits typically have a silicon substrate with a buried oxide (BOX) layer disposed thereon. A semiconductor active layer, typically made from silicon, is disposed on the BOX layer. Within the active layer, active devices, such as transistors, are formed in active regions. The size and placement of the active regions are defined by isolation regions. As a result of this arrangement, the active devices are isolated from the substrate by the BOX layer. More specifically, a body region of each SOI transistor does not have body contacts and is therefore "floating."

SOI chips offer potential advantages over bulk chips for the fabrication of high performance integrated circuits for digital circuitry. Such digital circuitry is typically made from partially-depleted metal oxide semiconductor field effect transistors (MOSFETs). In such circuits, dielectric isolation and reduction of parasitic capacitance improve circuit performance and virtually eliminate latch-up in CMOS circuits. In addition, circuit layout in SOI can be greatly simplified and the packing density greatly increased. However, devices formed from SOI materials typically exhibit parasitic effects due to the presence of the floating body (i.e., "floating body effects"). These floating body effects may result in undesirable performance in SOI devices. Therefore, it will be appreciated that a need exists for SOI MOSFETs having reduced floating body effects.

SUMMARY OF THE INVENTION

According to one aspect of the invention, the invention is a method of forming a semiconductor-on-insulator (SOI) device. 
The method includes the steps of providing an SOI wafer having a semiconductor active layer, a semiconductor substrate and a buried insulator layer disposed therebetween; defining an active region in the active layer; forming a source, a drain and a body in the active region, at least one of the source and the drain forming a hyperabrupt junction with the body; forming a gate disposed on the body such that the source, drain, body and gate are operatively arranged to form a transistor; and forming a silicide region in the at least one of the source and drain forming the hyperabrupt junction with the body, the silicide region having a generally vertical interface, the generally vertical interface being laterally spaced apart from the hyperabrupt junction by about 60 Å to about 150 Å. According to another aspect of the invention, the invention is a method of forming a semiconductor-on-insulator (SOI) device. The method includes the steps of providing an SOI wafer having a semiconductor active layer, a semiconductor substrate and a buried insulator layer disposed therebetween; defining an active region in the active layer; forming a source, a drain and a body in the active region, the source and the drain forming respective hyperabrupt junctions with the body, the hyperabrupt junctions being formed by a solid phase epitaxy (SPE) process, the SPE process including amorphizing at least one of the source and the drain, implanting dopant ion species and recrystallizing at a temperature of less than 700° 
C.; forming a gate disposed on the body such that the source, drain, body and gate are operatively arranged to form a transistor; and forming a silicide region in each of the source and the drain, the silicide regions being spaced from the respective hyperabrupt junctions by a lateral distance of less than about 100 Å.

BRIEF DESCRIPTION OF THE DRAWINGS

These and further features of the present invention will be apparent with reference to the following description and drawings, wherein:

FIG. 1 is a cross-sectional view of a semiconductor-on-insulator (SOI) device in accordance with the present invention;
FIG. 1A is an enlarged, partial view of the SOI device of FIG. 1;
FIG. 2 is a flow chart of a method of making the SOI device of FIG. 1; and
FIGS. 3-9 are cross-sectional views of the SOI device in various stages of fabrication.

DISCLOSURE OF THE INVENTION

In the detailed description which follows, identical components have been given the same reference numerals, regardless of whether they are shown in different embodiments of the present invention. To illustrate the present invention in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form.

Referring initially to FIG. 1, a semiconductor-on-insulator (SOI) device 10 according to the present invention is shown. In the illustrated embodiment, the SOI device is a transistor and, more specifically, is a partially depleted metal oxide semiconductor field effect transistor (MOSFET). The device 10 is fabricated in conjunction with an SOI wafer 12. The SOI wafer includes an active layer 14 (also referred to as a semiconductor layer 14), a buried insulator layer 16 (also referred to herein as a buried oxide (BOX) layer 16), and a semiconductor substrate 18. 
In one embodiment, the wafer 12 has a silicon semiconductor layer 14, a silicon substrate 18, and a silicon dioxide (SiO2) buried insulator layer 16. Within the semiconductor layer 14, isolation regions 17 define the size and placement of an active region 19, the active region 19 having a source region (or source 20), a drain region (or drain 22) and a body region (or body 24) disposed therebetween. The source 20 and the drain 22 are doped, as described in more detail below, to form N-type regions or P-type regions as desired. The body 24 is doped to have doping opposite that of the source 20 and the drain 22. Alternatively, the body 24 can be undoped. The source 20 and the drain 22 each include extensions 43 (FIG. 1A) extending under sidewall spacers 44, the sidewall spacers 44 being disposed adjacent a gate stack (or gate 46). The gate 46 is disposed on top of the body 24. The gate 46 includes a gate dielectric 50 and a gate electrode 48 disposed thereon as is known in the art. The gate dielectric 50 may be formed from conventional materials, such as silicon dioxide, silicon oxynitride, or silicon nitride (Si3N4), and the gate electrode 48 can be formed from a conductive material, such as polysilicon. The source 20 and the drain 22 also include deep implants as described below in more detail. The deep implants are doped so that a source/body hyperabrupt junction 40 and a drain/body hyperabrupt junction 42 are formed. In addition, the junctions 40 and 42 are physically steep and are formed to be as vertical as possible. Therefore, the hyperabrupt junctions 40 and 42 generally extend at least from the lower edge of the extensions 43 (i.e., at the "corner" where the deep implant intersects with the extensions 43) towards the BOX layer 16. 
The depth of the hyperabrupt junctions 40 and 42 is defined by the depth to which the source 20 and the drain 22 are amorphized during an amorphization step carried out prior to dopant implantation. Below the amorphization depth, the doping concentration of the deep implants falls off, reducing the degree of abruptness of the source/body junction and the drain/body junction below the amorphization depth. The device 10 also includes a source silicide region 54, a drain silicide region 56 and a gate silicide region 55. In the illustrated embodiment, the source and drain silicide regions 54 and 56 are substantially symmetric about the gate 46, although it will be appreciated that the silicide regions 54 and 56 may be asymmetrical relative to the gate 46. The silicide regions 54 and 56 have upper surfaces 58 and 60, respectively, for external electrical connection using components such as contacts, vias and conductor lines. The illustrated source silicide region 54 interfaces the non-silicided portion of the source 20 along a lateral interface 68 and a generally vertical interface 70. The interfaces 68 and 70 are generally smooth and are generally perpendicular to one another, although a corner radius may be present at the junction where the interfaces 68 and 70 meet and the interfaces 68 and 70 may be bowed, arced or otherwise non-linear. Similarly, the drain silicide region 56 has a lateral interface 72 and a vertical interface 74, which are generally smooth and perpendicular to one another, although a corner radius may be present at the junction where the interfaces 72 and 74 meet and the interfaces 72 and 74 may be bowed, arced or otherwise non-linear. As shown in FIG. 1A, the interface 70 is laterally spaced from the hyperabrupt junction 40 as indicated by reference number 80. The lateral distance 80 is about 60 Å to about 150 Å. 
In another embodiment, the lateral distance is about 90 Å to about 120 Å, and in another embodiment, the lateral distance is less than about 100 Å, but not contacting the hyperabrupt junction 40. With respect to the foregoing ranges, and all other ranges and ratios herein, the range and ratio limits can be combined. As indicated by reference number 82, the interface 70 extends in a generally vertical arrangement adjacent the hyperabrupt junction 40 along a distance of about 70 Å to about 130 Å. In one embodiment, the vertical distance 82 is about 1.0 to about 1.5 times the lateral distance 80, and in one embodiment, the vertical distance 82 is about 1.2 to about 1.3 times the lateral distance 80. Similarly, the same or similar spacing parameters are established for the drain silicide region 56. According to the invention, the proximity of the silicide regions 54 and 56 to the respective source/body hyperabrupt junction 40 and drain/body hyperabrupt junction 42 enhances junction recombination and reduces floating body effects. In addition, the hyperabrupt source/body junction 40 and the hyperabrupt drain/body junction 42 allow for lower contact resistance. More particularly, the proximity of the silicide regions 54 and 56 to the hyperabrupt junctions 40 and 42 tends to make the device 10 more leaky. However, in the presence of these leaky diode junctions, the silicide may have a tendency to interact with lightly doped portions of the junction, increasing the tunneling barrier and, thus, increasing the contact resistance. In the present invention, the hyperabrupt nature of the junctions 40 and 42 allows the silicide interfaces 70 and 74 to be placed in close proximity thereto (e.g., at a distance of less than 100 Å).

FIG. 2 is a flow chart of a method 100 for forming the device 10. In step 102 and as illustrated in FIG. 3, an SOI wafer 12 is provided. 
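The spacing relationships described above admit a quick numeric sanity check. The sketch below is illustrative arithmetic only, not part of the patent; the function name and the 80 Å example value are assumptions chosen inside the stated ranges.

```python
# Illustrative arithmetic: relate the lateral silicide-to-junction spacing
# (reference number 80) to the vertical extent of the silicide interface
# (reference number 82) using the stated ratio of about 1.0x to 1.5x.
# All distances are in angstroms; the 80 A input is a hypothetical example.

def vertical_extent_bounds(lateral_spacing, lo_ratio=1.0, hi_ratio=1.5):
    """Return the (min, max) vertical extent implied by the ratio range."""
    return (lo_ratio * lateral_spacing, hi_ratio * lateral_spacing)

low, high = vertical_extent_bounds(80)
# An 80 A lateral spacing implies an 80-120 A vertical extent, which falls
# within the "about 70 A to about 130 A" window given in the text.
print(low, high)  # 80.0 120.0
```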
As mentioned, the SOI wafer 12 includes the substrate 18, the active, or semiconductor, layer 14 and the BOX layer 16 disposed therebetween. The semiconductor layer 14 may be suitably doped for the formation of a device with a body having P or N type doping. The wafer 12 may be formed using techniques known in the art, such as a wafer bonding technique or a separation by implantation of oxygen (SIMOX) technique. Thereafter, in step 104 and as illustrated in FIG. 4, isolation regions 17 are formed to define the active region 19. In step 106 and as illustrated in FIG. 4, the gate 46, including the gate dielectric 50 and the gate electrode 48, is formed using conventional techniques. For example, a layer of dielectric material (e.g., SiO2 or Si3N4) may be deposited on and/or grown on the semiconductor layer 14. Thereafter, a layer of conductive gate electrode material (e.g., polysilicon) may be deposited on the layer of dielectric material by using, for example, low pressure chemical vapor deposition (LPCVD). The dielectric and electrode materials may be selectively removed, for example by well-known photolithography and selective etching methods, to form the gate 46 in a desired location. An example of a suitable etching method is reactive ion etching (RIE), using an appropriate etchant. It will be appreciated that a wide variety of other suitable gate structures as are known in the art may be formed in step 106. In addition, the gate 46 can be pre-doped and activated using known techniques. In step 108, a halo can be implanted as is well known in the art. In step 110 and as illustrated in FIG. 5, respective source 20 and drain 22 extensions 43 are formed by implanting ions 112 using, for example, a lightly doped drain (LDD) technique. Exemplary ions 112 for extension 43 formation include phosphorous or arsenic to establish N-type doping and boron or antimony to establish P-type doping. 
An exemplary implantation energy range is about 5 to 80 keV, and an exemplary dosage range is about 1×10^12 to about 5×10^15 atoms/cm^2. It will be appreciated that the gate 46 acts as a self-aligned mask during extension 43 formation. Some dopant may diffuse under the gate 46 as is conventional. It will further be appreciated that, if desired, a separate doping mask or temporary spacer may be used in place of or in addition to the gate 46. Thereafter, in step 114, the halo (if formed) and the extensions 43 are activated with a thermal cycle, such as a rapid thermal anneal (RTA). As an alternative, the extensions 43 can be formed using a solid phase epitaxy (SPE) process. More specifically, SPE is used to amorphize the semiconductor layer 14 with ion species, such as silicon or germanium. The energy and dosage of the ion species can be determined empirically for the device being fabricated. Next, dopant is implanted to achieve the desired N-type or P-type doping, and then the semiconductor layer 14 is recrystallized using a low temperature anneal (i.e., at a temperature of less than about 700° C.). Referring to FIG. 6, in step 116, the sidewall spacers 44 are formed adjacent the gate 46. The spacers 44 are formed using conventional techniques and are made from a material such as silicon oxide (SiO2) or a nitride (e.g., Si3N4). In step 118 and as illustrated in FIG. 7, source 20 and drain 22 deep implant regions are formed, thereby forming the source 20 and the drain 22 from the respective deep implant regions and the extensions 43. In one embodiment, the deep implants are formed using an SPE process. More specifically, SPE is used to amorphize the semiconductor layer 14 with ion species, such as silicon or germanium. The energy and dosage of the ion species can be determined empirically for the device being fabricated. 
In one embodiment, silicon ions are used to amorphize the semiconductor layer 14; an exemplary energy range is about 5 keV to about 100 keV and an exemplary dosage range is about 1×10^15 atoms/cm^2 to about 1×10^16 atoms/cm^2. Next, dopant is implanted with ions 119 to achieve the desired N-type or P-type doping, and then the semiconductor layer 14 is recrystallized using a low temperature anneal (i.e., at a temperature of less than about 700° C.). The semiconductor layer 14 is amorphized to a desired depth, wherein the depth defines the depth of the hyperabrupt junctions formed along the diode interfaces between the source 20 and the body 24 and between the drain 22 and the body 24, respectively. The gate 46 and the spacers 44 act as a self-aligned mask during ion 119 implantation; however, some diffusion of the implanted ions 119 under the spacers 44 will generally occur as is known in the art. Exemplary ions 119 include phosphorous or arsenic to establish N-type doping and boron or antimony to establish P-type doping. An exemplary energy range for the deep implantation is about 5 keV to about 50 keV, depending on the dopant species. An exemplary dosage range for the deep implantation is about 1×10^15 atoms/cm^2 to about 1×10^16 atoms/cm^2. Following step 118, an exemplary concentration of the dopants in the source 20 and the drain 22 at or near the hyperabrupt junctions 40 and 42 is about 1×10^20 atoms/cm^3 or greater. An exemplary range of concentrations of the dopants in the body 24 at or near the hyperabrupt junctions 40 and 42 is about 1×10^15 atoms/cm^3 to about 1×10^19 atoms/cm^3. In step 120 and as illustrated in FIG. 8, silicide formation is initiated by depositing a layer of metal 122 upon the gate 46, the spacers 44, and the exposed portions of the semiconductor layer 14 in at least the area of the active region 19. The metal layer 122 is formed from a suitable metal, such as titanium, cobalt, or nickel. 
The metal layer 122 may be deposited, for example, by sputtering. Silicide is formed by reacting the metal layer 122 with the portions of the source 20, the drain 22 and the gate electrode 48 that are in contact with the metal layer 122 using one of a number of silicidation or salicidation processes, thereby forming the silicide regions 54, 56 and 55 discussed above. An exemplary method includes annealing by raising the temperature of the semiconductor device 10 being formed to a suitable level (e.g., about 500° C. to about 700° C.) for a suitable length of time (e.g., about 10 seconds to about 10 minutes). Rapid thermal annealing (RTA) may also be employed, for example at a temperature of about 600° C. to about 900° C. for about 5 seconds to about 120 seconds. It will be appreciated that other temperatures and heating times may be employed. As illustrated, the silicide regions 54 and 56 will tend to encroach underneath the spacers 44. In one embodiment, the silicide regions 54 and 56 will encroach under the spacers a lateral distance of about zero Å to about 100 Å. As mentioned, the vertical interfaces 70 and 74 and the lateral interfaces 68 and 72 of the respective silicide regions 54 and 56 are smooth. Various techniques to control the roughness of silicide formation are known in the art. For example, if titanium is used in the silicidation or salicidation process, a preamorphization implant (PAI) to form a layer of amorphous silicon on or in the source 20 and drain 22 can be carried out to control the silicide interface smoothness and to lower the interface sheet resistance. Excess metal of the metal layer 122 can be removed by conventional, well-known methods. As discussed above, the proximity of the silicide regions 54 and 56 to the respective hyperabrupt junctions 40 and 42 enhances junction recombination, thereby reducing floating body effects. 
In addition, the hyperabrupt junctions 40 and 42 lower contact resistance within the device 10. As a result, overall operational performance of the device is improved. Although particular embodiments of the invention have been described in detail, it is understood that the invention is not limited correspondingly in scope, but includes all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto. |
In one embodiment, an apparatus comprises a memory to store executable instructions of an operating system and a processor to identify a request for data from an application; determine whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by the processor and is to cache data of a storage device that is not directly addressable by the processor; and access the data from the persistent page cache. |
A method comprising: identifying a request for data from an application; determining whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by a processor and is to cache data of a storage device that is not directly addressable by the processor; and accessing the data from the persistent page cache.
The method of claim 1, further comprising: identifying a request for second data from a second application; determining whether a volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and accessing the data from the volatile page cache.
The method of any of claims 1-2, further comprising: implementing a volatile page cache manager that is to determine that a file that includes the data is marked for caching in the persistent page cache; and sending a request for the data to a persistent page cache manager.
The method of any of claims 1-2, further comprising implementing a first file system of an operating system, wherein the first file system is to: determine whether a file that includes the data is marked for caching in the persistent page cache or volatile page cache; and in response to determining that the file is marked for caching in the persistent page cache, send a request for the data to a second file system.
The method of any of claims 1-2, further comprising: implementing a first file system that is to send data requests towards the volatile page cache; implementing a second file system that is to send data requests towards the persistent page cache; and implementing a shim layer that is to intercept a data request sent to the first file system and communicate the data request to the second file system.
The method of any of claims 1-5, wherein the request for data comprises a file descriptor.
The method of any of claims 1-6, further comprising sending a request to the storage device to copy the data to the persistent page cache upon a determination that the persistent page cache does not store a copy of the data.
The method of any of claims 1-7, further comprising translating a file descriptor and offset of the request for data into a logical block address and sending the logical block address to the storage device in a request to the storage device.
The method of any of claims 1-8, wherein the volatile page cache is to be stored in a volatile memory that is further to store application code and application data.
The method of any of claims 1-9, wherein the persistent page cache is to be stored in 3D crosspoint memory.
The method of any of claims 1-10, further comprising determining whether to cache data in the volatile page cache or the persistent page cache based on at least one of: a hint from an application that issues a system call referencing the data; whether the data is opened for writing; whether the data is required for booting; or whether the data is file data or metadata.
The method of any of claims 1-11, further comprising, upon receiving a request to sync dirty data of the persistent page cache, updating metadata in the persistent page cache to mark the dirty data as persistent.
An apparatus comprising means for performing the method of any of claims 1-12.
Machine-readable storage including machine-readable instructions, when executed, to implement the method of any of claims 1-12.
An apparatus comprising logic, at least a portion of which is in hardware logic, the logic to perform the method of any of claims 1-12. |
FIELD

The present disclosure relates in general to the field of computer development, and more specifically, to data caching.

BACKGROUND

A computer system may include one or more central processing units (CPUs) which may communicate with one or more storage devices. A CPU may include a processor to execute an operating system and/or other software applications that utilize a storage device coupled to the CPU. The software applications may write data to and read data from the storage device.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a block diagram of components of a computer system in accordance with certain embodiments.
FIG. 2 illustrates a block diagram of components of a computer system implementing operating system based caching in accordance with certain embodiments.
FIG. 3 illustrates a block diagram of components of a computer system implementing a persistent page cache in accordance with certain embodiments.
FIG. 4 illustrates a block diagram of components of a computer system implementing a persistent memory file system and a persistent page cache in accordance with certain embodiments.
FIG. 5 illustrates a block diagram of components of a computer system implementing a persistent memory file system shim layer and a persistent page cache in accordance with certain embodiments.
FIG. 6 illustrates an example flow for providing data to a processor from a page cache in accordance with certain embodiments.

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

Although the drawings depict particular computer systems, the concepts of various embodiments are applicable to any suitable computer systems. Examples of systems in which teachings of the present disclosure may be used include desktop computer systems, server computer systems, storage systems, handheld devices, tablets, other thin notebooks, system on a chip (SOC) devices, and embedded applications. 
Some examples of handheld devices include cellular phones, digital cameras, media players, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include microcontrollers, digital signal processors (DSPs), SOCs, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Various embodiments of the present disclosure may be used in any suitable computing environment, such as a personal computing device, a server, a mainframe, a cloud computing service provider infrastructure, a datacenter, a communications service provider infrastructure (e.g., one or more portions of an Evolved Packet Core), or other environment comprising one or more computing devices.

FIG. 1 illustrates a block diagram of components of a computer system 100 in accordance with certain embodiments. System 100 includes a central processing unit (CPU) 102 coupled to an external input/output (I/O) controller 104, a storage device 106, a volatile system memory device 108, and a persistent system memory device 110. During operation, data may be transferred between storage device 106 and CPU 102, between volatile system memory device 108 and CPU 102, between persistent system memory device 110 and CPU 102, or between any of storage device 106, volatile system memory device 108, and persistent system memory device 110. 
In various embodiments, particular data operations (e.g., read or write operations) involving a storage device 106, volatile system memory device 108, or persistent system memory device 110 may be issued by an operating system 122 and/or other logic (e.g., application 124) executed by processor 111.

Operating system based caching is a caching technique in which a host computing device (e.g., a CPU) executes logic that controls the caching of data stored on a storage device (e.g., a hard disk drive) to a smaller and faster cache storage device (e.g., a solid state drive (SSD)). When data that is not currently cached by the host is requested by an application executed by the host, the data may be retrieved from the storage device and stored in memory that may be accessed more easily by the host computing device (i.e., the data may be cached by the host). For example, data retrieved from the storage device (e.g., a hard disk drive (HDD)) may be cached by storing the retrieved data in a cache storage device (e.g., an SSD), a system memory device, and/or one or more lower level caches of the CPU. After the data is cached, the data may be retrieved from one of the caches rather than the storage device, thus reducing the latency of data accesses by the host.

In an operating system based caching system, an operating system may coordinate the caching of storage data from a storage device in a storage cache device comprising persistent (i.e., non-volatile) storage as well as in a page cache in volatile system memory (e.g., dynamic random-access memory (DRAM)). A page cache (which is sometimes called a buffer cache or disk cache) is a cache for pages corresponding to data of a storage device, such as an HDD. 
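The hit/miss flow described above — check the cache first, and on a miss copy the page in from the backing storage device so later reads are served from the cache — can be sketched minimally. This is an illustrative sketch only; the class and variable names are hypothetical, and a Python dict stands in for both the cache memory and the block device.

```python
# Minimal sketch of operating system based caching: a page cache backed
# by a slower storage device. A dict stands in for each medium; all
# names are hypothetical, not taken from the disclosure.

class PageCache:
    def __init__(self, storage):
        self.pages = {}          # page_id -> bytes; stands in for cache memory
        self.storage = storage   # stands in for the backing storage device

    def read(self, page_id):
        if page_id not in self.pages:
            # Miss: retrieve from the storage device and cache the page,
            # so subsequent reads avoid the slower device.
            self.pages[page_id] = self.storage[page_id]
        return self.pages[page_id]

storage = {7: b"file-data"}
cache = PageCache(storage)
assert cache.read(7) == b"file-data"   # first access populates the cache
assert 7 in cache.pages                # later reads are served from the cache
```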
An operating system may maintain a page cache in otherwise unused portions of the system memory (e.g., physical memory not directly allocated to applications may be used by the operating system for the page cache), resulting in quicker access to the contents of cached pages. A page cache is generally transparent to applications (i.e., applications are unaware as to whether the data retrieved is from a page cache or from the storage device).

In general, system memory may be CPU addressable (e.g., directly addressable by the processor) while the storage device is not. For example, a memory space may be directly addressable by a processor if the CPU can construct the physical address of data based on the address provided in an instruction executed by the processor. As an example, processor 111 of CPU 102 may directly address system memory by using load and store primitives (e.g., load and store instructions executed by cores 114A and 114B). In various embodiments, an address specified in the load and store primitives may be a physical address of the system memory or a virtual address that is translated to a physical address by CPU 102 (e.g., via a memory management unit of the CPU). In contrast, an external storage device (e.g., storage device 106 or a cache storage device) is not CPU addressable, as the CPU 102 must translate a memory address specified by an instruction (or a physical address corresponding to a virtual address specified by the processor) into a logical block address of the storage device 106 (the storage device 106 then translates this logical block address into a physical address of the requested data on the storage device 106). As another example, a memory space may be directly addressable by a processor if the memory space can provide the data to a location within the processor (e.g., in response to a load instruction by the processor).
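The two forms of addressing described above can be contrasted with a small sketch. The page size, block size, and table contents below are invented for illustration: a system memory access resolves a virtual address directly to a physical byte address via a page table, while a storage access must first be reduced to a logical block address that only the device can resolve further.

```python
PAGE_SIZE = 4096   # hypothetical page size
BLOCK_SIZE = 512   # hypothetical storage block size

# CPU-addressable memory: a page table (here one invented entry) maps
# virtual page numbers to physical page numbers, so the CPU itself can
# construct the physical byte address of the data.
page_table = {0x10: 0x2A}  # virtual page 0x10 -> physical page 0x2A

def memory_physical_address(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# Block-addressable storage: the host can only compute a logical block
# address; translation to a media location happens inside the device.
def storage_logical_block(byte_offset):
    return byte_offset // BLOCK_SIZE

assert memory_physical_address(0x10 * PAGE_SIZE + 0x20) == 0x2A * PAGE_SIZE + 0x20
assert storage_logical_block(4096) == 8   # byte offset 4096 -> LBA 8
```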
For example, a system memory may provide requested data to a register of the processor, making it immediately available, whereas a storage device must first copy the data to a system memory before the data is usable by the processor (which may require the processor to retry an instruction after the data has been brought into the system memory).

While storing the page cache in the system memory provides CPU addressability, caching in the storage cache device provides persistence (i.e., a power failure will not result in loss of the stored data). However, utilization of a volatile page cache and a storage cache device results in various drawbacks. Due to the volatility of the page cache, the operating system must typically populate the page cache upon every reboot of the computer system by copying data from the storage cache device to the volatile page cache. Additionally, the capacity of the volatile page cache is typically much smaller than the capacity of the cache storage device. This causes storage data to be evicted from the volatile page cache to the cache storage device and then repopulated back in the volatile page cache based on the application access pattern, resulting in additional overhead. Additionally, because the page cache is volatile, data stored therein needs to be frequently flushed to the cache storage device to achieve persistence. This frequent flushing causes significant performance overhead, especially for synchronous writes. Managing a page cache alongside other data stored in a volatile system memory also incurs relatively large costs, e.g., during the scanning of page lists when inactive pages are evicted to swap space.
Additionally, caching storage data in a volatile page cache may consume precious volatile memory, reducing the amount of volatile memory available to an operating system and applications for storing associated code and volatile data.

In various embodiments of the present disclosure, a computing system 100 comprises a page cache 136 stored in persistent memory 134 such as 3-dimensional (3D) crosspoint memory (or other persistent memory described herein). The persistent page cache 136 provides both CPU addressability and persistence for cached storage data (i.e., data that has a corresponding copy stored in storage device 106). Accordingly, cached data is available in the address space of CPU 102 even after a reboot of the computer system without having to move the data from the address space of storage device 106 after reboot. The need for frequent copying of storage data between a volatile system memory device and a non-volatile storage cache device is also reduced. The latency of I/O requests that hit in the persistent page cache (but that would have missed in the volatile page cache) is reduced considerably. Persistence committing primitives (e.g., instructions requesting the movement of data from the page cache to persistent memory, such as calls to fsync and msync) result in minimal overhead because the associated data will already be stored in a persistent page cache (e.g., such instructions may merely include the updating of metadata to indicate that the data is stored in persistent memory). The usage of volatile memory is reduced, thus freeing up volatile memory for use by applications (operating systems typically use a portion of the volatile system memory for the page cache, while the rest of the memory may be used by applications). Additionally, the scanning of page lists to free volatile memory for applications may be accomplished much faster because the pages of the persistent page cache 136 do not need to be scanned.
Finally, the persistent page cache 136 may enable efficient journaling for implementing transactions in file systems. By using the persistent page cache 136 as an implicit journal log, the need for a separate journal is eliminated. Typical file systems stage transactions in a DRAM-based page cache and then flush the transactions to persistence (i.e., to the cache storage device) on transaction commit. Since page cache 136 is persistent, a slightly modified logging protocol can be used to commit the transaction in the persistent page cache 136, without the need to flush data to a persistent storage device, resulting in improved transaction performance in file systems (or other storage management software such as object storage systems).

CPU 102 comprises a processor 111, such as a microprocessor, an embedded processor, a DSP, a network processor, a handheld processor, an application processor, a coprocessor, an SOC, or other device to execute code (i.e., software instructions). Processor 111, in the depicted embodiment, includes two processing elements (cores 114A and 114B), which may include asymmetric processing elements or symmetric processing elements. However, a processor may include any number of processing elements that may be symmetric or asymmetric.

In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code.
A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core 114 may refer to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. A hardware thread may refer to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and that of a core blurs. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

In various embodiments, the processing elements may also include one or more arithmetic logic units (ALUs), floating point units (FPUs), caches, instruction pipelines, interrupt handling hardware, registers, or other hardware to facilitate the operations of the processing elements.

I/O controller 112 is an integrated I/O controller that includes logic for communicating data between CPU 102 and I/O devices, which may refer to any suitable logic capable of transferring data to and/or receiving data from an electronic system, such as CPU 102.
For example, an I/O device may comprise a controller of an audio/video (A/V) device such as a graphics accelerator; a controller of a data storage device (e.g., storage device 106), such as an SSD, HDD, Non-Volatile Dual In-line Memory Module (NVDIMM), or optical storage disk; a wireless transceiver; a network processor; a network interface controller; a controller for another input device such as a monitor, printer, mouse, keyboard, or scanner; or other suitable device. In a particular embodiment, an I/O device may comprise a storage device controller (not shown) of storage device 106.

An I/O device may communicate with the I/O controller 112 of the CPU 102 using any suitable signaling protocol, such as peripheral component interconnect (PCI), PCI Express (PCIe), Universal Serial Bus (USB), Serial Attached SCSI (SAS), Serial ATA (SATA), Fibre Channel (FC), IEEE 802.3, IEEE 802.11, or other current or future signaling protocol. In particular embodiments, I/O controller 112 and the underlying I/O device may communicate data and commands in accordance with a logical device interface specification such as Non-Volatile Memory Express (NVMe) (e.g., as described by one or more of the specifications available at www.nvmexpress.org/specifications/) or Advanced Host Controller Interface (AHCI) (e.g., as described by one or more AHCI specifications such as Serial ATA AHCI: Specification, Rev. 1.3.1 available at http://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahci-spec-rev1-3-1.html). In various embodiments, I/O devices coupled to the I/O controller may be located off-chip (i.e., not on the same chip as CPU 102) or may be integrated on the same chip as the CPU 102.

Memory controller 116 is an integrated memory controller that includes logic to control the flow of data going to and from one or more system memory devices (sometimes referred to as main memory), such as volatile system memory device 108 or persistent system memory device 110.
Memory controller 116 may include logic operable to read from a system memory device, to write to a system memory device, or to request other operations from a system memory device. In various embodiments, memory controller 116 may receive write requests from cores 114 and/or I/O controller 112 (e.g., when a storage device 106 performs a direct memory access (DMA) operation) and may provide data specified in these requests to a system memory device for storage therein. Memory controller 116 may also read data from a system memory device and provide the read data to I/O controller 112 or a core 114. During operation, memory controller 116 may issue commands including one or more addresses of a system memory device in order to read data from or write data to memory (or to perform other operations). In some embodiments, memory controller 116 may be implemented on the same chip as CPU 102, whereas in other embodiments, memory controller 116 may be implemented on a different chip than that of CPU 102. I/O controller 112 may perform similar operations with respect to one or more storage devices 106.

Volatile memory controller 118 may communicate commands and data with volatile system memory device 108, and persistent memory controller 120 may communicate commands and data with persistent system memory device 110. In the embodiment depicted, volatile system memory device 108 and persistent system memory device 110 are shown as discrete devices, though in other embodiments, volatile memory 126 and persistent memory 134 may be integrated on the same device. Similarly, memory controller 116 is shown as including separate volatile and persistent memory controllers (118 and 120), though in other embodiments, a single memory controller may communicate with both volatile system memory device 108 and persistent system memory device 110 (or a single device that includes both volatile memory 126 and persistent memory 134).
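A combined memory controller of this kind must decide, per request, which memory type services it, based on the physical address. The sketch below is illustrative only; the address ranges are invented, and a real memory controller uses platform-defined address decoders rather than Python ranges.

```python
# Invented physical address map for illustration.
VOLATILE_RANGE = range(0x0000_0000, 0x4000_0000)     # volatile memory 126
PERSISTENT_RANGE = range(0x4000_0000, 0x8000_0000)   # persistent memory 134

def route_request(phys_addr):
    """Pick the controller (118 or 120) that should service a request."""
    if phys_addr in VOLATILE_RANGE:
        return "volatile memory controller 118"
    if phys_addr in PERSISTENT_RANGE:
        return "persistent memory controller 120"
    raise ValueError("address not mapped to system memory")

assert route_request(0x1000) == "volatile memory controller 118"
assert route_request(0x5000_0000) == "persistent memory controller 120"
```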
Memory controller 116 may be operable to determine, based on an address of a request, whether the request should be sent to volatile memory 126 or persistent memory 134, and may format the request accordingly.

The CPU 102 may also be coupled to one or more other I/O devices (such as any of those listed above or other suitable I/O devices) through external I/O controller 104. In a particular embodiment, external I/O controller 104 may couple a storage device 106 to the CPU 102. External I/O controller 104 may include logic to manage the flow of data between one or more CPUs 102 and I/O devices. In particular embodiments, external I/O controller 104 is located on a motherboard along with the CPU 102. The external I/O controller 104 may exchange information with components of CPU 102 using point-to-point or other interfaces.

Volatile system memory device 108 may store any suitable data, such as data used by processor 111 to provide the functionality of computer system 100. In the embodiment depicted, volatile memory 126 stores page cache 128, application code 130, and application data 132. In a particular embodiment, volatile memory 126 does not store a page cache 128 (instead, the entire page cache is implemented in persistent memory 134). However, as explained in greater detail below, it may be advantageous in some situations to maintain a page cache 128 in volatile memory 126 for a portion of cached storage data as well as a page cache 136 in persistent memory 134 for cached storage data.

Page cache 128 or 136 may cache physical pages (sometimes referred to as frames) of storage data of a storage device 106.
The page cache 128 may be maintained by the operating system 122 using volatile memory 126 that is also used by the applications executed by processor 111 (e.g., the page cache 128 may be implemented using memory that is left over after other portions of the volatile memory 126 are used for application code and data), while page cache 136 may, at least in some embodiments, be dedicated to the caching of storage data. Application code 130 may include executable instructions associated with the applications (e.g., a text segment). Application data 132 may include a stack segment storing a collection of frames that store function parameters, return addresses, local variables, or other data; a heap segment that is used when an application allocates memory dynamically at run time; a data segment that includes static variables and initialized global variables; a segment that stores uninitialized global and static variables; and/or any other suitable data associated with one or more applications 124 executed through the operating system 122.

In a particular embodiment, page cache 128 or 136 may cache file data using a radix-tree structure. Each file (which may, e.g., be identified by an inode in a Linux based operating system) having data stored in the page cache may be represented by a radix tree. A radix tree maps file offsets (which are represented using leaf nodes of the radix tree) to data pages of the page cache. When pages are cached in the page cache, file data is read from storage device 106 and stored into the radix-tree leaf nodes.
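The offset-to-page lookup contract of that structure can be sketched as follows. A plain dictionary stands in for the radix tree (the real tree matters for memory efficiency and range scans, not for the contract shown), and the page size and file contents are invented for the example.

```python
PAGE_SIZE = 4096  # hypothetical page size

class FilePageCache:
    """Per-file page cache: maps page indices (file offset / page size)
    to page data, as the radix-tree leaves do in the description above."""

    def __init__(self, backing):
        self.backing = backing  # simulated storage-device file contents
        self.pages = {}         # page index -> page bytes (the "leaves")
        self.dirty = set()      # pages modified but not yet written back

    def read(self, offset, length):
        index = offset // PAGE_SIZE
        if index not in self.pages:  # miss: fill the leaf from "storage"
            start = index * PAGE_SIZE
            self.pages[index] = bytearray(self.backing[start:start + PAGE_SIZE])
        off = offset % PAGE_SIZE
        return bytes(self.pages[index][off:off + length])

    def write(self, offset, data):
        self.read(offset, 0)                  # ensure the page is cached
        index, off = divmod(offset, PAGE_SIZE)
        self.pages[index][off:off + len(data)] = data
        self.dirty.add(index)                 # awaits writeback to storage

backing = bytes(range(256)) * 32   # 8 KB of simulated file data
cache = FilePageCache(backing)
assert cache.read(4100, 4) == backing[4100:4104]   # fills and reads page 1
cache.write(4100, b"\xff")
assert cache.dirty == {1}                          # page 1 awaits writeback
```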
Dirty data (data that has been modified by processor 111 and not yet written back to storage device 106) in a radix tree is either synchronously (e.g., using the fsync operation) or asynchronously (e.g., using periodic writeback) written to the storage device 106.

In various embodiments, the operating system 122 may maintain a page table for each active application, which stores information used to determine a physical memory page residing in a system memory device based on a virtual address (e.g., of an instruction executed by a core 114). In some embodiments, the page tables may be stored in either volatile memory devices or persistent memory devices, and individual virtual page addresses may map to physical page addresses in either volatile memory devices or persistent memory devices.

A system memory device (e.g., volatile system memory device 108 and/or persistent system memory device 110) may be dedicated to a particular CPU 102 or shared with other devices (e.g., one or more other processors or other devices) of computer system 100. In various embodiments, a system memory device may be checked to see whether it stores requested data after a determination is made that the last level cache of CPU 102 does not include the requested data.

In various embodiments, a system memory device may include a memory comprising any number of memory modules, a memory device controller, and other supporting logic (not shown). A memory module may include persistent memory and/or volatile memory. Volatile system memory device 108 includes volatile memory 126 and persistent system memory device 110 includes persistent memory 134, though either system memory device may include both volatile memory and persistent memory in some embodiments.

Volatile memory is a storage medium that requires power to maintain the state of data stored by the medium.
Examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In some embodiments, any portion of memory 108 that is volatile memory can comply with JEDEC standards including but not limited to Double Data Rate (DDR) standards, e.g., DDR3, 4, and 5, or Low Power DDR4 (LPDDR4), as well as emerging standards.

Persistent memory is a storage medium that does not require power to maintain the state of data stored by the medium. In various embodiments, persistent memory may be byte or block addressable. Nonlimiting examples of persistent memory may include any or a combination of: solid state memory (such as planar or 3D NAND flash memory or NOR flash memory), 3D crosspoint memory, memory that uses chalcogenide phase change material (e.g., chalcogenide glass), ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory (e.g., ferroelectric polymer memory), ferroelectric transistor random access memory (Fe-TRAM), ovonic memory, nanowire memory, electrically erasable programmable read-only memory (EEPROM), a memristor, single or multi-level phase change memory (PCM), Spin Hall Effect Magnetic RAM (SHE-MRAM), and Spin Transfer Torque Magnetic RAM (STTRAM).

A storage device 106 may store any suitable data, such as data used by processor 111 to provide functionality of computer system 100. For example, data associated with programs that are executed or files accessed by cores 114A and 114B may be stored in storage device 106. In various embodiments, a storage device 106 may store persistent data (e.g., a user's files or software application code) that maintains its state even after power to storage device 106 is removed.
A storage device 106 may be dedicated to CPU 102 or shared with other devices (e.g., another CPU or other device) of computer system 100.

In various embodiments, storage device 106 may comprise a solid state drive; a hard disk drive; a memory card; an NVDIMM; a tape drive; or other suitable mass storage device. In particular embodiments, storage device 106 is a block based storage device that stores data blocks addressable by a host computing device (e.g., CPU 102) by logical block addresses (LBAs).

Storage device 106 may include any suitable interface to communicate with I/O controller 112 or external I/O controller 104 using any suitable communication protocol such as a DDR-based protocol, PCI, PCIe, USB, SAS, SATA, FC, System Management Bus (SMBus), or other suitable protocol. A storage device 106 may also include a communication interface to communicate with I/O controller 112 or external I/O controller 104 in accordance with any suitable logical device interface specification such as NVMe, AHCI, or other suitable specification.

In various embodiments, the storage device 106 also includes an address translation engine that includes logic (e.g., one or more logical-to-physical (L2P) address tables) to store and update a mapping between a logical address space (e.g., an address space visible to a computing host coupled to the storage device 106) and the physical address space of the storage media of the storage device 106 (which may or may not be exposed to the computing host). The logical address space may expose a plurality of logical groups of data which are physically stored on corresponding physical groups of memory addressable, by the storage device 106, through the physical address space of the storage device 106. Thus, the L2P address table may translate between an LBA provided by a host and a physical address of the corresponding data.
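The host-side and device-side halves of this translation can be sketched as below. The block size, table entries, and media addresses are invented; a real L2P table is a large persistent structure maintained by the storage device's firmware.

```python
BLOCK_SIZE = 512  # hypothetical logical block size

# Invented L2P table private to the storage device: the host never
# sees the right-hand media addresses, only the LBAs on the left.
l2p_table = {0: 0x8000, 1: 0x1200, 8: 0x9E00}

def host_offset_to_lba(byte_offset):
    """Host side: reduce a byte offset to a logical block address."""
    return byte_offset // BLOCK_SIZE

def device_lba_to_media(lba):
    """Device side: resolve the LBA through the L2P table."""
    return l2p_table[lba]

lba = host_offset_to_lba(4096)        # byte offset 4096 -> LBA 8
assert lba == 8
assert device_lba_to_media(lba) == 0x9E00
```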
In a particular embodiment, an LBA specifies the minimum amount of data that may be referenced using a write or read command (which may sometimes be referred to as a page). In various examples, an LBA may refer to a block size of 512 bytes, 1 kilobyte (KB), 2 KB, 4 KB, or other suitable block size.

In some embodiments, all or some of the elements of system 100 are resident on (or coupled to) the same circuit board (e.g., a motherboard). In various embodiments, any suitable partitioning between the elements may exist. For example, the elements depicted in CPU 102 may be located on a single die (i.e., on-chip) or package, or any of the elements of CPU 102 may be located off-chip or off-package.

The components of system 100 may be coupled together in any suitable manner. For example, a bus may couple any of the components together. A bus may include any known interconnect, such as a multi-drop bus, a mesh interconnect, a ring interconnect, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a Gunning transceiver logic (GTL) bus. In various embodiments, an integrated I/O subsystem includes point-to-point multiplexing logic between various components of system 100, such as cores 114, memory controller 116, I/O controller 112, integrated I/O devices, direct memory access (DMA) logic (not shown), etc. In various embodiments, components of computer system 100 may be coupled together through one or more networks comprising any number of intervening network nodes, such as routers, switches, or other computing devices.
For example, a computing host (e.g., CPU 102) and the storage device 106 may be communicably coupled through a network.

Although not depicted, system 100 may use a battery and/or power supply outlet connector and associated system to receive power, a display to output data provided by CPU 102, or a network interface allowing the CPU 102 to communicate over a network. In various embodiments, the battery, power supply outlet connector, display, and/or network interface may be communicatively coupled to CPU 102. Other sources of power can be used, such as renewable energy (e.g., solar power or motion-based power).

FIG. 2 illustrates a block diagram of components of a computer system 200 implementing operating system based caching in accordance with certain embodiments. System 200 may include any of the components of system 100. Various components of system 200 (e.g., virtual file system 204, file system 206, volatile page cache manager 208, block layer 210, storage caching layer 212, and storage device drivers 214 and 216) may comprise logic (e.g., software modules) implemented by operating system 122A, which may have any suitable characteristics of operating system 122.

In the embodiment depicted, application 124 issues a read or write system call 202. The system call may specify any suitable information identifying data, such as a file descriptor identifying the file to be accessed (in some situations this may include a path and/or a name of a file), an amount (e.g., number of bytes) to read or write, an offset into the file (e.g., in terms of bytes from the start of the file), a buffer in which the read data is to be placed or in which the write data is stored, or other suitable data associated with the data to be read or written.

The system call 202 is received by virtual file system 204.
The virtual file system 204 may be an abstraction of file system 206, such that applications may generate system calls without having to format the requests in accordance with any of a number of file systems that may be implemented by an operating system. If multiple file systems are implemented by operating system 122A, the virtual file system 204 may determine the appropriate file system 206 to which the system call should be sent. The virtual file system 204 may format the system call in a manner that is compatible with the particular file system 206 to which the system call is sent.

File system 206 may represent any suitable file system, such as a File Allocation Table (FAT), New Technology File System (NTFS), Resilient File System (ReFS), HFS+, a native Linux file system, ISOFS, or other suitable file system. In general, a file system makes stored data visible to an application or user (e.g., by organizing storage in a hierarchical namespace). A file system may manage access to both the content of files and metadata about those files.

The file system 206 may call a page cache application program interface (API) based on the received system call. In other embodiments, the virtual file system may directly call a page cache API based on the received system call (such a call is indicated by the dotted line between the virtual file system 204 and the volatile page cache managers 208 and 308 in FIGS. 2 and 3). The API call may include any of the information described above as being included in the system call or information derived therefrom. In one embodiment, the API call includes a file identifier (such as an inode as used in Linux operating systems or other similar identifier), a file offset, and a number of bytes. The API call is sent to the volatile page cache manager 208, which determines whether the requested data is stored in volatile page cache 128.
Volatile page cache manager 208 may maintain a record of the data stored within volatile page cache 128 and the status of such data.

In the case of a read system call, if the requested data is stored in volatile page cache 128, then the data is provided to the requesting application (e.g., the volatile page cache manager 208 may send a request for the data to the volatile system memory device 108 through the volatile memory controller 118). If the requested data is not in the volatile page cache 128, the volatile page cache manager 208 notifies the file system 206 of such. The file system then determines one or more LBAs that correspond to the data specified in the system call 202. For example, the file system 206 may map the file descriptor and offset to an LBA.

The LBA(s) are passed to the block layer 210. In a particular embodiment, the LBA(s) determined by file system 206 may be relative LBAs (e.g., the file system 206 may not be aware of one or more other partitions on storage device 106, and thus the determined LBA(s) may be specific to a partition associated with the file system). The block layer 210 has knowledge of the other partitions on storage device 106 and may translate the determined LBA(s) into absolute LBA(s). In some embodiments, the block layer 210 may submit a request with the absolute LBA(s) to an I/O scheduler, which aggregates similar requests before sending an aggregated request to the storage caching layer 212. The storage caching layer 212 determines whether the storage cache device 218 (e.g., an SSD or other storage device that is faster than the storage device 106) has cached data corresponding to the determined LBA(s). If the storage cache device 218 is currently caching the data, the storage caching layer 212 may send a request to the storage device driver 216 to retrieve the data, and the storage device driver 216 may send a request to the storage cache device 218 for the data (e.g., via a controller, such as I/O controller 112).
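The relative-to-absolute LBA translation and the storage-cache lookup (including handling of a lookup that misses) can be sketched as follows. The partition offset and block contents are invented, and dictionaries stand in for the storage cache device 218 and storage device 106.

```python
PARTITION_START_LBA = 2048  # invented: where this partition begins on disk

storage_cache = {2056: b"hot-block"}                        # cache device 218
storage_device = {2056: b"hot-block", 2060: b"cold-block"}  # device 106

def read_block(relative_lba):
    """Translate a partition-relative LBA to an absolute one (block
    layer 210), then try the storage cache before going to disk."""
    absolute_lba = PARTITION_START_LBA + relative_lba
    if absolute_lba in storage_cache:          # storage caching layer 212
        return storage_cache[absolute_lba], "hit"
    data = storage_device[absolute_lba]        # fall back to storage device
    storage_cache[absolute_lba] = data         # cache for the next access
    return data, "miss"

assert read_block(8) == (b"hot-block", "hit")
assert read_block(12) == (b"cold-block", "miss")
assert read_block(12) == (b"cold-block", "hit")  # now cached
```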
If the storage cache device 218 does not have the data cached, the storage caching layer 212 sends a request for the data stored at the determined LBA(s) to the storage device driver 214, which then requests the data from storage device 106 (e.g., via I/O controller 112). The data is then cached in the storage cache device. In either case, the data may be sent to the volatile page cache 128 for storage therein (e.g., via a direct memory access (DMA) operation), so that the processor 111 may access the data from the volatile page cache 128.

In the case of a write system call, corresponding (though not identical) operations may be performed, and the data may be written to any one or more of the volatile page cache 128, storage cache device 218, and/or storage device 106 as a result of the system call 202. In a particular embodiment, a write system call writes the data to the volatile page cache 128 or the persistent page cache 136, and the operating system asynchronously flushes the dirty page cache pages to the storage device 106. Thus, completion of a write system call does not itself necessarily guarantee that the data is persistent (indeed, the data is not persistent if it is only stored in the volatile page cache 128). In order to ensure persistence, the application may issue an additional system call (e.g., fsync or msync) to instruct the operating system to synchronously flush the dirty pages from the page cache to the storage device 106. When the data has been written to the volatile page cache 128, such a system call includes flushing the data to the storage device 106. When the data has been written to the persistent page cache 136, such system calls may merely involve flushing the data from one or more CPU caches (e.g., L1 cache, L2 cache, LLC, etc.)
followed by updating metadata to reflect that the data is persistent, and do not necessarily cause the data to be synchronously flushed to the storage device 106 (though in particular embodiments, such data could be flushed to the storage device 106 in response to these system calls).

FIG. 3 illustrates a block diagram of components of a computer system 300 implementing a persistent page cache 136 in accordance with certain embodiments. System 300 may include any of the components of system 100 or 200. Various components of system 300 are implemented by operating system 122B (including volatile page cache manager 308 and persistent page cache manager 312), which may have any suitable characteristics of the other operating systems described herein.

In the embodiment depicted, the storage cache device 218 has been omitted, as the persistent page cache 136 may provide persistent storage of cached data that is also directly addressable by the processor 111. As in the system of FIG. 2, a read or write system call 202 may result in the file system 206 sending a page cache API call to the volatile page cache manager 308 (which may include any of the characteristics of volatile page cache manager 208). However, volatile page cache manager 308 includes page cache selection and forwarding logic 310, which is operable to determine whether the API call represents a request for the volatile page cache 128 or the persistent page cache 136. If the request is for the volatile page cache 128, then the volatile page cache manager 308 services the request (e.g., by determining whether the requested data is stored by volatile page cache 128 and either requesting the data from volatile page cache 128 or providing an indication to file system 206 that the data is not stored in the volatile page cache 128). If the request is for the persistent page cache 136, then logic 310 forwards the request to the persistent page cache manager 312.
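The dispatch performed by logic 310 can be sketched as below. The per-file flag, manager objects, and file names are invented stand-ins: the sketch only shows the decision of which page cache manager services a request.

```python
# Invented per-file table: True means the file's pages are cached in
# the persistent page cache 136, False means the volatile page cache 128.
persistent_flag = {"boot.cfg": True, "scratch.dat": False}

class PageCacheManager:
    def __init__(self, name):
        self.name = name
        self.pages = {}   # (file, offset) -> data

    def service(self, file_id, offset):
        return self.pages.get((file_id, offset))  # None models a cache miss

volatile_mgr = PageCacheManager("volatile page cache manager 308")
persistent_mgr = PageCacheManager("persistent page cache manager 312")

def select_and_forward(file_id, offset):
    """Logic 310: route the request by the file's persistent-cache flag."""
    mgr = persistent_mgr if persistent_flag.get(file_id) else volatile_mgr
    return mgr, mgr.service(file_id, offset)

persistent_mgr.pages[("boot.cfg", 0)] = b"cfg"
mgr, data = select_and_forward("boot.cfg", 0)
assert mgr is persistent_mgr and data == b"cfg"
mgr, data = select_and_forward("scratch.dat", 0)
assert mgr is volatile_mgr and data is None   # miss in the volatile cache
```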
Persistent page cache manager 312 may perform functions similar to a volatile page cache manager, but with respect to the persistent page cache 136. The persistent page cache manager 312 may service the received request (e.g., by determining whether the requested data is stored by persistent page cache 136 and either requesting the data from persistent page cache 136 or providing an indication to the file system 206, e.g., via logic 310, that the data is not stored in the persistent page cache 136).

If the data is not stored by the selected page cache, the data is requested from storage device 106 (in a manner similar to that described above in connection with FIG. 2). The requested data is then written to the selected page cache, where the processor 111 can access it (e.g., via a load instruction) and provide the requested data to application 124.

In particular embodiments, persistent page cache 136 may include persistent versions of volatile page cache 128's core data structures. As just one example, persistent page cache 136 may include a persistent radix tree. In a particular embodiment, a slab allocator (which operating system 122 may use to manage volatile memory) manages the volatile page cache 128, but does not manage the persistent page cache 136. In another embodiment, a single allocator may manage both the volatile page cache 128 and the persistent page cache 136.

Since almost all existing file systems utilize page caching, these file systems may be compatible with system 300 with little to no changes to the file systems, though changes may be made to the operating system memory management system to accommodate the address space of the additional page cache (i.e., persistent page cache 136). For example, the operating system 122B may manage a table of file mappings that includes a bit for each file mapping that indicates whether the corresponding file is to be cached in the volatile page cache 128 or the persistent page cache 136.
Logic 310 (or other selection logic described below) may access the appropriate entry to determine the value of this bit when determining whether the API call should be forwarded to persistent page cache manager 312 or serviced by volatile page cache manager 308.

Because, in some embodiments, the performance of persistent memory 134 may be lower than the performance of volatile memory 126 (e.g., DRAM), it may be advantageous to cache some files in the volatile page cache 128 and other files in the persistent page cache 136. Operating system 122 (or any of the variants thereof described herein) can support selective caching of files in the volatile page cache 128 or persistent page cache 136. In one example, the decision of whether to cache in the persistent page cache 136 may be based on a hint from an application (e.g., a flag received in a system call such as file open() or fadvise()). In another example, the OS 122 can make the determination based on heuristics. For example, files opened for writing or boot-time files may be cached in the persistent page cache 136. As another example, the OS 122 can initially cache a file in the persistent page cache 136 and track the cache hit rate of the file. If the hit rate increases beyond a certain threshold, the file can additionally or alternatively be cached in volatile page cache 128 to improve access time. In other embodiments, instead of selecting the page cache on a per-file basis, an entire file system may be designated for caching in the persistent page cache 136. For example, when a disk or a partition of a disk is mounted with a persistent cache option, all the address mappings of the file structures read from that disk may be marked with a persistent flag, causing the files (when cached) to be cached in persistent page cache 136.

FIG.
4 illustrates a block diagram of components of a computer system 400 implementing a persistent memory file system 404 and a persistent page cache 136 in accordance with certain embodiments. System 400 may include any of the components of system 100 or other systems described herein. Various components of system 400 are implemented by operating system 122C (including persistent memory file system 404), which may have any suitable characteristics of any of the operating systems described herein.

As in the system of FIG. 2, a read or write system call may result in the system call being passed to a file system 206A. The file system may have any suitable characteristics of file system 206. The file system 206A may additionally include file system selection and forwarding logic 402, which is operable to determine whether the system call represents a request for the volatile page cache 128 or the persistent page cache 136. If the request is for the volatile page cache 128, then logic 402 allows the request to be serviced by file system 206A (e.g., in a manner similar to that described above). If the request is for the persistent page cache 136, then logic 402 may make an API call to persistent memory file system 404. The API call may include any suitable parameters from the system call 202 or other parameters derived therefrom. In a particular embodiment, the API call is a file system cache API call as used in Linux based operating systems, or a similar API call.

Persistent memory file system 404 is any suitable persistent memory aware file system, such as a file system that implements the functionality of a Persistent Memory File System (PMFS), a Linux based DAX-EXT4 or DAX-XFS file system, a Windows based DAS or DAX mode NTFS, or other suitable file system.
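The forwarding behavior of logic 402 can be sketched at the file system level. This is a hedged illustration only: the class names, the dictionary-backed caches, and the way files are marked persistent are assumptions for exposition, not the interface of file system 206A or of any real persistent memory file system.

```python
# Illustrative sketch of file system selection and forwarding logic 402.
# All names and data structures here are assumptions for exposition.

class PMFileSystem:
    """Stands in for persistent memory file system 404: services
    requests directly against the persistent page cache."""
    def __init__(self):
        self.persistent_cache = {}            # path -> page data

    def read(self, path):
        return self.persistent_cache.get(path)    # None on a miss

class SelectingFileSystem:
    """Stands in for file system 206A with logic 402 in front of it."""
    def __init__(self, pm_fs, persistent_paths):
        self.volatile_cache = {}              # path -> page data
        self.pm_fs = pm_fs
        self.persistent_paths = persistent_paths  # files marked persistent

    def read(self, path):
        if path in self.persistent_paths:     # logic 402: forward the call
            return self.pm_fs.read(path)
        return self.volatile_cache.get(path)  # serviced by 206A itself
```

On a miss (a None return in this sketch), the file system would then fetch the data from the storage device and fill the selected cache, as described below.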
A traditional file system is configured to check a page cache before accessing storage. In various embodiments, a persistent memory aware file system is configured to perform reads and writes directly to a storage device (i.e., without first checking for a copy of the data in a page cache). Thus, PM file system 404 may be configured to create a persistent page cache 136 that is accessed directly upon a data access request (without a first check to a traditional page cache, such as volatile page cache 128). In a particular embodiment, persistent memory file system 404 is configured to send requests to persistent page cache 136, but not volatile page cache 128.

When PM file system 404 receives the API call, persistent page cache manager 406 may service the request (e.g., by determining whether the requested data is stored by persistent page cache 136 and either requesting the data from persistent page cache 136 or providing an indication to the file system 206A, e.g., via logic 402, that the data is not stored in the persistent page cache 136).

When the file system 206A receives a system call representing a request for volatile page cache 128, an API call may be made to volatile page cache manager 208 by the file system 206A, and the volatile page cache manager 208 may service the request (e.g., by determining whether the requested data is stored by volatile page cache 128 and either requesting the data from volatile page cache 128 or providing an indication to file system 206A that the data is not stored in the volatile page cache 128). In various embodiments, an operating system may cache storage data in both volatile page cache 128 and persistent page cache 136.
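When neither servicing path finds the data cached, the request falls through to the storage device: the file descriptor and offset are translated to a logical block address (LBA), the block is read, and the selected page cache is filled. Below is a minimal read-through sketch; the contiguous per-file extent map, the 4 KiB page size, and the dictionary-backed devices are all illustrative assumptions rather than real kernel interfaces.

```python
# Hypothetical sketch of the page cache miss path: translate
# (file descriptor, offset) to an LBA, fetch the block from the
# storage device, and fill the selected page cache.

BLOCK_SIZE = 4096

def fd_offset_to_lba(extent_map, fd, offset):
    """extent_map: fd -> starting LBA of an (assumed contiguous) file."""
    return extent_map[fd] + offset // BLOCK_SIZE

def read_through(cache, extent_map, storage, fd, offset):
    key = (fd, offset // BLOCK_SIZE)
    page = cache.get(key)
    if page is None:                    # miss: request the block by LBA
        lba = fd_offset_to_lba(extent_map, fd, offset)
        page = storage[lba]             # models the storage device I/O
        cache[key] = page               # fill the selected page cache
    return page
```

After the fill, subsequent requests for the same page are served from the cache without touching the storage device.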
In a particular embodiment, upon a determination that volatile page cache 128 does not include the data of the request, a file system (e.g., any of the file systems described herein) may make an API call to the persistent page cache manager to determine whether the persistent page cache 136 includes the data.

Regardless of the page cache that was checked, when file system 206A receives an indication that the data was not stored in the selected page cache (or, in some embodiments, that the data was not stored in either page cache), it may request the data from storage device 106, and the data may then be stored to the selected page cache for retrieval by processor 111.

In system 400, the PM file system 404 is effectively used as a cache for the file system 206A. Thus, the file system 206A is modified to use the PM file system 404 as a page cache (in place of volatile page cache 128) for some data. When the PM file system 404 attempts to access data, it may avoid the volatile page cache 128 and attempt to access the data in the persistent page cache 136. In various embodiments, the application is unaware of PM file system 404 and is under the assumption that file system 206A handles all of the read and write system calls. The PM file system 404 may access the persistent page cache 136 directly (e.g., without going through a device driver and a block layer).

FIG. 5 illustrates a block diagram of components of a computer system 500 implementing a persistent memory file system shim layer 502 and a persistent page cache 136 in accordance with certain embodiments. System 500 may include any of the components of system 100 or other systems described herein.
Various components of system 500 are implemented by operating system 122D (including persistent memory file system shim layer 502), which may have any suitable characteristics of any of the operating systems described herein.

The shim layer 502 intercepts requests sent to the file system 206 (e.g., by the application 124 and/or virtual file system 204). Shim layer 502 determines whether the requests are related to files that are to be cached in the persistent page cache 136 or the volatile page cache 128. If the request relates to a file that is marked for caching by the volatile page cache 128, the request is allowed to pass through the shim layer 502 to the file system 206, where it is processed in a manner similar to that described above. If the request relates to a file that is marked for caching by the persistent page cache 136, the shim layer 502 redirects the request to the PM file system 404. In some embodiments, the shim layer 502 may also reformat the request into a format that is compatible with PM file system 404. The request is then serviced by persistent page cache manager 406 in a manner similar to that described above. If the persistent page cache 136 does not include the requested data, the shim layer 502 is notified by the PM file system 404, and a request is made through file system 206 for the data to be copied from storage device 106 to the persistent page cache 136. The request from the shim layer may also indicate that the file system should not check the volatile page cache (e.g., by using DIRECT I/O) before accessing the storage device 106.

In a particular embodiment, instead of determining between passing an intercepted system call to the file system 206 or redirecting the call (e.g., via an API call) to the PM file system 404, the shim layer 502 may make API calls to the PM file system 404 by default.
If the persistent page cache manager 406 determines the data is not in the persistent page cache 136, the shim layer 502 may be notified, and the shim layer may then pass the system call to the file system 206 for processing. This effectively enables the shim layer to present a new persistent memory file system of size equal to storage device 106 to the operating system and applications by caching data in the persistent memory file system 404.

In a particular embodiment, a filter driver of operating system 122D (e.g., some Windows based operating systems provide filter drivers that may run on top of a file system) may be used to implement at least a part of shim layer 502. Thus, in one example, a filter driver may run on top of an NTFS without requiring any significant modifications to the NTFS to implement system 500.

FIG. 6 illustrates an example flow 600 for providing data to a processor 111 from a page cache in accordance with certain embodiments. Various operations of flow 600 may be performed by any suitable logic of system 100, such as CPU 102, volatile page cache 128, persistent page cache 136, or storage device 106.

At 602, a data request is received, e.g., from an application executed by processor 111. At 604, a determination is made as to whether the data is associated with volatile page cache 128 or persistent page cache 136. As one example, a table that maps files to the page caches may be accessed to determine which page cache is assigned to cache data of a file referenced by the data request.

If the volatile page cache is associated with the data, a determination is made as to whether the volatile page cache stores the requested data at 606. If the volatile page cache stores the data, then the data is provided from the volatile page cache to the processor at 608. The data may be provided in any suitable manner. As just one example, the data may be placed on a bus by volatile system memory device 108 and copied into a register of the processor 111.
If the data is not in the volatile page cache, an LBA corresponding to the data is determined at 610 (e.g., based on a file descriptor and offset of the data request) and a request with the LBA is sent to the storage device. The requested data is copied from the storage device to the volatile page cache at 612 and then provided to the processor at 608.

If the persistent page cache is associated with the data, a determination is made as to whether the persistent page cache stores the requested data at 614. If the persistent page cache stores the data, then the data is provided from the persistent page cache to the processor at 616. The data may be provided in any suitable manner. As just one example, the data may be placed on a bus by persistent system memory device 110 and copied into a register of the processor 111. If the data is not in the persistent page cache, an LBA corresponding to the data is determined at 618 (e.g., based on a file descriptor and offset of the data request) and a request with the LBA is sent to the storage device. The requested data is copied from the storage device to the persistent page cache at 620 and then provided to the processor at 616.

The flow described in FIG. 6 is merely representative of operations that may occur in particular embodiments. In other embodiments, additional operations may be performed by the components of system 100. Various embodiments of the present disclosure contemplate any suitable signaling mechanisms for accomplishing the functions described herein. Some of the operations illustrated in FIG. 6 may be repeated, combined, modified or deleted where appropriate. Additionally, operations may be performed in any suitable order without departing from the scope of particular embodiments.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners.
First, as is useful in simulations, the hardware may be represented using a hardware description language (HDL) or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In some implementations, such data may be stored in a database file format such as Graphic Data System II (GDS II), Open Artwork System Interchange Standard (OASIS), or a similar format.

In some implementations, software based hardware models, and HDL and other functional description language objects, can include register transfer language (RTL) files, among other examples. Such objects can be machine-parsable such that a design tool can accept the HDL object (or model), parse the HDL object for attributes of the described hardware, and determine a physical circuit and/or on-chip layout from the object. The output of the design tool can be used to manufacture the physical device. For instance, a design tool can determine configurations of various hardware and/or firmware elements from the HDL object, such as bus widths, registers (including sizes and types), memory blocks, physical link paths, and fabric topologies, among other attributes that would be implemented in order to realize the system modeled in the HDL object. Design tools can include tools for determining the topology and fabric configurations of system on chip (SoC) and other hardware devices.
In some instances, the HDL object can be used as the basis for developing models and design files that can be used by manufacturing equipment to manufacture the described hardware. Indeed, an HDL object itself can be provided as an input to manufacturing system software to cause the manufacture of the described hardware.

In any representation of the design, the data representing the design may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium that stores information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

In various embodiments, a medium storing a representation of the design may be provided to a manufacturing system (e.g., a semiconductor manufacturing system capable of manufacturing an integrated circuit and/or related components). The design representation may instruct the system to manufacture a device capable of performing any combination of the functions described above.
For example, the design representation may instruct the system regarding which components to manufacture, how the components should be coupled together, where the components should be placed on the device, and/or regarding other suitable specifications for the device to be manufactured.

Thus, one or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, often referred to as "IP cores," may be stored on a non-transitory tangible machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that manufacture the logic or processor.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired.
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In various embodiments, the language may be a compiled or interpreted language.

The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable (or otherwise accessible) by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory mediums that may receive information therefrom.

Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Logic may be used to implement any of the flows (e.g., flow 600) or functionality of any of the various components of systems depicted throughout the figures, such as CPU 102, external I/O controller 104, storage device 106, system memory devices 108 and 110, other components described herein, or subcomponents thereof. "Logic" may refer to hardware, firmware, software and/or combinations of each to perform one or more functions. As an example, logic may include hardware, such as a micro-controller or processor, associated with a non-transitory medium to store code adapted to be executed by the micro-controller or processor. Therefore, reference to logic, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of logic refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations.
And, as can be inferred, in yet another embodiment, the term logic (in this example) may refer to the combination of the hardware and the non-transitory medium. In various embodiments, logic may include a microprocessor or other processing element operable to execute software instructions, discrete logic such as an application specific integrated circuit (ASIC), a programmed logic device such as a field programmable gate array (FPGA), a memory device containing instructions, combinations of logic devices (e.g., as would be found on a printed circuit board), or other suitable hardware and/or software. Logic may include one or more gates or other circuit components, which may be implemented by, e.g., transistors. In some embodiments, logic may also be fully embodied as software. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. Often, logic boundaries that are illustrated as separate commonly vary and potentially overlap. For example, first and second logic may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware.

Use of the phrase 'to' or 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation.
But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases 'capable of/to' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A.
Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.

In at least one embodiment, an apparatus comprises a memory to store executable instructions of an operating system; and a processor to identify a request for data from an application; determine whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by the processor and is to cache data of a storage device that is not directly addressable by the processor; and access the data from the persistent page cache.

In an embodiment, the processor is to identify a request for second data from a second application; determine whether a volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and access the data from the volatile page cache. In an embodiment, the processor is to implement a volatile page cache manager that is to determine that a file that includes the data is marked for caching in the persistent page cache; and send a request for the data to a persistent page cache manager.
In an embodiment, the processor is to implement a first file system of the operating system, wherein the first file system is to determine whether a file that includes the data is marked for caching in the persistent page cache or volatile page cache; and in response to determining that the file is marked for caching in the persistent page cache, send a request for the data to a second file system. In an embodiment, the processor is to implement a first file system that is to send data requests towards the volatile page cache; implement a second file system that is to send data requests towards the persistent page cache; and implement a shim layer that is to intercept a data request sent to the first file system and communicate the data request to the second file system. In an embodiment, the request for data comprises a file descriptor. In an embodiment, the processor is to send a request to the storage device to copy the data to the persistent page cache upon a determination that the persistent page cache does not store a copy of the data. In an embodiment, the processor is to translate a file descriptor and offset of the request for data into a logical block address and send the logical block address to the storage device in the request to the storage device. In an embodiment, the volatile page cache is to be stored in a volatile memory that is further to store application code and application data. In an embodiment, the persistent page cache is to be stored in 3D crosspoint memory. In an embodiment, the processor is to determine whether to cache data in the volatile page cache or the persistent page cache based on at least one of a hint from an application that issues a system call referencing the data; whether the data is opened for writing; whether the data is required for booting; or whether the data is file data or metadata. 
In an embodiment, the processor is to, upon receiving a request to sync dirty data of the persistent page cache, update metadata in the persistent page cache to mark the dirty data as persistent.

In at least one embodiment, a method comprises identifying a request for data from an application; determining whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by a processor and is to cache data of a storage device that is not directly addressable by the processor; and accessing the data from the persistent page cache.

In an embodiment, the method further comprises identifying a request for second data from a second application; determining whether a volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and accessing the data from the volatile page cache. In an embodiment, the method further comprises implementing a volatile page cache manager that is to determine that a file that includes the data is marked for caching in the persistent page cache; and sending a request for the data to a persistent page cache manager. In an embodiment, the method further comprises implementing a first file system of an operating system, wherein the first file system is to determine whether a file that includes the data is marked for caching in the persistent page cache or volatile page cache; and, in response to determining that the file is marked for caching in the persistent page cache, send a request for the data to a second file system.
In an embodiment, the method further comprises implementing a first file system that is to send data requests towards the volatile page cache; implementing a second file system that is to send data requests towards the persistent page cache; and implementing a shim layer that is to intercept a data request sent to the first file system and communicate the data request to the second file system. In an embodiment, the request for data comprises a file descriptor. In an embodiment, the method further comprises sending a request to the storage device to copy the data to the persistent page cache upon a determination that the persistent page cache does not store a copy of the data. In an embodiment, the method further comprises translating a file descriptor and offset of the request for data into a logical block address and sending the logical block address to the storage device in a request to the storage device. In an embodiment, the volatile page cache is to be stored in a volatile memory that is further to store application code and application data. In an embodiment, the persistent page cache is to be stored in 3D crosspoint memory. In an embodiment, the method further comprises determining whether to cache data in the volatile page cache or the persistent page cache based on at least one of a hint from an application that issues a system call referencing the data; whether the data is opened for writing; whether the data is required for booting; or whether the data is file data or metadata.
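The translation of a file descriptor and offset into a logical block address mentioned above can be illustrated with a toy extent map. The `EXTENTS` table, block size, and function name are invented for this sketch; a real file system would consult its own on-disk extent or block-mapping structures.

```python
# Toy sketch: translate a (file descriptor, byte offset) pair into a logical
# block address (LBA) to send to the storage device.
BLOCK_SIZE = 4096

# Hypothetical per-file extent map:
# fd -> list of (file_block_start, lba_start, n_blocks)
EXTENTS = {
    3: [(0, 1000, 8),    # fd 3: file blocks 0-7 stored at LBA 1000-1007
        (8, 5000, 8)],   #       file blocks 8-15 stored at LBA 5000-5007
}

def fd_offset_to_lba(fd: int, offset: int) -> int:
    """Translate a byte offset within an open file to a device LBA."""
    file_block = offset // BLOCK_SIZE
    for start, lba_start, count in EXTENTS[fd]:
        if start <= file_block < start + count:
            return lba_start + (file_block - start)
    raise ValueError("offset beyond end of file")
```

The resulting LBA is what would be placed in the request sent to the storage device when the persistent page cache misses.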
In an embodiment, the method further comprises, upon receiving a request to sync dirty data of the persistent page cache, updating metadata in the persistent page cache to mark the dirty data as persistent.

In at least one embodiment, a non-transitory machine readable storage medium includes instructions stored thereon, the instructions when executed by a processor to cause the processor to identify a request for data from an application; determine whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by the processor and is to cache data of a storage device that is not directly addressable by the processor; and access the data from the persistent page cache.

In an embodiment, the instructions when executed are to further cause the processor to identify a request for second data from a second application; determine whether a volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and access the data from the volatile page cache. In an embodiment, the instructions when executed are to further cause the processor to implement a volatile page cache manager that is to determine that a file that includes the data is marked for caching in the persistent page cache; and send a request for the data to a persistent page cache manager. In an embodiment, the instructions when executed are to further cause the processor to implement a first file system of an operating system, wherein the first file system is to determine whether a file that includes the data is marked for caching in the persistent page cache or the volatile page cache; and, in response to determining that the file is marked for caching in the persistent page cache, send a request for the data to a second file system.
In an embodiment, the instructions when executed are to further cause the processor to implement a first file system that is to send data requests towards the volatile page cache; implement a second file system that is to send data requests towards the persistent page cache; and implement a shim layer that is to intercept a data request sent to the first file system and communicate the data request to the second file system.

In at least one embodiment, a computer system comprises a volatile memory to store a volatile page cache; a persistent memory to store a persistent page cache; and a processor to identify a request for data from an application; determine whether the persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by the processor and is to cache data of a storage device that is not directly addressable by the processor; and access the data from the persistent page cache.

In an embodiment, the processor is to identify a request for second data from a second application; determine whether the volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and access the data from the volatile page cache. In an embodiment, the volatile page cache is to be stored in a volatile memory that is further to store application code and application data. In an embodiment, the computer system further comprises the storage device.
In an embodiment, the computer system further comprises one or more of: a battery communicatively coupled to the processor, a display communicatively coupled to the processor, or a network interface communicatively coupled to the processor.

In at least one embodiment, a system comprises means to identify a request for data from an application; means to determine whether a persistent page cache stores a copy of the data, wherein the persistent page cache is directly addressable by a processor and is to cache data of a storage device that is not directly addressable by the processor; and means to access the data from the persistent page cache.

In an embodiment, the system further comprises means to identify a request for second data from a second application; means to determine whether a volatile page cache stores a copy of the data, wherein the volatile page cache is directly addressable by the processor and is to cache data of the storage device; and means to access the data from the volatile page cache. In an embodiment, the system further comprises means to implement a volatile page cache manager that is to determine that a file that includes the data is marked for caching in the persistent page cache; and means to send a request for the data to a persistent page cache manager. In an embodiment, the system further comprises means to implement a first file system of an operating system, wherein the first file system is to determine whether a file that includes the data is marked for caching in the persistent page cache or the volatile page cache; and, in response to determining that the file is marked for caching in the persistent page cache, send a request for the data to a second file system.
In an embodiment, the system further comprises means to implement a first file system that is to send data requests towards the volatile page cache; means to implement a second file system that is to send data requests towards the persistent page cache; and means to implement a shim layer that is to intercept a data request sent to the first file system and communicate the data request to the second file system.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Furthermore, the foregoing use of "embodiment" and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
PROBLEM TO BE SOLVED: To provide an electronic device with a black mask capable of absorbing light and generating power.

SOLUTION: An electronic device comprises: a display area comprising a plurality of optical display elements; and a photovoltaic black mask deposited in areas between the optical display elements of the display area and positioned between a substrate and the optical display elements. The photovoltaic black mask comprises: at least one layer configured to absorb light; and at least one layer configured to generate power.
1. An electronic device comprising: a display area comprising a plurality of optical display elements; a photovoltaic black mask deposited on a substrate and patterned to form a gap in which the optical display elements are located, the photovoltaic black mask comprising at least one layer configured to absorb light and generate power; and a planarization layer filling the gap of the patterned photovoltaic black mask, wherein the planarization layer is disposed between the optical display elements and the substrate. 2. The electronic device of claim 1, wherein the photovoltaic black mask comprises at least 10% of the display area. 3. The electronic device of claim 1, wherein the photovoltaic black mask comprises an antireflective layer deposited on the substrate. 4. The electronic device of claim 3, wherein the antireflective layer comprises at least one material layer. 5. The electronic device of claim 3, wherein the photovoltaic black mask comprises a first electrode layer deposited on the antireflective layer, a semiconductor layer deposited on the first electrode layer, and a second electrode layer deposited on the semiconductor layer, the photovoltaic black mask being patterned to form an opening through the second electrode layer and the semiconductor layer to the first electrode layer. 6. The electronic device of claim 1, wherein the photovoltaic black mask is patterned into discrete sections. 7. The electronic device of claim 6, wherein the electrodes of the sections are connected in series or in parallel. 8. The electronic device of claim 1, further comprising: a processor configured to communicate with the plurality of optical display elements and to process image data; and a storage device configured to communicate with the processor. 9. The electronic device of claim 8, further comprising a driver circuit configured to transmit at least one signal to the plurality of optical display elements. 10. The electronic device of claim 9, further comprising a controller configured to send at least a portion of the image data to the driver circuit. 11. The electronic device of claim 8, further comprising an image source module configured to send the image data to the processor. 12. The electronic device of claim 11, wherein the image source module includes at least one of a receiver, a transceiver, and a transmitter. 13. The electronic device of claim 8, further comprising an input device configured to receive input data and to communicate the input data to the processor. 14. An electronic device comprising: a display area including a plurality of optical display elements; means for generating power and absorbing light, patterned to form a gap in which the optical display elements are located; and planarizing means for filling the gap of the means for generating power and absorbing light, wherein the means for generating power and absorbing light is deposited in areas between the optical display elements of the display area and disposed between a substrate and the optical display elements. 15. The electronic device of claim 14, wherein the means for generating power and absorbing light comprises at least 10% of the display area. 16. The electronic device of claim 14, further comprising an antireflective layer deposited on a substrate. 17. The electronic device of claim 16, wherein the antireflective layer comprises at least one material layer. 18. The electronic device of claim 14, wherein the means for generating power and absorbing light is patterned into discrete sections. 19. The electronic device of claim 18, wherein the electrodes of the sections are connected in series or in parallel. 20. The electronic device of claim 1, wherein the plurality of optical display elements are reflective display elements. 21. The electronic device of claim 14, wherein the plurality of optical display elements are reflective display elements.
Device having a power generating black mask and method of manufacturing the same

The field of the invention relates to microelectromechanical systems (MEMS). Microelectromechanical systems (MEMS) include micromechanical elements, actuators, and electronics. The micromechanical elements may be created using deposition, etching, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or add layers, to form electrical and electromechanical devices. One type of MEMS device is called an interferometric modulator. As used herein, the terms interferometric modulator or interferometric light modulator refer to a device that selectively absorbs and/or reflects light using the principles of optical interference. In certain embodiments, an interferometric modulator may comprise a pair of conductive plates, one or both of which may be transparent and/or reflective in whole or in part and capable of relative motion upon application of an appropriate electrical signal. In a particular embodiment, one plate may comprise a stationary layer deposited on a substrate and the other plate may comprise a metallic membrane separated from the stationary layer by an air gap. As described herein in more detail, the position of one plate in relation to the other can change the optical interference of light incident on the interferometric modulator. Such devices have a wide range of applications, and it would be beneficial in the art to utilize and/or modify the characteristics of these types of devices so that their features can be exploited in improving existing products and creating new products that have not yet been developed.
In one embodiment, an electronic device comprises a display area comprising a plurality of optical display elements, and a photovoltaic black mask deposited in the areas between the optical display elements of the display area, the black mask comprising at least one layer configured to absorb light and at least one layer configured to generate electrical power. In another embodiment, a method of making a photovoltaic black mask comprises: depositing an antireflective layer on a substrate; depositing a first electrode layer on the antireflective layer; depositing a semiconductor layer on the first electrode layer; depositing a second electrode layer on the semiconductor layer; and patterning a portion of the antireflective layer, the first electrode layer, the semiconductor layer, and the second electrode layer. In another embodiment, an electronic device comprises a display area comprising a plurality of optical display elements, means for absorbing light, and means for generating power, the absorbing means and power generating means being deposited in the areas between the optical display elements of the display area.

FIG. 1 is an isometric view illustrating a portion of one embodiment of an interferometric modulator display in which the movable reflective layer of a first interferometric modulator is in a relaxed position and the movable reflective layer of a second interferometric modulator is in an actuated position.
FIG. 2 is a system block diagram illustrating one embodiment of an electronic device incorporating a 3×3 interferometric modulator display.
FIG. 3 is a diagram of the position of the movable mirror versus the applied voltage in one exemplary embodiment of the interferometric modulator of FIG. 1.
FIG. 4 illustrates a set of row and column voltages that may be used to drive an interferometric modulator display.
FIG. 5A illustrates one exemplary frame of display data for the 3×3 interferometric modulator display of FIG. 2.
FIG. 5B is an exemplary timing diagram for row and column signals that may be used to write the frame of FIG. 5A.
FIGS. 6A and 6B are system block diagrams illustrating an embodiment of an image display device comprising a plurality of interferometric modulators.
FIG. 7A is a cross section of the device of FIG. 1.
FIG. 7B is a cross section of an alternative embodiment of an interferometric modulator.
FIG. 7C is a cross section of another alternative embodiment of an interferometric modulator.
FIG. 7D is a cross section of yet another alternative embodiment of an interferometric modulator.
FIG. 7E is a cross section of a further alternative embodiment of an interferometric modulator.
FIGS. 8A and 8B are top views of a portion of an interferometric modulator array, showing non-active areas including structures contained within a plurality of optical display elements.
FIG. 9 is a cross-sectional view of a MEMS device having a mask or light absorbing region according to one embodiment.
FIG. 10 illustrates a power generating black mask according to one embodiment.
FIGS. 11A-11G illustrate a method of manufacturing a power generating black mask according to one embodiment.
FIGS. 12A and 12B illustrate a power generating black mask according to another embodiment.
FIG. 13 illustrates a series-connected power generating black mask according to another embodiment.
FIG. 14 is a graph showing the amount of light reflected and absorbed by one embodiment of a power generating black mask.
FIG. 15 is a table showing the materials and thicknesses of each layer of one embodiment of a power generating black mask.

The following detailed description is directed to certain specific embodiments. However, other embodiments may be used, and the disclosed elements may be implemented in a wide variety of ways. In this description, reference is made to the drawings, wherein like parts are designated with like numerals throughout. As will be apparent from the following description, the embodiments may be implemented in any device that is configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual or pictorial. More particularly, it is contemplated that the embodiments may be implemented in or associated with a variety of electronic devices such as, but not limited to, the following devices.
Mobile telephones, wireless devices, personal digital assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer displays), cockpit controls and/or displays, displays of camera views (e.g., the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., the display of an image on a piece of jewelry). MEMS devices of similar structure to those described herein can also be used in non-display applications such as in electronic switching devices.

The desire for a more power-efficient display in a mobile device, while maintaining the display quality of a conventional display, can be met by an optical mask having power generating capability. For these and other reasons, it is desirable to reduce the amount of power used by the device, or to generate sufficient power to charge its components, while minimizing the amount of additional passive or inactive optical content within the display. In one embodiment, a multipurpose optical component acts as a power generating optical mask, such as a "black mask", to absorb ambient or stray light and to improve the optical response of the display by increasing the contrast ratio; the black mask is also used to generate electrical power for the device. A power generating black mask may be used in a display and may generate power that reduces the overall power consumption of the device. In addition, the power generating black mask can generate sufficient power to charge components of the device. In some applications, the black mask can reflect light of predetermined wavelengths so as to appear as a color other than black.
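As a rough illustration of the power-generation idea above, the electrical power available from a photovoltaic black mask scales with the display area, the fraction of that area covered by the mask, the incident irradiance, and the photovoltaic conversion efficiency. The numbers below are illustrative assumptions only; the specification does not recite them.

```python
# Back-of-the-envelope estimate of the power a photovoltaic black mask could
# generate from the inactive area between display elements.
# All input values in the example are invented for illustration.

def black_mask_power_mw(display_area_cm2: float,
                        mask_fraction: float,
                        irradiance_mw_per_cm2: float,
                        pv_efficiency: float) -> float:
    """Electrical power (mW) from the masked fraction of the display area."""
    return display_area_cm2 * mask_fraction * irradiance_mw_per_cm2 * pv_efficiency

# Example: a 30 cm^2 mobile display, mask covering 10% of the area,
# 10 mW/cm^2 illumination, 5% conversion efficiency.
power = black_mask_power_mw(30.0, 0.10, 10.0, 0.05)
```

Even a few milliwatts of this kind can offset part of a display's standby power or trickle-charge a component, which is consistent with the motivation stated above.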
In one embodiment, a MEMS display, e.g., an array of interferometric modulators, comprises a dynamic optical component (e.g., a dynamic interferometric modulator) and a static optical component (e.g., a static interferometric modulator) offset laterally from the dynamic optical component. The static optical component acts as a "black mask" to absorb ambient or stray light in the non-active areas of the display, improving the optical response of the dynamic optical component, and also acts as a power generating component. For example, the non-active areas can include one or more areas of the MEMS display other than the areas corresponding to the movable reflective layers. The non-active areas can also include areas of the display that are not used to display the image or data presented on the display.

Although a MEMS device comprising an interferometric modulator is used to describe one embodiment, it should be understood that each portion of the present disclosure may be applied to other optical devices that have non-active areas in which absorbing light is desirable, such as various imaging displays and optoelectronic devices that do not include interferometric modulators (e.g., LCD, LED, and plasma displays). As will be apparent from the following description, each portion of the present disclosure may be implemented in any device configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual or pictorial. More specifically, the present disclosure may be applied to various electronic devices such as, but not limited to, the following devices.
Mobile telephones, wireless devices, personal digital assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer displays), cockpit controls and/or displays, displays of camera views (e.g., the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., the display of an image on a piece of jewelry). MEMS devices of similar structure to those described herein can also be used in non-display applications such as in electronic switching devices. Furthermore, the present disclosure is by no means limited to use in display devices.

One interferometric modulator display embodiment comprising an interferometric MEMS display element is illustrated in FIG. 1. In these devices, each pixel is in either a bright or dark state. In the bright ("on" or "open") state, the display element reflects a large portion of incident visible light to a user. When in the dark ("off" or "closed") state, the display element reflects little incident visible light to the user. Depending on the embodiment, the light reflectance properties of the "on" and "off" states may be reversed. The MEMS pixels can be configured to reflect predominantly at selected colors, allowing for a color display in addition to black and white.

FIG. 1 is an isometric view depicting two adjacent pixels in a series of pixels of a display, wherein each pixel comprises a MEMS interferometric modulator. In some embodiments, an interferometric modulator display comprises a row/column array of these interferometric modulators.
Each interferometric modulator includes a pair of reflective layers positioned at a variable and controllable distance from each other to form a resonant optical gap with at least one variable dimension. In one embodiment, one of the reflective layers may be moved between two positions. In the first position, referred to herein as the relaxed position, the movable reflective layer is positioned at a relatively large distance from a fixed partially reflective layer. In the second position, referred to herein as the actuated position, the movable reflective layer is positioned more closely adjacent to the partially reflective layer. Incident light that reflects from the two layers interferes constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel.

The depicted portion of the pixel array in FIG. 1 includes two adjacent interferometric modulators 12a and 12b. In the interferometric modulator 12a on the left, a movable reflective layer 14a is illustrated in a relaxed position at a predetermined distance from an optical stack 16a, which includes a partially reflective layer. In the interferometric modulator 12b on the right, the movable reflective layer 14b is illustrated in an actuated position adjacent to the optical stack 16b.

The optical stacks 16a and 16b (collectively referred to as optical stack 16) typically comprise several fused layers, as described herein, which can include an electrode layer, such as indium tin oxide (ITO), a partially reflective layer, such as chromium, and a transparent dielectric. The optical stack 16 is thus electrically conductive, partially transparent, and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The partially reflective layer can be formed from a variety of materials that are partially reflective, such as various metals, semiconductors, and dielectrics.
The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials.

In some embodiments, the layers of the optical stack 16 are patterned into parallel strips, and may form row electrodes in a display device as described further below. The movable reflective layers 14a, 14b may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes 16a, 16b) deposited on top of posts 18 and an intervening sacrificial material deposited between the posts 18. When the sacrificial material is etched away, the movable reflective layers 14a, 14b are separated from the optical stacks 16a, 16b by a defined gap 19. A highly conductive and reflective material such as aluminum may be used for the reflective layers 14, and these strips may form column electrodes in a display device.

With no applied voltage, the gap 19 remains between the movable reflective layer 14a and the optical stack 16a, with the movable reflective layer 14a in a mechanically relaxed state, as illustrated by the pixel 12a in FIG. 1. However, when a potential difference is applied to a selected row and column, the capacitor formed at the intersection of the row and column electrodes at the corresponding pixel becomes charged, and electrostatic forces pull the electrodes together. If the voltage is high enough, the movable reflective layer 14 is deformed and is forced against the optical stack 16. A dielectric layer (not illustrated in this figure) within the optical stack 16 may prevent shorting and control the separation distance between the layers 14 and 16, as illustrated by the pixel 12b on the right side of FIG. 1. The behavior is the same regardless of the polarity of the applied potential difference.
In this way, row/column actuation that can control the reflective versus non-reflective pixel states is analogous in many ways to that used in conventional LCD and other display technologies.

FIGS. 2 through 5B illustrate one exemplary process and system for using an array of interferometric modulators in a display application. FIG. 2 is a system block diagram illustrating one embodiment of an electronic device that may incorporate aspects of the present invention. In the exemplary embodiment, the electronic device includes a processor 21, which may be any general purpose single- or multi-chip microprocessor such as an ARM, Pentium (registered trademark), Pentium II, Pentium III, Pentium IV, Pentium Pro, an 8051, a MIPS (registered trademark), a Power PC (registered trademark), an ALPHA (registered trademark), or any special purpose microprocessor such as a digital signal processor, microcontroller, or programmable gate array. As is conventional in the art, the processor 21 may be configured to execute one or more software modules. In addition to executing an operating system, the processor may be configured to execute one or more software applications, including a web browser, a telephone application, an email program, or any other software application.

In one embodiment, the processor 21 is also configured to communicate with an array driver 22. In one embodiment, the array driver 22 includes a row driver circuit 24 and a column driver circuit 26 that provide signals to a display array or panel 30. The cross section of the array illustrated in FIG. 1 is shown by the lines 1-1 in FIG. 2. For MEMS interferometric modulators, the row/column actuation protocol may take advantage of the hysteresis property of these devices illustrated in FIG. 3. It may require, for example, a 10 volt potential difference to cause a movable layer to deform from the relaxed state to the actuated state.
However, when the voltage is reduced from that value, the movable layer maintains its state as the voltage drops back below 10 volts. In the exemplary embodiment of FIG. 3, the movable layer does not relax completely until the voltage drops below 2 volts. Thus, in the example illustrated in FIG. 3, there exists a window of applied voltage, about 3 to 7 volts, within which the device is stable in either the relaxed or actuated state. This is referred to herein as the "hysteresis window" or "stability window". For a display array having the hysteresis characteristics of FIG. 3, the row/column actuation protocol can be designed such that during row strobing, pixels in the strobed row that are to be actuated are exposed to a voltage difference of about 10 volts, and pixels that are to be relaxed are exposed to a voltage difference of close to zero volts. After the strobe, the pixels are exposed to a steady state voltage difference of about 5 volts such that they remain in whatever state the row strobe put them in. After being written, each pixel sees a potential difference within the "stability window" of 3-7 volts in this example. This feature makes the pixel design illustrated in FIG. 1 stable under the same applied voltage conditions in either an actuated or relaxed pre-existing state. Since each pixel of the interferometric modulator, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers, this stable state can be held at a voltage within the hysteresis window with almost no power dissipation. Essentially no current flows into the pixel if the applied potential is fixed.

In typical applications, a display frame may be created by asserting the set of column electrodes in accordance with the desired set of actuated pixels in the first row. A row pulse is then applied to the row 1 electrode, actuating the pixels corresponding to the asserted column lines.
The asserted set of column electrodes is then changed to correspond to the desired set of actuated pixels in the second row. A pulse is then applied to the row 2 electrode, actuating the appropriate pixels in row 2 in accordance with the asserted column electrodes. The row 1 pixels are unaffected by the row 2 pulse, and remain in the state they were set to during the row 1 pulse. This may be repeated for the entire series of rows in a sequential fashion to produce the frame. Generally, the frames are refreshed and/or updated with new display data by continually repeating this process at some desired number of frames per second. A wide variety of protocols for driving the row and column electrodes of pixel arrays to produce display frames are well known and may be used in conjunction with the present invention.

FIGS. 4, 5A, and 5B illustrate one possible actuation protocol for creating a display frame on the 3×3 array of FIG. 2. FIG. 4 illustrates a possible set of column and row voltage levels that may be used for pixels exhibiting the hysteresis curves of FIG. 3. In the FIG. 4 embodiment, actuating a pixel involves setting the appropriate column to -Vbias and the appropriate row to +ΔV, which may correspond to -5 volts and +5 volts, respectively. Relaxing the pixel is accomplished by setting the appropriate column to +Vbias and the appropriate row to the same +ΔV, producing a zero volt potential difference across the pixel. In those rows where the row voltage is held at zero volts, the pixels are stable in whatever state they were originally in, regardless of whether the column is at +Vbias or -Vbias. As is also illustrated in FIG. 4, it will be appreciated that voltages of opposite polarity to those described above can be used. For example, actuating a pixel can involve setting the appropriate column to +Vbias and the appropriate row to -ΔV.
In this embodiment, releasing the pixel is accomplished by setting the appropriate column to -Vbias and the appropriate row to the same -ΔV, producing a zero volt potential difference across the pixel.

FIG. 5B is a timing diagram showing a series of row and column signals applied to the 3×3 array of FIG. 2 that will result in the display arrangement illustrated in FIG. 5A, where actuated pixels are non-reflective. Prior to writing the frame illustrated in FIG. 5A, the pixels can be in any state; in this example, all the rows are at 0 volts and all the columns are at +5 volts. With these applied voltages, all pixels are stable in their existing actuated or relaxed states.

In the FIG. 5A frame, pixels (1,1), (1,2), (2,2), (3,2), and (3,3) are actuated. To accomplish this, during a "line time" for row 1, columns 1 and 2 are set to -5 volts, and column 3 is set to +5 volts. This does not change the state of any pixels, because all the pixels remain within the 3-7 volt stability window. Row 1 is then strobed with a pulse that goes from 0 volts up to 5 volts and back to zero. This actuates the (1,1) and (1,2) pixels and relaxes the (1,3) pixel. No other pixels in the array are affected. To set row 2 as desired, column 2 is set to -5 volts, and columns 1 and 3 are set to +5 volts. The same strobe applied to row 2 will then actuate pixel (2,2) and relax pixels (2,1) and (2,3). Again, no other pixels of the array are affected. Row 3 is similarly set by setting columns 2 and 3 to -5 volts and column 1 to +5 volts. The row 3 strobe sets the row 3 pixels as shown in FIG. 5A. After writing the frame, the row potentials are zero, the column potentials can remain at either +5 or -5 volts, and the display is then stable in the arrangement of FIG. 5A. It will be appreciated that the same procedure can be employed for arrays of dozens or hundreds of rows and columns.
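The line-by-line frame write just described can be sketched in code: assert the columns for one row, strobe that row so actuating pixels see a 10 V difference and relaxing pixels see 0 V, while unstrobed rows see 5 V and hold. The sketch below is illustrative only (the function and variable names are not from the source), but it reproduces the FIG. 5A pattern:

```python
def write_frame(target, n_rows=3, n_cols=3):
    """Write a target pattern (a set of actuated (row, col) pairs, 1-indexed)
    by strobing one row at a time, as in the FIG. 5A example."""
    state = [[False] * n_cols for _ in range(n_rows)]  # start fully relaxed
    for r in range(1, n_rows + 1):
        # Assert columns: -5 V where the pixel should actuate, +5 V elsewhere.
        cols = [-5 if (r, c) in target else +5 for c in range(1, n_cols + 1)]
        # Strobe row r to +5 V: |col - row| is 10 V (actuate) or 0 V (relax).
        for c in range(1, n_cols + 1):
            diff = abs(cols[c - 1] - 5)
            if diff >= 10:
                state[r - 1][c - 1] = True
            elif diff <= 2:
                state[r - 1][c - 1] = False
            # Unstrobed rows see a 5 V difference and simply hold their state.
    return state

# The FIG. 5A arrangement: pixels (1,1), (1,2), (2,2), (3,2), (3,3) actuated.
frame = write_frame({(1, 1), (1, 2), (2, 2), (3, 2), (3, 3)})
assert frame == [[True, True, False],
                 [False, True, False],
                 [False, True, True]]
```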
The timing, sequence, and levels of voltages used to effect row and column actuation can be varied widely within the general principles outlined above, and the above example is only exemplary; any operating voltage method may be used with the systems and methods described herein.

FIGS. 6A and 6B are system block diagrams illustrating one embodiment of a display device 40. The display device 40 can be, for example, a cellular or mobile telephone. However, the same components of the display device 40, or slight variations thereof, are also illustrative of various types of display devices such as televisions and portable media players.

The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 is generally formed from any of a variety of manufacturing processes well known to those of skill in the art, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to, plastic, metal, glass, rubber, and ceramic, or a combination thereof. In one embodiment, the housing 41 includes removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.

The display 30 of the exemplary display device 40 may be any of a variety of displays, including a bi-stable display, as described herein. In other embodiments, the display 30 may be a flat-panel display, such as the plasma, EL, OLED, STN LCD, or TFT LCD described above, or a non-flat-panel display, such as a CRT or other tube device, as is well known to those of skill in the art. However, for purposes of describing the present embodiment, the display 30 includes an interferometric modulator display, as described herein.

The components of one embodiment of the exemplary display device 40 are schematically illustrated in FIG. 6B.
The illustrated exemplary display device 40 includes a housing 41 and can include additional components at least partially enclosed therein. For example, in one embodiment, the exemplary display device 40 includes a network interface 27 that includes an antenna 43, which is coupled to a transceiver 47. The transceiver 47 is connected to the processor 21, which is connected to conditioning hardware 52. The conditioning hardware 52 may be configured to condition a signal (e.g., filter a signal). The conditioning hardware 52 is connected to the speaker 45 and the microphone 46. The processor 21 is also connected to an input device 48 and a driver controller 29. The driver controller 29 is coupled to a frame buffer 28 and to the array driver 22, which in turn is coupled to the display array 30. A power supply 50 provides power to all components as required by the particular exemplary display device 40 design.

The network interface 27 includes the antenna 43 and the transceiver 47 so that the exemplary display device 40 can communicate with one or more devices over a network. In one embodiment, the network interface 27 may also have some processing capability to relieve requirements of the processor 21. The antenna 43 is any antenna known to those of skill in the art for transmitting and receiving signals. In one embodiment, the antenna transmits and receives RF signals according to the IEEE 802.11 standard, including IEEE 802.11(a), (b), or (g). In another embodiment, the antenna transmits and receives RF signals according to the Bluetooth standard. In the case of a cellular telephone, the antenna is designed to receive CDMA, GSM, AMPS, or other known signals that are used to communicate within a wireless cell phone network. The transceiver 47 pre-processes the signals received from the antenna 43 so that they may be received by, and further manipulated by, the processor 21.
The transceiver 47 also processes signals received from the processor 21 so that they may be transmitted from the exemplary display device 40 via the antenna 43.

In an alternative embodiment, the transceiver 47 can be replaced by a receiver. In yet another alternative embodiment, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. For example, the image source can be a digital video disc (DVD) or a hard disc drive that contains image data, or a software module that generates image data.

The processor 21 generally controls the overall operation of the exemplary display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The processor 21 then sends the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.

In one embodiment, the processor 21 includes a microcontroller, CPU, or logic unit to control operation of the exemplary display device 40. The conditioning hardware 52 generally includes amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the exemplary display device 40, or may be incorporated within the processor 21 or other components.

The driver controller 29 takes the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and reformats the raw image data appropriately for high speed transmission to the array driver 22.
Specifically, the driver controller 29 reformats the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. The driver controller 29 then sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone integrated circuit (IC), such controllers may be implemented in many ways. They may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.

Typically, the array driver 22 receives the formatted information from the driver controller 29 and reformats the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands, of leads coming from the display's x-y matrix of pixels.

In one embodiment, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, in one embodiment, the driver controller 29 is a conventional display controller or a bi-stable display controller (e.g., an interferometric modulator controller). In another embodiment, the array driver 22 is a conventional driver or a bi-stable display driver (e.g., an interferometric modulator display driver). In one embodiment, the driver controller 29 is integrated with the array driver 22. Such an embodiment is common in highly integrated systems such as cellular phones, watches, and other small-area displays. In yet another embodiment, the display array 30 is a typical display array or a bi-stable display array (e.g., a display including an array of interferometric modulators).

The input device 48 allows a user to control the operation of the exemplary display device 40.
In one embodiment, the input device 48 includes a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a touch screen, or a pressure- or heat-sensitive membrane. In one embodiment, the microphone 46 is an input device for the exemplary display device 40. When the microphone 46 is used to input data to the device, voice commands may be provided by a user for controlling operations of the exemplary display device 40.

The power supply 50 can include a variety of energy storage devices as are well known in the art. For example, in one embodiment, the power supply 50 is a rechargeable battery, such as a nickel-cadmium battery or a lithium ion battery. In another embodiment, the power supply 50 is a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell and solar-cell paint. In another embodiment, the power supply 50 is configured to receive power from a wall outlet.

In some embodiments, control programmability resides, as described above, in a driver controller which can be located in several places in the electronic display system. In some embodiments, control programmability resides in the array driver 22. Those of skill in the art will recognize that the above-described optimizations may be implemented in any number of hardware and/or software components and in various configurations.

The details of the structure of interferometric modulators that operate in accordance with the principles set forth above may vary widely. For example, FIGS. 7A-7E illustrate five different embodiments of the movable reflective layer 14 and its supporting structures. FIG. 7A is a cross section of the embodiment of FIG. 1, where a strip of metal material 14 is deposited on orthogonally extending supports 18. In FIG. 7B, the movable reflective layer 14 is attached to supports at the corners only, on tethers 32. In FIG. 7C, the movable reflective layer 14 is suspended from a deformable layer 34, which may comprise a flexible metal.
The deformable layer 34 connects, directly or indirectly, to the substrate 20 around the perimeter of the deformable layer 34. These connections are herein referred to as support posts. The embodiment illustrated in FIG. 7D has support post plugs 42 upon which the deformable layer 34 rests. The movable reflective layer 14 remains suspended over the gap, as in FIGS. 7A-7C, but the deformable layer 34 does not form the support posts by filling holes between the deformable layer 34 and the optical stack 16. Rather, the support posts are formed of a planarization material, which is used to form the support post plugs 42. The embodiment illustrated in FIG. 7E is based on the embodiment shown in FIG. 7D, but may also be adapted to work with any of the embodiments illustrated in FIGS. 7A-7C, as well as additional embodiments not shown. In the embodiment shown in FIG. 7E, an extra layer of metal or other conductive material has been used to form a bus structure 44. This allows signal routing along the back of the interferometric modulators, eliminating a number of electrodes that may otherwise have had to be formed on the substrate 20.

In embodiments such as those shown in FIG. 7, the interferometric modulators function as direct-view devices, in which images are viewed from the front side of the transparent substrate 20, the side opposite to that upon which the modulator is arranged. In these embodiments, the reflective layer 14 optically shields the portions of the interferometric modulator on the side of the reflective layer opposite the substrate 20, including the deformable layer 34. This allows the shielded areas to be configured and operated upon without negatively affecting the image quality. Such shielding allows the bus structure 44 in FIG. 7E, which provides the ability to separate the optical properties of the modulator from the electromechanical properties of the modulator, such as addressing and the movements that result from that addressing.
This separable modulator architecture allows the structural design and materials used for the electromechanical aspects and the optical aspects of the modulator to be selected and to function independently of each other. Moreover, the embodiments shown in FIGS. 7C-7E have additional benefits deriving from the decoupling of the optical properties of the reflective layer 14 from its mechanical properties, which are carried out by the deformable layer 34. This allows the structural design and materials used for the reflective layer 14 to be optimized with respect to the optical properties, and the structural design and materials used for the deformable layer 34 to be optimized with respect to desired mechanical properties.

FIGS. 8A and 8B show an example of a portion of a display having display elements that can incorporate a black mask. FIGS. 8A and 8B show an example of a portion of a display comprising an array of interferometric modulators. Black masks can be used in the arrays shown in FIGS. 8A and 8B, and can be used in any type of display where it is useful to mask certain areas of the display from ambient light. A plurality of pixels 12 of the array are shown in FIG. 8A. FIG. 8B shows an example of supports 18 disposed on a plurality of pixels of an array of interferometric modulators that can be masked to improve the optical response of the display, viewed from the "back" side of the substrate, opposite the side from which the display is viewed. It may be desirable to minimize the light reflected from certain areas of the array in order to improve the optical response (e.g., contrast) of the display. Any region of an interferometric modulator that increases the reflectivity of the display in the dark state can be masked (e.g., by depositing a mask between the structure and the light entering the interferometric modulator) to increase the contrast ratio.
Some of the regions that can be masked to beneficially affect the display include, but are not limited to, line cuts between interferometric modulators 72 (FIG. 8A), the supports 18, the flexure regions of the movable mirror layer connected to and/or around the supports 18 that are visible from the viewing side of the display, and the regions between the movable mirror layers of adjacent interferometric modulators 76 (FIG. 8A). A mask is placed in such a region, keeping the mask separate from the movable mirror of the interferometric modulator; ambient light can thus propagate to the movable mirror and be reflected from it, while the regions other than the movable mirror are masked, preventing ambient light from reflecting off any other structures in the masked regions. These masked regions can be called "non-active areas" because they are static, do not provide light modulation, and do not, for example, comprise movable mirrors. In some embodiments, the mask can be arranged such that light entering the interferometric modulator strikes either a masked region or the movable mirror. In other embodiments, at least a portion of the non-active regions is masked.

FIG. 9 shows a simplified cross-sectional view of two elements of a multi-element display device 100 according to one embodiment. The display comprises two optical components (for clarity, the other optical components are not shown), which in this embodiment are interferometric modulator devices 104. As mentioned above, an interferometric modulator device 104 comprises a configuration of reflective and/or transmissive films that, when actuated, move toward the substrate 102 in the direction indicated by arrow 106. The films provide the desired optical response. In FIG. 9, reference numeral 108 indicates the non-active areas of the interferometric modulators 104.
Typically, when a viewer views the display 100 from the direction indicated by viewing-direction arrow 110, it is desirable that the non-active areas 108 absorb light, that is, act as a black mask, so that reflection of ambient light from the non-active areas 108 does not degrade the optical response provided by the interferometric modulator devices 104. In other embodiments, it may be desirable to mask the non-active areas 108 with a color mask other than black (e.g., green, red, blue, yellow, etc.).

The mask for the non-active areas 108 may be made of materials selected to have an optical response that absorbs or attenuates light. The materials used to make the mask may be electrically conductive. According to embodiments herein, the mask for the non-active areas 108 can be fabricated as a stack of thin films. For example, in one embodiment, the stacked thin films can comprise a reflector layer disposed on an absorber layer, which is disposed on a non-absorbing dielectric layer. In other embodiments, the non-active areas 108 can comprise a single layer of an organic or inorganic material that attenuates or absorbs light, and a layer of a conductive material such as chromium or aluminum.

A power generation black mask 1024 according to one embodiment is shown in FIG. 10. The power generation black mask 1024 includes a substrate 1004, an antireflective layer 1008 disposed on the substrate 1004, a first electrode layer 1012 disposed on the antireflective layer 1008, a semiconductor layer 1016 disposed on the first electrode layer 1012, and a second electrode layer 1020 disposed on the semiconductor layer 1016. The black mask improves the display quality of a display device. This improvement is provided by various features of the black mask. For example, the black mask minimizes the amount of additional passive or non-active optical content in the display.
In addition, the black mask absorbs ambient or stray light, improving the optical response of the display by increasing the contrast ratio. The power generation black mask according to one embodiment provides all of the benefits listed above, and provides further benefits as well. The power-generating component of the black mask may allow a device to use less power. In addition, the power-generating component of the black mask can be used to generate power to charge at least one component in a device. For example, a power generation black mask can generate sufficient power to charge a battery used by the device. Alternatively, the power generation black mask can power other components in the device.

FIGS. 11A-11G illustrate a method of fabricating a power generation black mask 1128 according to one embodiment. In this embodiment, the power generation black mask 1128 is fabricated for use in a display. In FIG. 11A, the method begins with a substrate 1104. The substrate 1104 may comprise glass or any other material suitable for use as a substrate. In FIG. 11B, an antireflective layer 1108 is disposed on the substrate 1104. The antireflective layer 1108 reduces the amount of incident light that is reflected back out of the device by optically matching the substrate 1104 with the subsequent layer 1112. The antireflective layer 1108 may include multiple layers with alternating high and low refractive indices. In addition, the antireflective layer may comprise SiO2, SiNx, MgF2, ITO, Al2O3, Y2O3, ZnO, or any other material suitable for use as an antireflective layer. In FIG. 11C, a first electrode layer 1112 is disposed on the antireflective layer 1108. The first electrode layer 1112 may comprise ITO or another substantially transparent material suitable for use as an electrode. In FIG. 11D, a semiconductor layer stack 1116 is disposed on the first electrode layer 1112.
The semiconductor layer stack 1116 may include a set of p-n junction or p-i-n junction layers of conventional Si, CdTe, or any other semiconductor material suitable for photovoltaic cells. In FIG. 11E, a second electrode layer 1120 is disposed on the semiconductor layer stack 1116. The second electrode layer 1120 may comprise ITO, Al, or any other material suitable for use as an electrode. The second electrode layer 1120 may be transparent or reflective. The power generation black mask 1128 includes the antireflective layer 1108, the first electrode layer 1112, the semiconductor layer stack 1116, and the second electrode layer 1120. In FIG. 11F, the power generation black mask 1128 is patterned. In this embodiment, the power generation black mask 1128 is patterned to allow the pixel elements of a display to be placed over the gaps in the power generation black mask 1128. A planarization layer 1124 may be deposited on the patterned power generation black mask 1128, as shown in FIG. 11G. The planarization layer 1124 allows the patterned power generation black mask 1128 to be used as an engineering substrate 1132 in other fabrication processes. Depending on the fabrication process, structures may be fabricated on the engineering substrate 1132 just as they would be fabricated directly on a flat substrate such as glass or plastic. For example, a display comprising IMODs may be fabricated on the surface of the engineering substrate 1132.

FIG. 12A shows a power generation black mask according to another embodiment. In this embodiment, the fabrication method is similar to that of FIGS. 11A-11E. An insulator layer 1224 is deposited on the power generation black mask 1228. The insulator layer 1224 is then patterned to form an opening down to the second electrode layer 1220. The second electrode layer 1220 is thereby exposed, so that another structure can be connected to the second electrode layer 1220.

FIG. 12B shows a power generation black mask according to yet another embodiment.
In this embodiment, the fabrication method is similar to that of FIGS. 11A-11E. The power generation black mask 1228 is patterned to form an opening down to the first electrode layer 1212. An insulator layer 1224 is deposited over the opening formed for the first electrode layer. The insulator layer 1224 is then patterned to form an opening down to the first electrode layer 1212. The first electrode layer 1212 is thereby exposed, so that another structure can be connected to the first electrode layer 1212.

FIG. 13 shows a power generation black mask according to another embodiment. This embodiment uses the embodiments described in FIGS. 12A-12B. A power generation black mask 1300 is shown from the top. The power generation black mask 1300 is patterned to correspond to pixel elements disposed around openings 1320, and is divided into individual sections 1304, 1308, 1312, and 1316. Each section 1304, 1308, 1312, and 1316 may be configured to expose the first electrode layer 1212 as shown in FIG. 12B, or to expose the second electrode layer 1220 as shown in FIG. 12A. Alternatively, each section 1304, 1308, 1312, and 1316 may be configured such that a portion of the section exposes the first electrode layer and another portion exposes the second electrode layer. This allows the sections to be connected in series or in parallel. In FIG. 13, the sections 1312 are connected in series in pairs of columns. Section 1316 is connected in series to section 1312, and sections 1308 and 1304 are likewise connected in series. The configuration of the connections, and the configuration of which electrode layers are exposed, is not limited in any way herein. The sections 1304, 1308, 1312, and 1316 of the power generation black mask may be connected in series, in parallel, or in a combination of both. Each section of the power generation black mask may expose the first electrode layer 1212, the second electrode layer 1330, or a combination of both.
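The series/parallel choice described above follows the usual photovoltaic rule: connecting sections in series adds their voltages at a common current, while connecting them in parallel adds their currents at a common voltage. A minimal illustrative sketch (the function name and the 0.5 V / 2 mA section figures are hypothetical, not from the source):

```python
def combine_sections(sections, mode):
    """Combine photovoltaic sections, each given as (volts, milliamps).

    Series: voltages add, current is limited by the weakest section.
    Parallel: currents add, voltage is that of the lowest section.
    """
    volts = [v for v, _ in sections]
    mamps = [i for _, i in sections]
    if mode == "series":
        return sum(volts), min(mamps)
    if mode == "parallel":
        return min(volts), sum(mamps)
    raise ValueError("mode must be 'series' or 'parallel'")

# Four identical hypothetical 0.5 V / 2 mA sections, like the four sections of FIG. 13.
sections = [(0.5, 2)] * 4
assert combine_sections(sections, "series") == (2.0, 2)    # higher voltage
assert combine_sections(sections, "parallel") == (0.5, 8)  # higher current
```

This is why, as noted in the text, a device requiring a higher voltage would wire the sections in series.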
The configuration of the sections and electrode layers may be specific to the device using the power generation black mask 1300. For example, a device requiring a higher voltage may connect the sections 1304, 1308, 1312, and 1316 and their electrode layers in series, as shown in FIG. 13.

The following is a conservative estimate of the amount of power generated by a power generation black mask according to one embodiment. The power generation black mask covers approximately 10% of the display area of a 1.8 inch diagonal IMOD display. The width of the display is 0.035 meters and the height of the display is 0.040 meters, resulting in a display area of 0.0014 square meters. The black mask covers approximately 10% of the display area, or 0.00014 square meters. The electrical efficiency of the power generation black mask is 10%. The amount of incident sunlight is 1000 W/m2, and it is conservatively assumed that only 50% of the incident sunlight reaches the power generation black mask. 1000 W/m2 is the amount of sunlight received under optimal conditions. Optimal conditions may include receiving sunlight at midday, in an area close to the equator, free of clouds or fog. For the conditions given in this example, the estimated power produced by the 1.8 inch diagonal display is 7 milliwatts, or 0.007 watts. This is calculated by multiplying 500 W/m2 by the 0.00014 m2 area of the black mask, and then multiplying by the 10% electrical efficiency of the power generation black mask. The black mask may cover approximately 10% to 30% of the display area. The electrical efficiency of the power generation black mask may range from 5% to 20%. The amount of incident light reaching the power generation black mask may depend on the time of day, the weather (i.e., clouds or fog), the geographic location, and various other conditions that affect the amount of sunlight that can reach the device.
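The arithmetic of the estimate above can be checked in a few lines (the variable names are illustrative, not from the source):

```python
# Conservative power estimate for the power generation black mask,
# reproducing the worked example above.
width_m, height_m = 0.035, 0.040          # 1.8 inch diagonal display
display_area = width_m * height_m          # 0.0014 m^2
mask_fraction = 0.10                       # black mask covers ~10% of the display
mask_area = display_area * mask_fraction   # 0.00014 m^2
irradiance = 1000.0 * 0.50                 # W/m^2: only 50% of sunlight assumed to arrive
efficiency = 0.10                          # electrical efficiency of the mask

power_w = irradiance * mask_area * efficiency
print(round(power_w * 1000, 3))            # prints 7.0 (milliwatts)
```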
The foregoing example is merely illustrative of a conservative estimate of the amount of power generated by a power generation black mask, and in no way limits the amount of power that a power generation black mask can generate. The amount of power generated may differ in various embodiments.

FIG. 14A is a graph showing the amount of light reflected and absorbed by a power generation black mask. The x-axis of the graph represents the wavelength of the incident light. The y-axis represents percentage on a 0-to-1 scale, so that 0.10 corresponds to 10%. As shown in the graph, the power generation black mask generally reflects a small amount of the incident light and absorbs most of it. For example, for light having a wavelength of 550 nm, approximately 0.5% of the incident light is reflected and 99.5% of the incident light is absorbed.

FIG. 14B is a table showing the materials and thicknesses used in the electrode layers and the semiconductor layer of a power generation black mask according to one embodiment. The first electrode layer is transparent, comprises ITO, and has a thickness of approximately 72 nm. The semiconductor layer comprises a-Si and has a thickness of approximately 15 nm. The second electrode layer is reflective, comprises Cr, and has a thickness of approximately 100 nm. The power generation black mask of this embodiment reflects approximately 0.5% of the incident light.

The embodiments described above provide the functionality of a black mask while also providing additional benefits. A power generation black mask according to one embodiment makes the device using it more power efficient and reflects less than 1% of the incident light. The power generation black mask can be used to reduce the amount of power used by the device. In addition, the power generation black mask can be used to generate power for operating or charging at least one component of the device that uses it.
In other embodiments, the power generation black mask can be patterned to provide openings to either the first or the second electrode layer, or both. In other embodiments, the power generation black mask can be divided into separate sections, and the sections can be connected in series, in parallel, or both. Although the various embodiments described herein relate to MEMS or display devices, it will be understood that this disclosure is not limited to use in such devices; any device using a black mask may use embodiments of the present invention.

It will be understood that numerous modifications can be made to the foregoing embodiments, and that each aspect of the invention described herein is merely illustrative and does not limit the scope of the invention. The detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways, as defined and covered by the claims. The terminology used in the description provided herein is used only in connection with the detailed description of certain specific embodiments of the invention and should not be interpreted in any limiting or restrictive manner. Moreover, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or is essential to practicing the inventions described herein.

Reference Signs List: 12 pixel; 12a interferometric modulator; 12b interferometric modulator; 14 movable reflective layer; 14a movable reflective layer; 14b movable reflective layer; 16 optical stack; 16a optical stack; 16b optical stack; 18 post; 19 gap; 20 substrate; 21 processor; 22 array driver; 24 row driver circuit; 26 column driver circuit; 27 network interface; 28 frame buffer; 29 driver controller; 30 display; 32 tether; 34 deformable layer; 40 display device; 41 housing; 42 support post plug; 43 antenna; 44 bus structure; 45 speaker; 46 microphone; 47 transceiver; 48 input device; 50 power supply; 52 conditioning hardware; 72 interferometric modulator; 76 interferometric modulator; 100 multi-element display device; 102 substrate; 104 interferometric modulator; 106 arrow; 108 non-active area; 110 arrow indicating the viewing direction; 1004 substrate; 1008 antireflective layer; 1012 first electrode layer; 1016 semiconductor layer; 1020 second electrode layer; 1024 power generation black mask; 1104 substrate; 1108 antireflective layer; 1112 first electrode layer; 1116 semiconductor layer stack; 1120 second electrode layer; 1124 planarization layer; 1128 power generation black mask; 1132 engineering substrate; 1212 first electrode layer; 1220 second electrode layer; 1224 insulator layer; 1228 power generation black mask; 1300 power generation black mask; 1304 section; 1308 section; 1312 section; 1316 section; 1320 opening; 1330 second electrode layer
More efficient memory access is provided by enabling fast access to words on the same physical row (wordline) in memory, whether or not the accesses are sequential. The host computer (11) provides an address AIN to a multiplexer (13). The multiplexer (13) selects either the host address AIN or an incremented address from a counter (15). The address A1 from the multiplexer (13) is applied to a current address latch (19) for the flash memory bank (21). A comparator (23) compares the previous row address bits stored in a previous address latch (25) with the row address bits of the current address stored in the current address latch (19). The comparator (23) indicates whether the normal or the page number of wait states is required. |
1. A method of accessing a memory to provide fast access to words on the same row in memory comprising the steps of: detecting whether the current address is on the same physical memory row as a previous address, and generating a different number of wait states depending on whether the access is on the same or a different row.
2. The method of Claim 1 wherein said detecting step includes storing previous row address bits and comparing the previous row address bits with the current row address bits.
3. The method of Claim 2 including the steps of providing a counter of consecutive addresses in a row, detecting the presence of a first control signal that calls for addresses from the counter to provide incrementation of addresses from the counter, and detecting the presence of a second control signal for providing a random row address within a wordline from a host.
4. The method of any preceding claim wherein the detecting step includes determining if a new pre-emptive access is on the same wordline before the previous access is completed.
5. A memory arranged to perform the method of any preceding claim. |
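The detection and wait-state steps recited in claims 1 and 2 can be sketched in software. This is a hypothetical model for illustration only; the class name, page size, and wait-state counts are assumptions, not taken from the specification:

```python
# Hypothetical software model of claims 1-2: detect whether the current
# address falls on the same physical row (wordline) as the previous access
# and select the wait-state count accordingly.

WORDS_PER_ROW = 16   # assumed page (wordline) size in words
WTREAD = 2           # normal (random) read wait states, assumed
WTPAGE = 1           # same-row (page) read wait states, assumed

class RowDetector:
    def __init__(self):
        self.prev_row = None  # previous address latch (row bits only)

    def wait_states(self, address):
        row = address // WORDS_PER_ROW  # extract the row address bits
        same_row = (row == self.prev_row)
        self.prev_row = row             # latch current row as "previous"
        return WTPAGE if same_row else WTREAD

det = RowDetector()
assert det.wait_states(0) == WTREAD    # first access: no previous row
assert det.wait_states(5) == WTPAGE    # word 5, same row as word 0
assert det.wait_states(16) == WTREAD   # word 16 starts a new row
assert det.wait_states(20) == WTPAGE   # non-sequential but same row
```

Note that the second and fourth accesses are non-consecutive yet still earn the page count, which is the point of claim 1: only the row must match, not the sequence.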
Field of Invention

This invention relates to memory access and more particularly to an improved burst access operation that includes fast access to words on the same row (wordline) in memory.

Background of Invention

A flash memory bank has a dedicated synchronous read port interface which facilitates read access from main flash memory. Referring to Fig. 1, there is illustrated a block diagram of a flash memory bank 21 and a read interface (control logic) 10 responsive to signals from an interface bus which communicates with the host. The interface bus consists of an address bus, a data bus, and several control signals. The width of the address bus depends on the number of words in the associated flash and the width of the data read port interface. A host provides an address AR and an address valid signal ADVZ for standard accesses. Standard read refers to random accesses anywhere in the address space of the flash bank. A read port interface synchronizes the access to the system clock SYSCLOCK. Two ready signals (READY and PREADY), one a pipelined version of the other, are generated in the read port interface to signal that data is available to the host. The flash bank read interface has internal wait state generation to create these signals. All outputs from the read port are generated on the rising edges of the system clock, with the exception of the output data, which is passed directly from the flash bank to the bus interface. Table 1 shows the read port signals. In the prior art this port supports two modes of read operation: standard and burst. Burst read, according to the prior art, refers to a mode in which sequential addresses are generated in the read port interface. Burst read enables the potential for a higher data rate when accesses occur on the same wordline within the flash bank, increasing system performance. 
Access time within the flash bank can be faster within a wordline (also called a page) because the same wordline drive is active as for the previous access. If the access is from a different wordline, the current wordline must be deactivated and a different wordline activated, which takes a finite amount of time that is longer than an access from the same page. Burst reads are enabled by an additional control input, BAAZ. When burst is active the flash read port interface generates subsequent addresses, including transitions across wordline boundaries (the read port interface automatically adds wait states as needed). In the read port interface implementation there are two wait state counters: one for a normal read, which may occur on any wordline randomly, and one for a page read, which occurs on the same wordline as the previous read. The page read wait state count is typically smaller than the normal read wait state count.

Table 1
SYSCLOCK (I): System Clock.
ADVZ (I): Address Valid, active low. Indicates a valid and stable input address on bus AR[x:0].
AR[x:0] (I): Read address. x = 1 - number of read address bits required for each bank.
BAAZ (I): Burst Address Advance, active low.
DR[y:0] (O): Read data bus. y = 15, 31, or 63 for data bus widths of 16, 32, or 64 bits, respectively. There is no output enable for this bus, thus it is never high impedance. Note that this bus originates at the flash bank, not the read port interface.
PREADY (O): Pipelined Ready, active high. This is a pipelined version of the READY signal, except that in 0 wait states PREADY remains low.
READY (O): Data Ready, active high. When active the system may latch the data on the rising edge of the flash system clock. 
This signal is inactive while wait states are inserted during reads, when the bank is owned by the state machine for a program or erase operation, during reset, and until the module is stable upon release of system reset.

As previously mentioned, standard (random access) and burst access (the read port interface generates sequential accesses with address override) are supported. The module data outputs, the DR[y:0] busses, are driven by the banks directly. The flash module standard read timing is shown in Figure 2. The read is synchronous, requiring the flash system clock and an Address Valid (ADVZ) to properly time wait state generation and ready signal synchronization. The clock is input via the control port. The flash read port interface latches the address on the rising edge of the clock while ADVZ is active low. The clock rising edge triggers the wait state counter. Once the number of wait states has elapsed, READY goes high for one clock cycle. READY, when active high, allows the host to latch output data on the next rising edge of the clock. The READY signal becomes active for one cycle after the required number of wait states have been inserted. All standard reads use the WTREAD count for wait state generation; this is the normal read wait state count. The example in Figure 2 has two wait states. In the example, the latched address bus internal to the read port interface is shown for reference, as are the arrows on SYSCLOCK, which show when addresses and/or data are latched. A pipelined version of READY called PREADY is also generated in the read port interface. PREADY is active the clock cycle before READY is active when the number of wait states is one or greater. 
Note that for zero wait states PREADY will always remain low. A standard burst operation, especially in slower memories such as flash or FeRAM, allows sequential accesses on the same physical row in memory to be performed faster than a random access because the wordline does not need to be activated before the read, as previously stated; it is already activated from the prior read. In the prior art this kind of address generation allows faster accesses only if the memory locations are accessed in consecutive order. It is desirable to provide a system where the addresses need not be consecutive, but only on the same wordline.

Summary of Invention

In accordance with one embodiment of the present invention an interface allows, after issuance of an address, the detection of whether the current access is on the same physical memory row as the previous address, and accordingly allows a different number of wait states, depending on whether the access is on the same row or a different row. In accordance with an embodiment, the faster access time is available whether the address for the current access is generated by a counter in the interface or supplied by a host system on the address input to the interface. The host system provides a control signal to the interface to select whether the memory interface should use the address from the counter or from the interface's address input.

Description of Drawings

Fig. 1 is an overall block diagram of a read access system;
Fig. 2 illustrates a standard read that is continuous with two wait states;
Fig. 3 is a block diagram of a system according to one embodiment of the present invention;
Fig. 4 illustrates special burst read (Standard - 2 waits, Page - 1 wait);
Fig. 5 illustrates special burst read (Standard - 1 wait, Page - 0 waits);
Fig. 6 illustrates the control logic according to a second preferred embodiment of the present invention with pre-emptive access; and Fig. 
7 illustrates special burst read with address pre-empting (Standard - 2 waits, Page - 1 wait).

Description of Preferred Embodiments

Referring to the block diagram of the system of Fig. 3, according to one embodiment of the present invention the host 11 provides an address AIN to a multiplexer 13. The multiplexer 13 output A1 is applied to a counter 15 that increments the address. The output of the counter 15 is the other input to the multiplexer 13, such that the address output A1 from the multiplexer is provided either from the counter 15 or as address AIN directly from the host 11. The control inputs ADVZ (address valid) and BAAZ (burst address advance) from the host are applied to logic 17 that controls the multiplexer 13. The address A1 from the multiplexer 13 is applied to a current address latch 19 for the flash memory bank 21. The row address bits output from the current address latch 19 are applied to a comparator 23. The row address bits from the previous address are stored in a previous address latch 25, whose output is also applied to comparator 23. The row address bits of the current address are compared to the row address bits of the previous address in the comparator 23 to determine if there is a match. A multiplexer gate 27 is controlled by the output of the comparator 23. The input to the multiplexer 27 is a signal indicating the need for either the normal or the page number of wait states. If there is a match at comparator 23, a logic '1' corresponding to the page wait state count condition is provided to wait state generator 29 to provide the appropriate wait time for the special burst access for the READY and PREADY signal generation. If there is not a match, a logic '0' is provided to the generator 29 and the appropriate wait time is provided for the full normal access for the READY and PREADY signals. 
This circuit provides the logic to determine whether there is a row match or not and, hence, to produce the READY and PREADY signals at the appropriate time. In special burst access read, the read port interface synchronously generates addresses while BAAZ, Burst Address Advance, is active low, unless ADVZ is also active low during a rising edge of the system clock. If ADVZ is also low, then the address present on the input address bus supersedes the address which would have been generated by the read port interface. The automatic address generation can only occur after an initial address has been latched into the read port interface. This is accomplished with the first ADVZ pulse, regardless of the state of BAAZ. Special burst access mode has an advantage over standard read in that if the sequential or input address is on the same physical row in the flash bank as the previous access, then the wait states are based on the page mode (WTPAGE) number of wait states, which may be less than the wait states for a standard read. If the next access is from a different row, then the wait states are based on the standard read (WTREAD) number of wait states. Dividing the number of bits per wordline by the width of a word determines the number of words which may be read in a page with the page number of wait states. The first access to the bank may be either a standard read access or a special burst read access. In fact, BAAZ can be active 100% of the time. If it is a burst access, the read port interface determines whether it is the first or a subsequent access in order to correctly set the number of wait states. If BAAZ is modulated during read operation, then any transitions it makes must occur before the rising edge of clock which follows the rising edge on which the address is latched in. This is regardless of the number of wait states in either standard or page read modes. 
Otherwise, BAAZ may be kept high or low at all times. Figure 4 is an example of special burst read and shows a burst access crossing a wordline boundary. This example has one wait state in random access mode and no wait states in burst mode. Another example of special burst access read timing is shown in Figure 5. In this mode the addresses are generated while burst mode is active (BAAZ is low), but, in addition, the figure shows a rising edge of clock occurring while both BAAZ and ADVZ are active low. In this case the address counter is superseded by the address on the input address bus. The appropriate number of wait states for the access is inserted by the read port interface depending on whether the new address is on the same page or not. This allows a host to achieve page mode data rates while randomly addressing data, provided it is on the same wordline as the previous access. The example in Figure 5 shows a standard access (two wait states) followed by a sequential burst access from the same wordline (one wait state). This is followed by another standard access which happens to be on the same wordline (one wait state), which is, in turn, followed by a sequential burst access, still on the same wordline (one wait state).

Referring to Fig. 6, there is illustrated the control logic according to a preferred embodiment of the present invention wherein the read port also supports pre-emptive data access. Pre-emptive data access means that a new address is put on the input to begin a new access before the previous access has completed. For example, the host may request data at address 10 two wait states into a four wait state address cycle in which data for address 5 was being accessed. The system includes an address input AIN 61 and a latch 63 adapted to receive address signals from the interface bus. The direct address at input 61 is applied to logic 65 for accessing the memory bank 66. The ADVZ and BAAZ control signals are applied to logic 65 to gate the address to the memory bank 66. 
The current address is compared to the next address at comparator 71. The comparator 71 determines if the next address and the current address are in the same row, to generate the same-row access signal. If not, gate 67 gates the normal count to the wait state logic 75, which provides the correct wait count to wait state generator 77. If there is a same-row access, then the page count is gated from gate 67 to the logic 75 and the appropriate count is provided from the wait state generator 77. A second comparator 73 is responsive to the current address and the address from latch 63 to determine if the pre-emptive access is from the same row. If not, the normal count is provided to logic 75 which, in the presence of the correct ADVZ, delivers the correct wait count to wait state generator 77 when the prior read is no longer in progress, as determined by the input to logic 75. Likewise, a page count is provided when the read is from the same row. While READY is low during a standard or page access, a new address may be issued by taking ADVZ low. This causes a new access to be initiated before the previous access has completed. If in burst mode and at least one access has completed on the current row and the pre-empting address is on the same row, then the page mode number of wait states will be inserted. If an access has not been completed on a row since the last row change and the pre-empting address is on the same row, then the standard read number of wait states is inserted. If the pre-empting address is on a different row than the current row, then the standard read number of wait states will be inserted. If pre-empting is utilized, note that the generation of the READY and PREADY signals will be affected and the host must comprehend their behavior. The behavior of READY and PREADY can be determined from the previous examples. Figure 7 illustrates access pre-empting. 
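The three pre-emption cases just described can be summarized in a small decision function. This is an illustrative software model with assumed wait-state counts, not the patent's hardware:

```python
# Sketch of the pre-emptive wait-state rules described above (hypothetical
# model). WTREAD and WTPAGE are assumed standard and page wait-state counts.
WTREAD, WTPAGE = 2, 1

def preempt_wait_states(preempt_row, current_row, access_completed_on_row):
    """Wait states for an access that pre-empts one still in progress."""
    if preempt_row != current_row:
        return WTREAD          # different row: full standard count
    if access_completed_on_row:
        return WTPAGE          # same row, and the row is proven active
    return WTREAD              # same row, but no access completed since
                               # the last row change: standard count

assert preempt_wait_states(3, 3, True) == WTPAGE
assert preempt_wait_states(3, 3, False) == WTREAD
assert preempt_wait_states(4, 3, True) == WTREAD
```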
Pre-empting may also be used in standard read mode, but the WTREAD number of wait states will always be used for the access. |
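As a worked example, the wait-state pattern of the Figure 5 sequence (a standard access, a burst access, a random access on the same wordline, then another burst access) can be reproduced with a short simulation. The page size, wait counts, and addresses here are assumptions chosen for illustration:

```python
# Self-contained walk-through of the special burst read example of Figure 5
# (standard read: 2 waits, page read: 1 wait, both assumed). An entry of
# None means "burst: take the next address from the interface's counter";
# a number means "the host supplies this address (ADVZ low)".
WORDS_PER_ROW, WTREAD, WTPAGE = 16, 2, 1

def run(accesses):
    prev_row, counter, waits = None, 0, []
    for host_addr in accesses:
        addr = counter + 1 if host_addr is None else host_addr
        counter = addr                       # counter tracks the last address
        row = addr // WORDS_PER_ROW          # row (wordline) address bits
        waits.append(WTPAGE if row == prev_row else WTREAD)
        prev_row = row
    return waits

# Standard access, burst, standard access on the same wordline, burst again:
assert run([4, None, 9, None]) == [2, 1, 1, 1]
```

The third access is random (host-supplied, non-consecutive) yet still takes only the page count because it lands on the same wordline, which is the improvement over prior-art sequential-only burst.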
A technique to enable information sharing among agents within different cache coherency domains. In one embodiment, a graphics device may use one or more caches used by one or more processing cores to store or read information, and that information may be accessed by the one or more processing cores in a manner that does not affect the programming and coherency rules pertaining to the graphics device. |
1. A device comprising:
at least one central processing unit;
a graphics processing unit; and
a logic unit coupled to the at least one central processing unit and the graphics processing unit, the logic unit allowing the graphics processing unit to access the L2 cache of the at least one central processing unit without first resorting to main memory;
wherein the at least one central processing unit and the graphics processing unit maintain coherency of the L2 cache.
2. The device of claim 1, wherein said device is a processor and said graphics processing unit is integrated with said at least one central processing unit of said processor.
3. The device of claim 1 or 2, wherein said graphics processing unit implements a first set of cache coherency rules and said at least one central processing unit implements a second set of cache coherency rules.
4. A device comprising:
a first cache and a second cache in a graphics logic coherency domain; and
a central processing unit (CPU) that uses physical addresses to access information stored in the first cache.
5. A method comprising:
causing information stored in a first level cache of a graphics device to be copied or moved to an intermediate cache in a central processing unit coherency domain;
transmitting, from the central processing unit, a snoop to the intermediate cache of the graphics logic for information requested by the central processing unit; and
if the requested information is not present in the intermediate cache of the graphics logic, advancing the snoop to a last level cache for the information. |
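The snoop flow of the method claim can be illustrated with a minimal software model. The names are hypothetical, and real hardware snooping is a bus protocol rather than a dictionary lookup; this only sketches the lookup order:

```python
# Hypothetical model of the claimed snoop flow: the CPU first snoops the
# graphics logic's intermediate cache (MLC); on a miss the snoop advances
# to the last level cache (LLC); only then does it fall back to memory.
def cpu_read(addr, gfx_mlc, llc):
    if addr in gfx_mlc:        # snoop the graphics MLC first
        return gfx_mlc[addr], "mlc"
    if addr in llc:            # miss: the snoop advances to the LLC
        return llc[addr], "llc"
    return None, "memory"      # only now resort to main memory

gfx_mlc = {0x10: "pixel"}
llc = {0x10: "pixel", 0x20: "texel"}
assert cpu_read(0x10, gfx_mlc, llc) == ("pixel", "mlc")
assert cpu_read(0x20, gfx_mlc, llc) == ("texel", "llc")
assert cpu_read(0x30, gfx_mlc, llc) == (None, "memory")
```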
Techniques for Sharing Information Between Different Cache Coherency Domains

This application is a divisional application of Chinese National Application No. 200980110677.8, entitled "Techniques for Sharing Information Between Different Cache Coherency Domains", corresponding to PCT International Application No. PCT/US2009/038627, with an international filing date of March 27, 2009.

Field of Invention

Embodiments of the invention generally relate to the field of information processing. More specifically, embodiments of the present invention relate to techniques for implementing cache coherency between agents operating in at least two different cache coherency domains.

Background

As more and more functions are integrated into computing platforms and microprocessors, information sharing between different functional units tends to grow. For example, integrating graphics or other input/output logic with one or more main central processing units (CPUs) or "cores" into the same computing platform, package, or integrated circuit may enable information to be shared between the one or more cores and the graphics logic. In some prior art examples, different functional units are integrated into the same system, package, or die, and information accessed (stored or read) by one or more cores is maintained in a corresponding cache hierarchy (e.g., in first level, intermediate, and level 2 caches) that is located in a different coherency domain from the cache hierarchy of other functional units, such as graphics logic. Maintaining data in different cache hierarchies according to different cache coherency domains may require more cache memory, which increases system cost and power consumption. 
Moreover, where information is shared between different functional units, maintaining different corresponding cache coherency domains causes each functional unit to access a main memory source, such as DRAM, in order to share information with other functional units. A main memory source such as DRAM is typically slower in access speed than other memory structures, such as caches. Therefore, resorting to main memory to share information between different functional units can degrade the performance of the functional units and/or the system.

Brief Description of the Drawings

The embodiments of the invention are illustrated by way of example and not limitation in the drawings, in which:
Figure 1 shows a block diagram of a system in which at least one embodiment of the present invention may be used;
Figure 2 illustrates a processor in which at least one embodiment of the present invention may be used;
Figure 3 shows a block diagram of a shared bus computer system in which at least one embodiment of the present invention may be used;
Figure 4 shows a block diagram of a point-to-point interconnect computer system in which at least one embodiment of the present invention may be used;
Figure 5 is a flow chart showing operations that may be used in one embodiment.

Detailed Description

Embodiments of the invention relate to computer systems and information processing. More specifically, embodiments of the present invention relate to techniques for allowing at least one central processing unit (CPU) to obtain visibility into information accessed or generated by other processing logic (e.g., graphics processing logic), which may operate in a coherency domain different from that of the at least one CPU, thereby achieving a degree of coherency between them. 
In some embodiments, one or more CPUs share a cache level, such as a "last level cache" (LLC) or "level 2" (L2) cache, with one or more processing logic units (e.g., graphics logic) that may implement coherency protocols different from those of the one or more CPUs or operate in different coherency domains. In one embodiment, the CPU and graphics logic are integrated in the same die, package, or system; the CPU can access at least one cache level in the cache coherency hierarchy of the graphics logic, and the graphics logic can also access the LLC, thereby allowing information to be shared between the CPU and the graphics logic without accessing a main memory source such as DRAM.

Figure 1 shows a system in which at least one embodiment can be used. In Figure 1, at least one CPU 101 and at least one graphics logic 105 are integrated into the same die, package, or system. Moreover, in one embodiment, the CPU and graphics logic communicate with respective cache levels, which may include first level or "level one" (L1) caches 103, 104, intermediate caches 107, 108, and a last level cache (LLC) or "level 2" (L2) cache 110. In one embodiment, each L1 and intermediate cache is a distinct logical structure, and the LLC is configured to store the same information, thus including the information stored in each of the CPU's L1 and MLC and in the graphics logic's MLC. In one embodiment, the graphics logic moves or copies the contents of its L1 cache 104 to its MLC 108; the MLC 108 maintains coherency with the LLC under CPU coherency control operations, so the LLC may include the contents of the graphics logic's L1 cache. 
By copying or moving information from the graphics L1 cache 104 (in the graphics coherency domain 111) to the graphics MLC 108 (in the CPU coherency domain 109), information is shared between the CPU coherency domain 109 (in one embodiment, including the CPU 101, L1 cache 103, MLC 107, and LLC 110) and the graphics coherency domain 111 (in one embodiment, including the graphics logic 105 and graphics L1 cache 104). In some embodiments, the information stored in the graphics L1 cache 104, which can be virtually addressed by the graphics logic, is moved or copied to the graphics MLC 108 in response to the occurrence of events associated with rendering a graphics image. In one embodiment, moving/copying information from the graphics L1 cache 104 to the MLC 108 is managed and executed by a graphics driver or some other logic or software program. After the event that causes the information in the graphics L1 cache 104 to be moved or copied to the MLC 108, the information exists in the CPU coherency domain, and in one embodiment can be addressed and accessed by the CPU using physical addresses or other addressing schemes (e.g., virtual addresses) used by the CPU.

In addition to the CPU and graphics coherency domains, the system of Figure 1 also includes a display device (e.g., a monitor) 115 that may be in its own coherency domain 113, which is not coherent with the CPU and graphics coherency domains. In one embodiment, display device 115 can communicate with main system memory 120 rather than directly with the caches in the cache hierarchy of the CPU or graphics logic. In one embodiment, graphics logic 105 may access information visible or modifiable to the CPU by accessing information stored by the CPU in LLC 110 and snooping information in the CPU's L1 cache (103) and the CPU's MLC (107), without resorting to system memory 120. 
Moreover, in one embodiment, the CPU can access or "snoop" information stored in the intermediate cache 107 of the graphics logic without resorting to LLC 110 or system memory 120. In one embodiment, information can be shared between the CPU coherency domain and the graphics coherency domain without requiring the CPU or graphics device to access main system memory, which takes a significant amount of time relative to accessing the cache hierarchy. Moreover, in one embodiment, information can be shared between the CPU and graphics coherency domains without significantly changing or affecting the corresponding CPU or graphics cache coherency protocols.

In one embodiment, the graphics logic generates virtual addresses to access data in its cache coherency domain (111). However, some caches in the graphics coherency domain, such as those the graphics logic treats as read-only or otherwise "owns" ("R/O caches"), may use only virtual tags, while other caches in the graphics coherency domain, such as those that can be both read and written by the graphics logic ("R/W caches"), may use both virtual and physical tags to support both virtual and physical addresses. In one embodiment, if there is a cache miss, the graphics logic's access to the cache hierarchy is translated from a virtual address to a physical address so that the correct physical address in system memory can be generated.

In the CPU coherency domain, at least two rules apply. First, the cache coherency rules may require that accesses to each location be sequentially ordered with respect to each other; sequentially consistent accesses require sequential global visibility of all operations that access the cache location. Second, CPU ordering rules typically require that the writes of a single processor be observed in the same order by all processors, while writes from different processors may be observed in different orders. 
However, a processor must observe its own writes in the order of execution. The graphics cache coherency domain can differ from the cache coherency domain of the main CPU in a number of ways. For example, in the graphics cache domain, coherency may be guaranteed only at certain points in the image rendering process, whereas coherency in a typical CPU cache domain is maintained continuously. Furthermore, since the graphics coherency domain caches are typically virtually addressed and not snooped, there is no guarantee that the LLC will include information stored in the L1 or intermediate cache (MLC); therefore, when a line is evicted from the LLC, the lower level caches may not be updated. To compensate, the graphics logic can perform these eviction writeback transactions using invalidate ("ItoM") transactions for full-line evictions and read-for-ownership (RFO) transactions for partial-line evictions. Finally, graphics devices are typically non-speculative out-of-order machines connected to unordered un-core structures. Since the graphics logic typically cannot reorder accesses after sending them to the cache hierarchy or memory, the associated accesses must ensure that their previous accesses have been globally observed before they are issued. Embodiments of the present invention allow for differences between the graphics and CPU cache coherency domains, while allowing data to be shared between these domains without resorting to accessing main system memory. In one embodiment, the CPU cache coherency rules apply to any physically addressed structure, including the graphics logic's intermediate cache, the LLC, and main memory. For accesses that cross the coherency boundary between the CPU and the graphics domain, the CPU can snoop the graphics MLC, which behaves the same as it would in the CPU coherency domain. 
Moreover, embodiments of the present invention allow data stored in the graphics L1 and MLC to be included in the LLC so that the graphics logic can use the LLC without relying on main system memory. In one embodiment, the L1 data is copied or moved into the graphics MLC in response to a rendering event by the graphics device, thereby placing the graphics L1 data in the CPU coherency domain and ensuring that it is included in the LLC. If the information cannot be found in the graphics L1 or MLC, the graphics logic can later access the information from the LLC.

Figure 2 illustrates a processor in which at least one embodiment of the present invention may be used. In particular, Figure 2 illustrates a processor 200 having one or more central processing units (CPUs) 205 and 210 and corresponding non-CPU functional units 207 and 213. Figure 2 also shows at least one other non-CPU functional unit 215 that can perform operations that functional units 207 and 213 do not perform. In one embodiment, functional units 207, 213, and 215 may include graphics processing, memory control, and peripheral control functions such as audio, video, disk control, digital signal processing, and the like. In some embodiments, processor 200 may also include other logic not shown in Figure 2, such as I/O control. In one embodiment, each processor of a multi-processor system, or each core of a multi-core processor, may include or otherwise be associated with logic 219 to enable information sharing between one or more CPUs and one or more graphics logic units.

In some embodiments, processor 200 can be a general purpose CPU. In other embodiments, the processor may be a general purpose CPU or hardware capable of performing graphics-specific functions, in a system that may include general purpose CPU integrated circuits and graphics-specific hardware or other parallel computing hardware. 
As general-purpose computing becomes more integrated with parallel computing hardware such as graphics engines and texture samplers, logic 219 becomes more generic and location independent. Thus, logic 219 can include hardware, software, or any combination of the two, and can be located in, or integrated within, any portion of processor 200, internal or external. In one embodiment, logic 219 includes logic for enabling the CPU to snoop the graphics MLC without significantly modifying the cache coherency rules of the CPU or graphics logic. In addition, logic 219 may include logic that allows the graphics device to access information in the LLC without first resorting to main memory. Furthermore, when information stored in the graphics L1 cache is currently present in the graphics MLC, logic 219 can help inform the CPU so that the CPU can snoop the information. FIG. 3 illustrates a shared-bus computer system in which one embodiment of the present invention may be utilized. The microprocessors 301-315 can include a variety of functional units, such as one or more CPUs (323, 327, 333, 337, 343, 347, 353, 357), graphics devices (307, 317, 327, 337), memory controllers (325, 335, 345, 355), I/O control, or other functional units (320, 330, 340, 350) such as PCI or PCIe control devices. The system of FIG. 3 may also include an I/O controller 365 that interfaces the microprocessors to peripheral control devices 360. In one embodiment, the system includes logic 319 for enabling the CPU to snoop the graphics MLC without significantly modifying the cache coherency rules of the CPU or graphics logic. Moreover, logic 319 can include logic for allowing a graphics device to access information in the LLC without first resorting to main memory.
In addition, when information stored in the graphics L1 cache is currently present in the graphics MLC, logic 319 can help inform the CPU so that the CPU can snoop it. In some embodiments, some or all of the elements shown in FIG. 3 may be included in a microprocessor and include other interconnects, such as a direct memory interface (DMI) or a PCI Express graphics (PEG) interconnect. Regardless of the configuration, embodiments of the invention may be included in, or otherwise associated with, any portion of the system of FIG. 3. The system of FIG. 3 may also include a main memory (not shown), which may comprise various memory structures, such as dynamic random access memory (DRAM), a hard disk drive (HDD), or a remote memory source coupled to the computer system via a network interface and containing various storage devices and technologies. The cache memory in the system of FIG. 3 can be located in or near the processor, such as on the processor's local bus. Still further, the cache memory can include relatively fast memory cells, such as six-transistor (6T) cells, or other memory cells of approximately equal or faster access speed. In addition to the shared-bus computer system shown in FIG. 3, other system configurations, including point-to-point (P2P) interconnect systems and ring interconnect systems, can be used in conjunction with various embodiments of the present invention. The P2P system of FIG. 4 may include, for example, several processors, of which only two processors 470, 480 are shown by way of example. Processors 470, 480 can each include a local memory controller hub (MCH) 472, 482 to interface with memories 42, 44. Processors 470, 480 can exchange data via a point-to-point (PtP) interface 450 using PtP interface circuits 478, 488. Processors 470, 480 can each exchange data with chipset 490 via separate PtP interfaces 452, 454, using point-to-point interface circuits 476, 494, 486, 498.
Chipset 490 can also exchange data with high-performance graphics circuitry 438 via high-performance graphics interface 439. Embodiments of the invention may be located in any processor having any number of processing cores, or in any of the PtP bus agents of FIG. 4. In one embodiment, FIG. 4 includes logic 419 for enabling the CPU to snoop the graphics MLC without significantly modifying the cache coherency rules of the CPU or graphics logic. In addition, logic 419 can include logic for allowing a graphics device to access information in the LLC without first having to rely on main memory. Furthermore, when information stored in the graphics L1 cache is currently present in the graphics MLC, logic 419 can help inform the CPU so that the CPU can snoop it. FIG. 5 illustrates a flow diagram of operations that may be employed in connection with at least one embodiment of the present invention. At operation 501, the graphics device causes the information stored in its L1 cache to be copied or moved to the MLC in the CPU coherency domain, and at operation 505, a snoop for the information requested by the CPU is sent from the CPU to the MLC of the graphics logic. At operation 510, if the requested information does not exist in the MLC of the graphics logic, then at operation 515 the snoop proceeds to the LLC. At operation 520, if the information does not exist in the LLC, then at operation 525 the access proceeds to main memory. In one embodiment, the CPU can snoop the MLC using a physical address, because the MLC includes physical address tags in addition to the virtual addresses used by the graphics logic.
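The lookup cascade of FIG. 5 (graphics MLC at operations 505/510, LLC at 515/520, main memory at 525) can be sketched as a simple fall-through search. The dict-based caches below are illustrative stand-ins for real tag/data arrays, not an actual hardware interface:

```python
def snoop_for(addr, graphics_mlc, llc, main_memory):
    """Sketch of the CPU snoop order described above: the snoop goes to the
    graphics MLC first, falls through to the LLC on a miss, and finally
    reaches main memory. Each cache is modeled as a dict mapping physical
    address to data; the MLC can be searched by physical address because
    it holds physical tags alongside the graphics logic's virtual tags."""
    if addr in graphics_mlc:
        return graphics_mlc[addr]
    if addr in llc:
        return llc[addr]
    return main_memory[addr]
```

A real implementation would also handle ownership-state transitions on a hit; the sketch covers only the lookup order.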
Moreover, in one embodiment, because the graphics logic can store and access information in the LLC, the information requested by the CPU may be in the LLC and not in the MLC. One or more aspects of at least one embodiment can be implemented by representative data stored on a machine-readable medium, which data represents various logic in a processor and, when read by a machine, causes the machine to fabricate the logic for performing the techniques described herein. Such representations, referred to as "IP cores", may be stored on a tangible machine-readable medium ("tape") and supplied to various customers or production facilities for loading into the fabrication machines that actually manufacture the logic or processor. Accordingly, methods and apparatus for directing access to a micro-architectural memory region have been described. The above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those skilled in the art upon reading and understanding the above description. The scope of the invention should, therefore, be determined by the appended claims.
The present invention provides a method and product-by-method of integrating a bias resistor in circuit with a bottom electrode of a micro-electromechanical switch on a silicon substrate. The resistor and bottom electrode are formed simultaneously by first sequentially depositing a layer of a resistor material ( 320 ), a hard mask material ( 330 ) and a metal material ( 340 ) on a silicon substrate forming a stack. The bottom electrode and resistor lengths are subsequently patterned and etched ( 350 ) followed by a second etching ( 360 ) process to remove the hard mask and metal materials from the defined resistor length. Finally, in a preferred embodiment, the bottom electrode and resistor structure is encapsulated with a layer of dielectric which is patterned and etched ( 370 ) to correspond to the defined bottom electrode and resistor. |
1. A method to form an RF switch by integrating a resistor in circuit with a bottom electrode of a micro-electromechanical switch on a substrate, said method comprising the steps of: depositing a uniform layer of a resistor material over at least one side of said substrate; depositing a uniform layer of a hard mask material over said resistor material; depositing a uniform layer of a metal material over said hard mask material, wherein said deposited layers form a stack; patterning and etching said bottom electrode of said micro-electromechanical switch and resistor lengths from said stack; and etching said hard mask and metal material from said patterned resistor length to form said RF switch. 2. The RF switch of claim 1, wherein said hard mask and metal material remain substantially covering said patterned bottom electrode subsequent to said etching of said hard mask and metal material from said patterned resistor length. 3. The RF switch of claim 2, further comprising the step of depositing a dielectric over said patterned bottom electrode and resistor following said etching of said hard mask and metal material from said patterned resistor length. 4. The RF switch of claim 3, further comprising the step of patterning and etching said deposited dielectric to correspond to said patterned bottom electrode and resistor lengths. 5. The RF switch of claim 3, wherein said act of depositing a dielectric is performed immediately subsequent to etching said hard mask and metal material from said patterned resistor length. 6. The RF switch of claim 1, wherein said substrate comprises a deposited uniform layer of an anchor material comprising SiO2. 7. The RF switch of claim 1, wherein said resistor material comprises NiCr. 8. The RF switch of claim 1, wherein said metal material comprises Al-Si. 9. The RF switch of claim 1, wherein at least one of said etching acts comprises wet etching.
This application is a DIV. of Ser. No. 09/941,031 filed on Aug. 26, 2001, U.S. Pat. No. 6,098,082.

BACKGROUND OF THE INVENTION

1. Technical Field of the Invention

The present invention relates generally to the field of micro-electromechanical switches, and, more particularly, to an apparatus and method of forming resistors and switch-capacitor bottom electrodes.

2. Description of Related Art

Rapid advances made in the field of telecommunications have been paced by improvements in the electronic devices and systems which make the transfer of information possible. Switches which allow the routing of electronic signals are important components in any communication system. Electrical switches are widely used in microwave circuits for many communication applications such as impedance matching, adjustable-gain amplifiers, and signal routing and transmission. Current technology generally relies on solid-state switches, including MESFETs and PIN diodes. Switches which perform well at high frequencies are particularly valuable. The PIN diode is a popular RF switch; however, this device typically suffers from high power consumption (the diode must be forward biased to provide carriers for the low-impedance state), high cost, nonlinearity, low breakdown voltages, and large insertion loss at high frequencies. The technology of micro-machining enables the fabrication of intricate three-dimensional structures with the accuracy and repeatability inherent to integrated circuit fabrication, offering an alternative to semiconductor electronic components. Micro-mechanical switches offer advantages over conventional transistors because they function more like mechanical switches, but without the bulk and high costs.
These new structures allow the design and functionality of integrated circuits to expand in a new dimension, creating an emerging technology with applications in a broad spectrum of technical fields. Recently, micro-electromechanical (MEM) switches have been developed which provide a method of switching RF signals with low insertion loss, good isolation, high power handling, and low switching and static power requirements. Systems use single MEM switches or arrays of switches for functions such as beam steering in a phased-array radar, for example. The switches switch a high-frequency signal by deflecting a movable element (conductor or dielectric) into or out of a signal path to open or close either capacitive or ohmic connections. An excellent example of such a device is the drumhead capacitive switch structure, which is fully described in U.S. Pat. No. 5,619,061. In brief, an input RF signal comes into the structure through one of two electrodes (bottom electrode or membrane electrode) and is transmitted to the other electrode when the membrane is in contact with a dielectric covering the bottom electrode. MEM devices can also be integrated with other control circuitry to operate well in the microwave regime. For example, to operate as a single-pole double-throw (SPDT) switch for directing signal or power flow between other components in a microwave system, the MEM switch is placed in circuit with passive components (resistors, capacitors, and inductors) and at least one other switch. However, a problem exists when this type of circuit integration is attempted in silicon, because of the divergent temperature processes of MEM components (such as the electrodes) and passive components (such as bias resistors).
Therefore, there exists a need for a method of efficiently fabricating a micro-electromechanical switch by simultaneous formation of component resistors and switch electrodes.

SUMMARY OF THE INVENTION

The present invention achieves technical advantages as a method and product-by-method of integrating a resistor in circuit with a bottom electrode of a micro-electromechanical switch on a substrate. The method includes depositing a uniform layer of a resistor material over at least one side of the substrate, depositing a uniform layer of a hard mask material over the resistor material, and depositing a uniform layer of a metal material over the hard mask material, forming a stack. Following the depositing acts, a bottom electrode and resistor length are patterned and etched from the deposited stack. In a second etching, the hard mask and metal materials are etched from the patterned resistor length, while the hard mask and metal materials remain substantially covering the patterned bottom electrode. Further, in a preferred embodiment, the bottom electrode and resistor structure is encapsulated with a deposited layer of dielectric which is subsequently patterned and etched to correspond to the structure.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings, wherein:

FIG. 1 illustrates a drumhead capacitive micro-electromechanical switch;

FIG. 2 illustrates a single-pole double-throw series-shunt RF switch configuration;

FIG. 3 illustrates a method of fabricating, by simultaneous formation, a resistor and bottom electrode of a micro-electromechanical switch in accordance with the present invention;

FIG. 4 illustrates growth deposit of silicon dioxide on a microwave-quality silicon substrate wafer in accordance with the present invention;

FIG.
5 illustrates a deposited stack of thin-film resistive material, hard mask material and metal on the silicon substrate wafer in accordance with the present invention;

FIG. 6A illustrates a bottom electrode structure in circuit with a thin-film resistor and bond pad in accordance with the present invention;

FIG. 6B illustrates a cross section of the structure illustrated in FIG. 6A;

FIG. 7A illustrates a resist pattern of the structure illustrated in FIG. 6A;

FIG. 7B illustrates a cross section of the structure illustrated in FIG. 7A; and

FIG. 8 illustrates the deposit, pattern and etch of a primary dielectric on the structure illustrated in FIG. 7B.

DETAILED DESCRIPTION OF THE INVENTION

The numerous innovative teachings of the present application will be described with particular reference to the presently preferred exemplary embodiments. However, it should be understood that this class of embodiments provides only a few examples of the many advantageous uses and innovative teachings herein. In general, statements made in the specification of the present application do not necessarily delimit any of the various claimed inventions. Moreover, some statements may apply to some inventive features, but not to others. Currently used MEM switches were developed with improved electrical characteristics in the RF regime. An excellent example of such a device is the drumhead capacitive switch 100 illustrated in FIG. 1. The details of the MEM switch are set forth in U.S. Pat. No. 5,619,061, the disclosure of which is incorporated herein by reference. In brief, an input RF signal enters the structure through one of the electrodes (bottom electrode 10 or membrane electrode 20) and is transmitted to the other electrode when the movable membrane electrode 20 is in contact with a dielectric 30 covering the bottom electrode 10. The membrane electrode 20 is movable through the application of a DC electrostatic field and is suspended across an insulating spacer 60.
The insulating spacer 60 can be made of various materials such as photo-resist, PMMA, etc., or can be conductive in other embodiments. Application of a DC potential between the membrane electrode 20 and the bottom electrode 10 causes the movable membrane to deflect downwards due to the electrostatic attraction between the electrodes. In the on position (membrane 20 down), the membrane electrode 20 is electrostatically deflected to rest atop the dielectric 30, and is capacitively coupled to the bottom electrode 10 with an on-capacitance given by Con ≈ εdie·A/Ddie. In this equation, εdie is the dielectric constant of the dielectric which covers the bottom electrode 10 and Ddie is the thickness 50 of the dielectric. In the "off" (membrane 20 up) position, the off-capacitance is given by Coff ≈ εair·A/Dair. In this equation, A is the cross-sectional area of the electrode (i.e., the area where metal is on both sides of the air dielectric), εair is the dielectric constant of air, and Dair is defined as the distance 70 between the lower portion of the membrane and the upper portion of the dielectric. The off/on impedance ratio is given by (εdie·Dair)/(εair·Ddie), and can be large (greater than 100:1) depending on the physical design of the device and the material properties of the insulator. A ratio of 100:1 is more than sufficient for effectively switching microwave signals. A single MEM switch operates as a single-pole single-throw (SPST) switch. However, switch applications used in microwave systems for directing signals and/or power flow, for example, frequently require a SPDT switch placed in circuit with passive components such as resistors, capacitors and inductors. Referring now to FIG. 2, there is illustrated a single-pole double-throw (SPDT) shunt RF switch 200 which includes multiple MEM switches and passive components. As shown, both resistors and capacitors are required for desired operation.
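As a numeric check of the off/on ratio formula above: the electrode area A cancels, leaving (εdie·Dair)/(εair·Ddie). The sketch below uses illustrative dimensions that are assumptions rather than values from the text (a relative permittivity of 7.5, roughly that of Si3N4, a 1000 Å dielectric, and a 2 µm air gap):

```python
def off_on_ratio(eps_die_rel, d_die_m, d_air_m, eps_air_rel=1.0):
    """Off/on impedance ratio (eps_die * D_air) / (eps_air * D_die).
    The electrode area A appears in both C_on and C_off and cancels."""
    return (eps_die_rel * d_air_m) / (eps_air_rel * d_die_m)

# Assumed example geometry: Si3N4-like dielectric (relative permittivity
# ~7.5), 1000 Angstrom (1e-7 m) dielectric thickness, 2 um air gap.
ratio = off_on_ratio(7.5, 1000e-10, 2e-6)  # = 150.0, comfortably above 100:1
```

With these assumed numbers the ratio is 150:1, consistent with the text's observation that ratios greater than 100:1 are achievable.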
For operation, a switch pull-down voltage is applied to the bias left pad 210, resulting in switch 201 and switch 203 being turned on. An RF signal at the RF input 220 goes through switch 201, through the coupling capacitor 211 and out of Left RF Out. The signal is blocked from going to ground by bias resistor 212, which, with a typical 10K ohm resistance, is large in comparison to the typical 50 ohm T-line that Left RF Out is connected to. Any signal that may get through switch 202 is routed through switch 203 to ground, hence assuring that the signal does not go out of Right RF Out. The capacitors in the circuit act to block DC signals. The resistors are required in this circuit in order to aid in the routing of signals and to isolate the DC bias from the RF signal. However, the above-described SPDT circuit is difficult to realize in silicon because of the fabrication requirements of the polysilicon resistors which are routinely used in IC technology. Because polysilicon is a relatively high-temperature process (deposited at 620 deg. C.), poly deposition and etch must be done before the MEM device is built. This is certainly mandatory for aluminum-based bottom electrodes. For more effective operation, MEM contacts demand a very smooth surface in order to assure that the contact area between the membrane 20 (when in the down condition) and the primary capacitor dielectric 30 is maximized. The higher-temperature etch and implantation processing required for poly resistor fabrication roughens the underlying oxide on which the bottom electrode metal is deposited. This roughness will be transmitted to the bottom electrode 10 itself, thus reducing the effective contact area of the electrodes. The present invention uses thin-film resistors for creating bias resistors, for example, for fabrication with MEM switches, eliminating the problems associated with polysilicon resistor fabrication.
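The isolation provided by the 10K-ohm bias resistor against the 50-ohm line can be quantified with a simple shunt power-divider estimate. This is a first-order sketch that ignores reactive effects, and the function name is illustrative:

```python
def bias_leakage_fraction(r_bias_ohms=10_000.0, z_line_ohms=50.0):
    """Fraction of RF power diverted into the shunt bias resistor relative
    to the matched line load. Both elements see the same node voltage V,
    so P_bias / P_line = (V^2 / R_bias) / (V^2 / Z_line) = Z_line / R_bias."""
    return z_line_ohms / r_bias_ohms

# With the typical values quoted above (10K ohm vs 50 ohm), only 0.5% of
# the power leaks through the bias resistor to ground.
```

This makes concrete why a 10K-ohm resistor "blocks" the signal path to ground in the circuit of FIG. 2.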
Consequently, the material used for fabrication of the MEM switch bottom electrode and the resistor can be deposited in the same operation. Simultaneous formation of the resistor and bottom electrode also saves the time and expense of at least one mask step. Additionally, the fabrication technique of the present invention is a low-temperature process which allows for fabrication of resistors after that of any capacitors, when required. Referring now to FIG. 3, there is illustrated a method of fabricating, by simultaneous formation, a resistor and bottom electrode of a micro-electromechanical switch in accordance with the present invention. In a first step 310 of a preferred embodiment, an anchor material such as SiO2 is grown (or deposited) on a microwave-quality wafer or substrate. FIG. 4 illustrates a preferred embodiment of a growth deposit of SiO2 on a silicon substrate; however, the substrate can be made of various materials, for example, silicon on sapphire, gallium arsenide, alumina, glass, silicon on insulator, etc. Formation of the switch on a thick oxide region on a silicon substrate permits control circuitry for control electrodes to be integrated on the same die as the switch. The oxide also helps reduce dielectric losses associated with the silicon substrate. Referring back to FIG. 3, in a next step 320, a thin-film resistor material is deposited. The details for the fabrication of thin-film resistors using metals such as TaN, SiCr, or NiCr are set forth in U.S. patent application Ser. No. 09/452,691 filed Dec. 2, 1999, Bailey et al., the disclosure of which is incorporated herein by reference. NiCr is used as the thin-film resistor material in the preferred embodiment and will be considered here, although any of the other above-mentioned materials can be used. After the thin-film material deposit, a hard mask material, adapted from generally known micro-fabrication techniques, is deposited in a subsequent act 330 over the NiCr layer.
In a preferred embodiment, approximately 1000 Å of TiW is deposited in deposition act 330. In a final deposition act 340, a low-resistivity metal is deposited. In a preferred embodiment, Al-Si is deposited to a thickness required for optimized RF operation of the switch. Generally, approximately 4000 Å of Al-Si is sufficient. The entire stack of substrate, silicon dioxide, NiCr, TiW and Al-Si will serve as the switch bottom electrode and bias resistor. Referring now to FIG. 5, there is illustrated a deposited stack of thin-film resistive material 510, hard mask material 520 and metal 530 on a silicon substrate in accordance with the present invention. In a preferred embodiment, each layer is uniform. Subsequent to stack completion, the bottom electrode, first-level interconnects, and the resistor lengths are patterned and the entire metal stack etched 350 (FIG. 3). FIG. 6A illustrates the bottom electrode 610, resistor 620, interconnect 630 and a bond pad 640 which have been patterned and etched, in accordance with the present invention, defining the bottom electrode and resistor lengths, and FIG. 6B illustrates a cross-section view of FIG. 6A through AA. In the preferred stack of Al, TiW and NiCr, the Al can be either wet or dry etched, while the TiW and NiCr are wet etched in a preferred embodiment. The next step 360 (FIG. 3) is a resist pattern which exposes the resistor to an etch which removes the metal and hard mask materials (the Al and TiW in this case). FIG. 7A illustrates the bottom electrode 610 and resistor 620 after the Al and TiW have been removed, and FIG. 7B illustrates a cross-section view of FIG. 7A through AA. Note that the bottom electrode is not affected by this second etch step 360 (it is completely covered with resist). At this stage, a primary capacitor dielectric is deposited on the bottom electrode and patterned and etched 370. The primary dielectric is SiO2, Si3N4 or Ta2O5, for example, although the use of any suitable dielectric is foreseen. FIG.
8 illustrates the bottom electrode and resistor structure following the dielectric deposit, pattern and etch. Item 810 shows the dielectric covering the bottom electrode, and item 820 shows the dielectric covering part of the resistor. It is recommended that the exposed resistor material be encapsulated as soon as possible following the removal of the hard mask material. Although a preferred embodiment of the method and system of the present invention has been illustrated in the accompanying drawings and described in the foregoing Detailed Description, it is understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit of the invention as set forth and defined by the following claims.
A high-breakdown voltage transistor (30; 30') is disclosed. The transistor (30; 30') is formed into a well arrangement in which a shallow, heavily doped, well (44) is disposed at least partially within a deeper, more lightly-doped well (50), both formed into an epitaxial layer (43) of the substrate (42). The deep well (50) is also used, by itself, for the formation of high-voltage transistors, while the shallower well (44) is used by itself in low-voltage, high-performance transistors. This construction permits the use of high-performance, and precisely matching, transistors in high bias voltage applications, without fear of body-to-substrate (or "back-gate-to-substrate") junction breakdown. |
We claim: 1. An integrated circuit, including first, second and third different types of MOS transistors formed in respective first, second, and third regions in a semiconductor substrate, comprising: a. a first well formed in the first region by implanting ions of a first certain species, energy, and dosage in the region, the first well having a first certain depth and sheet resistance; b. a second well formed in the second region by implanting ions of a second certain species, energy, and dosage in the region, the second certain energy and dosage being different from the first certain energy and dosage, the second well having a second certain depth and sheet resistance, different from the first certain depth and sheet resistance; c. a third well formed in the third region by implanting ions in the third region of the first certain species, energy, and dosage at the time of the forming of the first well, and by implanting ions in the third region of the second certain species, energy, and dosage at the time of the forming of the second well, the implanted ions in the third well including the sum of the implanted ions in the first and the second wells, the third well having a third certain depth and sheet resistance that is different from the first and the second certain depths and sheet resistances; d. a first type of an MOS transistor formed in the first well, the first well forming the body and the channel region of the first type of an MOS transistor; e. a second type of an MOS transistor formed in the second well, the second well forming the body and the channel region of the second type of an MOS transistor; f. a third type of an MOS transistor formed in the third well, the third well forming the body and the channel region of the third type of an MOS transistor. 2. The integrated circuit in claim 1, in which the semiconductor substrate includes silicon. 3. The integrated circuit in claim 2, in which the semiconductor substrate includes p-type silicon. 4.
The integrated circuit in claim 1, in which the MOS transistors are formed in a first epitaxial layer. 5. The integrated circuit in claim 4, in which the first epitaxial layer includes silicon. 6. The integrated circuit in claim 5, in which the first epitaxial layer is p-type. 7. The integrated circuit in claim 1, in which a buried layer of a certain thickness and sheet resistance is formed underneath the first and the third regions. 8. The integrated circuit in claim 7, in which the buried layer is n-type. 9. The integrated circuit in claim 7, in which the buried layer is formed in a second p-type epitaxial layer. 10. The integrated circuit in claim 1, in which the first well is deeper and less heavily doped than the second well. 11. The integrated circuit in claim 1, in which the first well has a depth of 4 to 6 micrometers and a sheet resistance of about 2150 ohms per square, the second well has a depth of about 2.5 micrometers and a sheet resistance of about 850 ohms per square, and the first and the third types of MOS transistors have a body-to-substrate breakdown voltage of over 60 volts. 12. A silicon integrated circuit, including first, second, and third different types of PMOS transistors formed in respective first, second, and third regions in a p-type silicon substrate, comprising: a. a first p-type silicon epitaxial layer formed over the surface of the substrate; b. an n-type buried layer formed in the first and the third regions in the first epitaxial layer; c. a second p-type silicon epitaxial layer formed over the first epitaxial layer and the buried layers; d. a first n-well formed in the first region by implanting phosphorous ions of a first certain energy and dosage in the region, the first n-well having a depth of 4 to 6 micrometers and a sheet resistance of about 2150 ohms per square; e.
a second n-well formed in the second region by implanting phosphorous ions of a second certain energy and dosage in the region, the n-well in the second region having a depth of about 2 micrometers and a sheet resistance of about 850 ohms per square; f. a third n-well formed in the third region, by implanting phosphorous ions in the third region at the time of the forming of the first n-well of the first certain energy and dosage, and by implanting phosphorous ions in the third region at the time of the forming of the second n-well of the second certain energy and dosage, the implanted phosphorous ions in the third n-well including the sum of the implanted phosphorous ions in the first and the second n-wells, the third n-well having a third certain depth and sheet resistance that is different from the first and the second certain depths and sheet resistances; g. a first type of a PMOS transistor formed in the first n-well in which the n-well forms the body and the channel region of the transistor, having a body to substrate breakdown voltage of above 60 volts; h. a second type of a PMOS transistor formed in the second n-well in which the n-well forms the body and the channel region of the transistor having high gain and good matching capability; and i. a third type of PMOS transistor formed in the third n-well in which the n-well forms the body and the channel region of the transistor having a body to substrate breakdown voltage of above 60 volts and a gain and matching capability comparable to a second type of PMOS transistor of comparable size. |
CROSS-REFERENCE TO RELATED APPLICATIONS

Not applicable.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not applicable.

BACKGROUND OF THE INVENTION

This invention is in the field of integrated circuits, and is more specifically directed to the construction of field-effect transistors used in such circuits. A continuing trend in the field of electronic integrated circuits is the reduction in transistor feature size. These smaller feature sizes enable a higher level of functionality for the integrated circuit, and also significantly reduce the manufacturing cost of the circuit. The manufacturing cost is reduced not only by increasing the number of integrated circuit dies that may be fabricated on a single wafer (and thus for substantially the same cost), but also by increasing the theoretical yield of the wafer for a given defect density by reducing the area affected by a single "killing" defect. Additionally, the performance of the integrated circuit generally improves along with the faster switching times provided by smaller transistors. The reduction in transistor feature sizes has necessitated, in many instances, a reduction in the operating voltages applied to the integrated circuit, because many of the device breakdown voltages are lower for smaller devices. For example, a smaller channel length in a metal-oxide-semiconductor (MOS) transistor generally translates into a lower source-to-drain breakdown voltage. Additionally, reduction in lateral transistor feature sizes, such as channel lengths and electrode widths, generally also necessitates reduced junction depths and other vertical features. Some integrated circuit applications still require high-voltage operation, however. For example, the use of integrated circuits in motor control and automotive applications may require high-voltage output signals, because of the load requirements of such devices.
Additionally, some environments may also require integrated circuits to be able to withstand high bias voltages. Accordingly, modern integrated circuits utilizing extremely small active devices and transistors are not directly suitable for these applications.

In the past, separate "power" integrated circuits were used in combination with low-voltage high-performance integrated circuits in high-voltage applications. In this way, the high-performance integrated circuits could control the power ICs, which in turn would sink and source the high voltage or high current signals required by the application. Of course, for purposes of cost reduction, reduced form factor, and performance, it is desirable to integrate as much functionality as possible into the same integrated circuit. As a result, many modern integrated circuits include both high-performance (or "low-voltage") and high-voltage transistors.

However, the manufacturing processes required for integrating both high-performance and high-voltage transistors into the same integrated circuit can become quite complicated. It has been observed, in connection with the present invention, that the differences in construction between conventional low-voltage and high-voltage transistors do not permit optimization of both transistors in the same process. These differences are particularly dramatic in the formation of the wells into which the transistors are formed. As a result, conventional manufacturing flows utilize separate processes for the fabrication of low-voltage and high-voltage transistors.

Referring now to FIGS. 1a and 1b, the construction of a conventional high-performance, or "low-voltage", p-channel MOS transistor is illustrated in plan and cross-sectional views, respectively. In this example, the transistor is formed at a surface of p-type substrate 2, on which p-type epitaxial layer 3 is formed in the conventional manner.
The transistor is formed into n-well 4, which serves as the body region of the MOS transistor. Field oxide structures 5, which may be either conventional LOCOS thermal silicon oxide or silicon oxide deposited into recesses etched into the surface, define the active regions of the device. Polysilicon gate electrode 10 is disposed over a selected location of this active region, and p+ diffused regions 6 are formed into n-well 4 at locations not covered by field oxide structure 5 and gate electrode 10; as a result, p-type source and drain regions of the transistor are formed in a self-aligned manner relative to gate electrode 10. Sidewall filaments may be provided on the sides of the gate electrode, if desired, to facilitate later silicidation of the structure and to permit the formation of graded source-drain junctions (typically more appropriate for n-channel devices). Following the deposition of multilevel insulator 7 (which is not shown in FIG. 1a to permit viewing of the structure) and the etching of contact openings through this film, metal conductors 8 may be formed in the conventional manner to make contact to the desired elements of the transistor. In this example, metal electrodes 8s and 8d make contact to the source and drain of the transistor, respectively, while metal electrode 8bg makes a "back-gate" contact (also referred to as a "body" contact) to well 4 via n+ diffused region 9, so that the body region of the device may be biased to a desired voltage.

Several features of the transistor of FIGS. 1a and 1b are specific to low-voltage, high-performance, devices. Generally, n-well 4 will be relatively shallow, and relatively heavily doped (although not as heavily doped as source-drain regions 6). For example, in a conventional sub-micron process, n-well 4 may be on the order of two microns deep into epitaxial layer 3, and may have a doping concentration on the order of 3*10^16 cm^-3, resulting in a sheet resistance on the order of 850 Ω/square.
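As a rough consistency check (not part of the patent), the quoted well parameters can be related by the textbook uniform-layer approximation Rs ≈ 1/(q·μn·Nd·t). The mobility value below is an assumed typical figure for n-type silicon, and the uniform-doping assumption is a simplification of a real diffused well:

```python
# Rough sheet-resistance estimate for a uniformly doped n-well:
#   Rs ~= 1 / (q * mu_n * N_d * t)
# Assumptions (not from the patent): constant electron mobility and
# uniform doping over the full well depth.

Q = 1.602e-19    # electron charge, C
MU_N = 1000.0    # assumed electron mobility, cm^2/(V*s)

def sheet_resistance(n_d_cm3, depth_um):
    """Sheet resistance (ohms/square) of a uniform n-type layer."""
    t_cm = depth_um * 1e-4  # convert micrometers to centimeters
    return 1.0 / (Q * MU_N * n_d_cm3 * t_cm)

# Low-voltage well: ~3*10^16 cm^-3, ~2 um deep (patent quotes ~850 ohm/sq)
rs_lv = sheet_resistance(3e16, 2.0)

# High-voltage well: ~4*10^15 cm^-3, ~4.5 um deep (patent quotes ~2150 ohm/sq)
rs_hv = sheet_resistance(4e15, 4.5)

print(round(rs_lv), round(rs_hv))  # same order of magnitude as the quoted values
```

Given the crude mobility assumption, the estimates land within a factor of about two of the quoted figures, which is enough to see why halving the doping while doubling the depth still yields a substantially higher sheet resistance.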
By making n-well 4 relatively shallow and heavily-doped, short-channel-length transistors formed in well 4 can have relatively high gain values of gm (or k'), and thus quite high performance. In addition, this construction permits excellent transistor matching behavior, as is necessary for precise applications such as current mirror circuits.

However, the heavy doping of n-well 4 necessary for high transistor gain results in relatively low breakdown voltages. For example, the transistor of FIGS. 1a and 1b can have a source-drain breakdown voltage on the order of five volts or lower. Additionally, the heavy doping of n-well 4 can limit the junction breakdown voltage at its interface with epitaxial layer 3 to as low as 25 volts or lower. While these breakdown voltages are well-suited for many high-speed circuit applications, some motor control and automotive applications cannot be implemented using such devices.

FIGS. 2a and 2b illustrate the construction of a high-voltage transistor, for which the breakdown voltages are significantly higher than in the case of the low-voltage transistor described above. This high-voltage transistor has many common features with the transistor of FIGS. 1a and 1b, including p+ diffused regions 16 and n+ diffused region 17, the locations of which are defined by field oxide structures 5 and gate electrode 18. Gate electrode 18 is significantly wider (from source to drain) than gate electrode 10 in the low-voltage transistor, providing a longer channel length and thus a higher source-drain breakdown voltage (e.g., on the order of ten to fifteen volts). This longer channel length is acceptable for this device, considering that transistor gain is not a major concern for high-voltage transistors. Metal electrodes 8bg, 8s, 8d are provided to make contact to the body node, source, and drain, respectively.

The high-voltage transistor is also similarly formed into substrate 2 and epitaxial layer 3.
However, n-well 14 is significantly more lightly doped, and also deeper, than the corresponding n-well 4 in the low-voltage device. For example, n-well 14 may have a doping concentration on the order of 4*10^15 cm^-3, resulting in a sheet resistance on the order of 2150 Ω/square; the depth of n-well 14 may be on the order of 4 to 5 microns, which is approximately twice as deep as in the low-voltage device. In some applications, n-type buried layer 19 may also be provided beneath the high-voltage transistor; this region is not necessary to the operation of the high-voltage transistor, but if such a buried layer is otherwise available (e.g., as a buried collector for bipolar transistors implemented in the same integrated circuit), layer 19 may be incorporated into the high-voltage transistors as shown in FIG. 2b. The deeper and more lightly-doped n-well 14 results in a significantly higher body-to-substrate breakdown voltage than in the case of the low-voltage devices. For example, a high-voltage transistor constructed as described above may have a substrate breakdown voltage on the order of 60 volts. However, this deep lightly-doped well significantly affects the performance of the device, greatly reducing the gain characteristics. As a result, these high-voltage devices are not suitable for use in performance-critical circuit locations. Additionally, the light doping of the well inserts a significant amount of variability into the construction of the high-voltage device, such that high-voltage devices fabricated in the same die do not match one another as well as low-voltage transistors.

Because of the dichotomy between the performance and breakdown characteristics presented by conventional low-voltage and high-voltage transistors, the circuit design must be careful not to require high-performance or closely-matched transistors in locations that may receive high bias voltages (either across source-drain or between the body region and substrate).
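The doping dependence of the junction breakdown voltages discussed above follows the textbook one-sided abrupt-junction estimate BV ≈ εSi·Ecrit²/(2·q·N). The sketch below is not from the patent; the critical-field value is an assumed round number (in reality it varies with doping), so only the trend, not the absolute voltages, should be read from it:

```python
# One-sided abrupt p-n junction breakdown estimate:
#   BV ~= eps_si * E_crit**2 / (2 * q * N)
# Assumptions (not from the patent): abrupt junction and a single fixed
# critical field, where N is the doping on the lightly doped side.

Q = 1.602e-19              # electron charge, C
EPS_SI = 11.7 * 8.854e-14  # silicon permittivity, F/cm
E_CRIT = 3e5               # assumed critical field, V/cm

def breakdown_voltage(n_cm3):
    """Estimated junction breakdown voltage (V) vs. light-side doping."""
    return EPS_SI * E_CRIT**2 / (2.0 * Q * n_cm3)

bv_shallow = breakdown_voltage(3e16)  # heavily doped low-voltage well
bv_deep = breakdown_voltage(4e15)     # lightly doped high-voltage well

# Lighter well doping gives a markedly higher breakdown voltage,
# consistent with the low-voltage vs. high-voltage figures above.
print(bv_shallow < bv_deep)
```

The roughly 7.5x doping reduction between the two wells translates directly into a proportionally higher estimated breakdown voltage, which is the physical basis for the deep, lightly-doped well of the high-voltage device.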
These constraints may, in some cases, only be met by sacrificing circuit performance. However, the particular circuit may not be sufficiently robust to tolerate such optimization.

BRIEF SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide a high-voltage transistor having an increased body-to-substrate breakdown voltage.

It is a further object of the present invention to provide such a transistor that can be constructed using existing process operations.

It is a further object of the present invention to provide such a transistor that is suitable for use in a circuit utilizing precise matched devices in a high voltage environment.

Other objects and advantages of the present invention will be apparent to those of ordinary skill in the art having reference to the following specification together with its drawings.

The present invention may be implemented into an integrated circuit that includes both high-voltage devices and at least one low-voltage, high-performance device. The low-voltage device is formed into a well that includes a shallow, heavily doped well formed into the deep, lightly-doped n-well. Other low-voltage devices not subject to high bias voltages, and other high-voltage devices, are formed into their own wells, namely the conventional shallow, heavily doped well for low-voltage transistors, and the deeper, more lightly-doped well for high-voltage devices.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
FIGS. 1a and 1b are a plan view and cross-sectional view, respectively, of a conventional low-voltage MOS transistor.
FIGS. 2a and 2b are a plan view and cross-sectional view, respectively, of a conventional high-voltage MOS transistor.
FIG. 3 is an electrical diagram, in block form, of an integrated circuit incorporating the preferred embodiment of the invention.
FIG. 4 is an electrical diagram, in schematic form, of a circuit within the integrated circuit of FIG.
3, incorporating transistors constructed according to the preferred embodiment of the invention.
FIG. 5 is a cross-sectional view of a low-voltage MOS transistor constructed according to a first preferred embodiment of the invention.
FIG. 6 is a cross-sectional view of a low-voltage MOS transistor constructed according to a second preferred embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION
The present invention may be realized within many types of integrated circuits, as well as various classes of transistors. As such, those skilled in the art will recognize from this specification that the present invention may be utilized in connection with a wide range of applications, and that therefore the following description is presented by way of example only.

Referring now to FIG. 3, integrated circuit 20, into which the preferred embodiments of the present invention may be implemented, is illustrated at a relatively high level. It is contemplated that integrated circuit 20, in this example, is intended for use in connection with a high voltage application, such as a motor control or automotive system. Integrated circuit 20 includes data processing circuitry 22, which in this example is relatively complex high performance digital circuitry, and as such is realized by conventional low-voltage, high-performance transistors such as those described above relative to FIGS. 1a and 1b. Data processing circuitry 22 is biased by power supply voltage Vdd, which is a relatively low voltage such as on the order of 3.3 volts. Input/output circuitry 24, on the other hand, is a block of high voltage circuitry such as may be used to communicate with high voltage load circuitry external to integrated circuit 20, and as such may therefore involve high voltage swings at its terminals. In this regard, input/output circuitry 24 is biased by power supply voltage VddHV, which is a high voltage such as on the order of sixty volts.
As such, input/output circuitry 24 is realized by way of conventional high-voltage transistors such as those described above relative to FIGS. 2a and 2b.

According to this preferred embodiment of the invention, integrated circuit 20 also includes high voltage analog circuit 25. High voltage analog circuit 25 performs a specific function useful to either or both of data processing circuitry 22 and input/output circuitry 24, but is biased by high power supply voltage VddHV and ground, as shown in FIG. 3. As such, high voltage analog circuit 25 includes high-voltage transistors such as those described above relative to FIGS. 2a and 2b. However, according to this preferred embodiment of the invention, certain devices within high voltage analog circuit 25 must have the properties of low-voltage transistors. For example, these certain devices may need to have high gain or rapid switching characteristics, or a pair of devices may need to be extremely closely matched relative to one another. These attributes necessitate the use of a relatively heavily doped and shallow n-well, for the case of a p-channel MOS device. However, given the high bias voltage applied by power supply VddHV, the conventional low-voltage transistor construction of FIGS. 1a and 1b would break down at such voltages.

FIG. 4 illustrates an example of high voltage analog circuit 25 which is biased by high voltage power supply VddHV but yet requires close matching between transistors. In the example of FIG. 4, high voltage analog circuit 25 is controlled by a reference voltage VREF, and has two legs between high voltage power supply VddHV and ground. A reference leg includes n-channel transistor 36 having its source at ground and its gate receiving reference voltage VREF. The drain of transistor 36 is connected to the drain of p-channel transistor 34, which has its source connected to the drain of p-channel mirror transistor 301.
Mirror transistor 301 has its source biased to high voltage power supply VddHV; the body node, or back gate, of mirror transistor 301 is also biased to high voltage power supply VddHV. In the mirror leg, p-channel mirror transistor 302 also has its source and body node biased to high voltage power supply VddHV. The gate of p-channel mirror transistor 302 is connected to its drain, and also to the gate of mirror transistor 301. The drain and gate of mirror transistor 302 are connected to the source of p-channel transistor 32, which in turn has its drain coupled to ground via current source 38. The gates of transistors 32, 34 are connected in common to a node between pull-down current source 40 and the anode of Zener diode 39; the cathode of Zener diode 39 is also biased to high voltage power supply VddHV.

In operation, high voltage analog circuit 25 operates substantially as a current mirror, in that the current drawn through mirror transistor 301, under the control of reference transistor 36, is mirrored (either one-to-one, or by a selected multiple) through mirror transistor 302 and thus through the mirror leg of the circuit. For proper operation, it is important that mirror transistors 301, 302 match one another, in performance characteristics, as closely as possible. If the mirrored current is to be a multiple of that drawn through the reference leg, mirror transistors 301, 302 will not identically match one another, but instead must be at a very precise gain relationship relative to one another. This precision in the current relationships of mirror transistors 301, 302 necessitates that these devices be fabricated as low-voltage transistors, with relatively shallow, heavily-doped wells, as noted above.
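To see why matching matters here, a simple square-law sketch (not from the patent; all device parameters below are illustrative) shows that any mismatch in the gain factor of the two mirror devices passes directly into the mirrored-current error:

```python
# Square-law MOS current mirror sketch. Both mirror devices share the
# same Vgs, so in saturation the mirrored current scales with the gain
# factor k' and the W/L ratio:
#   I_D = 0.5 * kp * (W / L) * (Vgs - Vt)**2
# All parameter values here are illustrative, not from the patent.

def drain_current(kp, w_over_l, v_ov):
    """Saturation drain current for gate overdrive v_ov = Vgs - Vt."""
    return 0.5 * kp * w_over_l * v_ov**2

V_OV = 0.3       # shared gate overdrive, V
KP_REF = 50e-6   # reference-device gain factor, A/V^2

i_ref = drain_current(KP_REF, 10.0, V_OV)

# A perfectly matched device mirrors the current one-to-one.
i_matched = drain_current(KP_REF, 10.0, V_OV)

# A 2% gain-factor mismatch (e.g. from well-doping variability)
# produces a 2% mirrored-current error.
i_mismatched = drain_current(KP_REF * 1.02, 10.0, V_OV)

error = i_mismatched / i_ref - 1.0
print(i_matched == i_ref, round(error, 3))
```

This first-order sensitivity is why the lightly-doped high-voltage wells, with their greater doping variability, are unsuitable for the mirror pair even though they would easily satisfy the breakdown requirement.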
On the other hand, because the other transistors 32, 34, 36 need not be so precisely matched and because their switching speed is not a critical factor, these transistors 32, 34, 36 may be fabricated as conventional high-voltage transistors, such as described above relative to FIGS. 2a and 2b. However, as shown in FIG. 4, the back gates, or body nodes, of mirror transistors 301, 302 are biased to high voltage power supply VddHV. Considering that the substrate of the integrated circuit containing mirror transistors 301, 302 will be biased to ground or at most to a relatively low voltage, and considering that high voltage power supply VddHV may be as high as sixty volts, mirror transistors 301, 302 must have a high substrate breakdown voltage. The source/drain breakdown voltage required of mirror transistors 301, 302 is not so very high, however, as the voltage drop across these transistors in high voltage analog circuit 25 of FIG. 4 is quite low, limited by the Zener diode breakdown. According to the present invention, mirror transistors 301, 302 are constructed in such a manner as to provide excellent matching (and also high performance switching speeds) and also a high breakdown voltage to substrate, without adding to the complexity of the manufacturing process.

Referring now to FIG. 5, the construction of transistors 301, 302 according to a first preferred embodiment of the present invention will now be described relative to an exemplary transistor 30. Transistor 30 is formed at a surface of p-type substrate 42, on which p-type epitaxial layer 43 is formed in the conventional manner. At the surface of the device, transistor 30 is substantially identical to the conventional low voltage transistor such as described above relative to FIGS. 1a and 1b, and as used elsewhere in integrated circuit 20 (e.g., in data processing circuitry 22).
P+ diffused regions 46 form source and drain regions, and n+ diffused region 49 forms a body contact region, at surface locations defined by field oxide structures 45 and, in the case of the source and drain regions, by gate electrode 40. Multilevel dielectric 47 provides insulation between these diffused regions and the overlying metal electrodes 48. Electrodes 48bg, 48s, 48d make contact to the body contact, source, and drain of transistor 30, respectively.

According to this preferred embodiment of the invention, transistor 30 is formed into two n-type wells 44, 50 in epitaxial layer 43. N-type well 44 is a relatively shallow heavily-doped well, as is used elsewhere in integrated circuit 20 for the formation of low-voltage high performance transistors, such as in data processing circuitry 22. For example, n-well 44 may be on the order of two microns deep. According to this first embodiment of the invention, n-well 44 is formed within deep n-well 50, which is a deeper, more lightly-doped n-well, as used elsewhere in integrated circuit 20 for the formation of high voltage transistors, such as transistors 32, 34 of high voltage analog circuit 25. For example, n-well 50 may extend to a depth of four to five microns, and in its portion beyond well 44, may have a doping concentration on the order of 4*10^15 cm^-3, with a sheet resistance on the order of 2150 Ω/square. At locations in integrated circuit 20 away from transistors 30, n-well 44 is formed to have a doping concentration on the order of 3*10^16 cm^-3, and thus a sheet resistance on the order of 850 Ω/square.
At the locations of transistor 30 within deep n-well 50, however, the doping concentration of n-well 44 will be slightly higher, as dopant from both wells 44, 50 will be present; it is contemplated, however, that the transistor body regions will be only slightly affected by the double-well doping, effectively providing the same transistor operation as if n-well 50 were not present.

These wells 44, 50 into which transistor 30 is formed are formed in the same process steps as the formation of the corresponding wells used in low-voltage and high-voltage transistors elsewhere in integrated circuit 20. As shown in FIG. 5 and described above, however, transistor 30 is formed in the combination of these wells. On the other hand, the low-voltage and high-voltage transistors elsewhere in integrated circuit 20 are formed into only one or the other of wells 44, 50, respectively.

While not shown in the example of FIG. 5, a buried n+ layer may be provided underlying deep n-well 50 for transistor 30. This buried layer would be similar to layer 19 shown above in FIG. 2b, and would serve to even out the local potentials within well 50; such buried layers are typically used as buried collectors for bipolar transistors, and as such would typically be used in connection with transistor 30 only if otherwise available. It is contemplated that the benefits of the present invention would be attained either with or without such a buried layer.

Referring back to FIG. 5, because n-well 44 is relatively shallow and heavily-doped, short-channel-length transistors 30 according to this preferred embodiment of the invention have relatively high gain values of gm (or k'), and thus rapid switching times. More importantly, for applications such as that of high voltage analog circuit 25 shown in FIG.
4, these attributes of well 44 permit excellent transistor matching behavior, as is necessary for precise applications such as that described above for high voltage analog circuit 25.

The addition of deep n-well 50 to transistor 30 provides an excellent improvement in the breakdown voltage from the body region of transistor 30 to substrate 42. This improvement is due not only to the depth of n-well 50, but also to the significantly lighter doping concentration of n-well 50.

Therefore, transistor 30 according to this preferred embodiment of the invention provides the benefits of excellent device characteristic matching, and high performance, but with greatly improved substrate breakdown voltages. The combination of these factors is obtained, according to the preferred embodiments of the invention, at no added manufacturing cost, considering that both wells 44, 50 are otherwise present in the device. As a result, the circuit designer is able to rely on low-voltage transistors even in a high bias environment, such as transistors 301, 302 in high voltage analog circuit 25, as long as drain-to-source voltage limits are not exceeded.

For most purposes, the construction of transistor 30 according to this first preferred embodiment of the invention is adequate. However, considering that the ion implant doses for forming regions 46 in transistor 30 will be identical to those used for low-voltage transistors elsewhere in integrated circuit 20, the actual net doping concentration of regions 46 in transistor 30 will slightly differ from that in the other low-voltage transistors, considering that the actual well doping of transistor 30 will include both the dopant of well 44 and also the dopant for deep n-well 50.
This could cause a drop in performance of these transistors 30, depending upon the process and the specific well concentrations.

According to a second preferred embodiment of the invention, the deleterious effects of such additional doping in the source-drain regions are avoided. FIG. 6 illustrates transistor 30', formed according to this second preferred embodiment of the invention. Transistor 30' is similarly suitable for use as transistors 301, 302 in high voltage analog circuit 25, and in other similar applications for which either high performance or precise matching is required, but where high bias voltages are also present.

As shown in FIG. 6, transistor 30' is constructed similarly to transistor 30 of FIG. 5, except that deep n-well 50' is limited to the edges of n-well 44. As is known in the art, junction breakdown tends to occur at sharp corners, as the electric fields are maximized at these locations. Accordingly, n-well 50' in this embodiment of the invention is formed to cover the corners of n-well 44, and is pulled away from the flat portion under the active source/drain regions of p+ diffusions 46 near gate electrode 40. As a result, the portions of n-well 44 near the source/drain regions have not received the additional n-type doping for forming deep n-well 50', and will thus have exactly the same net doping concentration as the other low-voltage transistors in integrated circuit 20, and thus the same performance.

The use of n-well portions 50' according to this second preferred embodiment of the invention will generally require an increase in the size of the active region, to ensure that lateral diffusion of the well portions 50' does not encroach into the active transistor region. This increased size is evident from a comparison of FIGS. 5 and 6, particularly along the drain side of transistor 30'.

In either case, the present invention provides high-performance transistors that have increased substrate junction breakdown voltage.
This permits the use of low-voltage transistors in high voltage applications, thus taking advantage of the high gain and precise matching provided by those transistors. In addition, this ability is provided without adding to the manufacturing cost of the integrated circuit, as only existing well diffusions are necessary.

While the present invention has been described according to its preferred embodiments, it is of course contemplated that modifications of, and alternatives to, these embodiments, such modifications and alternatives obtaining the advantages and benefits of this invention, will be apparent to those of ordinary skill in the art having reference to this specification and its drawings. It is contemplated that such modifications and alternatives are within the scope of this invention as subsequently claimed herein.
The present invention is directed to enhancing the analysis and modification of a flip chip integrated circuit die having silicon on insulator (SOI) structure. According to one example embodiment, an optical nanomachining arrangement is adapted to direct an optical beam, such as a laser, at a selected portion of the flip chip SOI structure. The optical beam performs device edits to modify the circuitry contained in the SOI selected portion without necessarily damaging surrounding circuitry. The ability to make such device edits is advantageous for various applications, such as in dies of complex circuitry containing multiple stacked layers of components, and for dies having densely packed circuitry.
What is claimed is: 1. A method for analyzing and modifying a flip chip integrated circuit die having a silicon on insulator (SOI) structure, the method comprising: providing a die having a thinned backside; and nanomachining the die to access a selected region of circuitry of the SOI structure, and to perform device edits on the accessed selected region of circuitry.
FIELD OF THE INVENTION
The present invention relates generally to semiconductor devices and their fabrication and, more particularly, to semiconductor devices and their manufacture involving techniques for analyzing and debugging circuitry within an integrated circuit.

BACKGROUND OF THE INVENTION
The semiconductor industry has recently experienced technological advances that have permitted dramatic increases in circuit density and complexity, and equally dramatic decreases in power consumption and package sizes. Present semiconductor technology now permits single-chip microprocessors with many millions of transistors, operating at speeds of hundreds of millions of instructions per second, to be packaged in relatively small, air-cooled semiconductor device packages. A by-product of such high density and high functionality in semiconductor devices has been the demand for increased numbers of external electrical connections to be present on the exterior of the die and on the exterior of the semiconductor packages which receive the die, for connecting the packaged device to external systems, such as a printed circuit board.

As the manufacturing processes for semiconductor devices and integrated circuits increase in difficulty, methods for testing and debugging these devices become increasingly important. Not only is it important to ensure that individual chips are functional, it is also important to ensure that batches of chips perform consistently. In addition, the ability to detect a defective manufacturing process early is helpful for reducing the number of defective devices manufactured.

To increase the number of pad sites available for a die, different chip packaging techniques have been used. One technique is referred to as a dual in-line package (DIP), in which bonding pads are along the periphery of the device. Another technique, called controlled-collapse chip connection or flip chip packaging, uses the bonding pads and metal (solder) bumps.
The bonding pads need not be on the periphery of the die and hence are moved to the site nearest the transistors and other circuit devices formed in the die. As a result, the electrical path to the pad is shorter. Electrical connections to the package are made when the die is flipped over onto the package with corresponding bonding pads. Each bump connects to a corresponding package inner lead. The resulting packages have a lower profile and have lower electrical resistance and a shortened electrical path. The output terminals of the package may be ball-shaped conductive-bump contacts (usually solder or other similar conductive material) and are typically disposed in a rectangular array. These packages are occasionally referred to as "Ball Grid Array" (BGA) packages. Alternatively, the output terminals of the package may be pins, and such a package is commonly known as the pin grid array (PGA) package.

For BGA, PGA and other types of packages, once the die is attached to the package, the backside portion of the die remains exposed. The transistors and other circuitry are generally formed in a very thin epitaxially grown silicon layer on a single crystal silicon wafer from which the die is singulated. In a structural variation, a layer of insulating silicon dioxide is formed on one surface of a single crystal silicon wafer, followed by the thin epitaxially grown silicon layer containing the transistors and other circuitry. This wafer structure is termed "silicon on insulator" (SOI), and the silicon dioxide layer is called the "buried oxide layer" (BOX). The transistors formed on the SOI structure show decreased drain capacitance, resulting in a faster switching transistor.

The side of the die including the epitaxial layer, containing the transistors and the other active circuitry, is often referred to as the circuit side or front side of the die. The circuit side of the die is positioned very near the package. The circuit side opposes the backside of the die.
Between the backside and the circuit side of the die is single crystalline silicon and, in the case of SOI circuits, also a buried oxide layer. The positioning of the circuit side provides many of the advantages of the flip chip.

In some instances the orientation of the die with the circuit side face down on a substrate may be a disadvantage or present new challenges. For example, when a circuit fails or when it is necessary to modify a particular chip, access to the transistors and circuitry near the circuit side is typically obtained only from the backside of the chip. This is challenging for SOI circuits since the transistors are in a very thin layer (about 10 micrometers) of silicon covered by the buried oxide layer (less than about 1 micrometer) and the bulk silicon (greater than 500 micrometers). Thus, the circuit side of the flip chip die is not visible or accessible for viewing using optical or scanning electron microscopy.

Additionally, as designers work to reduce dimensions of circuitry components to increase speed and fit more circuitry on a die, the resulting submicron structure of tightly spaced components presents increased challenges to debugging or modifying the die circuitry. The presence of a buried oxide layer adds to the difficulty. Existing machining and/or milling methods, such as FIB or laser etching, do not provide the accuracy and precision required to debug or modify new smaller circuitry components. Damage to surrounding circuitry occurs when attempting to access and modify a particular component using current methods. Thus, any circuit modification requires precise and accurate nanomachining for success.

Initially, modification of a flip chip SOI die requires removal of the majority of the bulk silicon layer from the backside. The die receives two or three steps of thinning in the process. First, the die receives global thinning across the whole die surface. Mechanical polishing is one method for global thinning.
Local thinning techniques, such as laser microchemical etching, thin the silicon in an area smaller than the die to a level below that reached by global thinning. One method for laser microchemical etching of silicon focuses a laser beam on the backside silicon surface to cause local melting of the silicon in the presence of chlorine gas. The molten silicon reacts very rapidly with the chlorine and forms silicon tetrachloride gas, which leaves the molten (reaction) zone. A specific example silicon-removal process uses the 9850 SiliconEtcher(TM) tool by Revise, Inc. (Burlington, Mass.). This laser process provides for both local and global thinning by scanning the laser over a part of, or the whole, die surface. The thinning stops short of the buried oxide layer (BOX) of the SOI integrated circuit.

Substrate removal can present difficulties. For instance, removal of too much substrate damages the BOX layer and the circuitry of the die. Further, it is desirable to perform precision operations on suspect circuitry portions without damaging surrounding circuitry. Presently, focused ion beam (FIB) systems are capable of removing substrate, but FIB systems also remove all circuitry components in their path to access suspect circuitry regions. Thus, there is an unmet need for a method and system to perform precision device edits on the internal circuitry of flip chip SOI dies without affecting surrounding circuitry of the die.

SUMMARY OF THE INVENTION

The present invention is directed to a method and system for nanomachining a semiconductor device having an SOI structure using an optical beam. The present invention is exemplified in a number of implementations and applications, some of which are summarized below.

According to an example embodiment, the present invention applies to a thinned backside integrated circuit die having a silicon on insulator (SOI) structure. An optical nanomachining system performs device edits to a selected region of the integrated circuit die.
The edits can be made, for example, to regions of the circuit buried beneath circuitry or other die structures. In this manner, edits can be made without affecting surrounding circuitry, thereby enhancing semiconductor manufacturing and analysis.

The above summary of the present invention is not intended to describe each illustrated embodiment or every implementation of the present invention. The figures and detailed description which follow more particularly exemplify these embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be more completely understood in consideration of the following detailed description of various embodiments of the invention in connection with the accompanying drawings, in which:

FIG. 1 illustrates an SOI flip chip, according to an example embodiment of the present invention;

FIG. 2 illustrates a thinned SOI flip chip, according to another example embodiment of the present invention;

FIG. 3 illustrates a nanomachined SOI flip chip, according to another example embodiment of the present invention;

FIG. 4 illustrates another nanomachined SOI flip chip, according to another example embodiment of the present invention; and

FIG. 5 illustrates an SOI flip chip receiving nanomachining by an optical nanomachining system, according to another example embodiment of the present invention.

While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not necessarily to limit the invention to the particular embodiments described.
On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION

The present invention is believed to be applicable to a variety of different types of semiconductor devices, and the invention has been found to be particularly suited for performing precision operations upon silicon on insulator (SOI) integrated circuits. The SOI die structure includes at least one layer of silicon dioxide between the bulk silicon substrate and an epitaxial silicon layer containing the die circuitry. While the present invention is not necessarily limited to such SOI devices, various aspects of the invention may be appreciated through a discussion of various examples using this context.

According to a particular embodiment of the present invention, a conventional flip chip type SOI die having a thinned backside is analyzed. An optical nanomachining system is used to perform device edits to a selected region of the integrated circuit die without necessarily affecting surrounding circuitry. The nanomachining system includes a laser beam capable of removing remaining substrate and buried oxide layer at a target area of the selected region of circuitry, and performing device edits to the accessed circuitry. The laser beam is also capable of performing device edits to a target area of a selected region of circuitry covered by one or more layers of overlying circuitry. In this manner, edits can be made without affecting surrounding circuitry, thereby enhancing semiconductor manufacturing and analysis.

FIGS. 1 and 2 provide cross-sectional views of an SOI die 15 having a circuit side 20 and a backside 30. In FIG. 2 the die 15 has received backside thinning according to an example embodiment of the present invention. Backside thinning removes a selected portion of the bulk silicon layer 40 without disrupting a buried oxide layer 24.
The circuit side 20 includes a number of circuit devices formed in a portion of the die referred to as an epitaxial layer 22. Thinning can be global, where the whole die backside 30 receives thinning, or it can be localized in a selected region, as depicted in FIG. 2. Backside thinning provides improved access to the selected region of circuitry for making edits thereto. FIGS. 1 and 2 include a representative portion of circuitry contained in the epitaxial layer 22. The circuitry portion shown is a typical transistor 70 having source and drain regions 72, 74 and a control gate 76 powered by interconnect electrical lines 78. Once the backside 30 of the die 15 has been thinned as in FIG. 2, optical nanomachining is used to perform device edits to a selected region of accessed circuitry of the die, without affecting surrounding circuitry.

According to an example embodiment of the invention, FIG. 3 illustrates nanomachining using a laser beam 240 capable of removing the remaining substrate 40 and buried oxide layer 24 at a target area of the selected region of circuitry. The laser beam 240 performs device edits to the accessed circuitry, for instance, the transistor 70. The laser beam 240 provides improved nanomachining compared to a FIB. For example, controlling the laser beam pulses determines the depth of material removed by the laser beam from the die 15. Additionally, the half-angle taper of the walls of the laser cut 80 in the die material ranges from 3.5 to 5.0 degrees, reflecting nearly vertical walls. The editing of a transistor 70 by disconnecting an interconnect line 78 is depicted in FIG. 3. The depth control and wall cut characteristics of optical laser beam nanomachining prevent damage to surrounding circuitry.

According to an example embodiment of the present invention, the laser beam 240 performs device edits to a target area of a selected region of circuitry covered by one or more layers of overlying circuitry, as illustrated in FIG.
4 where two transistors 70 are stacked. In this case the laser beam 240 focuses on a selected buried region of circuitry and, for instance, severs a buried interconnect line 78 by producing a void 90 at an appropriate depth in the stacked circuitry structures. Various other device edits available via optical beam nanomachining include reconnecting circuitry components, forming new connections between circuitry components, and adding a dopant to a circuitry component.

In another example embodiment of the present invention, an optical nanomachining system 200 directs an optical beam 240 at the thinned backside 30 of an SOI flip chip die 15, as depicted in FIG. 5. The thinning is stopped short of the buried oxide layer (BOX) 24 adjacent the epitaxial layer 22 and associated circuitry. The optical nanomachining system 200 includes a laser beam generating device 210 controlled by a central processor unit (CPU) 220, such as a computer. The system 200 also includes a navigation platform 230 that supports and positions the flip chip die 15.

In another example embodiment of the invention, the die 15 contains landmark indicia (not shown), used in manufacturing, for determining the location of circuitry components. The landmark indicia serve to position the die 15 in a predetermined location on the platform 230. The CPU 220 employs a stored map of the die circuitry to navigate to a selected region of circuitry and pinpoint a location for nanomachining. The CPU 220 also employs the stored map of the die 15 to control the laser beam generating device 210 for nanomachining the die 15 and editing the circuitry contained thereon.

The optical laser beam 240, directed at a selected region of the thinned die backside 30, accesses a selected circuitry region of interest. The laser beam 240 is an ultraviolet laser beam produced by the laser-generating device 210. The laser beam 240 provides improved nanomachining compared to presently used systems.
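The near-vertical walls quoted above (a half-angle taper of 3.5 to 5.0 degrees) translate into only a small lateral offset over the cut depth. The sketch below is illustrative only; the cut depth used in the example is a hypothetical value chosen to match the layer thicknesses cited earlier, not a figure from this document.

```python
import math

def lateral_taper(depth_um: float, half_angle_deg: float) -> float:
    """Lateral sidewall offset (in um) of a cut of the given depth
    whose walls taper at the given half angle from vertical."""
    return depth_um * math.tan(math.radians(half_angle_deg))

# For a 10 um deep cut (on the order of the epitaxial-layer thickness
# cited above), each sidewall drifts outward by well under 1 um:
for angle in (3.5, 5.0):
    print(f"half angle {angle} deg -> offset {lateral_taper(10, angle):.2f} um")
```

At 3.5 degrees the offset is about 0.61 um per sidewall for a 10 um cut, which is why such a taper "reflects nearly vertical walls" and limits collateral exposure of adjacent circuitry.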
As outlined above, the system 200 performs device edits on tightly packed circuitry components and on regions of circuitry covered by one or more layers of overlying circuitry, thereby enhancing semiconductor manufacture and analysis.While the present invention has been described with reference to several particular example embodiments, those skilled in the art will recognize that many changes may be made thereto without departing from the spirit and scope of the present invention, which is set forth in the following claims. |
A method of manufacturing a semiconductor structure, including etching an opening in a hard mask layer including a trench pattern with a first portion having a first width and a second portion being an oversized trench portion having a second width greater than a width of the first portion, the second portion being formed over a predetermined via location. Also included are steps of depositing a resist and patterning a via pattern in the predetermined via location, etching a via corresponding to the via pattern through the resist and at least partially through a dielectric layer, and etching an oversized trench portion corresponding to a second portion opening in the hard mask.
What is claimed is: 1. A method of manufacturing a semiconductor structure, comprising the steps of:etching a trench pattern opening in a hard mask layer, the trench pattern including a first portion having a first width and a second portion being an oversized trench portion having a second width greater than a width of the first portion, the second portion being formed over a predetermined via location; after etching the trench pattern opening, depositing a resist on said hard mask layer and the trench pattern opening, and patterning a via pattern in the predetermined via location; etching a via corresponding to the via pattern through the resist and at least partially through a dielectric layer; and etching an oversized trench portion corresponding to a second portion opening in the hard mask. 2. A method of manufacturing a semiconductor structure according to claim 1, wherein the width of the oversized trench portion is greater than the width of the trench by an amount equal to an overlay budget, wherein the overlay budget is applied to at least one side of the oversized trench portion pattern across a width of the oversized trench portion pattern.3. A method of manufacturing a semiconductor structure according to claim 2, wherein the overlay budget is applied to at least one side of the oversized trench portion pattern across a width of the oversized trench portion pattern.4. A method of manufacturing a semiconductor structure according to claim 3, wherein the overlay budget is applied to each side of the oversized trench portion pattern across a width of the oversized trench portion pattern.5. A method of manufacturing a semiconductor structure according to claim 4, wherein the overlay budget applied to each side of the oversized trench portion pattern is between approximately 40-60% of a calculated overlay budget.6. 
A method of manufacturing a semiconductor structure according to claim 4, wherein the overlay budget is between approximately 30 and 100 nanometers.7. A method of manufacturing a semiconductor structure according to claim 4, wherein the overlay budget is between approximately 50 and 70 nanometers.8. A method of manufacturing a semiconductor structure according to claim 4, wherein the overlay budget is applied to at least one side of the oversized trench portion pattern along a length of the oversized trench portion pattern.9. A method of manufacturing a semiconductor structure according to claim 3, wherein the overlay budget is applied to each side of the oversized trench portion pattern along a length of the oversized trench portion pattern.10. A method of manufacturing a semiconductor structure according to claim 3, wherein the dielectric layer is a low-k dielectric material.11. A method of manufacturing a semiconductor structure according to claim 3, further comprising depositing a conductor comprising copper within at least the via and oversized trench portion.12. A method of manufacturing a semiconductor structure, comprising the steps of:sequentially forming a metallization layer, an etch stop layer, a dielectric layer, and a hard mask layer; etching a trench pattern opening in a hard mask layer, the trench pattern including a first portion having a first width and a second portion being an oversized trench portion having a second width greater than a width of the first portion, the second portion being formed over a predetermined via location; after etching the trench pattern opening, depositing a resist on said hard mask layer and the trench pattern opening, and patterning a via pattern in the predetermined via location; etching a via corresponding to the via pattern through the resist and at least partially through a dielectric layer; and etching an oversized trench portion corresponding to the oversized opening in the hard mask. 13. 
A method of manufacturing a semiconductor structure according to claim 12, wherein the width of the oversized trench portion is greater than the width of the trench by an amount equal to an overlay budget.14. A method of manufacturing a semiconductor structure according to claim 13, wherein an overlay budget is applied to each side of the oversized trench portion pattern along a length of the oversized trench portion pattern.15. A method of manufacturing a semiconductor structure according to claim 13, wherein said etching a via step further comprises etching the via through the dielectric layer to reach the etch stop layer.16. A method of manufacturing a semiconductor structure according to claim 15, wherein said etching an oversized trench portion step further comprises etching the oversized trench portion using a timed etch.17. A method of manufacturing a semiconductor structure according to claim 13, wherein the overlay budget is between approximately 30 and 100 nanometers.18. A method of manufacturing a semiconductor structure according to claim 13, wherein the overlay budget is between approximately 40-60% of a calculated overlay budget.19. A method of manufacturing a semiconductor structure according to claim 13, wherein the dielectric layer is a low-k dielectric material.20. A method of manufacturing a semiconductor structure according to claim 19, further comprising the steps of:forming a diffusion barrier layer over sidewalls of the oversized trench portion and via; and depositing a conductor comprising copper into the oversized trench portion and via. |
FIELD OF THE INVENTION

The present invention relates to the manufacturing of semiconductor devices, and more particularly, to a dual inlaid structure.

BACKGROUND OF THE INVENTION

The escalating requirements for high density and performance associated with ultra large scale integration (ULSI) semiconductor device wiring are difficult to satisfy in terms of providing sub-micron-sized, low resistance-capacitance (RC) metallization patterns. This is particularly applicable when the sub-micron features, such as vias, contact areas, lines, trenches, and other shaped openings or recesses, have high aspect ratios (depth-to-width) due to miniaturization.

Conventional semiconductor devices typically comprise a semiconductor substrate, usually of doped monocrystalline silicon (Si), and a plurality of sequentially formed inter-metal dielectric layers and electrically conductive patterns. An integrated circuit is formed therefrom containing a plurality of patterns of conductive lines separated by inter-wiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines and logic interconnect lines. Typically, the conductive patterns of vertically spaced metallization levels are electrically interconnected by vertically oriented conductive plugs filling via holes formed in the inter-metal dielectric layer separating the metallization levels, while other conductive plugs filling contact holes establish electrical contact with active device regions, such as a source/drain region of a transistor, formed in or on a semiconductor substrate. Conductive lines formed in trench-like openings typically extend substantially parallel to the semiconductor substrate.
Semiconductor devices of such type according to current technology may comprise five or more levels of metallization to satisfy device geometry and microminiaturization requirements.

A commonly employed method for forming conductive plugs for electrically interconnecting vertically spaced metallization levels is known as "damascene" or "inlaid"-type processing. Generally, this process involves forming a via opening in the inter-metal dielectric layer or interlayer dielectric (ILD) between vertically spaced metallization levels. The via opening is subsequently filled with metal to form a via electrically connecting the vertically spaced apart metal features. The via opening is typically formed using conventional lithographic and etching techniques. After the via opening is formed, the via is filled with a conductive material, such as tungsten (W), using conventional techniques, and the excess conductive material on the surface of the inter-metal dielectric layer is then typically removed by chemical mechanical planarization (CMP).

A variant of the above-described process, termed "dual inlaid" processing, involves the formation of an opening having a lower contact or via opening section which communicates with an upper trench section. The opening is then filled with a conductive material to simultaneously form a contact or via in contact with a conductive line. Excess conductive material on the surface of the inter-metal dielectric layer is then removed by CMP. An advantage of the dual inlaid process is that the contact or via and the upper line are formed simultaneously.

High performance microprocessor applications require rapid speed of semiconductor circuitry, and the integrated circuit speed varies inversely with the resistance and capacitance of the interconnection pattern.
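The inverse relation between circuit speed and the interconnect resistance-capacitance product noted above can be sketched numerically. All numbers below are hypothetical illustrations, not figures from this document.

```python
def rc_delay_ps(resistance_ohm: float, capacitance_fF: float) -> float:
    """Lumped RC time constant tau = R * C, returned in picoseconds."""
    return resistance_ohm * (capacitance_fF * 1e-15) / 1e-12

# Hypothetical long submicron route: 150 ohm of line resistance
# driving 200 fF of wiring capacitance.
print(rc_delay_ps(150, 200))  # -> 30.0 (ps)

# Halving either factor halves the time constant, which is why both
# lower-resistivity metal (Cu in place of W) and lower-k dielectrics
# attack the same delay product:
print(rc_delay_ps(75, 200), rc_delay_ps(150, 100))  # -> 15.0 15.0
```

This is the motivation for the two material changes discussed next: Cu metallization lowers R, and low-k dielectrics lower C.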
As integrated circuits become more complex and feature sizes and spacings become smaller, the integrated circuit speed becomes less dependent upon the transistor itself and more dependent upon the interconnection pattern. If the interconnection node is routed over a considerable distance, e.g., hundreds of microns or more, as in submicron technologies, the interconnection capacitance dominates the circuit node capacitance loading and, hence, limits the circuit speed. As integration density increases and feature size decreases, in accordance with submicron design rules, the rejection rate due to integrated circuit speed delays significantly reduces manufacturing throughput and increases manufacturing costs.

Copper (Cu) and Cu-based alloys are becoming increasingly attractive for use in VLSI and ULSI semiconductor devices, which require multiple metallization levels. Cu and Cu-based alloy metallization systems have very low resistivities, significantly lower than that of W and even lower than those of previously preferred systems utilizing Al and its alloys. Additionally, Cu has a higher resistance to electromigration. Furthermore, Cu and its alloys enjoy a considerable cost advantage over a number of other conductive materials, notably silver (Ag) and gold (Au). Also, in contrast to Al and refractory-type metals (e.g., titanium (Ti), tantalum (Ta) and W), Cu and its alloys can be readily deposited at low temperatures by well-known "wet" plating techniques, such as electroless and electroplating techniques, at deposition rates fully compatible with the requirements of manufacturing throughput.

Another technique to increase the circuit speed is to reduce the capacitance of the inter-metal dielectric layers. Dielectric materials such as silicon oxide (SiO2) have been commonly used to electrically separate and isolate or insulate conductive elements of the integrated circuit from one another.
However, as the spacing between these conductive elements in the integrated circuit structure has become smaller, the capacitance between such conductive elements arising from a silicon oxide dielectric is more of a concern. This capacitance negatively affects the overall performance of the integrated circuit because of increased power consumption, reduced speed of the circuitry, and cross-coupling between adjacent conductive elements.

A response to the problem of capacitance between adjacent conductive elements caused by the use of silicon oxide dielectrics has led to the use of other dielectric materials, commonly known as low-k dielectrics. Whereas silicon oxide has a dielectric constant of approximately 4.0, many low-k dielectrics have dielectric constants less than 3.5. Examples of low-k dielectric materials include organic or polymeric materials. Another example is porous, low density materials in which a significant fraction of the bulk volume contains air, which has a dielectric constant of approximately 1. The properties of these porous materials are proportional to their porosity. For example, at a porosity of about 80%, the dielectric constant of a porous silica film, i.e., porous SiO2, is approximately 1.5.

A problem associated with the use of many low-k dielectric materials is that these materials can be damaged by exposure to oxidizing or "ashing" systems, which remove a resist mask used to form openings, such as vias, in the low-k dielectric material. This damage can cause the surface of the low-k dielectric material to become a water absorption site if and when the damaged surface is exposed to moisture. Subsequent processing, such as annealing, can result in water vapor formation, which can interfere with subsequent filling of a via/opening or an inlaid trench formed in the dielectric layer with a conductive material.
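The porosity figures quoted above can be checked with a simple volume-weighted mixing estimate. This linear model is an assumption for illustration (real porous films follow more complicated effective-medium rules), using the dielectric constants of about 4.0 for SiO2 and about 1 for air given in the text.

```python
def effective_k(porosity: float, k_matrix: float = 4.0, k_air: float = 1.0) -> float:
    """Volume-weighted linear estimate of a porous film's dielectric constant."""
    return porosity * k_air + (1.0 - porosity) * k_matrix

# At ~80% porosity the linear estimate gives 1.6, close to the ~1.5
# quoted above for porous silica:
print(effective_k(0.80))
```

The estimate of 1.6 lands near the ~1.5 quoted for an 80% porous silica film, consistent with the statement that the film's properties track its porosity.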
For this reason, the upper surface of the low-k dielectric material is typically protected from damage during removal of the resist mask by a capping layer, such as silicon oxide, disposed over the upper surface.

A number of different variations of an inlaid process using low-k dielectrics have been employed during semiconductor manufacturing. FIGS. 1A-1J depict a first dual inlaid process for forming vias and a second metallization level over a first metallization level, according to conventional techniques.

In FIG. 1A, a first etch stop layer 12 is deposited over a first metallization level 10. The first etch stop layer 12 acts as a passivation layer that protects the first metallization level 10 from oxidation and contamination and prevents diffusion of material from the first metallization level 10 into a subsequently formed dielectric layer. The first etch stop layer 12 also acts as an etch stop during subsequent etching of the dielectric layer. A typical material used as an etch stop is silicon nitride, which may be deposited by PECVD.

In FIG. 1B, a first dielectric layer 14 is deposited over the first etch stop layer 12, typically by spinning a liquid dielectric material onto the first etch stop layer 12 surface under ambient conditions to a desired depth. This is typically followed by a heat treatment to evaporate solvents present within the liquid dielectric material and to cure the film to form the dielectric layer 14.

In FIG. 1C, a second etch stop layer 40, also known as a middle stop layer or hard mask layer, is deposited over the first dielectric layer 14. The second etch stop layer 40 acts as an etch stop during etching of a dielectric layer subsequently formed over the second etch stop layer 40. As with the first etch stop layer 12, the second etch stop layer 40 may comprise a silicon nitride or silicon oxynitride deposited by PECVD.
A via pattern 41 is etched into the second etch stop layer 40 using conventional photolithography and appropriate anisotropic dry etching techniques, such as an O2 or (H2+N2) etch. These steps are not depicted in FIG. 1C and only the resulting via pattern 41 is depicted therein. The photoresist used in the via patterning is removed by an oxygen plasma, for example.

In FIG. 1D, a second dielectric layer 42 is deposited over the second etch stop layer 40. After formation of the second dielectric layer 42, a capping layer or hard mask 13 can be formed over the second dielectric layer 42. The function of the capping layer 13 is to protect the second dielectric layer 42 from the process that removes a subsequently formed resist layer. The capping layer 13 can also be used as a mechanical polishing stop to prevent damage to the second dielectric layer 42 during subsequent polishing away of conductive material that is deposited over the second dielectric layer 42 and in a subsequently formed via and trench. Examples of materials used as a capping layer 13 include silicon oxide and silicon nitride.

In FIG. 1E, the pattern of the trenches is formed in the capping layer 13 using conventional lithographic and etch techniques. The lithographic process involves depositing a resist 44 over the capping layer 13 and exposing and developing the resist 44 to form the desired pattern of the trench. The first etch, which is an anisotropic etch highly selective to the material of the capping layer and exposed portions of the resist 44, such as a reactive ion plasma dry etch, removes the exposed portions of the resist and underlying exposed portions of the capping layer 13.

In FIG. 1F, a second etch, which is highly selective to the material of the first dielectric layer 14 and second dielectric layer 42, anisotropically removes the dielectric material until the first etch stop layer 12 is reached. In this way, a trench 50 and via 51 are formed in the same etching operation.
The second etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the first low-k dielectric layer 14 directly below the opening in the second etch stop layer 40 and the exposed portions of the low-k dielectric materials. By using an anisotropic etch, the via 51 and the trench 50 can be formed with substantially perpendicular sidewalls.

The thickness of the trench photoresist is selected to be completely consumed by the end of the etch operation, to eliminate the need for photoresist stripping. This results in the structure depicted in the top portion of FIG. 1G, wherein all of the photoresist has been stripped. Another etch, which is highly selective to the material of the first etch stop layer 12, then removes the portion of the etch stop layer 12 underlying the via 51 until the etchant reaches the first metallization level 10. This etch is also typically a dry anisotropic etch with a chemistry designed not to attack any other layers, in order to expose a portion of the metallization.

In FIG. 1H, an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, is deposited. The combination of the adhesion and barrier material is collectively referred to as a diffusion barrier layer 20. The diffusion barrier layer 20 acts to prevent diffusion into the first and second dielectric layers 14, 42 of the conductive material subsequently deposited into the via 51 and trench 50.

In FIG. 1I, a layer 22 of a conductive material, for example, a Cu or Cu-based alloy, is deposited in the via 51 and trench 50 and over the capping layer 13. A typical process initially involves depositing a "seed" layer on the barrier layer 20, subsequently followed by conventional plating techniques, e.g., electroless or electroplating techniques, to fill the via 51 and trench 50.
So as to ensure complete filling of the via 51 and trench 50, the Cu-containing conductive layer 22 is deposited in the trench 50 and via 51 and over the upper surface of the capping layer 13.

In FIG. 1J, the entire excess thickness of the metal overburden layer 24 over the upper surface of the capping layer 13 is removed using a CMP process. A typical CMP process utilizes an alumina (Al2O3)-based slurry, which leaves a conductive plug in the via 51 and a second metallization level in the trench 50. The second metallization level has an exposed upper surface which is substantially co-planar with the upper surface of the capping layer 13.

One problem associated with the above-identified process is overlay error. Since integrated circuits are fabricated by patterning a plurality of layers in a particular sequence to generate features that require a particular spatial relationship with respect to one another, as shown in FIGS. 1A-1J, above, each layer must be properly aligned with respect to previously patterned layers to minimize the size of individual devices and thus maximize the packing density on the substrate. A perfect overlap is not easily achieved and some misalignment is common. Excessive misalignment between successive masks used in the manufacture of the semiconductor integrated circuit can produce an overlay error that may ultimately result in the failure of the circuit to operate properly. For example, this overlay error may cause a reduction in the final via size with a corresponding increase in via resistance. Therefore, an overlay tolerance or overlay "budget", as defined by the particular tools and processes employed, is required between two layers to ensure reliability in the construction of the resulting device.

An example of overlay error in the fabrication of a dual inlaid semiconductor structure in accord with the above process is depicted in plan view in FIG. 2A and in cross-section in FIGS. 2B and 2C. FIG.
2A shows a step in the fabrication process wherein a trench 200 is etched to reach a previously formed via hole pattern 250. FIG. 2B depicts in cross-section the trench pattern 200, resist layer 202, hard mask layer 204, dielectric layer 206, middle stop layer 208, dielectric layer 210, etch stop layer 212, metallization layer 214, and via hole pattern 250. This structure, and the method for forming the structure, comports with the method and structure for forming a dual inlaid structure discussed above, and a detailed discussion is therefore omitted. As shown, an overlay error exists between the via hole pattern 250 formed in the middle stop layer 208 and the trench pattern 200 formed in the hard mask layer 204. Upon formation of a trench and a via hole, in a manner as described above, the width of the resulting via contact W2 is less than the intended width W1. To overcome this problem, an overlay budget is conventionally applied to the via hole pattern 250, wherein approximately half of the overlay budget is applied to each side of the via hole pattern 250 width. However, this process requires numerous steps to form the dual inlaid structure and is complex.

Another conventional dual inlaid process for forming vias and a second metallization level over a first metallization level is shown in FIGS. 3A-3D.

In FIG. 3A, an etch stop layer 310 comprising a suitable etch stop material, such as silicon nitride, is deposited over a metallization level 300. The etch stop layer 310 acts as a passivation layer that protects the metallization level 300 from oxidation and contamination and prevents diffusion of material from the metallization level 300 into a subsequently formed dielectric layer. The etch stop layer 310 also acts as an etch stop during subsequent etching of the dielectric layer. A dielectric layer 320 is deposited over the etch stop layer 310. The dielectric layer may comprise a conventional dielectric or a low-k dielectric material.
A hard mask layer 330 is deposited over dielectric layer 320 and may comprise, for example, silicon carbide or silicon oxynitride. A resist 340 is deposited over hard mask layer 330.

As shown in FIG. 3B, a trench pattern 355 is lithographically formed in the resist 340 using conventional photolithography and appropriate anisotropic dry etching techniques. These steps are not depicted in FIG. 3B, and only the resulting structures are depicted. The patterning of resist 340 may be enhanced by use of an antireflective hard mask layer 330, such as silicon oxynitride. Portions of the hard mask layer 330 exposed by removing the exposed portions of the resist are then etched using conventional etching methods.

In FIG. 3C, the trench 360 is formed by anisotropically etching through dielectric layer 320 to an appropriate depth, determined by use of a closely timed etch. Alternatively, a middle stop layer (not shown) could be used. Subsequently, resist 340 used in the trench patterning is removed by an oxygen plasma, for example, and another resist 370 is applied over the hard mask layer 330 and trench 360.

As shown in FIG. 3D, a via 390 is formed using conventional photolithography and appropriate anisotropic dry etching techniques. These steps are not depicted in FIG. 3D, and only the resulting structure of the selective anisotropic etch of resist 370, dielectric layer 320, and etch stop layer 310 is shown. Subsequent to formation of the via 390 and trench 360, the resist 370 is removed and an adhesion/barrier material (not shown) is formed in the via 390 and trench 360. A conductive material such as Cu or a Cu-based alloy is then deposited over the via 390 and trench 360, followed by chemical mechanical polishing.

However, the dual inlaid process illustrated in FIGS. 3A-3D patterns the via over a substantial step, which seriously degrades pattern fidelity due to the different thicknesses across the surface causing, for example, significant light scatter.
Thus, lithography is made more difficult. Accordingly, a need exists for a method of forming a dual inlaid structure while minimizing the aforementioned disadvantages of conventional dual inlaid schemes, and a need exists for a simplified dual inlaid scheme that minimizes the required number of process steps to form the dual inlaid structure.

SUMMARY OF THE INVENTION

The need in the art for a simplified method of forming a dual inlaid structure which accounts for overlay error and minimizes the required number of steps while overcoming some of the deficiencies of the conventional dual inlaid techniques is met by embodiments of the present invention.

These embodiments provide, in one aspect, a method of manufacturing a semiconductor structure including etching an opening in a hard mask layer including a trench pattern with a first portion having a first width and a second portion being an oversized trench portion having a second width greater than a width of the first portion, the second portion being formed over a predetermined via location. Also included are steps of depositing a resist and patterning a via pattern in the predetermined via location, etching a via corresponding to the via pattern through the resist and at least partially through a dielectric layer, and etching an oversized trench portion corresponding to the second portion opening in the hard mask.

In another aspect, the invention includes a method of manufacturing a semiconductor device including the steps of sequentially forming a metallization layer, an etch stop layer, a dielectric layer, and a hard mask layer, followed by etching an opening in a hard mask layer including a trench pattern with a first portion having a first width and a second portion being an oversized trench portion having a second width greater than a width of the first portion, the second portion being formed over a predetermined via location.
The method also includes depositing a resist and patterning a via pattern in the predetermined via location, etching a via corresponding to the via pattern through the resist and at least partially through a dielectric layer, and etching an oversized trench portion corresponding to the oversized opening in the hard mask. In this aspect of the invention, an overlay budget is applied to each side of the oversized trench portion pattern across a width of the oversized trench portion pattern and to each side of the oversized trench portion pattern along a length of the oversized trench portion pattern.

In still another aspect, the invention includes a semiconductor device including at least one dielectric layer, a trench formed in the dielectric layer including a first portion having a first width and a second portion being an oversized trench portion having a second width greater than the first width of the first portion, wherein the second portion overlies a predetermined via location. A via is formed in the dielectric layer substantially in or adjacent the predetermined via location, and circumferential edges of the oversized trench portion are displaced from corresponding edges of the via opening by at least a predetermined overlay budget.

Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention.
Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout, and wherein:

FIGS. 1A-1J illustrate sequential phases of a conventional dual inlaid process.

FIGS. 2A-2C illustrate via alignment problems associated with conventional dual inlaid processes.

FIGS. 3A-3D illustrate sequential phases of a dual inlaid process in accord with another conventional dual inlaid process.

FIGS. 4A-4I depict sequential phases of a dual inlaid process in accord with the invention.

The figures referred to herein are drawn for clarity of illustration and are not necessarily drawn to scale and are not necessarily inclusive of every feature or aspect of the integrated circuits formed in conjunction with the aspects of the invention disclosed herein. Elements having the same reference numerals refer to elements having similar structure and function.

DETAILED DESCRIPTION OF THE INVENTION

The present invention addresses and provides a solution to some of the problems associated with overlay error between successive layers in semiconductor integrated circuits. Additionally, the present invention seeks to balance and minimize lithographic and etching disadvantages characteristic of conventional dual inlaid processes while, in a preferred aspect, minimizing the number of steps required to realize the completed dual inlaid structure.

A method in accord with this embodiment of the invention is illustrated in FIGS. 4A-4I. The dual inlaid process to be described is illustrative of one sequence of steps, but is not limited to the particular sequence of steps described to provide the dual inlaid structure, as other sequences of steps capable of providing the dual inlaid structure can be used to practice the invention.

As illustrated in FIG.
4A, an etch stop layer 410 is deposited over a metallization layer 400. The etch stop layer 410 acts as an etch stop during etching of a subsequently formed dielectric layer. The etch stop layer 410 may be formed from silicon nitride, although the invention is not limited in this manner and may include any conventional etch stop material. Any process capable of depositing the etch stop layer 410 is acceptable for use with the invention, and an illustrative process for depositing silicon nitride is PECVD.

In FIG. 4B, a dielectric layer 420 is deposited over etch stop layer 410. The dielectric layer 420 can be formed from any material capable of acting as a dielectric, such as silicon oxide; fluorosilicate glass (FSG or SiOF); hydrogen silsesquioxane (HSQ); hydrogenated diamond-like carbon (DLC); polystyrene; nanoporous silica; fluorinated polyimides; parylene (AF-4); poly(arylene) ether; polytetrafluoroethylene (PTFE); divinyl siloxane bisbenzocyclobutene (DVS-BCB); aromatic hydrocarbons; hybrid silsesquioxanes; and siloxanes, silsesquioxanes, aerogels, and xerogels having varying degrees of porosity. Other dielectric materials, such as low-k dielectric materials, may also be used in accord with the invention. These dielectric materials can be applied via conventional spin coating, dip coating, spraying, or meniscus coating methods, in addition to other coating methods that are well known in the art.

After formation of the dielectric layer 420, a middle stop layer (not shown) may optionally be deposited by conventional methods over dielectric layer 420, followed by deposition of another dielectric layer (not shown), similar to conventional dual inlaid schemes wherein the middle stop layer demarks the regions defined by the trench, formed in an upper dielectric, and the via, formed in a lower dielectric. In the present aspect of the invention, however, a hard mask layer 430 is deposited on dielectric layer 420.
This hard mask layer may be of silicon nitride (SiN) or silicon oxynitride (SiON), for example. In a preferred aspect, a SiON hard mask layer is used, as SiON is relatively antireflective to light, providing a low reflectivity and greater photolithographic contrast, as compared to other hard mask materials.

A resist 440, such as a photoresist, is then deposited over the hard mask layer 430 and patterned by conventional lithographic techniques, which include, for example, optical lithography (including, for example, I-line and deep-UV), X-ray, and E-beam lithography. The pattern formed in the resist 440 contains the features that are to be etched into the hard mask layer 430. The pattern is subsequently etched to expose underlying portions of hard mask 430. The exposed portions of the hard mask 430 are then etched using a selective anisotropic dry etch to form a trench pattern 445 in hard mask layer 430, as illustrated in FIG. 4C. Unlike conventional dual inlaid processes, this trench pattern comprises oversized trench portions 450. The oversized trench portions 450, which are merely openings in hard mask layer 430 at this stage of the process, overlie or are adjacent predetermined via locations 460. In other words, at these predetermined via locations 460, the width WT of the trench 455 is increased to an increased width WOT to compensate for overlay errors, as described below.

These oversized trench portions 450 include, in areas of the trench 455 overlying or adjacent predetermined via locations 460, an overlay budget 500. The overlay budget 500 itself varies in accord with the device geometry and tooling in a manner known to those skilled in the art. In a preferred aspect of the invention, an overlay budget 500 of between about 30 and 100 nanometers per side is applied for 0.13 micron technology using an ASML (of Tempe, Ariz.) stepper. In another aspect of the invention, the overlay budget is between 40 and 70 nanometers per side.
Generally, about 40-60% of a total overlay budget, calculated in a manner known to those skilled in the art for a specified tool and device geometry, is applied to each side of the trench 455 adjacent a predetermined via location 460 to form an oversized trench portion 450 pattern, as illustrated in FIG. 4C. For example, if the calculated or total overlay budget for a 0.13 micron technology is 100 nanometers, 40-60 nanometers is applied to each side of trench 455. Thus, if the trench 455 width WT is approximately 180 nanometers and 50 nanometers is applied to each side of the trench 455, the width WOT of the oversized trench portion 450 is about 280 nanometers.

In addition to application of the overlay budget 500 substantially equally to both sides of the trench 455 across a width of the trench to form oversized trench portion 450, as shown in FIG. 4C, the overlay budget 500 may be partially or totally biased toward one side of the trench 455 along a length of the trench 455 adjacent a predetermined via location 460. Thus, using the above example, a 25 nanometer overlay budget could be applied to one side of the trench and a 75 nanometer overlay budget could be applied to the other side of the trench. Other aspects of the invention include selective application of the overlay budget 500 to only one side of the oversized trench portion 450 pattern along a width or length of the trench 455. Additionally, the overlay budget 500 does not have to be applied equally to the length or width of oversized trench portion 450, and the overlay budget 500 may be applied asymmetrically.

FIG. 4D shows the device structure after resist 440 is stripped away, such as by an oxygen plasma etch or by other conventional methods such as solvents, and another resist 510 is deposited. A via pattern 465, represented by the dashed lines centered about center line C2, is then formed in resist 510 by conventional lithographic techniques, noted above.
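The overlay-budget arithmetic described above is simple enough to express directly. The following is a minimal illustrative sketch, not part of the disclosed process; the function name and the symmetric/asymmetric framing are my own, while the example values are taken from the text.

```python
def oversized_trench_width(trench_width_nm, left_budget_nm, right_budget_nm):
    """Widen the trench at a predetermined via location by applying an
    overlay budget to each side; the two sides need not be equal."""
    return trench_width_nm + left_budget_nm + right_budget_nm

# Symmetric example from the text: a 180 nm trench (WT) with 50 nm
# applied per side yields a 280 nm oversized trench portion (WOT).
print(oversized_trench_width(180, 50, 50))   # 280

# Asymmetric example from the text: the same 100 nm total budget,
# biased 25 nm / 75 nm, gives the same oversized width.
print(oversized_trench_width(180, 25, 75))   # 280
```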
Although this via pattern is desired to be formed in a predetermined via location 460, represented by the dashed lines centered about center line C1, the actual via pattern 465 may be inadvertently displaced from predetermined via location 460 by an amount δ due to overlay error. As shown in FIGS. 4D and 4E, displacement amount δ is the distance from a center line C1 of predetermined via location 460 to a center line C2 of the actual via pattern 465.

A via 520 is then etched, as shown in FIG. 4F, through via pattern 465 and dielectric layer 420 until etch stop layer 410 is reached. In accord with the invention, due to the oversized trench portion 450 pattern etched through hard mask layer 430, the displaced via 520 is still positioned entirely within the trench 455, as shown in the top-down view of FIG. 4E, in contrast to conventional methods wherein the trench side wall would intersect via 520, reducing the size of the via or requiring additional processing steps to address the deficiency. Additionally, in accord with this aspect of the invention, since the via pattern 465 is formed in a resist 510 deposited over a small step having a height equal to the thickness of hard mask layer 430, photolithographic pattern fidelity is improved over conventional methods, which form the via pattern in a resist deposited over a large step having a height equal to the height of the trench.

In FIG. 4G, a trench 600 and one or more oversized trench portions 650 corresponding to the oversized opening in the hard mask 430 are formed by a selective anisotropic etch of dielectric layer 420 through corresponding openings in hard mask 430. In a preferred aspect of the invention, the depth of the trench 600 and oversized trench portions 650 is controlled by closely timing the trench etch. Other methods of etch control to control a trench depth are also contemplated as being within the scope of the invention.
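The geometric point made above, that a via displaced by δ remains inside the trench only because the trench was widened by the overlay budget, can be sketched as a one-dimensional check across the trench width. This is an illustrative simplification (the function and its parameters are my own, not part of the disclosure), assuming symmetric application of the budget:

```python
def via_within_trench(via_width_nm, displacement_nm, trench_width_nm, budget_per_side_nm):
    """True if a via whose center line is displaced by `displacement_nm`
    from the trench center line still lies entirely within the (possibly
    oversized) trench opening, measured across the trench width."""
    half_opening = trench_width_nm / 2 + budget_per_side_nm
    return abs(displacement_nm) + via_width_nm / 2 <= half_opening

# A 140 nm via displaced 50 nm in a 180 nm trench: it fits only when
# the 50 nm-per-side overlay budget widens the opening to 280 nm.
print(via_within_trench(140, 50, 180, 50))  # True
print(via_within_trench(140, 50, 180, 0))   # False
```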
As noted above, for example, a middle stop layer (not numbered or shown) may be deposited between a first via dielectric layer and a second trench dielectric layer to serve as an etch stop. An anisotropic etch is then performed to remove the exposed portion of etch stop layer 410 and expose the underlying metallization layer 400. For a SiN etch stop layer 410, a CHF3+Ar+N2 plasma may be used, although many other gases, etching methods, and combinations of gases may be used in accord with the process parameters and the particular etch stop layer material selected. Resist 510 may be removed prior to or subsequent to the trench etch or etch stop etch, such as by an oxygen plasma etch.

FIG. 4H shows deposition of an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, in via 520, trench 600, and oversized trench portions 650 and over the hard mask layer 430. The combination of the adhesion and barrier material is collectively referred to as a diffusion barrier layer 675, which acts to prevent diffusion of conductive material deposited in via 520, trench 600, and oversized trench portion 650 into the dielectric layer 420. A layer of a conductive material 700, for example, Cu or a Cu-based alloy, is then deposited over the diffusion barrier layer 675. A typical process initially involves depositing a "seed" layer on the diffusion barrier layer 675, followed by conventional plating techniques, such as electroless plating or electroplating, to fill the via 520, trench 600, and oversized trench portion 650. The resulting structure is then planarized, as necessary, to remove any overburden using a conventional CMP process employing a slurry, such as an alumina (Al2O3)-based slurry. The resulting structure is shown in FIG.
4I, which depicts a conductive element 750 in the via 520 and oversized trench portion 650.

Thus, the above-described aspects of the present invention provide an effective and simplified solution to some of the problems associated with overlay error between successive layers in semiconductor integrated circuits by balancing and minimizing lithographic and etching disadvantages characteristic of conventional dual inlaid processes.

The present invention can be practiced by employing conventional materials, methodology, and equipment. Accordingly, the details of such materials, equipment, and methodology are not set forth herein in detail. In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, it should be recognized that the present invention can be practiced without resorting to the details specifically set forth. In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure the present invention.

Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein.
A method and apparatus for mapping graphics data of a texture map into virtual two-dimensional (2D) memory arrays implemented in a one-dimensional memory space. The texture map is partitioned into 2^(u+v) two-dimensional arrays having dimensions of 2^m bytes × 2^n rows. The graphics data is then mapped from a respective two-dimensional array into the one-dimensional memory space by calculating an offset value based on the coordinates of a respective texel of the texture map and subsequently reordering the offset value to produce a memory address value. The order of the offset value is, from least to most significant bits, a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits. The offset value is reordered to, from least to most significant bits, the first, third, second, and fourth groups. The resulting value produces a memory address for the one-dimensional memory space.
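The bit-group reordering described in the abstract can be sketched as follows. This is a minimal illustration of the mapping rather than production code; the function name and parameter order are my own, and the values m=4, u=3, n=5, v=12 used in the example are one parameter set recited in the claims.

```python
def offset_to_address(offset, m, u, n, v):
    """Reorder an offset whose bit layout, LSB to MSB, is
    [m bits | u bits | n bits | v bits] into an address laid out
    [m bits | n bits | u bits | v bits], per the abstract."""
    g1 = offset & ((1 << m) - 1)                   # m bits: byte within a row
    g2 = (offset >> m) & ((1 << u) - 1)            # u bits: which array column
    g3 = (offset >> (m + u)) & ((1 << n) - 1)      # n bits: row within an array
    g4 = (offset >> (m + u + n)) & ((1 << v) - 1)  # v bits: which array row
    # Swap the u-bit and n-bit groups to produce the memory address.
    return g1 | (g3 << m) | (g2 << (m + n)) | (g4 << (m + n + u))

# With m=4, u=3, n=5, v=12: the low u bit (bit 4) moves up past the
# n-bit group to bit 9 of the address.
print(offset_to_address(1 << 4, 4, 3, 5, 12))  # 512
```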
I claim: 1. A method of determining an address for graphics data of texels arranged according to a coordinate system having at least first and second coordinates, the method comprising: calculating a first value from the first and second coordinates of a respective texel, the first value represented by characters arranged in a first format; and arranging the characters of the first value into a memory address format determined from a stride of the graphics data, the graphics memory being m bytes in length and the first value=(first coordinate*m)+(second coordinate*the stride of the graphics data)+a base address. 2. The method according to claim 1 wherein m=4. 3. The method according to claim 1 wherein the stride of the graphics data is 128 bytes, and the memory address format into which the first value is arranged comprises: characters[23:12], characters[6:4], characters[11:7], characters[3:0]. 4. The method according to claim 1 wherein the stride of the graphics data is 256 bytes, and the memory address format into which the first value is arranged comprises: characters[23:13], characters[7:4], characters[12:8], characters[3:0]. 5. The method according to claim 1 wherein the stride of the graphics data is 512 bytes, and the memory address format into which the first value is arranged comprises: characters[23:14], characters[8:4], characters[13:9], characters[3:0]. 6. The method according to claim 1 wherein the stride of the graphics data is 1024 bytes, and the memory address format into which the first value is arranged comprises: characters[23:15], characters[9:4], characters[14:10], characters[3:0]. 7. The method according to claim 1 wherein the stride of the graphics data is 2048 bytes, and the memory address format into which the first value is arranged comprises: characters[23:16], characters[10:4], characters[15:11], characters[3:0]. 8.
The method according to claim 1 wherein the stride of the graphics data is 8192 bytes, and the memory address format into which the first value is arranged comprises: characters[23:18], characters[12:4], characters[17:13], characters[3:0]. 10. A method of addressing graphics data of texels arranged in a first coordinate space, the method comprising: partitioning the graphics data of the texels into 2^(u+v) two-dimensional arrays, each array having dimensions of 2^m bytes*2^n rows; calculating a first value based on coordinates of a respective texel in the first coordinate space, the first value represented by a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, arranged so that the first group represents the least significant bits and the fourth group represents the most significant bits; and reordering the groups of the first value to produce a respective memory address, wherein the order of the groups for the respective memory address is first, third, second, and fourth groups, from least to most significant bit. 11. The method of claim 10 wherein each texel is represented by Q bytes and the first value=(a first coordinate*Q)+(a second coordinate*2^(u+m))+a base address. 12. The method of claim 11 wherein Q=4. 13. The method of claim 11 wherein Q=2. 14. The method of claim 10 wherein (v+u+n+m)=24. 15. The method of claim 14 wherein m=4 and n=5. 16. The method of claim 15 wherein u=3 and v=12. 17. The method of claim 15 wherein u=4 and v=11. 18. The method of claim 15 wherein u=5 and v=10. 19. The method of claim 15 wherein u=6 and v=9. 20. The method of claim 15 wherein u=7 and v=8. 21. The method of claim 15 wherein u=8 and v=7. 22. The method of claim 15 wherein u=9 and v=6. 23.
A method of addressing graphics data of texels arranged in a first coordinate space, the graphics data used for calculating color values of pixels and stored in a memory partitioned into memory pages, the method comprising: calculating a first value based on coordinates of a respective texel in the first coordinate space, the first value represented by a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, arranged so that the first group represents the least significant bits and the fourth group represents the most significant bits; and reordering the groups of the first value into a memory address having the groups arranged in an order of first, third, second, and fourth groups, from least to most significant bit, to store graphics data that would be on separate memory pages according to the first coordinate space into an arrangement in the memory where the graphics data used to calculate a color value of a pixel are on the same memory page. 24. The method of claim 23 wherein each texel is represented by Q bytes and the first value=(a first coordinate*Q)+(a second coordinate*2^(u+m))+a base address. 25. The method of claim 24 wherein Q=4. 26. The method of claim 24 wherein Q=2. 27. The method of claim 23 wherein (v+u+n+m)=24. 28. The method of claim 27 wherein m=4 and n=5. 29. The method of claim 28 wherein u=3 and v=12. 30. The method of claim 28 wherein u=4 and v=11. 31. The method of claim 28 wherein u=5 and v=10. 32. The method of claim 28 wherein u=6 and v=9. 33. The method of claim 28 wherein u=7 and v=8. 34. The method of claim 28 wherein u=8 and v=7. 35. The method of claim 28 wherein u=9 and v=6. 36.
The method of claim 23 wherein each texel has a first and second coordinate in the first coordinate space, and reordering the first value comprises: calculating a first value from the first and second coordinates of a respective texel, the first value represented by characters arranged in a first format; and arranging the characters of the first value into a memory address format determined from a stride of the graphics data. 37. The method according to claim 36 wherein the graphics memory is m bytes in length. 38. The method according to claim 37 wherein the first value=(first coordinate*m)+(second coordinate*the stride of the graphics data)+a base address. 39. The method according to claim 37 wherein m=4. 40. An apparatus for addressing graphics data of texels arranged in a first coordinate space, each texel represented by Q bytes of graphics data, the apparatus comprising: an address generator to generate a first value for a respective texel from its coordinates in the first coordinate space, the first value=(a first coordinate*Q)+(a second coordinate*2^(u+m))+a base address; and a mapping circuit coupled to the address generator to partition the graphics data into 2^(u+v) two-dimensional arrays and to map the graphics data from the two-dimensional arrays into a one-dimensional memory space, each two-dimensional array having dimensions of 2^m bytes*2^n rows. 41. The apparatus of claim 40 wherein Q=4. 42. The apparatus of claim 40 wherein Q=2. 43. The apparatus of claim 40 wherein the mapping circuit is adapted to map the graphics data by reordering the characters of the first value from the first format to a second format to produce a memory address. 44.
The apparatus of claim 43 wherein the characters of the first value are bits, the first format comprises bits arranged, from least to most significant, in a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, and the second format comprises bits arranged so that the group order is, from least to most significant, the first group, the third group, the second group, and the fourth group. 45. The apparatus of claim 44 wherein (v+u+n+m)=24. 46. The apparatus of claim 45 wherein m=4 and n=5. 47. The apparatus of claim 46 wherein u=3 and v=12. 48. The apparatus of claim 46 wherein u=4 and v=11. 49. The apparatus of claim 46 wherein u=5 and v=10. 50. The apparatus of claim 46 wherein u=6 and v=9. 51. The apparatus of claim 46 wherein u=7 and v=8. 52. The apparatus of claim 46 wherein u=8 and v=7. 53. The apparatus of claim 46 wherein u=9 and v=6. 54. A computer system, comprising: a central processing unit (CPU); a system memory coupled to the CPU to store graphics data of texels arranged in a first coordinate space, each texel represented by Q bytes of graphics data, the system memory having a one-dimensional memory space; a bus coupled to the CPU; a graphics processor coupled to the bus to process the graphics data; and an apparatus for addressing the graphics data, comprising: an address generator to generate a first value for a respective texel from its coordinates in the first coordinate space, the first value=(a first coordinate*Q)+(a second coordinate*2^(u+m))+a base address; and a mapping circuit coupled to the address generator to partition the graphics data into 2^(u+v) two-dimensional arrays and to map the graphics data from the two-dimensional arrays into the one-dimensional memory space, each two-dimensional array having dimensions of 2^m bytes*2^n rows. 55. The computer system of claim 54 wherein Q=4. 56. The computer system of claim 54 wherein Q=2. 57.
The computer system of claim 54 wherein the mapping circuit is adapted to map the graphics data by reordering the characters of the first value from the first format to a second format to produce a memory address. 58. The computer system of claim 57 wherein the characters of the first value are bits, the first format comprises bits arranged, from least to most significant, in a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, and the second format comprises bits arranged so that the group order is, from least to most significant, the first group, the third group, the second group, and the fourth group. 59. The computer system of claim 58 wherein (v+u+n+m)=24. 60. The computer system of claim 59 wherein m=4 and n=5. 61. The computer system of claim 60 wherein u=3 and v=12. 62. The computer system of claim 60 wherein u=4 and v=11. 63. The computer system of claim 60 wherein u=5 and v=10. 64. The computer system of claim 60 wherein u=6 and v=9. 65. The computer system of claim 60 wherein u=7 and v=8. 66. The computer system of claim 60 wherein u=8 and v=7. 67. The computer system of claim 60 wherein u=9 and v=6. 68.
An apparatus for addressing graphics data of texels arranged in a first coordinate space, the apparatus comprising: an address generator to generate a first value for a respective texel from its coordinates in the first coordinate space, the first value having a first format; and a mapping circuit coupled to the address generator to partition the graphics data into 2^(u+v) two-dimensional arrays and to map the graphics data from the two-dimensional arrays into a one-dimensional memory space by reordering bits of the first value from the first format to a second format to produce a memory address, each two-dimensional array having dimensions of 2^m bytes*2^n rows, the first format having the bits arranged, from least to most significant, in a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, and the second format having the bits arranged so that the group order is, from least to most significant, the first group, the third group, the second group, and the fourth group. 69. The apparatus of claim 68 wherein (v+u+n+m)=24. 70. The apparatus of claim 69 wherein m=4 and n=5. 71. The apparatus of claim 70 wherein u=3 and v=12. 72. The apparatus of claim 70 wherein u=4 and v=11. 73. The apparatus of claim 70 wherein u=5 and v=10. 74. The apparatus of claim 70 wherein u=6 and v=9. 75. The apparatus of claim 70 wherein u=7 and v=8. 76. The apparatus of claim 70 wherein u=8 and v=7. 77. The apparatus of claim 70 wherein u=9 and v=6. 78. The apparatus of claim 68 wherein a texel is represented by Q bytes of graphics data, and the first value=(a first coordinate*Q)+(a second coordinate*2^(u+m))+a base address. 79. The apparatus of claim 78 wherein Q=4. 80. The apparatus of claim 78 wherein Q=2. 81.
A computer system, comprising: a central processing unit (CPU); a system memory coupled to the CPU to store graphics data of texels arranged in a first coordinate space, the system memory having a one-dimensional memory space; a bus coupled to the CPU; a graphics processor coupled to the bus to process the graphics data; and an apparatus for addressing the graphics data, comprising: an address generator to generate a first value for a respective texel from its coordinates in the first coordinate space, the first value having a first format; and a mapping circuit coupled to the address generator to partition the graphics data into 2^(u+v) two-dimensional arrays and to map the graphics data from the two-dimensional arrays into the one-dimensional memory space by reordering bits of the first value from the first format to a second format to produce a memory address, each two-dimensional array having dimensions of 2^m bytes*2^n rows, the first format having the bits arranged, from least to most significant, in a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, and the second format having the bits arranged so that the group order is, from least to most significant, the first group, the third group, the second group, and the fourth group. 82. The computer system of claim 81 wherein (v+u+n+m)=24. 83. The computer system of claim 82 wherein m=4 and n=5. 84. The computer system of claim 83 wherein u=3 and v=12. 85. The computer system of claim 83 wherein u=4 and v=11. 86. The computer system of claim 83 wherein u=5 and v=10. 87. The computer system of claim 83 wherein u=6 and v=9. 88. The computer system of claim 83 wherein u=7 and v=8. 89. The computer system of claim 83 wherein u=8 and v=7. 90. The computer system of claim 83 wherein u=9 and v=6. 91.
The computer system of claim 81 wherein a texel is represented by Q bytes of graphics data, and the first value = (a first coordinate * Q) + (a second coordinate * 2^(u+m)) + a base address. 92. The computer system of claim 91 wherein Q=4. 93. The computer system of claim 91 wherein Q=2.
TECHNICAL FIELD

The present invention is related generally to the field of computer graphics, and more particularly, to addressing graphics data, such as texture data, in a computer graphics processing system.

BACKGROUND OF THE INVENTION

A graphics accelerator is a specialized graphics processing subsystem for a computer system that relieves a host processor from performing all the graphics processing involved in rendering a graphics image on a display device. The host processor of the computer system executes an application program that generates geometry information used to define graphics elements on the display device. The graphics elements that are displayed are typically modeled from polygon graphics primitives. For example, a triangle is a commonly used polygon for rendering three-dimensional objects on the display device. Setup calculations are initially performed by the host processor to define the triangle primitives. The application program then transfers the geometry information from the processor to the graphics processing system so that the triangles may be modified by adding shading, hazing, or other features before being displayed. The graphics processing system, as opposed to the processor, has the task of rendering the corresponding graphics elements on the display device to allow the processor to handle other system requests. Some polygon graphics primitives also include specifications to map texture data, representative of graphic images, within the polygons. Texture mapping refers to techniques for adding surface detail, or a texture map, to areas or surfaces of the polygons displayed on the display device. A typical texture map is represented in a computer memory as a bitmap or other raster-based encoded format, and includes point elements, or "texels," which reside in an (s, t) texture coordinate space. 
The graphics data representing the texels of a texture map are stored in a memory of the computer system and used to generate the color values of point elements, or "pixels" of the display device which reside in an (x, y) display coordinate space. As illustrated in FIG. 1, the memory in which the texture data is stored is typically implemented using a one-dimensional memory space partitioned into several memory pages. The memory is allocated by addressing the texture data in a sequential fashion. That is, the resulting physical memory address for the texture data of a particular texel is an offset value that corresponds to the first byte of the texture data for the particular texel.Generally, the process of texture mapping occurs by accessing the texels from the memory that stores the texture data, and transferring the texture data to predetermined points of the graphics primitive being texture mapped. The (s, t) coordinates for the individual texels are calculated and then converted to physical memory addresses. The texture map data are read out of memory and applied within the respective polygon in particular fashions depending on the placement and perspective of their associated polygon. The process of texture mapping operates by applying color or visual attributes of texels of the (s, t) texture map to corresponding pixels of the graphics primitive on the display. Thus, color values for pixels in (x, y) display coordinate space are determined based on sampled texture map values. Where the original graphics primitives are three dimensional, texture mapping often involves maintaining certain perspective attributes with respect to the surface detail added to the graphics primitive. After texture mapping, a version of the texture image is visible on surfaces of the graphics primitive with the proper perspective.The color value for a pixel is usually determined by using a method of bilinear filtering. 
In bilinear filtering, the color values of the four texels closest to the respective location of the pixel are weighted and a resulting color value for the pixel is interpolated therefrom. For example, illustrated in FIG. 1 is a portion of four rows of texels from a texture map 10. The color value for pixel Pa,a is determined from the color value of texels C0,0, C0,1, C1,0, and C1,1. Similarly, the color value for pixel Pb,b is determined from the color value of texels C0,1, C0,2, C1,1, and C1,2.As illustrated by FIG. 1, using bilinear filtering to determine the color value of pixels in an (x, y) display coordinate space requires texture data from two different rows of texels. Where a memory paging scheme is employed for the memory in which the texture data is stored, it is often the case that the memory pages are large enough to contain data for only one row of the texture map. Consequently, when retrieving the texture data to calculate the color of a pixel, an average of two page misses will occur because two different memory pages must be accessed to retrieve the required texture data. Page misses result in inefficient data access of the texture data.A conventional approach to the problem of multiple page misses is dividing the memory space in which the texture data is stored into several two-dimensional (2D) segments. As illustrated in FIG. 2, although the width of the texture map is divided into several 2D segments, the texture data for texels of several adjacent rows may be stored on a common memory page. Thus, the number of page misses occurring during texture application is reduced. However, allocating memory in a 2D manner is difficult to accomplish, and results in inefficient use of available memory. For example, memory fragmentation issues are exacerbated when attempting to allocate memory in 2D segments. 
Furthermore, it is generally difficult to find memory allocation algorithms that can optimally allocate memory in a 2D manner. Therefore, there is a need for a system and method for allocating memory in a one-dimensional memory space that results in fewer page misses than conventional approaches.

SUMMARY OF THE INVENTION

A method and apparatus are described for mapping graphics data of a texture map into virtual two-dimensional (2D) memory arrays implemented in a one-dimensional memory space. The texture map is partitioned into 2^(u+v) two-dimensional arrays, where each of the arrays has dimensions of 2^m bytes * 2^n rows, and the graphics data is then mapped from a respective two-dimensional array into the one-dimensional memory space. Mapping occurs by calculating an offset value based on the coordinates of a respective texel of the texture map. The offset value is represented by a first group of m bits, a second group of u bits, a third group of n bits, and a fourth group of v bits, arranged so that the first group represents the least significant bits and the fourth group represents the most significant bits. The groups of the offset value are then reordered to produce a respective memory address for the one-dimensional memory space. The order of the groups for the respective memory address is first, third, second, and fourth, from least to most significant bits.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a portion of a texture map and a memory space allocated in a conventional one-dimensional manner.
FIG. 2 is a block diagram of a memory space allocated in a conventional two-dimensional manner.
FIG. 3 is a block diagram of a computer system in which embodiments of the present invention are implemented.
FIG. 4 is a block diagram of a graphics processing system in the computer system of FIG. 3.
FIG. 5 illustrates a portion of a texture map.
FIG. 
6 is a block diagram of a memory space allocated according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the present invention map texels from a texture map coordinate space into virtual two-dimensional (2D) memory arrays implemented in a one-dimensional memory space. Where texels of two different rows of the texture map are required for bilinear filtering, the arrangement of the texels in the virtual 2D memory arrays facilitates texel data processing and minimizes the occurrences of page misses encountered with conventional addressing schemes. FIG. 3 illustrates a computer system 18 in which embodiments of the present invention are implemented. The computer system 18 includes a processor 20 coupled to a host memory 22 through an addressing unit 23, which translates texture map coordinates into physical memory addresses, and through a memory/bus interface 24. A memory paging scheme is implemented in the host memory 22 by partitioning the memory space into memory pages. The memory/bus interface 24 is coupled to an expansion bus 26, such as an industry standard architecture (ISA) bus or a peripheral component interconnect (PCI) bus. The computer system 18 also includes one or more input devices 28, such as a keypad or a mouse, coupled to the processor 20 through the expansion bus 26 and the memory/bus interface 24. The input devices 28 allow an operator or an electronic device to input data to the computer system 18. One or more output devices 30 are coupled to the processor 20 to provide output data generated by the processor 20. The output devices 30 are coupled to the processor 20 through the expansion bus 26 and memory/bus interface 24. Examples of output devices 30 include printers and a sound card driving audio speakers. 
One or more data storage devices 32 are coupled to the processor 20 through the memory/bus interface 24 and the expansion bus 26 to store data in or retrieve data from storage media (not shown). Examples of storage devices 32 and storage media include fixed disk drives, floppy disk drives, tape cassettes, and compact-disk read-only memory drives. The computer system 18 further includes a graphics processing system 40 coupled to the processor 20 through the expansion bus 26 and memory/bus interface 24. Optionally, the graphics processing system 40 may be coupled to the processor 20 and the host memory 22 through other architectures. For example, the graphics processing system 40 may be coupled through the memory/bus interface 24 and a high speed bus 44, such as an accelerated graphics port (AGP), to provide the graphics processing system 40 with direct memory access (DMA) to the host memory 22. That is, the high speed bus 44 and memory/bus interface 24 allow the graphics processing system 40 to read from and write to the host memory 22 without the intervention of the processor 20. Thus, data may be transferred to, and from, the host memory 22 at transfer rates much greater than over the expansion bus 26. A display 46 is coupled to the graphics processing system 40 to display graphics images. The display 46 may be any type, such as a cathode ray tube (CRT), a field emission display (FED), a liquid crystal display (LCD), or the like, which are commonly used for desktop computers, portable computers, and workstation or server applications. FIG. 4 illustrates circuitry included within the graphics processing system 40, including circuitry for performing various three-dimensional (3D) graphics functions. As shown in FIG. 4, a bus interface 60 couples the graphics processing system 40 to the expansion bus 26. 
Where the graphics processing system 40 is coupled to the processor 20 and the host memory 22 through the high speed data bus 44 and the memory/bus interface 24, the bus interface 60 will include a DMA controller (not shown) to coordinate transfers of data to and from the host memory 22 and the processor 20. A graphics processor 70 is coupled to the bus interface 60 and is designed to perform various graphics and video processing functions, such as, but not limited to, generating vertex data and performing vertex transformations for polygon graphics primitives that are used to model 3D objects. In a preferred embodiment, the graphics processor 70 is a reduced instruction set computing (RISC) processor. The graphics processor 70 further includes circuitry for performing various graphics functions, such as clipping, attribute transformations, rendering of graphics primitives, and generating texture coordinates from a texture map. An address generator 74 receives the texture map coordinates from the graphics processor 70 and translates them into the memory addresses where the texture data for the texels are stored. The memory addresses are provided to a pixel engine 78. The pixel engine 78 contains circuitry for performing various graphics functions, such as, but not limited to, texture application or mapping, bilinear filtering, fog, blending, and color space conversion. A memory controller 80 coupled to the pixel engine 78 and the graphics processor 70 handles memory requests to and from the host memory 22 and a local memory 84. The local memory 84 stores both source pixel color values and destination pixel color values. Destination color values are stored in a frame buffer (not shown) within the local memory 84. In a preferred embodiment, the local memory 84 is implemented using random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). 
A display controller 88 coupled to the local memory 84 and to a first-in first-out (FIFO) buffer 90 controls the transfer of destination color values stored in the frame buffer to the FIFO 90. Destination values stored in the FIFO 90 are provided to a digital-to-analog converter (DAC) 92, which outputs red, green, and blue analog color signals to the display 46 (FIG. 3). In operation, a graphics application executing on the processor 20 (FIG. 3) writes graphics data, such as a texture map, from the data storage device 32 to the host memory 22 or the local memory 84 in preparation for texture application by the pixel engine 78. As will be explained in greater detail below, texture data of the texture map is written into virtual 2D memory arrays that are implemented in the host memory 22 and the local memory 84. Calculation of a physical memory address that arranges the texture data into the virtual 2D memory arrays is accomplished by the addressing unit 23. An offset value is calculated for each texel based on its respective texture coordinates (s, t), and is subsequently reordered into a format that produces the physical memory address mapping the texture data into the virtual 2D arrays. As a result, a one-dimensional memory space may be used for the host memory 22 and the local memory 84. When texture application begins, the graphics application executing on the processor 20 communicates with the graphics processor 70 and provides it with information that will be used by the graphics processor 70 to determine the coordinates of the texels needed for texture application. These texture coordinates are provided to the address generator 74 for translation into the memory addresses of the requested texels. The address generator 74 determines the memory address of the requested texels in a manner similar to that used by the addressing unit 23 when writing the texture data into the host memory 22 or the local memory 84. 
That is, an offset value is calculated for each texel based on its texture map coordinates, and the resulting offset value is reordered to produce the physical memory address at which the texture data is stored. The memory addresses generated by the address generator 74 are then provided to the pixel engine 78. The pixel engine 78 uses the physical memory addresses to request the texture data from either the host memory 22 or the local memory 84. The requested texture data is eventually provided to the pixel engine 78 for texture application. As mentioned previously, the process of translating the texture coordinates into a memory address that maps the corresponding texture data into virtual 2D memory arrays is accomplished by calculating an offset value for a requested texel and then reordering the resulting offset value to produce a physical memory address. The offset value for a texel is based on its texture coordinates (s, t) and the size of the texture map. The offset value for a texel in a texture map having two coordinate axes (s, t) is calculated from the following equation: offset value = (s * bytes/texel) + (t * stride). The stride is the width of a texture map in bytes. A typical texture map is 256*256 texels. Assuming a 32-bit texture, that is, 4 bytes/texel, the resulting stride is 1,024 bytes. For a 16-bit texture, which has 2 bytes/texel, the resulting stride is 512 bytes. The resulting offset value is essentially a byte index value for a particular texel in a texture map. In the case where more than one texture map is stored in memory concurrently, a base value is added to the offset value in order to index the correct texture map. The bits of the calculated offset value are reordered to produce a physical memory address that maps the texels of a texture map into virtual 2D memory arrays. 
That is, the reordered offset value produces a physical address for a one-dimensional memory space, but the resulting physical address positions the texels in the memory at locations that are, relatively speaking, similar to being arranged in a 2D memory array. A 256*256 texture map having 32-bit textures will be used for purposes of the present example. Each texel is represented by 32 bits, or 4 bytes/texel. Consequently, the stride of the texture map is 1,024 bytes, or 1 kB. The size of the total texture map is 1,024 bytes/row * 256 rows, or 256 kB. The offset value for the described texture map is: offset value = (s * 4 bytes/texel) + (t * 1,024 bytes/row). For purposes of the present example, the offset values will be represented by 20-bit numbers. However, it will be appreciated that using offset values of different lengths will remain within the scope of the present invention. The notation used to describe the 20 bits of the offset value is "bits[19:0]." The least significant bit, or LSB, is the right-most bit. In one embodiment of the present invention, the bits of the offset values for the texels of a texture map having a 1 kB stride are reordered to map the texture data into virtual 2D memory arrays of 16 bytes * 32 rows. For the 256 kB texture map described above, each of the virtual 2D memory arrays contains texture data for 4 texels/row * 32 rows, or 128 texels. Thus, 64*8, or 512, virtual 2D memory arrays are required to represent the complete 256 kB, 256*256 texture map. To map the texture map into these arrays, the bits of the offset value, that is, bits[19:0], are reordered into the following format to provide a physical memory address: virtual 2D address = bits[19:15]; bits[9:4]; bits[14:10]; bits[3:0]. Bits[3:0] represent the 16-byte width, and bits[14:10] represent the 32 rows of each of the virtual 2D arrays. 
Bits[9:4] represent the sixty-four 2D arrays required to cover the 1,024-byte stride, and bits[17:15] represent the 8 rows of 2D arrays that make up the 256*256 texture map. The resulting reordered offset value can be used as a physical memory address that maps the texture map into virtual 2D memory arrays. Consequently, a one-dimensional memory space may be used. As mentioned previously, a memory paging scheme is implemented in the memories, which are consequently partitioned into memory pages. The resulting physical memory addresses organize the texel data into 2D arrays to reduce the number of page misses that occur when bilinear filtering is applied. Illustrated in FIG. 5 is a portion of a texture map 100 having the stride and bytes/texel described above. That is, the texture map 100 is 256*256 texels and has 32-bit textures. Thus, the resulting stride is 1 kB. Assuming that the memories are partitioned into 1 kB memory pages, only one row of the 256*256 texture map would be stored per memory page using a conventional addressing method. Texture data for the texels C2,2, C2,3, C3,2, and C3,3 will be necessary to calculate the color value for the pixel PC,C. The offset values for each of these texels will be calculated below, and the bits of the resulting values will be reordered into the format previously described. Only 16 bits of each offset value are illustrated in the example below. The bits[19:16], which would be zero for the texels C2,2, C2,3, C3,2, and C3,3, have been omitted to minimize complexity of the explanation. 
The offset values for the texels defined by C2,2, C2,3, C3,2, and C3,3 are as follows:

C2,2 offset = (2 * 4 bytes/texel) + (2 * 1,024 bytes/row) = 2,056 = 0000 1000 0000 1000
reordered C2,2 offset = 0000 0000 0010 1000 = 40

C2,3 offset = (3 * 4 bytes/texel) + (2 * 1,024 bytes/row) = 2,060 = 0000 1000 0000 1100
reordered C2,3 offset = 0000 0000 0010 1100 = 44

C3,2 offset = (2 * 4 bytes/texel) + (3 * 1,024 bytes/row) = 3,080 = 0000 1100 0000 1000
reordered C3,2 offset = 0000 0000 0011 1000 = 56

C3,3 offset = (3 * 4 bytes/texel) + (3 * 1,024 bytes/row) = 3,084 = 0000 1100 0000 1100
reordered C3,3 offset = 0000 0000 0011 1100 = 60

As shown by the example above, if the offset values for each of the texels were used as the physical memory addresses, texels C2,2 and C2,3 and texels C3,2 and C3,3 would be in two different 1 kB memory pages. As a result, two page misses would occur when obtaining the texel data to calculate the color value of the pixel PC,C. However, using the reordered offset values as the physical memory addresses places all four texels in the same 1 kB memory page. As illustrated in FIG. 6, the texture data for texel C2,2 is stored at memory address 40, for texel C2,3 at memory address 44, for texel C3,2 at memory address 56, and for texel C3,3 at memory address 60. Consequently, the texture data for these four texels may be stored in one 1 kB memory page, and no page misses will occur when obtaining the color values for the texels C2,2, C2,3, C3,2, and C3,3. Although 16 byte * 32 row virtual 2D arrays are described above, larger or smaller 2D arrays may also be implemented by reordering the bits of the offset value in a manner similar to that explained above. 
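For illustration, the offset calculation and bit reordering worked through above can be sketched in Python. The function names and the page check are our own illustration, not part of the patent:

```python
def texel_offset(s, t, bytes_per_texel=4, stride=1024):
    """Linear byte offset of texel (s, t): (s * bytes/texel) + (t * stride)."""
    return s * bytes_per_texel + t * stride

def remap(offset):
    """Reorder a 20-bit offset into the 1 kB-stride format
    bits[19:15]; bits[9:4]; bits[14:10]; bits[3:0]."""
    m = offset & 0xF            # bits[3:0]: byte within a 16-byte array row
    u = (offset >> 4) & 0x3F    # bits[9:4]: which array across the stride
    n = (offset >> 10) & 0x1F   # bits[14:10]: row within a 32-row array
    v = offset >> 15            # bits[19:15]: which row of arrays
    # New order, from least to most significant: m, n, u, v.
    return m | (n << 4) | (u << 9) | (v << 15)

# Texels C2,2, C2,3, C3,2, C3,3 (t = row, s = column):
addresses = [remap(texel_offset(s, t)) for t, s in [(2, 2), (2, 3), (3, 2), (3, 3)]]
# addresses == [40, 44, 56, 60]
```

With a 1 kB page size, `offset // 1024` gives the page index: the raw offsets 2,056 through 3,084 straddle pages 2 and 3, while all four remapped addresses land on the same page.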
For example, if 32 byte * 32 row virtual 2D arrays are desired, a possible format for the reordered offset value is: virtual 2D address = bits[19:15]; bits[9:5]; bits[14:10]; bits[4:0]. The 32-byte width is represented by bits[4:0] and the 32-row depth is represented by bits[14:10]. Some considerations for selecting the size of the 2D array are the number of bytes representing each texel, the size of each memory page, and the size of the texture map. Thus, it will be appreciated that the specific values used above are provided by way of example, and that different values may be used and still remain within the scope of the present invention. Similarly, some or all of the principles of the present invention may be used for larger or smaller texture maps than that previously described. That is, the format of the reordered offset value may change depending on the stride of the texture map. For example, the following are suggested formats for reordering the offset values as a function of the stride of the texture map:

Stride: Reordered offset format
128 bytes: bits[19:12]; bits[6:4]; bits[11:7]; bits[3:0]
256 bytes: bits[19:13]; bits[7:4]; bits[12:8]; bits[3:0]
512 bytes: bits[19:14]; bits[8:4]; bits[13:9]; bits[3:0]
1,024 bytes: bits[19:15]; bits[9:4]; bits[14:10]; bits[3:0]
2,048 bytes: bits[19:16]; bits[10:4]; bits[15:11]; bits[3:0]
4,096 bytes: bits[19:17]; bits[11:4]; bits[16:12]; bits[3:0]
8,192 bytes: bits[19:18]; bits[12:4]; bits[17:13]; bits[3:0]

The virtual 2D arrays of the formats provided above use 16 byte * 32 row arrays, as illustrated by the first and second groups of bits from the right-hand side of each format. The third group of bits represents the number of arrays across the width of the stride. 
For example, for a texture map having a 4,096-byte stride, there are 256 (i.e., bits[11:4] provide 8 bits, and 2^8 = 256) virtual 2D memory arrays across the stride of the texture map. The fourth group of bits represents the number of rows of the virtual 2D arrays. For example, where the texture map has a 4,096-byte stride, bits[19:17] define the number of rows of 16 byte * 32 row virtual 2D arrays that are present. The reordered offset formats provided above are merely examples of possible formats. It will be appreciated that other formats may also be used to map the texture data into virtual 2D arrays. From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, the embodiments of the present invention have been described with respect to two-dimensional texture maps where the texels have (s, t) coordinates. However, some or all of the principles of the present invention may be applied to three-dimensional texture maps as well. Accordingly, the invention is not limited except as by the appended claims. 
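As a cross-check of the stride table given earlier, the group boundaries follow directly from m = 4, n = 5, and u + m = log2(stride), with v filling the remaining bits of a 20-bit offset. The helper below is our own sketch, not part of the patented apparatus:

```python
import math

def reorder_format(stride, total_bits=20, m=4, n=5):
    """Bit-group slices, listed as in the stride table:
    fourth (v), second (u), third (n), first (m) groups."""
    u = int(math.log2(stride)) - m   # u + m bits together cover the stride
    return (
        f"bits[{total_bits - 1}:{m + u + n}]",  # v bits: rows of arrays
        f"bits[{m + u - 1}:{m}]",               # u bits: arrays across stride
        f"bits[{m + u + n - 1}:{m + u}]",       # n bits: row within an array
        f"bits[{m - 1}:0]",                     # m bits: byte within a row
    )

# reorder_format(1024) -> ('bits[19:15]', 'bits[9:4]', 'bits[14:10]', 'bits[3:0]')
```

Running the helper over each power-of-two stride from 128 to 8,192 bytes reproduces every row of the table.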
The invention relates to a die-level error recovery scheme. Methods, apparatuses, and systems for error recovery in memory devices are described. A die-level redundancy scheme may be employed in which parity data associated with particular dies may be stored. An example apparatus may include a printed circuit board that has memory devices each disposed on a planar surface of the printed circuit board. Each memory device may include two or more memory dies, a plurality of channels communicatively coupled to the two or more memory dies, and a memory controller communicatively coupled to the plurality of channels. The memory controller may deterministically maintain a die-level redundancy scheme via data transmission through the plurality of channels. The memory controller may also generate parity data associated with the two or more memory dies in response to a data write event. 
1. A device, comprising:
a printed circuit board;
a plurality of memory devices, each of which is disposed on a planar surface of the printed circuit board, wherein each of the plurality of memory devices includes two or more memory dies;
a plurality of channels communicatively coupled to each of the plurality of memory devices, wherein each of the plurality of channels is associated with one of the two or more memory dies of each memory device; and
a memory controller communicatively coupled to the plurality of channels, wherein the memory controller is configured to maintain a die-level redundancy scheme via data transmission through the plurality of channels, and wherein the memory controller is configured to generate parity data in response to a data write event.
2. The device of claim 1, wherein one of the plurality of memory devices includes a memory die to be represented by the parity data and a memory die dedicated to storing the parity data. 3. The device of claim 1, wherein the data write event comprises at least one of a memory read operation, a memory write operation, or an indication or trigger generated in response to the passage of a time period, or any combination thereof. 4. The device of claim 1, wherein the memory controller is configured to generate the parity data based at least in part on data transmitted on one of the channels associated with one of the two or more memory dies. 5. The device of claim 1, wherein the memory controller is configured to write the parity data to one of the two or more memory dies. 6. The device of claim 1, wherein the two or more memory dies include at least one of flash memory, NAND memory, phase change memory, 3D XPoint™ memory, or ferroelectric random access memory, or any combination thereof. 7. The device of claim 1, wherein the memory controller is configured to use an exclusive-OR (XOR) logic operation to determine parity data representing data stored in each of the plurality of memory 
devices. 8. The device of claim 7, wherein the memory controller is configured to:
determine that a first memory die of the two or more memory dies includes defective data;
XOR the parity data with data of a subset of the two or more memory dies excluding the first memory die; and
transmit the XOR result as re-created data corresponding to the first memory die, wherein the re-created data is equal to data stored in the first memory die after the data write event.
9. The device of claim 1, wherein the memory controller is configured to write the parity data on a first memory die of one of the plurality of memory devices, and wherein a second memory die of the one of the plurality of memory devices is configured as a backup device among the plurality of memory devices. 10. A method, comprising:
identifying a data write event at a first memory die of a memory device on a module including a plurality of multi-die packaged memory devices;
in response to identifying the data write event, using an exclusive-OR (XOR) logic operation on data corresponding to the first memory die of the memory device to generate an XOR result;
writing the XOR result as parity data to a second memory die of the memory device;
determining that the first memory die of the memory device is experiencing a data error; and
recovering the data corresponding to the first memory die of the memory device using an inverse of the XOR logic operation on the parity data from the second memory die of the memory device.
11. The method of claim 10, wherein the data write event comprises at least one of a data read event, a data write event, a periodic refresh event, or any combination thereof. 12. The method of claim 10, wherein the XOR logic operation comprises XORing the first memory die and a third memory die to create the XOR result. 13. The method of claim 10, wherein the first memory die is configured as a phase change memory, a 3D XPoint™ memory, or any combination thereof. 14. The method of claim 10, wherein each of the first 
memory die and the second memory die is coupled to a memory controller through one or more channels, the one or more channels being configured to convey data fragments during memory operations. 15. The method of claim 10, wherein the first memory die and the second memory die are disposed on the memory device. 16. A memory module, comprising:
a plurality of memory devices including a first subset of memory dies and a second subset of memory dies;
a plurality of channels communicatively coupled to each of the plurality of memory devices; and
a memory controller communicatively coupled to each of the plurality of channels, wherein the memory controller is configured to:
perform data read/write operations via the plurality of channels;
determine a data change stored in the first subset of the memory dies;
determine parity data indicating data stored in the first subset of the memory dies based at least in part on determining the data change stored in the first subset of the memory dies;
store the parity data in the second subset of the memory dies;
identify a data loss event; and
based at least in part on determining that the data loss event occurred, use the parity data and data stored in the first subset of the memory dies to re-create the lost data.
17. The memory module of claim 16, wherein the memory controller is configured to re-create lost data associated with a corresponding die of the first subset of memory dies via a logical operation that XORs the data stored in the first subset of the memory dies with the parity data. 18. The memory module of claim 16, wherein the plurality of channels include channels communicatively coupled to the memory dies, and wherein the second subset of the memory dies includes a spare memory die and a memory die configured to store the parity data. 19. The memory module of claim 16, wherein each memory die of the first subset of memory dies and the second subset of memory dies includes a 3D XPoint™ memory. 20. The memory module of claim 16, wherein the memory controller is configured to:
compare two XOR results to determine whether the two XOR results are the same; and
when the XOR results are the same, indicate that the data stored in the first subset of the memory dies is correct. 
Die-level error recovery scheme

Technical field

This application relates to a die-level error recovery scheme.

Background

This section is intended to introduce the reader to various aspects of technology that may be related to various aspects of the present invention described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

Generally, a computing system includes a processing circuit (such as one or more processors or other suitable components) and a memory device (such as a chip or integrated circuit). One or more memory devices may be implemented on a memory module, such as a dual in-line memory module (DIMM), to store data accessible by the processing circuit. For example, based on user input to the computing system, the processing circuit may request that the memory module retrieve data corresponding to the user input from its memory devices. In some examples, the retrieved data may include instructions executable by the processing circuit to perform an operation and/or may include data to be used as an input to the operation. In addition, in some cases, the data output from the operation may be stored in memory, for example, to enable subsequent retrieval.

In addition, the data stored in the memory devices may include specific data that is desired to be saved, retained, or re-created in the event of data loss or memory device failure.
Resources dedicated to storing such data may not be available for other uses, and therefore may restrict device operability.

Summary of the invention

In one aspect, the present application provides a device comprising: a printed circuit board; a plurality of memory devices, each of which is disposed on a planar surface of the printed circuit board, wherein each of the plurality of memory devices includes two or more memory dies; a plurality of channels communicatively coupled to each of the plurality of memory devices, wherein each of the plurality of channels is associated with one of the two or more memory dies of each memory device; and a memory controller communicatively coupled to the plurality of channels, wherein the memory controller is configured to maintain a die-level redundancy scheme through data transmission over the plurality of channels, and wherein the memory controller is configured to generate parity data in response to a data write event.

In another aspect, the present application provides a method comprising: identifying a data write event at a first memory die of a memory device on a module including a plurality of multi-die packaged memory devices; in response to identifying the data write event, using an exclusive-or (XOR) logical operation on data corresponding to the first memory die of the memory device to generate an XOR result; writing the XOR result as parity data to a second memory die of the memory device; determining that the first memory die of the memory device is experiencing data errors; and recovering the data corresponding to the first memory die of the memory device using an inverse of the XOR logical operation on the parity data from the second memory die of the memory device.

In another aspect, the present application provides a memory module including: a plurality of memory devices including a first subset of memory dies and a second subset of memory dies; a plurality of channels communicatively coupled to each of the plurality of memory devices; and a memory controller communicatively coupled to each of the plurality of channels, wherein the memory controller is configured to: perform data read/write operations via the plurality of channels; determine changes to data stored in the first subset of memory dies; determine, based at least in part on determining the data changes, parity data indicating the data stored in the first subset of memory dies; store the parity data in the second subset of memory dies; determine that a data loss event occurred; and, based at least in part on determining that the data loss event occurred, re-create the lost data using the parity data and data stored in the first subset of memory dies.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects of the invention can be better understood by reading the following detailed description and referring to the accompanying drawings, in which:

FIG. 1 is a block diagram of a computing system including a client device and one or more remote computing devices according to an embodiment;

FIG. 2 is a block diagram of a memory module that can be implemented in the remote computing device of FIG. 1 according to an embodiment;

FIG. 3 is a block diagram of the memory module of FIG. 2 arranged in a first non-volatile memory arrangement according to an embodiment;

FIG. 4 is a block diagram of the memory module of FIG. 2 arranged in a second non-volatile memory arrangement according to an embodiment;

FIG. 5 is a block diagram of the memory module of FIG. 2 arranged in a third non-volatile memory arrangement according to an embodiment;

FIG. 6 is a flowchart of a process for operating the memory modules of FIGS. 4 to 5 to perform a die-level redundancy operation according to an embodiment.

Detailed description

A memory device may be designated to store the parity data.
Parity data can be stored or backed up in non-volatile memory, or in volatile memory powered by an additional power supply, for example to prevent data loss due to power loss or component defects. In some cases, one memory device may store parity data for restoring the data of an additional memory device, as a way to back up the data of that additional memory device. However, in many cases, backing up an entire memory device may result in over-provisioning of memory and a waste of resources. Therefore, as described herein, a die-level redundancy scheme may be employed in which the parity data is associated with a particular die, rather than with the entire memory device.

Generally, the hardware of a computing system includes processing circuits and memories, which are implemented, for example, using one or more processors and/or one or more memory devices (e.g., chips or integrated circuits). During operation of the computing system, the processing circuit may perform various operations (e.g., tasks) by executing corresponding instructions, for example, determining output data based on user input by performing operations on input data. To facilitate operation of the computing system, data accessible by the processing circuit may be stored in a memory device, such that the memory device stores input data, output data, data indicating executable instructions, or any combination thereof.

In some examples, multiple memory devices may be implemented on a memory module, thereby enabling the memory devices to be communicatively coupled to the processing circuit as a unit. For example, a dual in-line memory module (DIMM) may include a printed circuit board (PCB) and multiple memory devices. The memory module responds to commands from a memory controller communicatively coupled to a client device or a host device via a communication network.
Alternatively, in some cases, a memory controller may be implemented on the host side of the memory-host interface; for example, a processor, microcontroller, or ASIC may include a memory controller. The communication network enables data communication between the client device and the memory devices, and thus the client device can utilize hardware resources accessible through the memory controller. Based at least in part on user input to the client device, the processing circuit of the memory controller may perform one or more operations to facilitate retrieval or transmission of data between the client device and the memory device. The data communicated between the client device and the memory device can be used for a variety of purposes, including (but not limited to) presenting a visualization to a user through a graphical user interface (GUI) at the client device, processing operations, calculations, or the like.

Additionally, in some examples, memory devices may be implemented using different memory types. For example, a memory device may be implemented as volatile memory, such as dynamic random access memory (DRAM) or static random access memory (SRAM). Alternatively, a memory device may be implemented as non-volatile memory, such as flash (e.g., NAND, NOR) memory, phase change memory (e.g., 3D XPoint™), or ferroelectric random access memory (FeRAM). In any case, a memory device typically contains at least one memory die (i.e., an array of memory cells configured on a portion of a semiconductor wafer or "die") to store data bits (e.g., "0" bits or "1" bits) transferred to the memory device via channels (e.g., data channels, communicative couplings), and can be similar in functionality from the point of view of the processing circuit even when implemented using different memory types. However, different memory types can provide different compromises that affect the implementation-related costs of a computing system.
For example, volatile memory may provide faster data transfer (e.g., read and/or write) speeds than non-volatile memory. On the other hand, non-volatile memory provides higher data storage density than volatile memory. Therefore, a combination of non-volatile memory cells and volatile memory cells can be used in a computing system to balance the costs and benefits of each type of memory. In contrast to volatile memory, non-volatile memory cells can also maintain their stored values or data bits in an unpowered state. Therefore, implementing a combination of non-volatile memory cells and volatile memory cells can change the way data redundancy operations are managed in a computing system.

In particular, the data of non-volatile or volatile memory cells may be backed up to non-volatile memory to protect the data of the computing system. In some cases, various redundancy schemes can be used to protect the memory from data loss. Example redundancy schemes include redundant arrays of independent disks applied to DIMMs, DRAM, 3D XPoint™, or any suitable form of memory, in which memory cells are protected from data loss through digital logic verification and/or protection techniques (such as XOR verification and XOR protection). In XOR protection techniques, the data stored in non-volatile memory undergoes an XOR logic operation. The result of the XOR logic operation (commonly referred to as parity data or parity bits) is stored as an XOR result indicating the correct data originally stored across the non-volatile memory. In the case of data loss, the parity data can be used to re-create the data of the defective non-volatile memory as a replacement for the missing or lost data.

A redundancy scheme similar to those described above provides a reliable means of protecting memory from data loss.
Various conditions may cause data loss, including memory failure, power loss (for example, a power loss that prevents the data stored in volatile memory from being refreshed to preserve its data values), or other similar hardware defects. A redundancy scheme similar to the one described above can restore data down to the smallest granularity of data used in the XOR logic operations. Therefore, if a memory device is XORed with other memory devices and the resulting parity data is used for recovery, XOR recovery can recover the data of the entire memory device after a data loss event.

Generally, such a redundancy scheme operates to protect the entire memory device; that is, it is a package-level redundancy scheme that uses the data of the entire memory device without considering smaller, more realistic data granularities. This can lead to over-provisioning, as failure of an entire memory device is uncommon and unlikely. In some cases, this over-provisioning results in the use of a larger memory to store the parity data, and thus may increase the cost of providing data protection. Therefore, implementing a die-level redundancy scheme that protects individual memory dies of a memory device, rather than the entire memory device or channel, may have particular advantages. A die-level redundancy scheme reduces overall over-provisioning while also providing one or more spare memory dies. For the purposes of the present invention, a redundant array of independent 3D XPoint™ memory (RAIX) is used as an example redundancy scheme that can be improved by die-level redundancy operations.

To facilitate an improved RAIX scheme, the present invention provides techniques for implementing and operating memory modules to provide a die-level RAIX scheme (i.e., a die-level redundancy scheme). In particular, the die-level RAIX approach enables memory modules to access increased amounts of spare memory.
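The XOR inverse property that underlies this style of recovery can be sketched in a few lines. The following is a minimal Python illustration only, not part of the claimed scheme; the three fragment values are hypothetical:

```python
from functools import reduce
from operator import xor

# Hypothetical data fragments stored on three memory devices.
fragments = [0b1011, 0b0110, 0b1101]

# Parity is the running XOR of all fragments.
parity = reduce(xor, fragments)

# If the first device is lost, XORing the parity with the surviving
# fragments re-creates the lost fragment (because x ^ x == 0 and x ^ 0 == x).
recovered = reduce(xor, fragments[1:], parity)
assert recovered == fragments[0]
```

The same arithmetic applies at any granularity; the granularity of the fragments that are XORed together fixes the granularity at which data can later be recovered.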
Regardless of the number of memory devices contained on the memory module, the die-level RAIX scheme enables the memory module to back up data stored in individual memory dies. These memory dies receive data from the memory controller through channels, or, in some embodiments, through channels that provide data to multiple individual memory dies located on the same or different memory devices. In this way, a memory die can receive data through a dedicated channel (for example, a 1:1 ratio of channels to memory dies) or through a channel shared with additional memory dies (for example, an M:N ratio of M channels to N memory dies). In this manner, several channels may be assigned to a memory device containing two or more memory dies, and one or more memory dies may be associated with one or more channels. The die-level RAIX scheme is operable to back up the data stored in individual memory dies, corresponding to the data transmitted to the memory dies through the channels, and in this way can reduce over-provisioning and production costs while still providing adequate data protection for the memory module.

According to embodiments described herein, various computing systems may implement a die-level RAIX scheme, including systems with one or more client devices communicatively coupled to one or more remote computing devices. In these systems, certain computing processes are separated from each other to improve the operational efficiency of the computing system. For example, in addition to merely controlling data access (e.g., storage and/or retrieval), a memory processing circuit may be implemented to perform data processing operations, such as data processing operations that would otherwise be performed by a host processing circuit.
For ease of description, die-level RAIX is described below as being implemented in a computing system using these remote computing devices; however, it should be understood that various other embodiments may effectively implement a die-level RAIX scheme. For example, a computing system that does not use a remote computing device, but instead combines the components of a client device with the memory module and processing circuit of the remote computing device, may be employed.

To help illustrate, FIG. 1 depicts an example of a computing system 10 that includes one or more remote computing devices 11. As in the depicted embodiment, the remote computing devices 11 may be communicatively coupled to one or more client devices 12 via a communication network 14. It should be understood that the depicted embodiments are intended to be illustrative only, and not restrictive. For example, in other embodiments, the remote computing devices 11 may be communicatively coupled to a single client device 12 or to more than two client devices 12.

In any case, the communication network 14 may enable data communication between the client devices 12 and the remote computing devices 11. In some embodiments, the client devices 12 may be physically remote (e.g., separated) from the remote computing devices 11, for example, such that the remote computing devices 11 are located in a centralized data center. Therefore, in some embodiments, the communication network 14 may be a wide area network (WAN), such as the Internet. To facilitate communication via the communication network 14, the remote computing devices 11 and the client devices 12 may each include a network interface 16.

In addition to the network interface 16, the client device 12 may include an input device 18 and/or an electronic display 20 to enable a user to interact with the client device 12. For example, the input device 18 may receive user input, and thus may include buttons, a keyboard, a mouse, a trackpad, and/or the like.
Additionally or alternatively, the electronic display 20 may include a touch-sensing component that receives user input by detecting the presence and/or location of an object touching its screen (e.g., the surface of the electronic display 20). In addition to enabling user input, the electronic display 20 may facilitate providing visual representations of information by displaying a graphical user interface (GUI) of an operating system, an application program interface, text, still images, video content, and the like.

As described above, the communication network 14 may enable data communication between the remote computing devices 11 and the one or more client devices 12. In other words, the communication network 14 may enable user input to be communicated from the client device 12 to the remote computing device 11. Additionally or alternatively, the communication network 14 may enable the results of operations performed by the remote computing device 11 based on the user input to be communicated back to the client device 12, such as image data to be displayed on its electronic display 20.

In fact, in some embodiments, the data communication provided by the communication network 14 may be utilized to enable multiple users to share centralized hardware, so that the hardware at the client devices 12 may be reduced. For example, the remote computing devices 11 may provide data storage for a plurality of different client devices 12, thereby enabling a reduction of the data storage (e.g., memory) provided locally on the client devices 12. Additionally or alternatively, the remote computing devices 11 may provide processing for a plurality of different client devices 12, thereby enabling a reduction of the processing capability provided locally at the client devices 12.

Therefore, in addition to the network interface 16, the remote computing device 11 may include a processing circuit 22 and one or more memory modules 24 (e.g., subsystems) communicatively coupled via a data bus 25.
In some embodiments, the processing circuit 22 and/or the memory modules 24 may be implemented across multiple remote computing devices 11, for example such that a first remote computing device 11 includes a portion of the processing circuit 22 and the first memory module 24A, while an M-th remote computing device 11 includes another portion of the processing circuit 22 and the M-th memory module 24M. Additionally or alternatively, the processing circuit 22 and the memory modules 24 may be implemented in a single remote computing device 11.

In any case, the processing circuit 22 may generally execute instructions to perform operations indicated, for example, by user input received from the client device 12. Therefore, the processing circuit 22 may include one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more processor cores, or any combination thereof. In some embodiments, the processing circuit 22 may additionally perform operations based on circuit connections formed (e.g., programmed) in the processing circuit 22. Therefore, in such embodiments, the processing circuit 22 may additionally include one or more application-specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), or both.

In addition, the memory modules 24 may provide data storage accessible by the processing circuit 22. For example, a memory module 24 may store data received from the client device 12, data generated by operations performed by the processing circuit 22, data to be input to operations performed by the processing circuit 22, instructions executable by the processing circuit 22 to perform operations, or any combination thereof. To facilitate the provision of data storage, the memory module 24 may include one or more memory devices 26 (e.g., chips or integrated circuits).
In other words, the memory devices 26 may each be a tangible, non-transitory computer-readable medium that stores data accessible by the processing circuit 22.

Since the hardware of the remote computing device 11 can be utilized by multiple client devices 12, the memory modules 24 can, at least in some cases, store data corresponding to different client devices 12. To facilitate identification of the appropriate data, the data may be grouped and stored as data blocks 28 in some embodiments. Indeed, in some embodiments, the data corresponding to each client device 12 may be stored as a separate data block 28. For example, the memory devices 26 in the first memory module 24A may store a first data block 28A corresponding to the first client device 12A and an N-th data block 28N corresponding to the N-th client device 12N. One or more data blocks 28 may be stored within a memory die of a memory device 26.

In addition, in some embodiments, a data block 28 may correspond to a virtual machine (VM) provided to a client device 12. In other words, as an illustrative example, the remote computing device 11 may provide a first virtual machine to the first client device 12A via the first data block 28A, and provide an N-th virtual machine to the N-th client device 12N via the N-th data block 28N. Therefore, when the first client device 12A receives a user input for the first virtual machine, the first client device 12A may communicate the user input to the remote computing device 11 via the communication network 14.
Based at least in part on the user input, the remote computing device 11 may retrieve the first data block 28A, execute instructions to perform corresponding operations, and communicate the results of the operations back to the first client device 12A via the communication network 14.

Similarly, when the N-th client device 12N receives a user input for the N-th virtual machine, the N-th client device 12N may communicate the user input to the remote computing device 11 via the communication network 14. Based at least in part on the user input, the remote computing device 11 may retrieve the N-th data block 28N, execute instructions to perform corresponding operations, and communicate the results of the operations back to the N-th client device 12N via the communication network 14. Accordingly, the remote computing device 11 may access (e.g., read and/or write) the various data blocks 28 stored in the memory modules 24.

To facilitate improved access to the stored data blocks 28, the memory module 24 may include a memory controller 30 that controls data storage in its memory devices 26. In some embodiments, the memory controller 30 may operate based on circuit connections formed (e.g., programmed) in the memory controller 30. Thus, in such embodiments, the memory controller 30 may include one or more application-specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), or both. In any case, as described above, the memory module 24 may include memory devices 26 that implement different memory types, for example, memory types providing different compromises between data access speed and data storage density. Thus, in such embodiments, the memory controller 30 may control data storage across the multiple memory devices 26 to take advantage of these compromises, for example, enabling the memory module 24 to provide both fast data access speeds and high data storage capacity.

To help illustrate, FIG.
2 depicts an example of a memory module 24 containing different types of memory devices 26. Specifically, the memory module 24 includes one or more non-volatile memory devices 32 and one or more volatile memory devices 34. In some embodiments, the volatile memory devices 34 may be implemented as dynamic random access memory (DRAM) and/or static random access memory (SRAM). In other words, in such embodiments, the memory module 24 may include one or more DRAM devices (e.g., chips or integrated circuits), one or more SRAM devices (e.g., chips or integrated circuits), or both.

Additionally, in some embodiments, the non-volatile memory devices 32 may be implemented as flash (e.g., NAND) memory, phase change (e.g., 3D XPoint™) memory, and/or ferroelectric random access memory (FeRAM). In other words, in such embodiments, the memory module 24 may include one or more NAND memory devices, one or more 3D XPoint™ memory devices, or both. In fact, in some embodiments, the non-volatile memory devices 32 may provide storage-class memory (SCM), which may, at least in some cases, facilitate a reduction of implementation-related costs, for example, by obviating other non-volatile data storage devices in the computing system 10.

In any case, in some embodiments, the non-volatile memory devices 32 and the volatile memory devices 34 may be implemented on a flat (e.g., front and/or back) surface of the memory module 24. To facilitate data communication via the data bus 25, the memory module 24 may include a bus interface 36. For example, the bus interface 36 may include data pins (e.g., contacts) formed along a (e.g., bottom) edge of the printed circuit board.
Therefore, in some embodiments, the memory module 24 may be a single in-line memory module (SIMM), a dual in-line memory module (DIMM), or the like.

Additionally, in some embodiments, the bus interface 36 may include logic that enables the memory module 24 to communicate via a communication protocol implemented on the data bus 25. For example, the bus interface 36 may control the timing of data output from the memory module 24 to the data bus 25 and/or interpret data input from the data bus 25 to the memory module 24 according to the communication protocol. Therefore, in some embodiments, the bus interface 36 may be a double data rate fourth generation (DDR4) interface, a double data rate fifth generation (DDR5) interface, a peripheral component interconnect express (PCIe) interface, a non-volatile dual in-line memory module (e.g., NVDIMM-P) interface, or the like.

In any case, as described above, the memory controller 30 may control data storage within the memory module 24, for example, by leveraging the various compromises provided by the memory types implemented in the memory module 24 to facilitate improved data access speeds and/or data storage efficiency. Thus, as in the depicted example, the memory controller 30 may be coupled between the bus interface 36 and the memory devices 26 via one or more internal buses 37, implemented, for example, via conductive traces formed on the printed circuit board. For example, the memory controller 30 may control whether a data block 28 is stored in the non-volatile memory devices 32 or the volatile memory devices 34. In other words, the memory controller 30 may transfer data blocks 28 from the non-volatile memory devices 32 into the volatile memory devices 34, or vice versa.

To facilitate data transfers, the memory controller 30 may include a buffer memory 38, for example, to provide temporary data storage.
In some embodiments, the buffer memory 38 may include static random access memory (SRAM), and thus may provide faster data access speeds than the volatile memory devices 34 and the non-volatile memory devices 32. The buffer memory 38 may be DRAM or FeRAM in some cases. In addition, to facilitate access to the stored data blocks 28, the memory module 24 may include an address map, for example, stored in the buffer memory 38, a non-volatile memory device 32, a volatile memory device 34, a dedicated address-map memory device 26, or any combination thereof.

In addition, the remote computing device 11 may be in communication with a service processor and/or a service bus that is included in, or separate from, the processing circuit 22 and/or the data bus 25. The service processor, the processing circuit 22, and/or the memory controller 30 may perform error detection operations and/or error correction (ECC) operations, and may be placed outside the remote computing device 11 so that error detection and error correction operations can continue if power to the remote computing device 11 is lost. For simplicity of description, the functions of the service processor are described as being contained in and executed by the memory controller 30; however, it should be noted that, in some embodiments, the error correction operations or data recovery operations may be implemented as functions performed by the service processor, the processing circuit 22, or additional processing circuits located inside or outside the remote computing device 11 or the client device 12.

The memory module 24 is depicted in FIG. 2 as a single device containing various components or sub-modules. However, in some examples, a remote computing device may include one or more discrete components equivalent to the various devices, modules, and components that make up the memory module 24.
For example, a remote computing device may include non-volatile memory, volatile memory, and a controller located on one or several different chips or substrates. In other words, the features and functions of the memory module 24 need not be implemented in a single module to achieve the benefits described herein.

To help illustrate, FIG. 3 depicts a block diagram of an example of a package-level RAIX scheme. In general, FIG. 3 depicts an embodiment of a memory module 24, memory module 24A, which includes nine non-volatile memory devices 32 arranged to form a symmetric RAIX scheme, where an entire non-volatile memory device 32I is used to store parity data corresponding to the other eight non-volatile memory devices 32A to 32H. Each non-volatile memory device 32 may store a data fragment corresponding to a memory address in a package 52. The data fragment may be smaller than the overall size of the package 52; for example, the data fragment may be 512 bytes while the package 52 may store several gigabytes. It should be understood that the depicted examples are intended to be illustrative and not restrictive. Indeed, in some embodiments, the RAIX scheme may be implemented using more or fewer than nine non-volatile memory devices 32 with components of any suitable size.

In any case, with respect to the depicted embodiment shown in FIG. 3, each non-volatile memory device 32 stores a specific amount of data accessible to the client devices 12. The processing circuit 22 and/or the memory controller 30 may facilitate communication between the non-volatile memory devices 32 and the client devices 12 via the channels. In the event of data loss, it may be desirable to be able to recover the data stored in the packages 52.
Therefore, the package-level RAIX scheme can be used to protect the data of the packages 52 stored in the non-volatile memory devices 32.

As depicted, the package-level RAIX scheme is implemented in the memory module 24A, meaning that, in the event of data loss in a package 52, the data transmitted to each non-volatile memory device 32 via the corresponding channel and stored in the package 52 can be recovered. The package-level RAIX scheme uses XOR logic operations to back up the data of each package 52. That is, the data of the package 52A and the data of the package 52B are XORed, the XOR result is XORed with the data of the package 52C, and so on, until the penultimate XOR result is XORed with the data of the package 52H. The final XOR result is treated as parity data and stored in the package 52I. Since each bit of the packages 52A to 52H is XORed with its corresponding bit of the subsequent package 52, the resulting size of the parity data is the same as the size of the data fragment stored in each package 52. Therefore, in this example, the parity data stored in the package 52I may be equal to 512 bytes (equal to the size of the individual data fragments backed up by the package-level RAIX scheme), and the package 52I may have the capacity to store 512 bytes, the same as the other packages 52.
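A bytewise sketch of this chained XOR can make the size argument concrete. The following Python fragment is purely illustrative (the package contents are hypothetical random bytes, not actual device data); it shows that the parity fragment stays the same size as each 512-byte data fragment:

```python
import os

PACKAGE_SIZE = 512  # bytes per data fragment, matching the example above

# Hypothetical contents of packages 52A through 52H.
packages = [os.urandom(PACKAGE_SIZE) for _ in range(8)]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings bit by bit."""
    return bytes(x ^ y for x, y in zip(a, b))

# Chain the XOR across all eight packages:
# (((52A XOR 52B) XOR 52C) XOR ...) XOR 52H.
parity = packages[0]
for pkg in packages[1:]:
    parity = xor_bytes(parity, pkg)

# The parity fragment (package 52I) is the same size as each data fragment.
assert len(parity) == PACKAGE_SIZE
```

Because the XOR is taken bit position by bit position, adding more packages to the chain changes the parity's contents but never its size.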
As previously described, if any portion of the corresponding non-volatile memory device 32 fails and data loss occurs, the parity data stored in the package 52I can be used to recreate the lost data (e.g., by substituting the parity data into the XOR logic operation to recreate the lost data).

To help illustrate, recall the basic logical behavior of XOR (the exclusive-or logic function): if the first input is logic low and the second input is logic high (for example, 0 is the first input and 1 is the second input, or 1 is the first input and 0 is the second input), the result is a logic high output (for example, 1); but if the first and second inputs are both logic high or both logic low (for example, 0 is both the first and second input, or 1 is both the first and second input), the result is a logic low output. As described above, this output relationship can be exploited to back up data stored in the various non-volatile memory devices 32. As a simplified example, if the package 52A stores 111 and the package 52B stores 000, the package-level RAIX scheme backs up the packages 52A and 52B with parity data. The package 52A and the package 52B are XORed to create the parity data: 111 XOR 000 equals 111. If the data of the package 52A is lost, this parity data 111 may be XORed with the data of the package 52B to recreate the data of the package 52A, that is, 111 XOR 000 equals 111. If the package 52A stores 101 and the package 52B stores 110, then the parity data equals 011. If the package 52B experiences data loss, then 011 XOR 101 recreates the data of the package 52B and equals 110.

However, since the data of a package 52 may be the smallest granularity used in the XOR logic operations, any smaller units of data that make up the package 52 (such as the individual memory dies of the non-volatile memory device 32) may not be recreated individually.
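The XOR parity arithmetic walked through above (111 XOR 000, 101 XOR 110, and so on) can be sketched in a few lines of Python. This is an illustrative model only, not the patent's implementation; the helper names and one-byte fragment sizes are invented for the example.

```python
def xor_fragments(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-sized data fragments."""
    return bytes(x ^ y for x, y in zip(a, b))

def compute_parity(fragments):
    """Fold XOR across all fragments; the result is the parity data."""
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_fragments(parity, frag)
    return parity

# Packages 52A and 52B from the example: 111 and 000 (one byte each here).
parity = compute_parity([bytes([0b111]), bytes([0b000])])
assert parity == bytes([0b111])

# Losing package 52A: XOR the survivor with the parity to recreate it.
recovered_52a = xor_fragments(parity, bytes([0b000]))
assert recovered_52a == bytes([0b111])

# Second example: 101 XOR 110 = 011; recreate 52B as 011 XOR 101 = 110.
parity2 = compute_parity([bytes([0b101]), bytes([0b110])])
assert parity2 == bytes([0b011])
assert xor_fragments(parity2, bytes([0b101])) == bytes([0b110])
```

The same fold extends unchanged to the eight data packages 52A to 52H described in FIG. 3, since XOR is associative.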
For example, a memory die may fail while the rest of the package 52 functions as intended, but because the parity data represents the XOR result of entire packages 52, the entire package 52 must be recreated from the parity data to protect the lost data against the physical failure of the memory die. In practice, an entire package 52 of a non-volatile memory device 32 is unlikely to experience data loss. As a result, the depicted package-level RAIX scheme over-provisions, using more memory than is sufficient to protect the data of the memory module 24 by storing the parity data in the package 52I.

The depicted package-level RAIX scheme follows an 8:1 protection ratio (for example, eight packages 52A to 52H store data backed up by one package 52I that stores parity data). This protection ratio translates into a 12.5% over-provisioning of the packages 52 (e.g., 1/8). Generally speaking, the amount of over-provisioning relates to the efficiency of the RAIX scheme; in other words, the lower the over-provisioning percentage, the less memory is consumed to provide data protection for the memory module 24. Moreover, a non-volatile memory device 32 is more likely to experience data loss at the memory die level (not depicted in FIG. 3). Therefore, a RAIX scheme that prevents memory die-level data loss is better suited to normal operation of the computing system 10.

To help illustrate the differences between package-level and die-level RAIX schemes, FIG. 4 depicts a block diagram of an example of a die-level RAIX scheme. Generally, FIG. 4 depicts a second embodiment of a memory module 24, a memory module 24B, which includes nine non-volatile memory devices 32, each of which is shown storing a specific amount of data in memory dies 58. It should be understood that the depicted examples are intended to be illustrative, and not restrictive.
In fact, in some embodiments, the RAIX scheme may be implemented using more or fewer than nine non-volatile memory devices 32, using more or fewer than eighteen channels, and may include components of any suitable size.

The memory module 24B follows a die-level RAIX scheme in which each package 52 is divided into memory dies 58 that store data segments of 256 bytes in size. Using individual memory dies 58 instead of individual packages 52 to determine the parity data reduces the over-provisioning from 12.5% (e.g., 1/8) to approximately 5.8% (e.g., 1/17). However, this separation may increase circuit complexity because an increased amount of signal routing, components, and/or pins may be used to provide the increased number of channels. Increased design complexity may also increase the manufacturing and/or design costs associated with producing the memory module 24. In addition, increasing the number of signal routes (e.g., channels) can reduce signal integrity, for example due to signal interference. Therefore, for some embodiments, a solution that balances the overall level of these trade-offs against over-provisioning may be desired, while other embodiments may implement the memory module 24B.

To illustrate this compromise, FIG. 5 depicts a block diagram of a second example of a die-level RAIX scheme. The third embodiment of the memory module 24 (memory module 24C) includes Z non-volatile memory devices 32, each of which is represented as storing a specific amount of data in a package 52, where each package 52 is separated into multiple memory dies 58. It should be understood that the depicted examples are intended to be illustrative, and not restrictive. In fact, in some embodiments, any number of memory dies 58 per non-volatile memory device 32 may be used to implement a die-level RAIX scheme.

In the depicted die-level RAIX scheme, the packages 52 from FIG. 3 are typically divided into separate memory dies 58.
For example, the memory dies 58A1, 58B1, ..., 58X1 are stored on the same non-volatile memory device 32A and on the same package 52A. During operation, the memory controller 30 and/or the processing circuit 22 are operable to protect the data of the memory module 24C via the depicted asymmetric die-level RAIX scheme. In a die-level RAIX scheme, each memory die 58 undergoes the XOR logic operations separately, instead of the entire package 52 undergoing the XOR logic operations, to create the parity data. The resulting parity data is stored in a memory die 58XZ of the non-volatile memory device 32Z. It should be noted that although the parity data is depicted as stored in the last memory die 58XZ, there is no restriction on which memory die 58 the parity data is stored in. That is, for example, the parity data may instead be stored in the memory die 58AZ. Because the parity data can be stored on a single memory die 58, less memory is allocated for the purpose of storing parity data than in the package-level RAIX scheme of FIG. 3, which allocates all the dies of an entire package 52 for the same purpose. The remaining memory dies of the non-volatile memory device 32Z can be allocated as spare memory, where the spare memory dies 58AZ, 58BZ, ..., 58CZ can be used for operation overflow, additional data storage, use by the memory controller 30 and/or the processing circuit 22 to convert logical addresses into physical address information, and the like.
Therefore, the memory module 24C is an improvement over the memory module 24A, which has relatively high over-provisioning and no spare memory, and over the memory module 24B, which has no spare memory and high design complexity. For redundancy purposes, partitioning the packages 52 into memory dies 58 produces an over-provisioning of approximately 6.25% (e.g., 1/16), which is a reduction from the 12.5% (e.g., 1/8) over-provisioning of the memory module 24A and a slight increase from the 5.8% (e.g., 1/17) over-provisioning of the memory module 24B. Although the over-provisioning increases slightly relative to the memory module 24B, this die-level RAIX scheme is an improvement over the package-level RAIX scheme due to its design simplicity and the minimal over-provisioning of redundant or protected memory.

Generally, during a computing operation, the client device 12 receives input from a user or another component and, in response to the input, requests that the memory controller 30 of the memory module 24C facilitate the execution of a memory operation. The client device 12 may issue these requests as commands and may indicate a logical address from which to retrieve or store the corresponding data. However, the client device 12 does not know the actual physical address where the corresponding data is stored, because the data is sometimes divided and stored in multiple locations referenced via one logical address. The memory controller 30 may receive these commands and translate the logical addresses into physical addresses to appropriately access the stored data.

After determining the physical address of the corresponding data, the memory controller 30 is operable to read the data stored in each respective memory die 58 or to write the data to be written into each respective memory die 58. As part of this read/write operation, the memory controller 30 may also parse or interpret the data stored in each respective memory die 58 to complete the operation requested by the client device 12.
These operations are performed by transmitting data segments through the channels that communicatively couple the non-volatile memory devices 32 to the memory controller 30.

The memory controller 30 or other suitable processing circuitry may facilitate updating the parity data stored in the memory die 58. To this end, the data to be stored in each memory die 58 is XORed with the data of the subsequent memory dies 58 until each memory die 58 is reflected in the parity data. The memory controller 30 or other suitable processing circuitry may also facilitate verifying the quality of the data stored in the memory dies 58. In some embodiments, the memory controller 30 may XOR the data in the memory dies to verify that the resulting parity data is the same. If an error is detected (e.g., the parity data is not the same and is therefore determined to be based on defective data), this may mean that a memory die 58 has physically failed, that a data read or write error has occurred, or the like. The memory controller 30 may perform these redundancy operations in response to an event or control signal, in response to performing a read or write operation, in response to a defined amount of time elapsing (e.g., periodically refreshing the data in the memory dies 58, including the parity data), or in response to any other suitable indication or event.

As described above, the depicted components of the computing system 10 may be used to perform memory operations. In some embodiments, the die-level RAIX scheme is integrated into the memory operation control flow. In other embodiments, the die-level RAIX scheme is performed in response to specific indications, signals, or events, at periodic or defined time intervals, or the like. In still other embodiments, the die-level RAIX scheme is performed both at certain times during memory operations and in response to control signals.
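The verification step described above, recomputing the XOR across the dies and comparing it with the stored parity, might be sketched as follows. The function and data names are hypothetical, assuming fixed-size per-die fragments.

```python
from functools import reduce

def recompute_parity(die_data):
    """XOR all data dies together to get a candidate parity fragment."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), die_data)

def verify_dies(die_data, stored_parity) -> bool:
    """True if the recomputed parity matches the stored parity exactly."""
    return recompute_parity(die_data) == stored_parity

dies = [b"\x0f", b"\xf0", b"\xaa"]
parity = recompute_parity(dies)     # 0x0f ^ 0xf0 ^ 0xaa = 0x55
assert verify_dies(dies, parity)

# Corrupt one die: the mismatch flags a data error, as in the text above.
bad = [b"\x0f", b"\xf0", b"\xab"]
assert not verify_dies(bad, parity)
```

A controller could run such a check on a timer or after each write, matching the triggering options listed above.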
Therefore, it should be understood that the die-level RAIX scheme can be incorporated into memory operations in various ways.

To help illustrate, FIG. 6 depicts an example of a process 74 for controlling memory operations of the memory module 24 and a die-level RAIX backup scheme. Generally, the process 74 includes the memory controller 30 waiting for a memory operation request from the host (e.g., the processing circuit 22 and/or the client device 12) (process block 76), receiving a memory operation request from the host (process block 78), and determining whether the memory operation request corresponds to a data read event (decision block 80). In response to the memory operation request not corresponding to a data read event, the memory controller 30 may update the parity data, append parity bits to the data segment to be written, and write the data segment (process block 82), after which the memory controller 30 may wait for an additional memory operation request from the host (process block 76). However, in response to the memory operation request corresponding to a data read event, the memory controller 30 may read a data segment from the corresponding memory address (process block 84) and determine whether a data error has occurred (decision block 86). In response to determining that no data error has occurred, the memory controller 30 may wait for an additional memory operation request from the host (process block 76); however, in response to determining that a data error did occur, the memory controller 30 may attempt to resolve the error using an error correction code (ECC) technique (process block 88) and determine whether the data error has been eliminated (decision block 90). In response to determining that the data error has been eliminated, the memory controller 30 may send the read data to the host (process block 92) and continue to wait for additional memory operation requests from the host (process block 76).
However, in response to determining that the remaining error is not zero, the memory controller 30 may determine the faulty memory die 58 (process block 94), use XOR logic operations to recover the data lost due to the faulty memory die 58 (process block 96), send the recovered data to the host (process block 92), and continue to wait for additional memory operation requests from the host (process block 76).

In any case, as described above, the memory controller 30 may wait for a memory operation request from its host device (process block 76). In this manner, the memory controller 30 may be idle, performing no memory operations (e.g., reads, writes), between the read or write access events initiated by the host device. The memory controller 30 may receive a memory operation request from the host (process block 78) and may perform a memory operation in response to receiving it. In some embodiments, the memory operation request may identify the requested data block 28 or data segment by a corresponding logical address. As described above, when data is identified by a logical address, the memory controller 30 may convert the logical address into a physical address that indicates where the data is actually stored in the memory module 24. For example, the memory controller 30 may convert a logical address to a physical address using an address map, a lookup table, an equation-based conversion, or any other suitable method. The processing circuit 22 receives various memory operation requests via communication with the client device 12; however, in some embodiments, the processing circuit 22 may initiate various memory operation requests independently of the client device 12. These memory operation requests may include requests to retrieve or read data from one or more of the non-volatile memory devices 32, or requests to store or write data to one or more of the non-volatile memory devices 32.
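One of the translation options mentioned above, a lookup table, can be sketched as a plain dictionary mapping logical addresses to physical locations. The addresses, device/die labels, and offsets here are invented for illustration and are not taken from the patent.

```python
# Hypothetical address map: logical address -> (device, die, byte offset).
address_map = {
    0x1000: ("32A", "58A1", 0x000),
    0x1100: ("32A", "58B1", 0x040),
    0x1200: ("32B", "58A2", 0x000),
}

def translate(logical_address):
    """Convert a host-visible logical address to a physical location."""
    try:
        return address_map[logical_address]
    except KeyError:
        raise ValueError(f"unmapped logical address {logical_address:#x}")

# The host only ever sees 0x1200; the controller resolves the real location.
assert translate(0x1200) == ("32B", "58A2", 0x000)
```

A real controller would more likely use page tables or an equation-based mapping for space efficiency; the dictionary simply makes the logical-to-physical indirection concrete.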
In this manner, during a memory operation, the memory controller 30 may receive a logical address from the host, convert the logical address into a physical address that indicates where the corresponding data is to be stored (e.g., a write operation) or is stored (e.g., a read operation), and read or write the corresponding data based on that physical address.

In response to the memory operation request, the memory controller 30 may determine whether the memory operation request corresponds to a data read event (decision block 80). The memory controller 30 may check for changes in the data stored in the non-volatile memory devices 32 and/or may operate under the assumption that the stored data changes after each data write. Therefore, the memory controller 30 generally determines whether a data write event has occurred, where a data write event changes the data stored in any of the memory dies 58. This determination helps keep the parity data stored in the memory die 58 relevant and/or accurate.

If the memory operation request corresponds to a data write event (i.e., not a data read event), the memory controller 30 may append parity bits to the data segment to be written and may write the data segment to the memory (process block 82). These parity bits can be used in future error correction code operations to resolve minor transmission errors (e.g., process block 88). In addition, the memory controller 30 may update the parity data to reflect the changed data segment. The memory controller 30 of the memory module 24 may perform the XOR logic operations on each of the memory dies 58 and may store the XOR result as the updated parity data in the parity data memory die 58 (e.g., the memory die 58XZ).
In some embodiments, the memory controller 30 may include the data of the spare memory in the XOR logic operations, such that the XOR result represents the XOR of each memory die 58 and the data stored in the spare memory. It should be noted that, in some embodiments, the memory controller 30 updates the parity data in response to receiving an instruction created by a timer tracking a minimum parity data update interval, or an instruction requesting a parity data update transmitted from the client device 12. In these embodiments, it may be desirable for the memory controller 30 to update the parity data more frequently, not just in response to a data write operation; thus, rather than determining whether a memory operation request corresponds to a data read event, the memory controller 30 may update the parity data in response to every memory operation request except those corresponding to data read events, including, for example, requests based on a tracked time interval. After appending the parity bits and writing the data fragments to the memory, the memory controller 30 may wait to receive an additional memory operation request from the host (process block 76).

However, in response to determining that the memory operation request corresponds to a data read event, the memory controller 30 may read a data segment at the corresponding memory address (process block 84). The memory operation request contains the logical address of the required memory segment. The memory controller 30 may retrieve the desired memory segment at the indicated logical address in response to the memory operation request (e.g., by converting it to a physical address by reference and retrieving the data segment from the corresponding memory die 58).

After reading the data segment, the memory controller 30 may determine whether the data is correct (e.g., without defects) (decision block 86).
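The update path described above recomputes the XOR across every die on each write. A standard algebraic shortcut, not stated in the text but equivalent because XOR is its own inverse, updates the parity from just the old and new values of the written fragment. This sketch is illustrative; the helper names are invented.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_parity, old_fragment, new_fragment):
    """P_new = P_old XOR D_old XOR D_new; valid because a ^ a = 0,
    so XORing the old fragment back in cancels its contribution."""
    return xor_bytes(xor_bytes(old_parity, old_fragment), new_fragment)

# Three dies holding 0x11, 0x22, 0x33; die 1 is rewritten from 0x22 to 0x55.
old_parity = bytes([0x11 ^ 0x22 ^ 0x33])
new_parity = update_parity(old_parity, b"\x22", b"\x55")
assert new_parity == bytes([0x11 ^ 0x55 ^ 0x33])
```

The shortcut touches two fragments instead of all of them, which matters when a parity group spans many dies; the full recomputation described in the text remains useful as an integrity check.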
The memory controller 30 may perform various data verification techniques to confirm that the data read is the same as originally indicated by the parity data stored on the memory die 58. These data verification techniques can facilitate the detection of physical and digital defects associated with the memory module 24. These defects may include problems such as data write errors, mechanical defects associated with a physical memory die 58, mechanical defects associated with a non-volatile memory device 32, and the like. To verify the data, for example, the memory controller 30 may use XOR verification to determine whether the data read in response to the data read event is undamaged and correct. To this end, the memory controller 30 of the memory module 24 may XOR the data of each memory die 58 (and, in some embodiments, the data of each memory die 58 and the spare memory) to determine an additional XOR result. After calculating the additional XOR result, the memory controller 30 may determine whether the XOR results are the same. The memory controller 30 of the memory module 24 may compare the additional XOR result with the parity data stored in the memory die 58 to determine whether the XOR results are equal or substantially similar (e.g., within a similarity threshold such that the results are considered equal).

In response to determining that the XOR results are the same, and therefore that the data was read correctly (e.g., no data errors were found), the memory controller 30 may continue to wait for additional memory operation requests from the host (process block 76). However, in response to determining that the XOR results are not the same, and thus that the data was read incorrectly (e.g., a data error was found), the memory controller 30 may attempt to resolve the data error using an error correction code (ECC) technique (process block 88).
Error correction code techniques can include adding redundant parity data to the data segments so that, when read, the original data segments can be recovered even if minor data corruption occurs. There are various effective ways to perform this preliminary quality-control step to verify that the data errors are not caused by minor transmission problems, such as convolutional and block code methods.

After attempting to resolve the data errors using error correction code techniques, the memory controller 30 may determine whether the data errors have been eliminated by the correction (decision block 90). If the memory controller 30 determines that the error count equals zero after applying the error correction code technique, the memory controller 30 may send the read data to the host device for further processing and/or for computing activities. After transmitting the read data, the memory controller 30 waits for an additional memory operation request from the host (process block 76).

However, if the memory controller 30 determines that the data error has not been eliminated (e.g., the error count is not zero), the memory controller 30 may continue to determine which of the memory dies 58 is defective or faulty (process block 94). The memory controller 30 may perform various determination activities to identify the faulty memory die 58, such as performing a system test on the memory dies 58 in response to a test write or read operation. Further, in some embodiments, the memory controller 30 may communicate the data errors to the client device 12 and receive instructions from the host, such as instructions originating from the user of the client device 12, that indicate which memory die 58 is defective or malfunctioning.

When the memory controller 30 determines which memory die 58 is faulty, the memory controller 30 may use the parity data to recover the data lost due to the failed memory die 58 (process block 96).
The memory controller 30 may recover the lost data by performing an inverse of the XOR logic operation. That is, the memory controller may XOR the data of each memory die 58 except the faulty memory die 58, while including the parity data. For example, suppose the memory die 58A2 is faulty. In this example, the memory controller 30 XORs the data of all the memory dies 58 except the failed memory die 58A2, substituting the parity data in place of the data of the memory die 58A2, to recreate the lost data of the memory die 58A2 (for example, the data of the memory die 58A1 is XORed with the data of the memory die 58B2, and that result is XORed with the parity data, to determine the missing data of the memory die 58A2). Further, in some embodiments, the memory controller 30 performs this recovery operation in response to receiving an instruction from the processing circuit 22 or other suitable processing circuitry. In this manner, in these embodiments, the memory controller 30 may wait to recover the lost data until a physical repair has been performed.

After recovering the lost data, the memory controller 30 may transmit the recovered data to the host (process block 92) and continue to wait for additional memory operation requests (process block 76). The memory controller 30 may continue the process 74 to keep the parity data up to date, to monitor the quality of the data stored in the non-volatile memory devices 32, and/or to perform a recovery operation in the event of data loss.

Therefore, the technical effects of the present invention include facilitating improved redundancy operations that prevent data loss at the die level, or memory-die-size granularity. These techniques describe systems and methods for performing XOR logic operations to create parity data, verify data integrity or quality, and recover data in the event of data loss, all at the die level rather than the package level.
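The 58A2 recovery example above can be sketched end to end: drop the faulty die, XOR the surviving dies in its parity group together with the stored parity, and the missing fragment reappears. The die labels, group layout, and byte values are assumptions made for the sketch.

```python
from functools import reduce

def xor_all(fragments):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

# One hypothetical parity group: dies 58A1, 58A2, 58B2, parity on die 58XZ.
group = {"58A1": b"\x12", "58A2": b"\x9c", "58B2": b"\x0f"}
parity = xor_all(list(group.values()))          # stored on die 58XZ

def recover(faulty_die, group, parity):
    """XOR every surviving die with the parity to rebuild the lost data."""
    survivors = [frag for die, frag in group.items() if die != faulty_die]
    return xor_all(survivors + [parity])

# Die 58A2 fails; its fragment is rebuilt from the survivors and parity.
assert recover("58A2", group, parity) == b"\x9c"
```

This is the single-failure property of XOR parity: any one missing member of the group can be rebuilt, but two simultaneous die failures in the same group cannot.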
These techniques also provide one or more additional spare memory dies, which is an improvement over package-level redundancy operations.

The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as "means for [performing] [a function]..." or "step for [performing] [a function]...", it is intended that such elements be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements not be interpreted under 35 U.S.C. 112(f).
A test structure provides defect information rapidly and accurately. The test structure includes a plurality of lines provided in a parallel orientation, a decoder coupled to the plurality of lines for selecting one of the plurality of lines, and a sense amplifier coupled to the selected line. To analyze an open, a line in the test structure is coupled to the sense amplifier. A high input signal is provided to the line. To determine the resistance of the open, a plurality of reference voltages are then provided to the sense amplifier. A mathematical model of the resistance of the line based on the reference voltage provided to the sense amplifier is generated. Using this mathematical model, the test structure can quickly detect and characterize defect levels down to a few parts-per-million at minimal expense. |
What is claimed is:

1. A method to analyze an open in a line of an integrated circuit, the method comprising:
coupling a sense amplifier to the line;
providing a predetermined input signal to the line;
providing a plurality of reference voltages to the sense amplifier, wherein a reference voltage controls a sensitivity of the sense amplifier; and
determining output signals of the sense amplifier based on the plurality of reference voltages.

2. The method of claim 1, further including generating a mathematical model of a resistance of the line based on the reference voltage provided to the sense amplifier.

3. The method of claim 2, wherein the mathematical model is generated using a simulation program.

4. The method of claim 3, wherein an output of the simulation program is examined using a graphical analysis program.

5. The method of claim 2, further including traversing the line in a first path, the first path comprising predetermined sections of the line.

6. The method of claim 5, further including traversing the line in a second path, the second path comprising other predetermined sections of the line.

7. The method of claim 6, further including comparing a resistance associated with the first path and a resistance associated with the second path.

8. The method of claim 7, wherein comparing determines a resistance of the open.

9. The method of claim 2, further including traversing predetermined sections of the line without traversing at least one other predetermined section of the line.

10. A test system for identifying defects in an integrated circuit, the test system comprising:
a sense amplifier;
a first line;
a second line;
a decoder coupled to the amplifier, the first line, and the second line; and
a plurality of transistors, each transistor having a source, a drain, and a gate, the source and the drain respectively connected to the first line and the second line, and the gate coupled to selection circuitry.

11.
The test system of claim 10 further including:
a plurality of pairs of test strips provided in parallel orientation on either side of the first line; and
a third line positioned in perpendicular orientation to the first line and the second line, wherein at least one test strip is coupled to the third line.

12. The test system of claim 10, wherein the first line and the second line are formed from the same process features in the integrated circuit.

13. The test system of claim 11, wherein the first line and the second line are formed from different process features in the integrated circuit.

14. The test system of claim 10, wherein the test system is provided on a production wafer.

15. The test system of claim 10, wherein the test system is provided on a test chip.

16. The test system of claim 10, wherein the selection circuitry forms part of the decoder.

17. A test system for identifying defects in an integrated circuit, the test system comprising:
a first inverter;
a first line;
a second line;
a decoder coupled to the first inverter, the first line, and the second line; and
a plurality of transistors, each transistor having a source, a drain, and a gate, the source and the drain respectively connected to the first line and the second line, and the gate coupled to selection circuitry.

18. The test system of claim 17 further including a second inverter having a trigger point different than the first inverter, wherein the decoder is selectively coupled to the second inverter.

19. A method of determining segment resistance comprising:
forming an alternate path based on a segment;
measuring the resistance when the alternate path is included; and
calculating the resistance of the segment.

20.
A method of localizing a high resistance line portion comprising:
testing a line, wherein if the line is found to have a high resistance, then:
testing an adjacent line; and
forming alternative paths through a combination of the high resistance line and the adjacent line until a high resistance portion is isolated.
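The segment-resistance arithmetic in claim 19 can be illustrated with a toy calculation. Assuming the alternate path effectively shorts out the segment through a low-resistance bypass transistor (a simplification; a real bypass transistor has its own on-resistance), the segment resistance is roughly the difference between the end-to-end measurements with and without the bypass. The values below are invented for illustration.

```python
def segment_resistance(r_without_bypass, r_with_bypass):
    """Estimate one segment's resistance from two end-to-end measurements,
    assuming the bypass path contributes negligible resistance."""
    return r_without_bypass - r_with_bypass

# Line of four segments at ~100 ohms each, one defective segment at 5 kohm.
full_line = 100 * 3 + 5000          # 5300 ohms end to end
bypassed = 100 * 3                  # 300 ohms with the defect shorted out
assert segment_resistance(full_line, bypassed) == 5000
```

Repeating the measurement while bypassing one segment at a time, as in claim 20, isolates which segment carries the excess resistance.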
BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to debugging of advanced wafer-processing technologies, and specifically to quantifying the magnitude of and localizing defects on wafers.

2. Description of the Related Art

During the fabrication process, a wafer receives a number of doping, layering, and patterning steps. Each of these steps must meet exacting physical requirements. However, all steps have some variation from perfect calibration, thereby resulting in some variation on the wafer surface.

To minimize these variations, numerous inspections and tests are performed to detect undesirable defects. Once detected, these defects are analyzed in a process called failure analysis. During failure analysis, valuable information regarding problems with fabrication materials, process recipes, ambient air, personnel, process machines, and process materials can be discovered. Therefore, detection of defects on an integrated circuit is critical to high yields and process control.

When a new manufacturing process is being developed, a test structure may advantageously be manufactured specifically for testing the new manufacturing process. Alternatively, a wafer primarily including desired integrated circuit devices may also include test structures interspersed between the desired devices.

FIG. 1 illustrates two standard test structures 100: a fork 101 and a serpentine 102. To identify defects using one of these structures, a user would provide an input signal on one end of the structure and determine if an appropriate output signal was generated at the other end. These test structures can be placed on test chips or on actual production chips to test manufacturing processes.

Test structures 100 allow for the testing of "opens" and "shorts". An open is a failure in the connectivity or an excessively high resistance between two allegedly connected points. Serpentine 102 is typically used to detect opens.
A short is a failure when connectivity exists between allegedly unconnected points. An open can be in a metal wire (line), a polysilicon line, a diffusion line, a contact, or a via. A short can be metal-to-metal, polysilicon-to-polysilicon, diffusion-to-diffusion, or contact-to-polysilicon. Fork 101 is typically used to detect shorts.

The above-referenced test structures, i.e. fork 101 and serpentine 102, have distinct drawbacks. For example, locating and analyzing failures using either structure is difficult and time consuming. Specifically, detecting an open or short condition tells the user nothing about exactly where on the fork or serpentine the defect is located.

Determining the location of the defect requires an inspection of the structure by the user. In the current art, visual inspection is a major method of determining chip failure. A visual inspection is a tedious process, which requires considerable time of an experienced product engineer. Moreover, to complicate matters, not all visual defects result in electrical failures. Therefore, to more closely analyze the visual defects, the user must typically perform both optical and scanning electron microscope (SEM) examinations. Furthermore, many defects are not visible by initial inspection, thereby making localization of the defects with a SEM extremely difficult if not impossible.

Of importance, even when defects are localized, current technology provides no means to quantify the magnitude of the defect. Both the location and the magnitude of the defect provide valuable information to the user for failure analysis and may even indicate the nature of the defect without performing failure analysis. Because of its expense and complexity, users try to minimize the use of failure analysis. As known by those skilled in the art, an extremely large defect is probably the result of particle contamination rather than incomplete etching. However, the identification of other types of defects is less clear.
Therefore, even after localization, many types of defects must still be subjected to failure analysis.

Therefore, a need arises for a cost-effective method and test structure to quantify the magnitude of and localize defects on a wafer.

SUMMARY OF THE INVENTION

In accordance with the present invention, a test structure used for testing a manufacturing process provides defect information rapidly and accurately. The test structure is designed to mimic structures that will be present in a commercial device. The test structure includes a first plurality of lines provided in a first parallel orientation, a first decoder coupled to the first plurality of lines for selecting one of the first plurality of lines, and a first sense amplifier coupled to the output of the first decoder. To analyze an open, a line in the test structure is coupled to a sense amplifier. A high input signal is provided to the line. To determine the resistance of the open, a plurality of reference voltages are then provided to the sense amplifier.

In the present invention, a mathematical model of the resistance of the line based on the reference voltage provided to the sense amplifier is generated. In one embodiment, the mathematical model is generated using a simulation program such as HSPICE. Using this mathematical model, the test structure of the present invention can quickly detect defect levels down to a few defects-per-million locations tested at minimal expense.

The test structure can also determine the location of the defect(s) on the line. To achieve this, the test structure further includes a plurality of transistors, each transistor having a source, a drain, and a gate, the source and drain connected respectively to the selected line and an adjacent, non-selected line, and the gate coupled to selection circuitry. Using the selection circuitry, the transistors are selectively turned on/off, thereby creating predetermined paths through the test structure.
The resistances associated with various paths are then compared to determine the location of the open(s). In this manner, the location of the open(s) can be determined within a few micrometers.

If the opens are substantially distributed across the tested line, then failure analysis can still be tedious, time-consuming, and sometimes non-conclusive. However, if one segment of the tested line has a significantly higher resistance than other segments, then failure analysis can be done quickly and yield much more certain conclusions. Thus, the present invention facilitates better failure analysis.

In accordance with the present invention, the test structure further includes a second plurality of lines provided in a second parallel orientation, a second decoder coupled to the second plurality of lines for selecting one of the second plurality of lines, and a second sense amplifier coupled to the output of the second decoder. In one embodiment, the second parallel orientation is perpendicular to the first parallel orientation. The first plurality of lines is formed from one layer and the second plurality of lines is formed from another layer in the integrated circuit. In this manner, separate feedback can be provided for each process layer.

To determine a short, a plurality of test strips are formed parallel to each of the first plurality of lines in the test structure. Each test strip is coupled to one of the second plurality of lines. By providing a high signal to the tested line in the first plurality of lines and monitoring the output signal of the appropriate one of the second plurality of lines, the present invention rapidly and accurately identifies a short between the tested line and the corresponding test strip.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates standard yield structures placed on integrated circuits used for testing a manufacturing process.

FIG. 2A illustrates a simplified test structure for locating opens in an integrated circuit to which the inventive test structure may be added.

FIG. 2B illustrates exemplary detection circuitry that can be used in the present invention.

FIG. 2C illustrates one sense amplifier that can be used in the detection circuitry of FIG. 2B.

FIG. 3 illustrates a graph that provides a mathematical model of the resistance of the tested line based on the reference voltage provided to the sense amplifier.

FIG. 4 illustrates a plurality of location transistors included in the structure of FIG. 2A, which facilitate identifying the location of the open (i.e., high resistance element) on the tested line.

FIGS. 5A-5E illustrate the various signal paths of the test signal during one test method of the present invention.

FIGS. 6A-6E illustrate the various signal paths of the test signal during another embodiment of the test method of the present invention.

FIG. 7 illustrates a flow chart of the test method of the present invention.

FIG. 8 illustrates a test structure for locating shorts in an integrated circuit.

FIG. 9 illustrates one layout of the test structure of the present invention.

FIGS. 10A-10C illustrate wafers including a plurality of integrated circuits and various test structures in accordance with the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

An integrated circuit is formed from multiple layers including semiconductor layers, conductive layers, and insulation layers. In accordance with the present invention, test lines are formed from the semiconductor and conductive layers to facilitate identifying defects, i.e. opens and shorts, in the integrated circuit. Therefore, the term "layer" herein will refer to one of the semiconductor or conductive layers.

An actual test structure in accordance with the present invention would typically include lines formed in each metal (conductive) layer as well as in a layer comprising semiconductor materials.
Therefore, an actual test structure would include multiple layers, all stacked based on relative locations in the integrated circuit. For example, assuming the integrated circuit has five metal layers, layer one could include n- and p-type diffusion areas, polysilicon, and associated contacts (n-diffusion, p-diffusion, and polysilicon). Layer two could include metal 1 and vias formed with metal 1. Layer three could include metal 2 and vias formed with metal 2. Layer four could include metal 3 and vias formed with metal 3. Layer five could include metal 4 and vias formed with metal 4. Finally, layer six could include metal 5 and vias formed with metal 5. In the present invention, each layer includes either horizontal or vertical lines formed from the material present in that layer. Adjacent layers have different line orientations.

FIG. 2A is a simplified schematic of a test structure 200 located on a chip for determining the presence of opens. Test structure 200 includes a plurality of horizontal lines 208A-208D formed from one layer in the integrated circuit and a plurality of vertical lines 201A-201D formed from an adjacent layer in the integrated circuit. Thus, an actual test structure would include multiple test structures 200 stacked according to relative layers in the integrated circuit.

Note that although only four lines are shown in each orientation, i.e. horizontal or vertical, any number of lines (typically hundreds or even thousands of lines) can be provided to accurately replicate layout conditions on the integrated circuit. Therefore, the four horizontal and vertical lines are shown for illustration purposes only and are not meant to limit the present invention.

To detect for any open in the lines of test structure 200, each horizontal and vertical line must be tested. Circuit 200 can be used to isolate each such line for testing.
Specifically, a vertical decoder 202, having an input decoder section 202(1) and an output decoder section 202(2), is used to turn on the appropriate decoder transistors to isolate a vertical line 201. (Note that the circuitry for turning on and off specific decoder transistors is well known in the art and therefore, is not described in detail herein.) In a similar manner, a horizontal decoder 205, having an input decoder section 205(1) and an output decoder section 205(2), is used to turn on the appropriate decoder transistors to isolate a horizontal test line 208.

For example, to test for an open in a vertical line 201C, decoder transistors 203C and 204C (part of input decoder section 202(1) and output decoder section 202(2), respectively) are turned on by providing an appropriate high voltage to their gates. Decoder transistors 203A, 203B, and 203D as well as decoder transistors 204A, 204B, and 204D are turned off by providing an appropriate low voltage to their gates. In this manner, vertical line 201C is isolated from other vertical lines in test structure 200.

A high input test signal in_ver is then provided to circuit 200. If an output test signal out_ver is also high, then vertical line 201C has no opens (i.e. highly resistive elements) and is characterized as "passing". On the other hand, if the output test signal out_ver is low, then vertical line 201C has an open and is characterized as "failing".

A similar procedure can be performed to test for opens in a horizontal line 208. For example, to test for an open in a horizontal line 208B, decoder transistors 206B and 207B (part of input decoder section 205(1) and output decoder section 205(2)) are turned on by providing an appropriate high voltage to their gates. Decoder transistors 206A, 206C, and 206D as well as transistors 207A, 207C, and 207D are turned off by providing an appropriate low voltage to their gates.
In this manner, horizontal line 208B is isolated from other horizontal lines in test structure 200. Then, a high input test signal in_hor is provided to test structure 200. If an output test signal out_hor is also high, then horizontal line 208B has no opens (i.e. highly resistive elements) and is characterized as "passing". On the other hand, if the output test signal out_hor is low, then horizontal line 208B has an open and is characterized as "failing".

Note that one pair of decoder transistors is provided for each line. Thus, an actual test structure would include hundreds or even thousands of pairs of decoder transistors, each pair corresponding to one line in the test structure.

Using test structure 200 instead of yield structures 100 significantly reduces the time to locate opens. For example, in seconds, test structure 200 can locate an open, which might take a user performing a visual inspection of a yield structure hours to locate. Moreover, test structure 200 detects an open without the requisite skill of an experienced product engineer or the expense of a SEM, thereby significantly reducing the cost of human and equipment resources.

In accordance with the present invention, to detect an open, a sense amplifier compares an output signal (i.e., a signal out_ver or out_hor transferred through a tested line) with a reference voltage vref. Voltage vref controls the sensitivity of the sense amplifier. If the input signal is greater than voltage vref, then no open is present and the sense amplifier outputs a logic one signal (characterized as passing). In contrast, if the input signal is less than voltage vref, then at least one open must be present and the sense amplifier outputs a logic zero signal (characterized as failing).

If a number of opens are identified on the integrated circuit and the user wants to perform failure analysis on those opens, then knowing the magnitude of the resistances associated with the opens would be extremely helpful.
Specifically, applicant has determined that the magnitude of the resistances in large part depends on the process problem involved. Therefore, knowing the magnitude of the resistances may provide valuable clues to identify and correct the process problem. This is particularly true of "immature" processes in which process controls are not fully developed. Thus, even for a well-known process, such as the CMOS process, a technology shrink using this process will require its own process controls.

FIG. 2B illustrates an exemplary detection circuit 210 that can be used in the present invention. Two vertical decoder transistors 203N and 204N have had their gates coupled to voltage Vdd and therefore are turned on. In this manner, a vertical line 201N is selected for testing. Detection circuit 210 includes a sense amplifier 219 that receives an input signal "in" that has been buffered and passed through vertical line 201N, represented by a resistor, and generates an output signal "out" based on the reference voltage vref.

An illustrative sense amplifier 219 is shown in FIG. 2C. In the embodiment of FIG. 2C, sense amplifier 219 includes two PMOS transistors 230 and 231 having their gates coupled, their sources coupled to a common voltage source Vdd, and their drains respectively coupled to the drains of two NMOS transistors 232 and 233. These NMOS transistors have their gates coupled respectively to the input signal "in" and the reference voltage vref and their sources coupled to the drain of an NMOS transistor 234. Transistor 234 further has a gate coupled to the drain of PMOS transistor 231. The drain of PMOS transistor 230 is coupled to the output signal "out" via three inverters 235, 236, and 237 coupled in series. In this configuration, sense amplifier 219 functions as a current mirror.

Table 1 below summarizes the sizes of the transistors comprising the elements of the embodiment of sense amplifier 219 shown in FIG. 2C.

TABLE 1
ELEMENT        WIDTH (microns)   LENGTH (microns)
230            7.5               0.36
231            7.5               0.36
232            23.0              0.36
233            23.0              0.36
234            5.0               0.36
235 (PMOS)     7.5               0.36
235 (NMOS)     2.5               0.36
236 (PMOS)     5.0               0.36
236 (NMOS)     2.5               0.36
237 (PMOS)     20.0              0.36
237 (NMOS)     10.0              0.36

Note that although a specific embodiment of a sense amplifier is provided in FIG. 2C, sense amplifier 219 can be any known sense amplifier, and is not limited to the current mirror sense amplifier described in detail herein. For example, in another embodiment, the present invention includes a cross-coupled sense amplifier.

In yet another embodiment, sense amplifier 219 is replaced with an inverter (thereby eliminating the need for reference voltages). As known by those skilled in the art, an inverter, like a sense amplifier, has a trigger point. Although the magnitude of the defect cannot be determined (as explained in reference to FIG. 3 below) using a single inverter, the location of the defect can be found using one of the test structures of the present invention. To determine the magnitude of the defect, multiple inverters having different trigger points could be provided with the test structures. In this embodiment, inverters are selectively coupled to the tested line. In this manner, the relative magnitude of the defect can be determined. And in yet another embodiment, instead of measuring voltage, the current is measured to determine the resistance.

Referring back to FIG. 2B, a driver 211 includes two inverters 212A and 212B coupled in series for driving a test_in signal to vertical line 201N. Driver 211 provides the above-mentioned buffering function.
Transistor 213 represents the means to provide the path from driver 211 to vertical decoder transistor 203N. Therefore, transistor 213 could include one or more transistors (or even other devices). Transistor 217 represents the means to provide the path from vertical decoder transistor 204N to sense amplifier 219. Therefore, like transistor 213, transistor 217 could include one or more transistors (or even other devices). Transistor 218, having its gate coupled to Vdd, provides a weak pull-down to the input of sense amplifier 219. Therefore, sense amplifier 219 receives a logic zero, unless a high test_in signal is provided. Transmission gate 220 ensures that the out signal of sense amplifier 219 is transferred to the appropriate circuitry (not shown) as the test_out signal.

Table 2 below summarizes the widths and lengths of various transistors comprising the elements of detection circuit 210.

TABLE 2
ELEMENT                         WIDTH (microns)   LENGTH (microns)
Inverter 212A (PMOS)            10                0.35
Inverter 212A (NMOS)            5                 0.35
Inverter 212B (PMOS)            20                0.35
Inverter 212B (NMOS)            10                0.35
Access transistor 213           50                0.35
Decoder transistor 203N         10                0.35
Decoder transistor 204N         10                0.35
Access transistor 217           50                0.35
Pull-down transistor 218        1                 40
Transmission gate 220 (PMOS)    20                0.35
Transmission gate 220 (NMOS)    10                0.35

As mentioned previously, the reference voltage vref controls the sensitivity of sense amplifier 219. In other words, for different values of voltage vref, different line resistances would cause vertical line 201N to be characterized as an open.

If the resistance of vertical line 201N is below 10,000 Ohms, then most users would characterize vertical line 201N as not open (i.e. the line "passes"). On the other hand, if the resistance of vertical line 201N is instead 1 MOhm, then most users would characterize vertical line 201N as an open (i.e. the line "fails"). However, in current test vehicles, such as SRAM chips, the actual resistance of the tested line is not measured.

In accordance with the present invention, a simulation program is used to generate a mathematical model of the sense amplifier and the tested line. Specifically, the mathematical model plots the reference voltage vref for a specific sense amplifier versus the resistance of the tested line. In one embodiment, a simulation program HSPICE, licensed by Meta Software of Cambridge, Mass., is run on a Sun workstation to provide the mathematical model. HSPICE simulates circuits of almost any size (e.g. 250,000 gate simulations at transistor level) and runs very quickly. The results of HSPICE can be examined using a graphical analysis program, such as ViewTrace, licensed by Innoveda of Marlboro, Mass.

Other simulation programs, such as SPICE (Simulation Program with Integrated Circuit Emphasis), can also be used to generate the mathematical model. SPICE is a widely used circuit simulation program developed as public domain software at the University of California. Note that although the device models and simulation algorithms in SPICE are comparable to HSPICE, the user interface is less sophisticated in SPICE (i.e., the graphical output is intended for line printers).

FIG. 3 illustrates a logarithmic graph 300 generated with HSPICE to simulate sense amplifier 219 (FIG. 2C) and line 201N. Graph 300 plots reference voltage (Vref) on the x-axis and the resistance (Ropen) on the y-axis. Curve 301 indicates the resistance at which sense amplifier 219 changes its output from one logic state to another.
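As a rough software sketch of how such a model could be consulted, assume the simulated curve is available as sampled (reference voltage, trip resistance) pairs; the sample values, function names, and log-linear interpolation below are invented placeholders and are not read from graph 300.

```python
import bisect
import math

# Hypothetical samples of a curve like curve 301: for each reference
# voltage, the line resistance at which the sense amplifier flips state.
# These numbers are illustrative placeholders, not values from graph 300.
VREF_V = [0.4, 0.6, 0.7, 0.9, 1.2]                  # ascending reference voltages
TRIP_R_OHMS = [1.2e6, 4.0e5, 3.0e5, 2.0e5, 1.5e5]   # corresponding trip resistances

def trip_resistance(vref):
    """Log-linear interpolation of the simulated trip-point curve."""
    i = bisect.bisect_left(VREF_V, vref)
    i = min(max(i, 1), len(VREF_V) - 1)  # clamp to the sampled range
    v0, v1 = VREF_V[i - 1], VREF_V[i]
    f = (vref - v0) / (v1 - v0)
    log_r = math.log(TRIP_R_OHMS[i - 1]) * (1 - f) + math.log(TRIP_R_OHMS[i]) * f
    return math.exp(log_r)

def line_passes(line_resistance, vref):
    """Mirror the sense amplifier: below the trip resistance, the line passes."""
    return line_resistance < trip_resistance(vref)
```

Because the curve is monotonic, sweeping vref until `line_passes` flips brackets the actual line resistance, which is the measurement strategy described in the text.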
For example, if a reference voltage of 0.7 Volts is provided to sense amplifier 219, then sense amplifier 219 changes its output signal from one logic state to another when the resistance of line 201N is equal to approximately 300 kohms (indicated by point 301A on curve 301). Thus, if the actual resistance of line 201N is less than 300 kohms, then line 201N is known to be in the "pass" (non-open) region 302; whereas, if the actual resistance of line 201N is more than 300 kohms, then line 201N is known to be in the "fail" (open) region 301.

In accordance with the present invention, the actual resistance of the tested line is measured by varying the reference voltage vref. In this manner, when the logic transition occurs, the resistance is known. In one embodiment, successively lower reference voltages are provided to sense amplifier 219. Clearly, once a logic transition occurs, smaller changes in Vref can be provided to more accurately determine the resistance of the tested line.

In one embodiment of the present invention, a careful measurement of test structure 200 (FIG. 2A) is done using graph 300 (FIG. 3) to determine the actual resistance of a line 201. Typically, a line 201 is first tested using a high reference voltage Vref (FIG. 2B), such as 1.2V. If sense amplifier 219 outputs a logic zero signal (i.e. the input signal is less than Vref), then the resistance of the open must be greater than 150 kohm, per graph 300. In a resistance homing search, the reference voltage Vref is halved (1.2/2=0.6) and line 201 is then tested at the new reference voltage of 0.6V. If sense amplifier 219 outputs a logic one signal (i.e. the input signal is greater than Vref), then the resistance of the open must be between 150 kohm and 400 kohm.
Thus, to continue the resistance homing search, the differential between the last two reference voltages (1.2-0.6=0.6) is halved (0.6/2=0.3), this difference is added to the last reference voltage (0.6+0.3=0.9), and line 201 is then tested at this new reference voltage. The resistance homing search is continued until the value of Vref that causes sense amplifier 219 to switch state (trip voltage) is determined. This value, using graph 300, quantifies the actual resistance of line 201.

Other search methods, such as a linear search, are equally applicable to the present invention. In a linear search, a delta change in Vref, such as 0.1V, is chosen, and line 201 is tested at successively lower voltages until the trip voltage is determined. Note that this method may result in a longer time to convergence unless a relatively accurate first reference voltage is chosen.

After the user knows the magnitude of the resistance in the tested line, the user can pick optimal candidates for voltage contrast testing (Vcontrast). Vcontrast is a known technique used in SEM, for example, to pinpoint the location of an open in the tested line.

During FIB, any floating metal pieces may become charged by the focused ion beam (or similarly, during SEM, any floating metal pieces may become charged by the electron beam). As a result, these pieces turn dark and are not visible on the generated x-ray. However, any metal pieces coupled to ground will not be charged (i.e. having a discharge path to ground) and thus will be bright features on the x-ray. Therefore, if an open exists in a conductor, then the portions on either side of the open will appear bright on the x-ray.

In Vcontrast, an additional cut is made anywhere on the conductor using the focused ion beam. At this point, the user merely follows the dark segment to the edge of the first bright feature. It is at this edge where the open exists. Clearly, the brighter the segment, the lower the resistance.
Of course, the converse is also true, i.e. the darker the segment, the higher the resistance. Unfortunately, distinguishing conductors with no opens from conductors with opens having a low resistance (and thus having some discharge to ground) is difficult. Therefore, those skilled in the art recognize that if the resistance of the conductor is greater than 1 Mohm, then Vcontrast will work. However, if the resistance of the conductor is less than 1 Mohm, then Vcontrast will not work. Therefore, a need arises for a localization method that is effective for even relatively low resistances, and preferably a method performed prior to failure analysis, thereby minimizing the expense of using SEM.

FIG. 4 illustrates an exemplary plurality of location transistors 401A-401E providing open localization circuitry in accordance with the present invention. Note that although only five location transistors 401 are shown in FIG. 4, actual implementations typically include hundreds of location transistors 401. Each location transistor 401 has its drain coupled to the tested line (vertical line 201C, for example) and its source coupled to an adjacent line (vertical line 201D, for example). In one embodiment, location transistors 401 are controlled by decoders 202. In other embodiments, location transistors 401 are controlled by separate selection circuitry. Note that each layer typically has its own set of location transistors.

Although only vertical lines 201C and 201D are shown coupled to location transistors 401, other vertical lines as well as horizontal lines (not shown) may also be coupled to additional location transistors in a similar manner. Note that the gates of decoder transistors 203C and 204C are coupled to voltage Vdd (turning on those transistors) and the gates of decoder transistors 203D and 204D remain coupled to ground (turning off those transistors), thereby ensuring that any identified open is associated with a segment on the tested line, i.e. vertical line 201C.

FIGS. 5A-5E illustrate predetermined test patterns to identify the segment of the tested line that includes a highly resistive element (hereinafter resistor R). In FIG. 5A, all location transistors 401 have their gates coupled to ground, thereby turning off those transistors. Therefore, a high signal provided to a node I at the top of vertical line 201C traverses the plurality of location transistors 401 in a path 501, i.e. only along vertical line 201C. Path 501 includes resistor R and therefore the sense amplifier (not shown) coupled to a node O at the bottom of line 201C outputs a logic zero.

In FIG. 5B, location transistors 401D-401E have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A-401C continue to have their gates coupled to ground. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 502A as well as via original path 501. In the present invention, each segment of line 201C, defined by transistors 401, can be individually analyzed. Specifically, two transistors 401 are selectively turned on, thereby creating an alternate path 502 for the input signal. The resistances of the two paths are compared. If the resistances are different, then an open is identified. In other words, if a segment of line 201C containing the resistor R is bypassed using transistors 401, then that path 502 becomes the path of least resistance. Accordingly, the resistance of that path is less than that of original path 501. Note that the adjacent, vertical line 201D must be pre-tested to ensure that no highly resistive elements are present in this line. In this manner, any change in resistance detected by the sense amplifier is attributable to deselecting (or selecting) a certain segment of vertical line 201C having a resistor R. Resistor R is still in alternate path 502A and therefore the output signal at node O is a logic zero.
Because the resistances of paths 501 and 502A are substantially equal, alternate path 502A does not localize resistor R.

In FIG. 5C, location transistors 401C and 401E have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A, 401B, and 401D have their gates coupled to ground, thereby turning off those transistors. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 502B (and original path 501). Resistor R is still in the alternate path 502B and therefore the output signal at node O is a logic zero. Because the resistances of paths 501 and 502B are substantially equal, alternate path 502B does not localize resistor R.

In FIG. 5D, location transistors 401B and 401E have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A, 401C, and 401D have their gates coupled to ground. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 502C (and original path 501). Resistor R is still in alternate path 502C and therefore the output signal at node O is a logic zero. Because the resistances of paths 501 and 502C are substantially equal, alternate path 502C does not localize resistor R.

In FIG. 5E, location transistors 401A and 401E have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401B-401D have their gates coupled to ground, thereby turning off those transistors. Vertical line 201D is floating. In this configuration, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 502D (and original path 501). Resistor R is not in alternate path 502D and therefore the output signal at node O is a logic one (the input signal taking the path of least resistance through alternate path 502D).
Because the resistances of paths 501 and 502D are different, alternate path 502D does localize resistor R. Specifically, the present invention identifies the segment of line 201C between transistors 401A and 401B as having resistor R.

Therefore, the present invention provides an efficient and accurate means to determine the location of the open on the tested line. In this manner, during failure analysis, the time previously spent merely locating the defect is virtually eliminated, thereby allowing a user to focus on critical processes, such as defect analysis. Note that if the exact location of the open within a segment is required, then standard Vcontrast can be used.

Moreover, in addition to determining the exact location of the open, the resistance of each segment of test line 201C can also be determined. Specifically, the resistance associated with the segment between location transistors 401D and 401E is determined by subtracting the resistance measured for path 502A in parallel with path 501 (FIG. 5B) from the resistance measured for path 501 (FIG. 5A). In a similar manner, the resistance associated with the segment between location transistors 401C and 401D is determined by subtracting the resistance measured for path 502B in parallel with path 501 (FIG. 5C) from the resistance measured for path 502A in parallel with path 501 (FIG. 5B). Note that typically, the resistance of each segment, excluding the segment including resistance R, is de minimis compared to resistance R.

FIGS. 6A-6E illustrate alternative, predetermined test patterns to identify and measure the resistance of each segment of the tested line. In FIG. 6A, all location transistors 401 have their gates coupled to ground, thereby turning off those transistors. Therefore, a high signal provided to a node I at the top of vertical line 201C traverses the plurality of location transistors 401 in a path 601, i.e. only along vertical line 201C.
Path 601 includes resistor R and therefore the sense amplifier (not shown) coupled to node O at the bottom of line 201C outputs a logic zero. In FIG. 6B, location transistors 401D-401E have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A-401C continue to have their gates coupled to ground. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 602A as well as original path 601. The resistances of the two paths are compared. If the resistances are different, then an open is identified. In other words, if a segment of line 201C containing the resistor R is bypassed using transistors 401, then that path 602 becomes the path of least resistance. Accordingly, the resistance of that path is less than that of original path 601. Note that the adjacent, vertical line 201D must be pre-tested to ensure that no highly resistive elements are present in this line. In this manner, any change in resistance detected by the sense amplifier is attributable to deselecting (or selecting) a certain segment of vertical line 201C having a resistor R. Resistor R is still in alternate path 602A and therefore the output signal at node O is a logic zero. Because the resistances of paths 601 and 602A are substantially equal, alternate path 602A does not localize resistor R. In FIG. 6C, location transistors 401C-401D have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A, 401B, and 401E have their gates coupled to ground. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 602B (and original path 601). Resistor R is still in alternate path 602B and therefore the output signal at node O is a logic zero. 
Because the resistances of paths 601 and 602B are substantially equal, alternate path 602B does not localize resistor R. In FIG. 6D, location transistors 401B-401C have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401A, 401D, and 401E have their gates coupled to ground, thereby turning off those transistors. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 602C (and original path 601). Resistor R is still in alternate path 602C and therefore the output signal at node O is a logic zero. Because the resistances of paths 601 and 602C are substantially equal, alternate path 602C does not localize resistor R. In FIG. 6E, location transistors 401A-401B have their gates coupled to Vcc, thereby turning on those transistors. Location transistors 401C, 401D, and 401E have their gates coupled to ground, thereby turning off those transistors. Vertical line 201D is floating. Therefore, a high signal provided to node I traverses the plurality of location transistors 401 in an alternate path 602D (and original path 601). Resistor R is not in alternate path 602D and therefore the output signal at node O is a logic one (the input signal taking the path of least resistance through alternate path 602D). Because the resistances of paths 601 and 602D are different, alternate path 602D localizes resistor R. Specifically, the present invention identifies the segment of line 201C between transistors 401A and 401B as having resistor R. Note that adjacent, parallel lines in test structure 200 are not limited to similar process features. For example, line 201D can be a metal 1 line whereas line 201C can be a series of metal 1 vias. Moreover, in another embodiment of the invention, parallel, non-adjacent lines are used in the test structure. In yet another embodiment, these non-adjacent lines are provided in different layers. 
This flexibility can be advantageous in situations where one type of process feature has significantly more defects than another type of process feature. In this situation, a line comprising process features with no substantive defects can be used as the standard against which other lines are compared. In FIGS. 5A-5E and 6A-6E, line 201D is the standard (i.e., the defect-free line) against which line 201C is compared. FIG. 7 illustrates a flowchart 700 that summarizes one embodiment of the method of the present invention. In step 701, a mathematical model is generated of the sense amplifier and the line resistance. In step 702, reference voltage Vref is changed (i.e., increased or decreased). If a logic transition does not occur in the sense amplifier, as determined in step 703, then the process loops back to step 702 in which the reference voltage Vref is changed again. On the other hand, if a logic transition does occur, then in step 704 the resistance of the line is determined based on the generated mathematical model. If desired, in step 705 the location of an open (or multiple opens) in a line and the resistance of each segment of the line can be determined using localization circuitry. Note that if the user desires to detect shorts (not opens), as described in detail below, then step 705 is not used. The test structure of the present invention works equally well to detect shorts. FIG. 8 illustrates a test structure 800 substantially similar to test structure 200 (FIG. 2A) and further including a plurality of test strips 801. In a preferred embodiment, each section of the test line has a pair of test strips provided in parallel orientation on either side of the tested line. For example, in FIG. 8, four pairs of test strips 801A-801D roughly define four sections of vertical line 201C (the tested line). Test strips 801 are formed from the same layer as vertical lines 201. 
Each test strip 801 is connected (using a via or a contact) to a line perpendicular to the tested line in test structure 800, i.e. a horizontal line 208. As described previously, horizontal lines 208 are formed from a different layer than vertical lines 201. Therefore, to detect shorts, the device is made with connections between multiple layers of the integrated circuit. In this embodiment, test strips 801A are connected to horizontal line 208A, test strips 801B are connected to horizontal line 208B, test strips 801C are connected to horizontal line 208C, and test strips 801D are connected to horizontal line 208D. Other test strips associated with other lines (both vertical and horizontal) are omitted for clarity. The length of a test strip 801 may be dependent on the length of the tested line. For example, in one conservative embodiment, test strips 801, if joined end to end, are substantially the length of the tested line. In the configuration shown in FIG. 8, to detect a short S existing between vertical line 201C and an adjacent test strip 801, a logic one signal is first provided to vertical line 201C via terminals in_ver and out_ver. Then, each horizontal line 208 is selected in turn (i.e., the appropriate decoder transistors 206 and 207 are turned on/off). The selected horizontal line 208 is connected to two sense amplifiers (not shown in FIG. 8) via terminals in_hor and out_hor. Therefore, if a short exists, then the logic one signal on vertical line 201C will also be provided on the test strip having the short as well as the horizontal line 208 connected to that test strip. Thus, the sense amplifiers will output a logic one signal when the horizontal line 208 associated with the short is selected. Note that in another embodiment of the invention, the logic one signal may be provided to only one terminal, such as terminal in_ver. 
However, providing the logic one signal to both terminals in_ver and out_ver ensures that a short can be detected even if the vertical line 201 in question has a single open. Similarly, in another embodiment, only one sense amplifier is coupled to the selected horizontal line 208. However, providing a sense amplifier at both terminals in_hor and out_hor allows detection of the short even if the selected line 208 has a single open. By identifying the horizontal test line(s) 208 that carries the logic one signal, the user can determine the location of the short(s) on vertical test line 201C (i.e., the section of the line). Clearly, identifying the location of the short will also identify the layer (in FIG. 8, the layer associated with vertical line 201C). FIG. 9 illustrates one layout 900 including a test structure 901 in accordance with the present invention, vertical decoders 902(1) and 902(2), and horizontal decoders 903(1) and 903(2). Each decoder 902 has an associated predecoder 904 and control logic 905. In a similar manner, each decoder 903 has an associated predecoder 906 and control logic 907. The control circuitry includes the sense amplifier, pass gates, drivers, and associated transistors (described in reference to FIG. 2B, for example) to create the appropriate path to test selected lines in test structure 901. The decoders and predecoders are standard N-to-1 decoding structures known by those skilled in the art and therefore not described in detail herein. In one embodiment, the test structure of the present invention is placed on a production wafer between two integrated circuits and is spliced off after the wafer is manufactured. FIG. 10A shows an illustrative wafer 1000 including a plurality of integrated circuits (i.e. 
chips) 1001, wherein one or more scribe lines 1002 include the test structure of the present invention. If the user determines that more area is required for test structures to increase the probability of detecting defects, then product can be replaced by chips including larger test structures. FIG. 10B illustrates one such embodiment wherein wafer 1010 includes a plurality of integrated circuits 1001 (product) and a plurality of test chips 1003 dedicated to test systems. In this embodiment, the test structure may be formed using standard design rules for a production chip. Note that the number of chips 1003 and their position can vary between wafers or wafer lots. Thus, for example, a prototype wafer may have more test chips 1003 than a production wafer. In yet another embodiment, shown in FIG. 10C, each integrated circuit 1004 includes a product portion 1007 (such as a programmable logic device), a test system 1005 in accordance with the present invention, and other test structures 1006. In this embodiment, once the yield reaches an acceptable level, the fab can selectively shutter out structures 1005 and 1006, as desired. Alternatively, the fab can replace the reticles for wafer 1020 with reticles that have integrated circuits comprising product only. The present invention has significant advantages over the prior art. Specifically, defect levels down to a few parts-per-million can be detected quickly at minimal expense. Moreover, the location of those defects can be determined within a few micrometers. Because of the unique test structure provided, separate feedback can be provided for each process layer. Finally, resistances can be ordered (from highest to lowest in one embodiment) in a report to the user, thereby ensuring that problems can be quickly analyzed and corrected. As another advantage, the present invention allows the user to better use failure analysis. 
For example, if the resistances are substantially distributed across the tested line, then failure analysis will be tedious, time-consuming, and generally non-conclusive. However, if one segment of the tested line has a significantly higher resistance than other segments, then failure analysis can be done quickly and yields much better conclusions. Thus, the present invention facilitates better failure analysis. The specific embodiments of the present invention are presented for purposes of description and illustration only. These embodiments are not intended to be exhaustive or to limit the invention in any way. Those skilled in the art will recognize modifications and variations to the present invention. For example, referring to FIG. 2B, instead of transistor 218 being coupled to ground (thereby providing a weak pull-down), transistor 218 is coupled to a positive voltage source Vcc (thereby providing a weak pull-up). In this embodiment, a low test_in signal is provided. As another example and referring to FIG. 4, adjacent parallel lines in the test structure may even be formed from different layers in the integrated circuit. Thus, the present invention is only defined by the appended claims. |
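As a behavioral illustration of the open-localization method described above, the following Python sketch models the tested line as a list of segment resistances and bypasses each segment in turn through a defect-free standard line, reporting the segment whose bypass markedly lowers the end-to-end resistance. The segment values, the threshold, and the simple parallel-resistance model are illustrative assumptions, not details taken from the patent.

```python
def parallel(r1, r2):
    # Equivalent resistance of two resistive paths in parallel.
    return (r1 * r2) / (r1 + r2)

def localize_open(tested, standard, threshold=10.0):
    """Return indices of segments of the tested line containing a highly
    resistive open. Bypassing segment i routes current through the matching
    segment of the defect-free standard line, placing the two segments in
    parallel; only a bypass around the open changes the measured end-to-end
    resistance appreciably (illustrative model, not the patented circuit)."""
    baseline = sum(tested)  # original path: tested line only
    opens = []
    for i, (seg, std) in enumerate(zip(tested, standard)):
        bypassed = baseline - seg + parallel(seg, std)
        if baseline - bypassed > threshold:
            opens.append(i)
    return opens

# Four segments; segment 2 models resistor R (an open of ~1 Mohm).
tested = [1.0, 1.0, 1e6, 1.0]
standard = [1.0, 1.0, 1.0, 1.0]
print(localize_open(tested, standard))  # -> [2]
```

Only the bypass configuration that shunts the defective segment produces a large resistance change, mirroring the comparison of paths 601 and 602A-602D in FIGS. 6A-6E.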
A system transfers BIOS instructions from a BIOS ROM to a processor for either execution or storage in a system memory. The BIOS ROM has an address bus coupled to an address bus of the processor and a data bus coupled to an intelligent drive electronics ("IDE") controller through the data bus portion of an IDE bus. In operation, the processor applies addresses directly to the address bus of the BIOS ROM, and the corresponding instructions are coupled through the IDE data bus and the system controller to the data bus of the processor. |
What is claimed is: 1. A computer system, comprising: a processor communicating through a processor bus, the processor bus comprising an address bus and a data bus; a system read/write memory communicating through a system memory bus; an addressable device capable of outputting data responsive to an address, the addressable device communicating through a plurality of buses at least one of which is coupled to the processor bus, the plurality of buses of the addressable device comprising an address bus and a data bus, the address bus of the addressable device being coupled to the address bus of the processor bus; a peripheral I/O bus; and a bus bridge coupled to the processor bus including at least the data bus of the processor bus, a system memory bus, the I/O bus and at least one of the buses of the addressable device including at least the data bus of the addressable device, the bus bridge being structured to permit the processor to communicate with each of the system read/write memory, the addressable device, and a peripheral I/O bus so that the data buses of the processor bus and the addressable device are coupled to each other through the bus bridge. 2. The computer system of claim 1 wherein the addressable device comprises a read only memory. 3. The computer system of claim 2 wherein the bus bridge comprises an intelligent drive electronics ("IDE") controller, and wherein the read only memory is coupled to the IDE controller through an IDE bus. 4. The computer system of claim 3 further comprising an IDE device coupled to the IDE bus. 5. The computer system of claim 2 wherein the read only memory has stored therein a basic input/output system program that is adapted to be executed by the processor at boot-up. 6. The computer system of claim 2 wherein the at least one bus through which the read only memory communicates and that is coupled to the processor bus is coupled directly to the processor bus. 7. 
A computer system, comprising: a system controller including a processor interface, a system memory controller, and at least one I/O bus controller having an I/O port, and a processor bus coupled to the processor interface; an I/O bus coupled to the I/O port; a processor coupled to the processor bus, the processor bus comprising a processor address bus and a processor data bus; a system memory bus coupled to the system memory controller; a system read/write memory coupled to the system memory bus; and a read-only memory having a plurality of bus terminals including address bus terminals and data bus terminals, the address bus terminals of the read-only memory being coupled to the processor address bus and the data bus terminals of the read only memory being coupled to the I/O bus. 8. The computer system of claim 7 wherein the processor bus comprises a processor address bus and a processor data bus, and wherein the second portion of the read only memory bus terminals comprises data bus terminals. 9. The computer system of claim 7 wherein the at least one I/O bus controller having an I/O port comprises an intelligent drive electronics ("IDE") controller having an IDE bus port, and wherein the read only memory is coupled to the IDE controller through an IDE bus. 10. The computer system of claim 9 further comprising an IDE device coupled to the IDE bus. 11. The computer system of claim 7 wherein the read only memory has stored therein a basic input/output system program that is adapted to be executed by the processor at boot-up. 12. The computer system of claim 7 wherein the system controller further comprises a peripheral component interconnect ("PCI") bus controller, and wherein the computer system further comprises a PCI device coupled to the PCI bus controller through a PCI bus. 13. The computer system of claim 7 wherein the first portion of the bus terminals of the read only memory are coupled directly to the processor bus. 14. 
A computer system, comprising: a processor having an address bus port, a data bus port, and a control bus port; a processor address bus coupled to the processor address bus port, a processor data bus coupled to the processor data bus port, and a processor control bus coupled to the processor control bus port; a system controller having a processor interface coupled to the processor address bus, the processor data bus, and the processor control bus, the system controller further including a system memory controller having a system memory bus port, a first I/O controller having a first I/O port, and a second I/O controller having a second I/O port; a system memory bus coupled to the system memory bus port of the system controller; a system read/write memory coupled to the system memory bus; a first I/O bus coupled to the first I/O port; a first I/O device coupled to the first I/O bus; a second I/O bus coupled to the second I/O port, the second I/O bus including a data bus and a control bus; and a read only memory having a data bus, an address bus, and at least one control line, the address bus of the read only memory being coupled to the processor address bus, the data bus of the read only memory being coupled to the data bus of the second I/O bus, and the at least one control line of the read only memory being coupled to the control bus of the second I/O bus. 15. The computer system of claim 14 wherein the first I/O controller comprises a peripheral component interconnect ("PCI") controller, the first I/O bus comprises a PCI bus, and the first I/O device comprises a PCI device. 16. The computer system of claim 14 wherein the second I/O controller comprises an intelligent drive electronics ("IDE") controller, and wherein the second I/O bus comprises an IDE bus. 17. The computer system of claim 16 further comprising an IDE device coupled to the IDE bus. 18. 
The computer system of claim 14 wherein the read only memory has stored therein a basic input/output system program that is adapted to be executed by the processor at boot-up. |
TECHNICAL FIELD
The present invention relates to computer systems, and, more particularly, to a computer system having a bus bridge with a relatively low number of external terminals.
BACKGROUND OF THE INVENTION
When a computer system is powered on or reset, computer instructions are executed that are part of a basic input/output system ("BIOS") program. The BIOS program is normally in the form of firmware routines stored in a read only memory ("ROM"), which may or may not be a programmable read only memory ("PROM"). The processor may execute the BIOS program directly from the BIOS ROM. However, the BIOS program is usually transferred from the BIOS ROM to system memory, such as dynamic random access memory ("DRAM"), in a process known as "BIOS shadowing." Following transfer of the BIOS program to system memory, the processor is initialized and then executes initialization routines, or bootstrap routines, that are part of the BIOS program from the system memory. This entire process, including any shadowing of the firmware routines from the ROM to the system memory, is known as "booting" the computer system. If the processor executes the BIOS program directly from the BIOS ROM, it must repeatedly apply an address to the ROM and then couple to the processor the instruction stored at that address in the ROM. If the BIOS program is shadowed, the processor repeatedly fetches and executes instructions for transferring the BIOS program from the BIOS ROM, as well as the BIOS program itself, in a multi-step process. In either case, the BIOS program instructions are transferred over a relatively low-speed bus through a bus bridge to a processor bus that is connected to the processor. A variety of configurations may be used in a computer system to couple a BIOS ROM to a processor. Examples of such systems are illustrated in FIGS. 1 and 2. With reference to FIG. 
1, a computer system 10 includes a processor 14, such as an Intel(R) Pentium(R) processor or Pentium II(R) processor, although other processors may, of course, be used. For example, the processor 14 may be any microprocessor, digital signal processor, microcontroller, etc. The processor 14 is coupled to a processor bus 16 which includes data, control, and address buses (not shown) that provide a communication path between the processor 14 and other devices, as explained below. One device with which the processor 14 communicates is a cache memory device 18, typically cache static random access memory ("SRAM"), which is also coupled to the processor bus 16. As is well known in the art, the cache memory device 18 is generally used for the high speed storage of instructions that are frequently executed by the processor 14, as well as for data that are frequently used by the processor 14. Also coupled to the processor bus 16 is a system controller 20. The system controller 20 performs two basic functions. First, the system controller 20 interfaces the processor 14 with a system memory 22, which is generally a dynamic random access memory ("DRAM"). More specifically, the system memory 22 may be an asynchronous DRAM, a synchronous DRAM ("SDRAM"), a video or graphics DRAM, a packetized DRAM, such as a synchronous link DRAM ("SLDRAM"), or any other memory device. The system controller 20 includes a DRAM controller 24, which interfaces the processor 14 to the system memory 22 to allow the processor 14 to write data to and read data from the system memory 22. Basically, the system controller 20 performs this function by receiving data from and sending data to the processor 14 (although the data may bypass the system controller 20 by being coupled directly to the processor bus 16), receiving addresses from the processor 14, and receiving high level command and control signals from the processor 14. 
In response, the system controller 20 couples the data to and from the system memory 22 via a data bus 32, generates separate row and column addresses and sequentially applies them to the memory device via an internal address bus 34, and generates and applies to the system memory 22 lower level command signals via a control bus 36. The second function performed by the system controller 20 is to interface the processor bus 16 to a peripheral I/O bus, such as a Peripheral Component Interconnect ("PCI") bus 40. The PCI bus 40, in turn, is coupled to a conventional PCI-ISA bus bridge 42 and a conventional VGA controller 44 driving a conventional display 46. The PCI bus 40 may also be connected to other peripheral devices (not shown) in a manner well known to one skilled in the art. The PCI-ISA bus bridge 42 may also include a disk drive controller, such as an Intelligent Drive Electronics ("IDE") controller 48, which controls the operation of an IDE disk drive 50 in a conventional manner. The PCI bus 40 is a relatively high speed peripheral I/O bus. Many peripheral devices are adapted to interface with a relatively slow speed peripheral I/O bus, known as an industry standard architecture ("ISA") bus. The computer system 10 illustrated in FIG. 1 includes an ISA bus 60 that may be coupled to such I/O devices as a Keyboard Controller, Real Time Clock, and Serial and Parallel Ports, all of which are collectively designated by reference number 62. The ISA bus 60 may also be coupled to a BIOS ROM 64 as well as other I/O devices (not shown) as is well known in the art. The BIOS ROM 64 stores the BIOS program, which, as explained above, is executed by the processor 14 at boot-up, either directly or after being transferred to the system memory 22 if the BIOS is shadowed. Although the BIOS ROM 64 is shown in the computer system 10 of FIG. 
1 coupled to the ISA bus 60, it will be understood that it has conventionally been coupled to other components or buses, including the PCI bus 40, the IDE controller 48 within the PCI-ISA bridge 42, and a controller within the system controller 20. For example, an alternative example of a conventional computer system 70 shown in FIG. 2 includes many of the same components used in the computer system 10 of FIG. 1. Therefore, in the interest of brevity, an explanation of their structure and operation will not be repeated. The system 70 uses a system controller 80 that includes not only a DRAM controller 82 and a PCI bus controller 84, but also an accelerated graphics processor ("AGP") controller 86 and an IDE controller 88. The computer system 70 shown in FIG. 2 thus reflects the trend in computer architecture to couple as many components as possible to the system controller 80. The AGP controller 86 is coupled to an accelerated graphics processor 90 which is, in turn, coupled to a display 94. The IDE controller 88 is coupled through an IDE data bus 96 and an IDE control bus 98 (sometimes known as PC AT Attachment ("ATA") buses) to a BIOS ROM 100 as well as to a pair of IDE devices 102, 104, such as disk drives. Not shown in FIG. 2, as will be apparent to one skilled in the art, is circuitry for multiplexing the data bus 96 between an address bus port of the BIOS ROM 100 and a data bus port of the BIOS ROM 100 since the IDE, or ATA, bus does not include an extensive address bus. Instead, the IDE bus includes only 4 address bits. In operation, the system controller 80 is used to interface the processor with all of the other components of the computer system 70 except the cache memory device 18, i.e., the system memory 22, the PCI bus 40, the accelerated graphics processor 90, and the BIOS ROM 100 and IDE devices 102, 104. 
When a BIOS instruction is to be transferred, the IDE controller 88 outputs the address of the instruction's storage location on the IDE data bus 96, and the BIOS ROM 100 then outputs the instruction, which is coupled to the IDE controller 88 through the IDE data bus 96. One problem with the computer system 10 illustrated in FIG. 1, and particularly the computer system 70 illustrated in FIG. 2, is a proliferation of external terminals that the system controllers 20, 80 and the PCI-ISA bridge 42 must have to interface with all of the components to which they are connected. Increasing the number of terminals on an integrated circuit, such as a bus bridge, increases the cost of packaging the integrated circuit, increases the size of the integrated circuit package, increases the cost and complexity of mounting the integrated circuit on a circuit board, and increases the likelihood of a faulty interconnection. It is therefore desirable to minimize the number of external terminals on an integrated circuit, such as a bus bridge. Although this problem exists to some degree with many integrated circuits in a computer system, it is particularly serious for system controllers and bus bridges since they generally have more external terminals than other integrated circuits in computer systems. The problems resulting from the proliferation of external terminals are exacerbated by two trends in computer system architecture. First, the sizes of data buses continue to increase to support the faster transfer of data, and the sizes of address buses continue to increase to allow addressing larger capacity system memories. As the sizes of these buses have increased, the number of terminals that the system controller or bus bridge must have to interface with these buses has correspondingly increased. For example, data buses have grown from 16 data bits, to 32 data bits, to currently 64 data bits. Even larger data buses can be expected in the future. 
Second, as mentioned above, there has been a tendency to relocate the interface with peripheral devices closer to the processor to decrease the time required to access the peripheral devices. This trend is illustrated by comparing the computer system 10 of FIG. 1 with the computer system 70 of FIG. 2. However, as this trend continues, the system controller must interface with additional buses, as also exemplified by the computer system 70 of FIG. 2. Both of these trends have increased the number of external terminals that the system controller must include and, as a result, have increased the resulting problems. There is therefore a need to reduce the number of external terminals on the system controllers of computer systems despite industry trends tending to increase the number of such external terminals.
SUMMARY OF THE INVENTION
A computer system includes a processor communicating through a processor bus, a system read/write memory communicating through a system memory bus, and an addressable device, such as a read only memory, capable of outputting data responsive to an address. The addressable device communicates through a plurality of buses at least one of which is coupled to the processor bus. A bus bridge or system controller is coupled to the processor bus, the system memory bus, and at least one of the buses of the addressable device. The bus bridge is structured to permit the processor to communicate with each of the system read/write memory and the addressable device.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a conventional computer system in which a BIOS ROM is coupled to a processor through two I/O buses, a bus bridge, and a system controller. FIG. 2 is a block diagram of a conventional computer system having a more modern architecture in which a BIOS ROM is coupled to a processor through a system controller. FIG. 3 is a block diagram of a computer system in accordance with one embodiment of the invention. FIG. 
4 is a flow chart showing the initialization operation of the computer system of FIG. 3.
DETAILED DESCRIPTION OF THE INVENTION
A computer system 120 in accordance with one embodiment of the invention is illustrated in FIG. 3. The computer system 120 includes a processor 122 of conventional design, such as a Pentium(R) or Pentium II(R) microprocessor. The processor 122 is coupled to a processor bus 124, which includes a processor address bus 126 and processor data and control buses 128. The processor address bus 126 and the processor data and control buses 128 are coupled to a processor interface 129 in a system controller 130. The system controller 130 includes a system memory controller 132 that is coupled to a system memory 134 through a system memory bus 138. The system controller 130 also includes a PCI bus controller 140 that is coupled to various PCI devices 144 through a PCI bus 146. Finally, the system controller 130 includes an IDE controller 150 that is coupled to an IDE data bus 152 and an IDE control bus 156. Coupled to the buses 152, 156 are a BIOS ROM 160 and first and second IDE devices 162, 164. In contrast to conventional practice exemplified by the computer system 70 of FIG. 2, an address of the BIOS ROM 160 is not coupled to the IDE controller 150 and applied to the BIOS ROM 160 through the IDE data bus 152. Instead, the processor address bus 126 is coupled to the address bus port of the BIOS ROM 160 through a separate ROM address bus 170. The BIOS ROM 160 is selectively enabled by a chip select ("CS") signal applied to the BIOS ROM 160 from the IDE controller 150 through line 178. In operation, the processor 122 writes data to and reads data from the system memory 134 in a conventional manner through the system memory controller 132 in the system controller 130 and through the memory bus 138. 
Similarly, the processor 122 interfaces with I/O devices, such as the PCI device 144, in a conventional manner through the PCI controller 140 in the system controller 130 and through the PCI bus 146. Finally, the processor 122 interfaces with the IDE devices 162, 164 in a conventional manner through the IDE controller 150 in the system controller 130 and the IDE data bus 152 and the IDE control bus 156. What is not conventional is the manner in which the processor 122 interfaces with the BIOS ROM 160. The processor 122 reads instructions from the BIOS ROM 160 by first applying the address where the instruction is stored to the ROM 160 through the processor address bus 126 and the ROM address bus 170. When the BIOS ROM 160 is enabled by a chip select signal coupled through the line 178, the instruction is coupled from the BIOS ROM 160 to the processor 122 through the IDE data bus 152, the IDE controller 150, the processor interface 129 and the processor data bus 128.

One advantage of the computer system 120 of FIG. 3 is that the system controller 130 need not include the large number of external terminals that would be required to couple the address bus of the BIOS ROM 160 to the system controller 130. Furthermore, circuitry for multiplexing the IDE data bus 152 to the data bus port and the address bus port of the BIOS ROM 160 is not required.

The operation of the computer system 120 of FIG. 3 during initialization is illustrated in FIG. 4. The IDE controller 150 (FIG. 3) waits for an address from the processor 122 at 200. When an address is received from the processor 122, a determination is made at 202 whether the received address is in the address space of either the IDE device 162 or the IDE device 164.
If so, a chip select ("CS") signal for the appropriate IDE device 162, 164 is asserted at 204, and the IDE bus cycle is run at 206.

If a determination is made at 202 that the received address is not in the address space of either the IDE device 162 or the IDE device 164, then a check is made at 210 to determine if the address is in the address space of the BIOS ROM 160. If not, the program returns to 200 to wait for another address from the processor. If a determination is made at 210 that the address is in the address space of the BIOS ROM 160, the IDE controller 150 applies a chip select signal to the BIOS ROM 160 at 214. The IDE bus cycle is then run at 206 to transfer the instruction from the BIOS ROM 160 to the processor 122. The above sequence is repeated each time an instruction is transferred from the BIOS ROM 160 to the processor 122.

It will be appreciated that, although a specific embodiment of the invention has been described for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Those skilled in the art will appreciate that many of the advantages associated with the circuits and processes described above may be provided by other circuit configurations and processes. For example, although the BIOS ROM 160 has been described in FIG. 3 as being coupled to an IDE bus, it will be understood that the principles exemplified by this architecture exist for other bus systems, such as a PCI bus, in addition to IDE and Enhanced IDE ("EIDE") bus systems. Further, although the system 120 shown in FIG. 3 includes a BIOS ROM coupled to the processor in accordance with one embodiment of the invention, it will be understood that ROMs containing other information or other components addressable by the processor may be coupled to the processor in the same or similar manners.
Also, although the BIOS ROM is shown with its address bus coupled directly to the address bus of the processor and its data bus coupled to the data bus of the processor through the system controller, it will be understood that the data bus of the BIOS ROM or other device may be coupled directly to the data bus of the processor and the address bus or other buses of the BIOS ROM or other device may be coupled to the processor through a system controller or other device. Finally, although the BIOS ROM is shown as being coupled to a system controller that is coupled to the processor bus, it will be understood that it may be coupled to other bus bridge devices in the same or a similar manner, or even to bus bridge devices, such as a PCI/ISA bus bridge, that are coupled to the processor through a system controller or other bus bridge. Accordingly, the invention is not limited by the particular disclosure above, but instead the scope of the invention is determined by the following claims.
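The address-decoding sequence of the FIG. 4 flow chart described above can be sketched in a few lines of code. The address windows below are purely hypothetical placeholders; the patent does not specify a memory map:

```python
# Hypothetical address windows; real values depend on the system memory map.
IDE0_WINDOW = (0x01F0, 0x01F8)
IDE1_WINDOW = (0x0170, 0x0178)
BIOS_ROM_WINDOW = (0xFFFF0000, 0xFFFFFFFF)

def chip_select(address):
    """Mirror of the FIG. 4 flow: return the chip-select line to assert
    for a processor address, or None to keep waiting (step 200)."""
    if IDE0_WINDOW[0] <= address < IDE0_WINDOW[1]:
        return "CS_IDE0"        # step 204, then run the IDE bus cycle (206)
    if IDE1_WINDOW[0] <= address < IDE1_WINDOW[1]:
        return "CS_IDE1"        # step 204, then run the IDE bus cycle (206)
    if BIOS_ROM_WINDOW[0] <= address <= BIOS_ROM_WINDOW[1]:
        return "CS_BIOS_ROM"    # step 214, then run the IDE bus cycle (206)
    return None                 # step 210 "no": wait for another address
```

The sketch captures only the decision structure of the flow chart; the actual IDE controller performs this decode in hardware on each bus cycle.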
Various embodiments of methods and systems for dynamically adjusting a peak dynamic power threshold are disclosed. Advantageously, embodiments of the solution for peak dynamic power management optimize a peak dynamic power threshold based on estimations of real-time leakage current levels and/or actual power supply levels to a power domain of a system on a chip ("SoC"). In this way, embodiments of the solution ensure that a maximum amount of available power supply is allocated to dynamic power consumption for processing workloads at an optimum performance or quality of service ("QoS") level without risking that the total power consumption (leakage power consumption + dynamic power consumption) for the power domain exceeds the power supply capacity. |
CLAIMS

What is claimed is:

1. A method for managing power consumption in a power domain of a portable computing device ("PCD"), the method comprising: setting a peak dynamic power threshold to an initial level; monitoring temperature of one or more processing components of a system on a chip ("SoC"); monitoring voltage levels supplied to the one or more processing components; calculating an optimum level for the peak dynamic power threshold, wherein the optimum level is based on the leakage power level calculated from the monitored temperature and voltage levels associated with the one or more processing components; determining that the optimum level of the peak dynamic power threshold differs from the initial level of the peak dynamic power threshold; adjusting the peak dynamic power threshold to the optimum level; and based on the adjusted peak dynamic power threshold, triggering adjustments to a throttling level for one or more of the processing components.

2. The method of claim 1, further comprising monitoring one or more power level parameters indicative of an actual power level supply to the power domain, wherein: calculating an optimum level for the peak dynamic power threshold further comprises basing the calculation on the actual power supply level to the power domain.

3. The method of claim 2, wherein the one or more power level parameters are selected from the group consisting of input voltage to a switching mode power supply, power source type, number of active switching mode power supplies, output voltage of a switching mode power supply, temperature associated with a power management integrated circuit and a combination thereof.

4. The method of claim 1, wherein the one or more processing components are selected from the group consisting of a graphical processing unit ("GPU"), a camera subsystem, a central processing unit ("CPU"), a modem or a combination thereof.

5.
The method of claim 1, wherein the adjustments to a throttling level comprise one of reducing operating frequency or reducing execution throughput.

6. The method of claim 1, wherein the adjustments to a throttling level comprise one of increasing operating frequency or increasing execution throughput.

7. The method of claim 1, wherein the adjustments to a throttling level comprise reducing operating frequency and voltage supplied to the power domain.

8. The method of claim 1, wherein the PCD is a mobile communication device.

9. A computer system for managing power consumption in a power domain of a portable computing device ("PCD"), the system comprising: a peak dynamic power ("PDP") module operable to perform the following: set a peak dynamic power threshold to an initial level; monitor temperature of one or more processing components of a system on a chip ("SoC"); monitor voltage levels supplied to the one or more processing components; calculate an optimum level for the peak dynamic power threshold, wherein the optimum level is based on the leakage power level calculated from the monitored temperature and voltage levels associated with the one or more processing components; determine that the optimum level of the peak dynamic power threshold differs from the initial level of the peak dynamic power threshold; adjust the peak dynamic power threshold to the optimum level; and based on the adjusted peak dynamic power threshold, trigger adjustments to a throttling level for one or more of the processing components.

10. The computer system of claim 9, wherein the PDP module is further operable to: monitor one or more power level parameters indicative of an actual power level supply to the power domain; and calculate the optimum level for the peak dynamic power threshold further based on the actual power supply level to the power domain.

11.
The computer system of claim 10, wherein the one or more power level parameters are selected from the group consisting of input voltage to a switching mode power supply, power source type, number of active switching mode power supplies, output voltage of a switching mode power supply, temperature associated with a power management integrated circuit and a combination thereof.

12. The computer system of claim 9, wherein the one or more processing components are selected from the group consisting of a graphical processing unit ("GPU"), a camera subsystem, a central processing unit ("CPU"), a modem or a combination thereof.

13. The computer system of claim 9, wherein the adjustments to a throttling level comprise one of reducing operating frequency or reducing execution throughput.

14. The computer system of claim 9, wherein the adjustments to a throttling level comprise one of increasing operating frequency or increasing execution throughput.

15. The computer system of claim 9, wherein the adjustments to a throttling level comprise reducing operating frequency and voltage supplied to the power domain.

16. The computer system of claim 9, wherein the PCD is a mobile telephone.

17.
A computer system for managing power consumption in a power domain of a portable computing device ("PCD"), the system comprising: means for setting a peak dynamic power threshold to an initial level; means for monitoring temperature of one or more processing components of a system on a chip ("SoC"); means for monitoring voltage levels supplied to the one or more processing components; means for calculating an optimum level for the peak dynamic power threshold, wherein the optimum level is based on the leakage power level calculated from the monitored temperature and voltage levels associated with the one or more processing components; means for determining that the optimum level of the peak dynamic power threshold differs from the initial level of the peak dynamic power threshold; means for adjusting the peak dynamic power threshold to the optimum level; and means for triggering adjustments to a throttling level for one or more of the processing components based on the adjusted peak dynamic power threshold.

18. The computer system of claim 17, further comprising means for monitoring one or more power level parameters indicative of an actual power level supply to the power domain, wherein: calculating an optimum level for the peak dynamic power threshold further comprises basing the calculation on the actual power supply level to the power domain.

19. The computer system of claim 18, wherein the one or more power level parameters are selected from the group consisting of input voltage to a switching mode power supply, power source type, number of active switching mode power supplies, output voltage of a switching mode power supply, temperature associated with a power management integrated circuit and a combination thereof.

20. The computer system of claim 17, wherein the one or more processing components are selected from the group consisting of a graphical processing unit ("GPU"), a camera subsystem, a central processing unit ("CPU"), a modem and a combination thereof.

21.
The computer system of claim 17, wherein the adjustments to a throttling level comprise one of reducing operating frequency or reducing execution throughput.

22. The computer system of claim 17, wherein the adjustments to a throttling level comprise one of increasing operating frequency or increasing execution throughput.

23. The computer system of claim 17, wherein the adjustments to a throttling level comprise reducing operating frequency and voltage supplied to the power domain.

24. A non-transitory computer-readable programmable medium operable to cause a processor in a portable computing device to implement a method for managing power consumption in a power domain of the portable computing device ("PCD"), said method comprising: setting a peak dynamic power threshold to an initial level; monitoring temperature of the one or more processing components of a system on a chip ("SoC"); monitoring voltage levels supplied to the one or more processing components; calculating an optimum level for the peak dynamic power threshold, wherein the optimum level is based on the leakage power level calculated from the monitored temperature and voltage levels associated with the one or more processing components; determining that the optimum level of the peak dynamic power threshold differs from the initial level of the peak dynamic power threshold; adjusting the peak dynamic power threshold to the optimum level; and based on the adjusted peak dynamic power threshold, triggering adjustments to a throttling level for one or more of the processing components.

25. The non-transitory computer-readable programmable medium of claim 24, further comprising monitoring one or more power level parameters indicative of an actual power level supply to the power domain, wherein: calculating an optimum level for the peak dynamic power threshold further comprises basing the calculation on the actual power supply level to the power domain.

26.
The non-transitory computer-readable programmable medium of claim 25, wherein the one or more power level parameters are selected from the group consisting of input voltage to a switching mode power supply, power source type, number of active switching mode power supplies, output voltage of a switching mode power supply, temperature associated with a power management integrated circuit and a combination thereof.

27. The non-transitory computer-readable programmable medium of claim 24, wherein the one or more processing components are selected from the group consisting of a graphical processing unit ("GPU"), a camera subsystem, a central processing unit ("CPU"), a modem and a combination thereof.

28. The non-transitory computer-readable programmable medium of claim 24, wherein the adjustments to a throttling level are selected from the group consisting of reducing operating frequency, reducing execution throughput and a combination thereof.

29. The non-transitory computer-readable programmable medium of claim 24, wherein the adjustments to a throttling level are selected from the group consisting of increasing operating frequency, increasing execution throughput and a combination thereof.

30. The non-transitory computer-readable programmable medium of claim 24, wherein the adjustments to a throttling level are selected from the group consisting of reducing operating frequency, reducing voltage supplied to the power domain and a combination thereof.
SYSTEM AND METHOD FOR PEAK DYNAMIC POWER MANAGEMENT IN A PORTABLE COMPUTING DEVICE

DESCRIPTION OF THE RELATED ART

[0001] Portable computing devices ("PCDs") are powerful devices that are becoming necessities for people on personal and professional levels. Examples of PCDs may include cellular telephones, portable digital assistants ("PDAs"), portable game consoles, palmtop computers, and other portable electronic devices. As users have become more and more reliant on PCDs, demand has increased for more and better functionality. Simultaneously, users have also expected that the quality of service ("QoS") and overall user experience not suffer due to the addition of more and better functionality.

[0002] Generally, providing more and better functionality in a PCD drives designers to use larger, more robust power management integrated circuits ("PMICs") and/or larger batteries capable of delivering more mA-Hr of battery capacity. Batteries and PMICs may be sized for "worst case" scenarios of power consumption in the PCD. However, the trend in PCD design is for smaller form factors that often preclude the inclusion of a larger battery or more robust PMIC. Moreover, because the mA-Hr density of available battery technology has stagnated, the inclusion of a higher power density battery in a given size is no longer the answer to support the additional functionality. Rather, to accommodate the additional functionality in today's PCDs, without oversizing the PMIC and battery, the limited amount of available power supply must be managed such that it is leveraged efficiently and user experience is optimized.

[0003] Power consuming components on a typical system on a chip ("SoC"), such as processing components, draw power from a power rail that is supplied by a PMIC and regulated by a voltage regulator.
If the processing components request an increase in power supply that causes a current threshold for the voltage regulator to be exceeded, then actions must be taken to avoid exceeding the current threshold. For example, the workload and/or the clock frequency setting of one or more processing components may be reduced in an effort to bring the current of the power supply down to a suitable level to avoid performance degradation and/or outright device failure.

[0004] Because the current threshold is dictated by the sum of the leakage current and the dynamic current being consumed by the SoC, where the leakage current is a function of the temperature of the processing components on the SoC and the dynamic current is a function of the workload being processed by the processing components, power reduction measures may be avoided by optimizing an allocation of the power supply to dynamic current consumption. Therefore, there is a need in the art for a system and method that adjusts a dynamic power budget threshold in view of the actual leakage power consumption. Moreover, there is a need in the art for a system and method that manages a current supply from a PMIC such that user experience is optimized without exceeding a peak current threshold.

SUMMARY OF THE DISCLOSURE

[0005] Various embodiments of methods and systems for dynamically adjusting a peak dynamic power threshold are disclosed. Advantageously, embodiments of the solution for peak dynamic power management optimize a peak dynamic power threshold based on estimations of real-time leakage current levels and/or actual power supply levels to a power domain of a system on a chip ("SoC").
In this way, embodiments of the solution ensure that a maximum amount of available power supply is allocated to dynamic power consumption for processing workloads without risking that the total power consumption (leakage power consumption + dynamic power consumption) for the power domain exceeds the power supply capacity.

[0006] An exemplary method for managing power consumption in a power domain of a portable computing device ("PCD") begins by setting a peak dynamic power threshold to an initial level. The peak dynamic power threshold determines an allocation of power supplied to a power domain of a SoC for workload processing. The power domain may comprise one or more processing components that consume power, as would be understood by one of ordinary skill in the art. The exemplary method then monitors the operating temperatures of the one or more processing components, as well as voltage levels supplied to the one or more processing components. With the monitored operating temperatures and active voltage levels, the method may then calculate an optimum level for the peak dynamic power threshold based on an estimated leakage power level calculated from the monitored temperature and voltage levels associated with the one or more processing components. Notably, certain embodiments may also monitor parameters indicative of the actual power supply level and further consider the actual power supply level in the calculation of the optimum peak dynamic power threshold. Once the optimum level for the peak dynamic power threshold is calculated, it may be compared to the set level of the threshold and, if different, the peak dynamic power threshold may be adjusted to the optimum level. The adjusted threshold may then be used to trigger adjustments to throttling levels for one or more of the processing components.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated.
For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures. Further, the use of a lower case "n" as a letter character designation is meant to indicate that any number of parts having the same reference numeral may be comprised within a given embodiment of the solution. Even so, the absence of a lower case "n" as a letter character designation will not be construed to suggest that an embodiment of the solution is limited to any specific number of a given part.

[0008] FIG. 1 is a graph illustrating a total current consumption by a system on a chip ("SoC") resulting from a leakage current consumption and a dynamic current consumption;

[0009] FIG. 2 is a graph illustrating certain benefits of a peak dynamic power ("PDP") management methodology in a system on a chip ("SoC") having a maximum power supply that is exceeded by the sum of a worst case dynamic power consumption level and a worst case leakage power consumption level;

[0010] FIG. 3 is a functional block diagram illustrating an exemplary embodiment of a system for peak dynamic power ("PDP") management to a system on a chip ("SoC") in a portable computing device ("PCD");

[0011] FIG. 4 illustrates an exemplary aspect of a peak dynamic power lookup table that may be used by the exemplary peak dynamic power ("PDP") management system of FIG. 3;

[0012] FIG. 5 is a functional block diagram illustrating further detail for certain aspects of the exemplary peak dynamic power ("PDP") management system of FIG. 3;

[0013] FIG. 6 is a logical flowchart illustrating a method for peak dynamic power ("PDP") management to a system on a chip ("SoC") in a portable computing device ("PCD");

[0014] FIG.
7 is a functional block diagram of an exemplary, non-limiting aspect of a portable computing device ("PCD") in the form of a wireless telephone for implementing methods and systems for peak dynamic power ("PDP") management; and

[0015] FIG. 8 is a schematic diagram illustrating an exemplary software architecture of the portable computing device ("PCD") of FIG. 7 for supporting application of algorithms associated with peak dynamic power management techniques.

DETAILED DESCRIPTION

[0016] The word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred or advantageous over other aspects.

[0017] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0018] As used in this description, the terms "component," "database," "module," "system," "processing component," "estimator," "calculator," "limiter," "regulator" and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).

[0019] In this description, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," and "chip" are used interchangeably. Moreover, a CPU, DSP, or a chip may be comprised of one or more distinct processing components generally referred to herein as "core(s)."

[0020] In this description, the terms "workload," "process load" and "process workload" are used interchangeably and are generally directed toward the processing burden, or percentage of processing burden, associated with a given processing component or functional block in a given embodiment. Further to that which is defined above, a "processing component" or "functional block" consumes power to process a workload and may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, a modem, a camera subsystem, etc., or any component residing within, or external to, an integrated circuit within a portable computing device.

[0021] In this description, the terms "peak current management," "current management," "peak power management," "power management" and the like generally refer to measures and/or techniques for optimizing the use of power supplied from a PMIC and to a SoC.
It is an advantage of various embodiments that the current supply may be managed by peak power management techniques to optimize user experience and provide higher levels of quality of service without violating a peak current threshold associated with a voltage regulator(s).

[0022] In this description, the term "portable computing device" ("PCD") is used to describe any device configured to operate on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") and fourth generation ("4G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, a notebook computer, an ultrabook computer, a tablet personal computer ("PC"), among others.

[0023] Exemplary methods and systems generally referred to herein as including peak dynamic power ("PDP") management module(s) seek to monitor, analyze and manage a power supply in a PCD. A PDP module, perhaps in conjunction with a monitoring module, seeks to monitor and manage a peak dynamic current budget in view of real-time assessments of a power supply level, voltage level(s) and leakage current level(s). In doing so, a PDP module may maximize a power supply allocation to processing components on a SoC for workload processing. A PDP module may also work with a dynamic control and voltage scaling ("DCVS") system to modify a clock frequency or voltage level to one or more processing components such that an overall current demand is adjusted and the peak current level maintained within a dynamic current budget.
It is envisioned that in certain embodiments a PDP module may determine an input to a DCVS module based partly on operating temperatures of processing components controlled by the DCVS module.

[0024] A PDP solution may be either a hardware or software scheme, or a combination thereof, that works to dynamically adjust the peak dynamic power limit and/or the operating frequency limit based on an estimated silicon leakage change and power supply capability change. In doing so, a PDP solution maximizes the amount of power allocated for workload processing while ensuring that the total power consumed does not exceed the power supply capacity.

[0025] FIG. 1 shows a graph 97 illustrating a total current consumption by a system on a chip ("SoC") resulting from a leakage current consumption and a dynamic current consumption. As can be seen in graph 97, the total current level consumed by one or more function blocks on the SoC is the sum of the leakage current consumption and the dynamic current consumption. As one of ordinary skill in the art would understand, the leakage current is a function of the temperature of the function blocks and may change slowly as the temperature(s) rise or fall with thermal energy generation and/or dissipation. By contrast, the dynamic power is a function of the workload(s) being processed by the function blocks. As a workload for a given function block increases, the amount of dynamic current consumed by the functional block must also increase. Similarly, as a workload decreases, so does the dynamic power being consumed by the function block that is processing the workload. Consequently, the amount of dynamic power being consumed is prone to "spikes" as short term workloads are processed by various function blocks on a SoC.

[0026] In many PCDs, the available power capacity is designed for very short, peak power demands.
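The relationship just described, leakage current rising smoothly with temperature while dynamic current tracks workload, can be illustrated with a toy model. The patent does not give a leakage formula; the exponential temperature dependence and all coefficients below are hypothetical illustration only:

```python
import math

def estimate_leakage_ma(temp_c, voltage_v):
    """Toy leakage estimate: exponential in die temperature, roughly
    quadratic in supply voltage. All coefficients are hypothetical."""
    i_ref_ma = 500.0     # leakage at 25 C and 0.9 V (hypothetical)
    temp_coeff = 0.035   # per degree C (hypothetical)
    return (i_ref_ma
            * math.exp(temp_coeff * (temp_c - 25.0))
            * (voltage_v / 0.9) ** 2)

def total_current_ma(leakage_ma, dynamic_ma):
    """Total rail current is the sum of leakage and dynamic draw,
    as depicted in graph 97 of FIG. 1."""
    return leakage_ma + dynamic_ma
```

A hotter die under the same workload therefore leaves less of the fixed current budget for dynamic consumption, which is the motivation for adjusting the peak dynamic power threshold at run time.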
In the graph 97, the x-axis represents time and the y-axis represents power consumption (illustrated as amperes of current consumption). The lower portion of the graph 97 shown in a hatched pattern depicts the portion of the overall power consumption attributable to silicon leakage power. The upper portion of the graph 97 shown above the hatched pattern and beneath the total current trace depicts the portion of the overall power consumption attributable to dynamic power, i.e., the portion of the total power consumption due to clocking and frequency settings of the functional blocks.

[0027] Notably, the leakage power being consumed is independent of the frequency/clocking of the functional blocks. Leakage power consumption changes slowly, whereas dynamic power consumption changes quickly based on workload changes. As can further be seen in the exemplary graph 97, the total current demand may peak at levels above the maximum current supply (illustrated as 12A in the FIG. 1 graph 97) if not managed.

[0028] FIG. 2 shows a diagram 98 illustrating certain benefits of a peak dynamic power ("PDP") management methodology in a system on a chip ("SoC") having a maximum power supply that is exceeded by the sum of a worst case dynamic power consumption level and a worst case leakage power consumption level. At the left side of the diagram 98, a maximum power supply capacity is represented by the span between upper and lower dotted horizontal lines 205U, 205L. As would be understood by one of ordinary skill in the art, the demand for power by function blocks on the SoC must be managed to stay within the maximum power supply capacity.

[0029] Blocks 210 and 215 represent, respectively, a worst case scenario for dynamic power consumption and leakage power consumption for an exemplary SoC. Notably, the sum of the worst case dynamic power and leakage power exceeds the maximum power supply capacity.
As such, PCD designers for the exemplary SoC must manage the power demand on the SoC such that the demand does not exceed the maximum power supply.[0030] Blocks 220 and 225 represent a power management technique for ensuring that the total power demand for the exemplary SoC does not exceed the maximum power supply. For the technique represented by blocks 220 and 225, the worst-case leakage power consumption 225 is assumed, leaving the remainder of the power supply (the difference between the maximum power supply and the worst case leakage power consumption 225) as the maximum available power supply for allocation to dynamic power consumption. Consequently, the peak dynamic power threshold is limited to the amount represented by block 220 even if the actual maximum power supply is increased and/or the actual leakage power consumption is less than the worst case 225. Because the peak dynamic power threshold may be overly limited in certain applications, the technique represented by blocks 220 and 225 may not optimize the allocation of available power and thus cause QoS (as measured in terms of processing performance or throughput, for example) to suffer unnecessarily.[0031] Blocks 230 and 235 also represent a power management technique for ensuring that the total power demand for the exemplary SoC does not exceed the maximum power supply. For the technique represented by blocks 230 and 235, the worst case dynamic power consumption 230 is assumed, leaving the remainder of the power supply (the difference between the maximum power supply and the worst case dynamic power consumption 230) as the maximum available power supply for allocation to leakage power consumption. Consequently, the peak leakage power threshold is limited to the amount represented by block 235 even if the actual maximum power supply is increased and/or the actual dynamic power consumption is less than the worst case 230.
Because the peak leakage power threshold may be overly limited in certain applications, the technique represented by blocks 230 and 235 may unnecessarily trigger the application of thermal mitigation techniques in an effort to reduce leakage power consumption and, in doing so, cause QoS (as measured in terms of processing performance or throughput, for example) to suffer unnecessarily.[0032] Blocks 240 and 245 represent the application of an exemplary method for peak dynamic power ("PDP") management according to an embodiment of the solution proposed herein. As can be seen from the relationship between blocks 240 and 245, a PDP management solution works to optimize the amount of power allocated to dynamic power consumption by varying the peak dynamic power threshold in view of changes in leakage power consumption. Additionally, some embodiments of a PDP management solution may also take into consideration variations in the actual amount of power supply available. Notably, as the leakage power consumption 245 trends downward in the FIG. 2 illustration from 245A to 245n, the amount of power supply allocated to the dynamic power consumption 240 trends upward from 240A to 240n. In this way, a PDP management methodology may ensure that the maximum amount of available power supply is allocated to dynamic power, thereby optimizing the ability to process workloads and maintain a high QoS level (as measured in terms of processing performance or throughput, for example).[0033] FIG. 3 is a functional block diagram illustrating an exemplary embodiment of a system 99A for peak dynamic power ("PDP") management to a system on a chip ("SoC") 102 in a portable computing device ("PCD") 100. As can be seen in the exemplary illustration of FIG. 3, a power management integrated circuit ("PMIC") 180 is configured to supply power to each of one or more exemplary processing components or function blocks residing within the SoC 102.
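The contrast between the fixed worst-case allocation of blocks 220/225 and the adaptive PDP allocation of blocks 240/245 described above can be sketched as follows; the wattages are placeholders, not values from the disclosure:

```python
MAX_SUPPLY_W = 10.0          # illustrative maximum power supply capacity
WORST_CASE_LEAKAGE_W = 4.0   # illustrative worst-case leakage (block 225)

def static_dynamic_budget():
    # Blocks 220/225: always reserve worst-case leakage, so the peak
    # dynamic threshold never rises even when actual leakage is lower.
    return MAX_SUPPLY_W - WORST_CASE_LEAKAGE_W

def pdp_dynamic_budget(actual_leakage_w):
    # Blocks 240/245: track actual leakage, so the peak dynamic
    # threshold grows as leakage falls (245A -> 245n).
    return MAX_SUPPLY_W - actual_leakage_w

# As leakage trends downward, the dynamic allocation trends upward.
budgets = [pdp_dynamic_budget(l) for l in (4.0, 3.0, 2.0)]
```

The static scheme is stuck at its initial budget, while the PDP budgets rise monotonically as measured leakage falls, reclaiming headroom for workload processing.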
As depicted, the power is sourced from a power supply 188 (such as a battery or an AC power source) and distributed by the PMIC 180 to the SoC 102 through a voltage regulator 189 and via a number of dedicated sets of power rails 190 (only one set being shown in FIG. 3). Notably, each of cores 0, 1, 2 and 3 of function block 1 (such as may be the case for a CPU 110 or a GPU 182 (shown in FIG. 5 and discussed below)) may have its own dedicated power rail 190, as would be understood by one of ordinary skill in the art. Moreover, one of ordinary skill in the art will recognize that any core, sub-core, sub-unit or the like within a processing component may share a common power rail with complementary components or have a dedicated power rail 190 and, as such, the particular architecture illustrated in FIG. 3 is exemplary in nature and will not limit the scope of the disclosure.[0034] Returning to the FIG. 3 illustration, one or more temperature sensors 157A are configured to sense operating temperatures (such as junction temperatures) associated with the various function blocks and generate signals, to monitor module 114, indicative of those temperatures. The monitor module 114 may monitor the temperature signals and provide them to the peak dynamic power ("PDP") module 101 that, in turn, may use the temperature readings to query a lookup table 24 and determine an active leakage current associated with each of the function blocks. Notably, it is envisioned that certain embodiments of the solution may use current sensors in an effort to monitor the power rails 190. The current sensors may be of a type such as, but not limited to, a Hall effect type for measuring the electromagnetic field generated by current flowing through a power rail 190, a shunt resistor current measurement type for calculating current from voltage drop measured across a resistor in a power rail 190, or any type known to one of ordinary skill in the art.
As such, while the particular design, type or configuration of a sensor 157 that may be used in an embodiment of the systems and methods may be novel in and of itself, the systems and methods are not limited to any particular type of sensor 157. Essentially, the sensors 157, regardless of type or location, may be used by a given embodiment of the solution to deduce leakage power consumption associated with one or more function blocks and/or power supply levels associated with PMIC 180 via voltage regulator(s) 189.[0035] As described above, monitor module 114 may monitor and receive the signals generated by the sensor(s) 157 to indicate actual, near real-time leakage power consumption of the function blocks and actual, near real-time power supply levels from the PMIC 180. Notably, although the monitor module 114 and PDP module 101 are depicted in the FIG. 3 illustration as residing on the SoC 102, one of ordinary skill in the art will recognize that either or both may reside off chip 102 in certain embodiments. Moreover, one of ordinary skill in the art will recognize that, in some embodiments of a PCD 100, the monitor module 114 and/or certain sensors 157 may be included in the PMIC 180.[0036] As one of ordinary skill in the art will recognize, embodiments of the PDP module 101 and/or monitor module 114 may include hardware and/or software interrupts handled by an interrupt service routine. That is, depending on the embodiment, a PDP module 101 and/or monitor module 114 may be implemented in hardware as a distinct system with control outputs, such as an interrupt controller circuit, or implemented in software, such as firmware integrated into a memory subsystem.[0037] Returning to the FIG. 3 illustration, the monitor module 114 monitors a signal from one or more temperature sensors 157A to track leakage power consumption levels of active components associated with the various rails.
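The temperature-driven leakage lookup described in paragraph [0034] might be sketched as follows; the table contents and the round-up rule are assumptions for illustration, not values from the disclosure:

```python
# Hypothetical per-function-block leakage table keyed by junction
# temperature in degrees C; the milliamp values are invented.
LEAKAGE_TABLE_MA = {40: 150, 60: 240, 80: 390, 100: 620}

def active_leakage_ma(temp_c):
    # Use the nearest tabulated temperature at or above the sensor
    # reading, erring on the high-leakage (conservative) side.
    for t in sorted(LEAKAGE_TABLE_MA):
        if temp_c <= t:
            return LEAKAGE_TABLE_MA[t]
    # Above the table range, clamp to the hottest entry.
    return LEAKAGE_TABLE_MA[max(LEAKAGE_TABLE_MA)]
```

A real implementation would likely key the table per function block and interpolate, but the round-up rule keeps the estimate conservative in this sketch.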
In addition to the temperature sensors 157A, monitor module 114 may also monitor sensors 157B (not shown) associated with the PMIC 180 to recognize parameters useful for determining an actual provided power supply level. The monitor module 114 may subsequently communicate with the PDP module 101 to relay the monitored data indicative of active leakage power consumption of functional blocks residing on the SoC 102 and actual power supply levels available from the PMIC 180. Advantageously, the PDP module 101 may use the monitored data to determine an actual available power supply for allocation to dynamic power consumption by the various function blocks and then adjust a peak dynamic current threshold based on the determination. An adjusted peak dynamic current threshold may be used to trigger a dynamic control and voltage scaling ("DCVS") module 26 to throttle the function blocks to optimal workload processing levels, as would be understood by one of ordinary skill in the art of dynamic control and voltage scaling of processing components. Through application of the throttling adjustments by the DCVS module 26, the PDP module 101 may effectively optimize user experience by maintaining current consumption of the function block(s) beneath a dynamic and optimized peak current threshold.[0038] FIG. 4 illustrates an exemplary aspect of a peak dynamic power lookup table 24 that may be used by the exemplary peak dynamic power ("PDP") management system 99A of FIG. 3. As described relative to FIG. 3 and reiterated at the top of FIG. 4, the PDP module 101 receives inputs, from the monitor module 114, that may be used to determine a real-time dynamic power budget that includes an active leakage power consumption and dynamic power allocation.
Using the dynamic power allocation, the PDP module 101 may adjust a peak current threshold such that workload capacities for the various function blocks may be optimized.[0039] Using the dynamic power budget calculation that is derived from an estimate of the actual leakage power levels, the PDP module 101 may query a lookup table 24 to determine threshold settings for the various function blocks. FIG. 5 is a functional block diagram illustrating further detail for certain aspects of the exemplary peak dynamic power ("PDP") management system 99 of FIG. 3. With reference to FIG. 4 and FIG. 5, for the exemplary table, suppose that the PDP module 101 has determined that a peak power threshold for a GPU 182 is between 4 W and 5 W (notably, in certain embodiments it may have determined that the peak current threshold is within a certain range - as one of ordinary skill in the art would understand, references to power and current may be interchangeable within the context of the solutions described herein). Using the table, the PDP module 101 may set various function blocks of the GPU 182 (such as a shader processor, texture processor, etc.) to optimum workload processing levels. For example, the PDP module 101 may cause function block #1 to be set to a workload processing level of 10 frames per second, function block #2 to be set to 20 frames per second and so on. As another example, the PDP module 101 may cause function block #1 to be set to a workload processing level based on millions of instructions per second ("MIPS") if the function block #1 were a core associated with a CPU 110.[0040] The FIG. 5 illustration includes three main components of the system 99 - the PMIC 180, the PDP module 101 and a power domain (e.g. GPU 182). As described above, the PMIC 180 supplies power to the power domain which resides on the SoC 102.
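The table query of paragraph [0039] can be sketched as below. The band boundaries and frame-rate settings are hypothetical, loosely echoing the 10/20 frames-per-second example; the real table 24 would be device-specific:

```python
# Hypothetical rendering of lookup table 24: each row maps a peak
# power band (watts) to per-function-block workload settings
# (frames per second, per the GPU example in the description).
THRESHOLD_TABLE = [
    ((3.0, 4.0), {"fb1_fps": 5, "fb2_fps": 10}),
    ((4.0, 5.0), {"fb1_fps": 10, "fb2_fps": 20}),
    ((5.0, 6.0), {"fb1_fps": 15, "fb2_fps": 30}),
]

def settings_for_threshold(watts):
    # Return the workload settings for the band containing the
    # determined peak power threshold.
    for (lo, hi), settings in THRESHOLD_TABLE:
        if lo <= watts < hi:
            return settings
    raise ValueError("peak power threshold outside table range")
```

For a CPU core the second column would instead hold MIPS-style settings, but the band-to-settings lookup is the same.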
And, the PDP module 101 adjusts the peak current threshold for that power supply in order to optimize the amount of power allocated to the power domain for workload processing (i.e., to optimize the dynamic power budget).[0041] As shown in FIG. 5, a power source (shown as battery 188A or AC adapter 188B) may be in the form of a battery 188A or an AC adapter 188B, as would be understood by one of ordinary skill in the art. The PMIC 180 supplies the power to the power domain on the SoC 102 via one or more switching mode power supplies (voltage regulators) shown as SMPS 189A and SMPS 189B. Notably, the power supplied via the switching mode power supplies ("SMPS") 189 to the power domain may vary depending on any number of factors including, but not limited to, SMPS input voltage, power source type, number of SMPS allocated to the power domain, temperature of the PMIC 180, output voltage of the various SMPS, etc. As such, the PMIC 180 may include a power supply capability estimator 181 that works with the monitor module 114 (not shown) or includes its own monitor module to monitor the factors and estimate an actual power supply level to the power domain based on the factor readings.[0042] The power supply capability estimator 181 may indicate the maximum power supply level (or maximum current level, as the case may be) coming out of the PMIC 180 to the PDP module 101.[0043] With reference still to FIG. 5, the maximum power supply level is indicated to a dynamic power budget calculator 184. The dynamic power budget calculator 184 may also receive an active leakage power level estimate from a leakage power estimator 185, which may have estimated the active leakage power level based on active voltage inputs from the DCVS module 26, temperature inputs from the temperature sensors 157C and queried IDDQ specification leakages from the eFuse 186. The eFuse 186 may exist in ROM 112, as would be understood by one of ordinary skill in the art.
Further, and as would be understood by one of ordinary skill in the art, the temperature readings and voltage levels may be used to determine an expected IDDQ leakage current level [leakage = IDDQ × e^(m×(V − Vref) + n×(Tj − Tref))].[0044] Returning to the dynamic power budget calculator 184, it may calculate the amount of the actual power supply that may be allocated to dynamic power consumption [Imax − P_leakage = P_remain]. The dynamic power allocation is then provided to the operating frequency limiter 183 and the peak power demand ("PPD") threshold controller 179 which, in turn, may adjust the PPD threshold settings utilized by the performance regulator 187 of the power domain to govern its workload utilization.[0045] The operating frequency limiter 183 may adjust the maximum frequency and bin step up limits based on the estimated actual power supply level and indicate as much to the DCVS module 26. The DCVS module 26 may, in turn, modulate the frequency of the power domain. Moreover, in the event that the amount of workload of the power domain exceeds the dynamic power budget set by the PPD threshold controller 179, a trigger signal may be provided back to the DCVS module 26 to reduce voltage in addition to frequency. In doing so, the power domain may be able to operate at a lower voltage for a drastically reduced frequency.[0046] FIG. 6 is a logical flowchart illustrating a method 600 for peak dynamic power ("PDP") management to a system on a chip ("SoC") 102 in a portable computing device ("PCD") 100. The method 600 begins at block 605 with the setting of an initial dynamic power budget threshold ("ST"). The dynamic power budget threshold may be associated with any unit of electrical measurement indicative of power consumption such as, but not limited to, watts or amperes. Also, the dynamic power budget threshold may be associated with a power domain on the SoC 102 that includes a single function block or a combination of function blocks.
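Returning briefly to the calculator expressions of paragraphs [0043] and [0044], the leakage estimate and the remaining-budget subtraction can be written out directly. The coefficients m and n and the reference voltage/temperature are placeholders, since the specification gives no numeric values:

```python
import math

def estimated_leakage(iddq, v, t_j, m=2.0, n=0.02, v_ref=0.9, t_ref=25.0):
    # leakage = IDDQ * e^(m*(V - Vref) + n*(Tj - Tref)), per the
    # expression in the description; coefficient values are invented.
    return iddq * math.exp(m * (v - v_ref) + n * (t_j - t_ref))

def remaining_dynamic_budget(supply_max, p_leakage):
    # Imax - P_leakage = P_remain: the share left for dynamic power.
    return supply_max - p_leakage
```

At the reference voltage and temperature the exponent vanishes and the estimate collapses to the fused IDDQ value; hotter junctions or higher rail voltages inflate it exponentially, shrinking the dynamic budget.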
The amount of power provided to the power domain for processing workloads is dictated by the dynamic power budget threshold such that, if the threshold is exceeded, a DCVS module 26 may be triggered to adjust down frequency settings and/or voltage settings of one or more function blocks.[0047] Once the dynamic power budget threshold is set, at block 610, the monitor module 114 and/or the PDP module 101 may monitor the power supply level indicators, the operating temperature(s) associated with the power domain, the voltage being supplied to the power domain, etc. Next, at block 615, based on the power supply level indicators, the PDP module 101 may determine the actual power supply level being provided from the PMIC 180 to the power domain. At block 620, based on the operating temperature(s) of the power domain, the PDP module 101 may estimate the amount of power being consumed by the power domain due to leakage current of the function block(s). Based on the actual power supply level and the estimated leakage current consumption, at block 625 the PDP module 101 may determine a remaining amount of the power supply that may be allocated for dynamic power consumption, i.e. power that may be used for processing workloads. Using the remaining power budget, an optimum dynamic power budget threshold ("OT") may be determined.[0048] Once the OT is determined, at decision block 630 the PDP module 101 may compare the OT to the previously set dynamic power budget threshold ST. If there is no significant difference between the ST and the OT, the "no" branch may be followed back to block 610 and the various parameters monitored further. If, however, the OT differs from the ST, the "yes" branch may be followed to block 635 and the dynamic power budget threshold modified to the calculated OT. The method 600 then proceeds to block 640 and the DCVS module 26 may modify frequency and/or voltage settings to the one or more function blocks based on the new OT power budget threshold. 
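One pass through blocks 605-640 of method 600 might be sketched as follows; the leakage model and the significance margin epsilon are invented for the example:

```python
def pdp_step(st, supply_w, temp_c, leakage_fn, epsilon=0.1):
    # Blocks 615-625: estimate leakage from temperature and derive
    # the optimum dynamic power budget threshold OT.
    ot = supply_w - leakage_fn(temp_c)
    # Decision block 630: keep ST if OT is not significantly different.
    if abs(ot - st) <= epsilon:
        return st, False   # "no" branch: continue monitoring
    # Blocks 635-640: adopt OT; the caller then retunes DCVS settings.
    return ot, True

# Illustrative leakage model: 1 W baseline plus 0.05 W per degree C.
leak = lambda t: 1.0 + 0.05 * t
st, changed = pdp_step(8.0, 12.0, 40.0, leak)
```

Here the recomputed OT of 9 W displaces the initial 8 W threshold; a second pass at the same temperature would take the "no" branch and leave the threshold untouched.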
The method 600 returns and repeats such that the OT may be modified per a subsequently calculated OT in the event that the power supply level changes and/or the leakage current consumption of the power domain changes. In this way, a peak dynamic power management methodology may ensure that a maximum available power supply headroom is allocated to workload processing at any given point in time, thereby optimizing the QoS (as measured in terms of processing performance or throughput, for example) experienced by a user of the PCD 100.[0049] FIG. 7 is a functional block diagram of an exemplary, non-limiting aspect of a portable computing device ("PCD") 100 in the form of a wireless telephone for implementing methods and systems for peak dynamic power ("PDP") management. As shown, the PCD 100 includes SoC 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art. Further, instead of a CPU 110, a digital signal processor ("DSP") may also be employed as understood by one of ordinary skill in the art.[0050] In general, the peak dynamic power ("PDP") module 101, in conjunction with the monitor module 114, may be responsible for monitoring leakage current consumption levels, power supply levels, determining optimum dynamic current budgets, and applying peak current management techniques to help a PCD 100 optimize its power consumption and maintain a high level of functionality.
Further, the PDP module 101 may consider operating temperatures of one or more processing components or function blocks in a power domain when determining an appropriate adjustment to a peak dynamic current threshold.[0051] The monitor module 114 communicates with multiple operational sensors 157 distributed throughout the on-chip system 102 and/or PMIC 180 and with the CPU 110 of the PCD 100 as well as with the PDP module 101. In some embodiments, monitor module 114 may also monitor power sensors 157B for current consumption rates uniquely associated with the cores 222, 224, 230 and transmit the power consumption data to the PDP module 101 and/or a database (which may reside in memory 112). The PDP module 101 may work with the monitor module 114 to determine available dynamic current budgets for processing components residing on the SoC 102 such that a peak current threshold for power through the voltage regulator 189 is adjusted to optimum levels. [0052] As illustrated in FIG. 7, a display controller 128 and a touch screen controller 130 are coupled to the digital signal processor 110. A touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 130. A PDP module 101 may monitor temperatures and voltage levels for the cores 222, 224, 230, for example, and work with the DCVS module 26 to manage power consumed by the cores for workload processing.[0053] PCD 100 may further include a video encoder 134, e.g., a phase-alternating line ("PAL") encoder, a sequential couleur avec memoire ("SECAM") encoder, a national television system(s) committee ("NTSC") encoder or any other type of video encoder 134. The video encoder 134 is coupled to the multi-core central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video encoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG.
7, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module ("SIM") card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 7, a digital camera 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.[0054] As further illustrated in FIG. 7, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 7 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.[0055] FIG. 7 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 7, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 7 also shows that a power supply 188, for example a battery, is coupled to the on-chip system 102 through PMIC 180.
In a particular aspect, the power supply 188 includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source. Power from the PMIC 180 is provided to the chip 102 via a voltage regulator 189.[0056] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157C. The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157C may comprise one or more thermistors. The thermal sensors 157C may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller 103. However, other types of thermal sensors 157A, 157C may be employed without departing from the scope of the invention.[0057] The PDP module(s) 101 may comprise software that is executed by the CPU 110.
However, the PDP module(s) 101 may also be formed from hardware and/or firmware without departing from the scope of the invention.[0058] The touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, the power supply 188, the PMIC 180 and the thermal sensors 157C are external to the on-chip system 102. However, it should be understood that the monitor module 114 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100.[0059] In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more PDP module(s) 101. These instructions that form the PDP module(s) 101 may be executed by the CPU 110, the analog signal processor 126, or another processor, in addition to the ADC controller 103 to perform the methods described herein. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein. [0060] FIG. 8 is a schematic diagram illustrating an exemplary software architecture of the portable computing device ("PCD") 100 of FIG. 7 for supporting application of algorithms associated with peak dynamic power management techniques. Any number of algorithms may form or be part of at least one peak dynamic power management technique that may be applied by the PDP module 101 when certain power supply budgets are determined and certain operating temperatures are recognized in a given power domain.[0061] As illustrated in FIG.
8, the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211. The CPU 110, as noted above, is a multiple-core processor having "n" core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230. As is known to one of ordinary skill in the art, each of the first core 222, the second core 224 and the Nth core 230 are available for supporting a dedicated application or program. Alternatively, one or more applications or programs may be distributed for processing across two or more of the available cores.[0062] The CPU 110 may receive commands from the PDP module(s) 101 that may comprise software and/or hardware. If embodied as software, the PDP module(s) 101 comprises instructions that are executed by the CPU 110 that issues commands to other application programs being executed by the CPU 110 and other processors. For example, the PDP module(s) 101 may instruct CPU 110 to cause a certain active application program to cease so that leakage current consumption by the SoC 102 is maintained at a certain level.[0063] The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies.[0064] Bus 211 may include multiple communication paths via one or more wired or wireless connections, as is known in the art. The bus 211 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications.
Further, the bus 211 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. [0065] When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 8, it should be noted that one or more of startup logic 250, management logic 260, peak dynamic power management interface logic 270, applications in application store 280 and portions of the file system 290 may be stored on any computer-readable device or medium for use by or in connection with any computer-related system or method.[0066] In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that may contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that may fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" may be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.[0067] The computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. 
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CD-ROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.[0068] In an alternative embodiment, where one or more of the startup logic 250, management logic 260 and perhaps the peak dynamic power interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.[0069] The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device.
Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor (or additional processor cores).[0070] In one exemplary embodiment for managing peak dynamic power consumption to optimize user experience and QoS (as measured in terms of processing performance or throughput, for example), the startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for peak dynamic power management. A select program may be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298. The select program, when executed by one or more of the core processors in the CPU 110, may operate in accordance with one or more signals provided by the monitor module 114 in combination with control signals provided by the one or more PDP module(s) 101 and DCVS module(s) 26 to scale or suspend the performance of the respective processor core in an effort to maintain dynamic current consumption by the SoC 102 at an optimal level in view of a dynamic peak current threshold.[0071] The management logic 260 includes one or more executable instructions for terminating a peak dynamic power management program on one or more of the respective processor cores, as well as selectively identifying, loading, and executing a more suitable replacement program for managing or controlling the power draw of one or more of the available cores based on a calculated current budget. The management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device.
A replacement program may be found in the program store 296 of the embedded file system 290. [0072] The replacement program, when executed by one or more of the core processors in the digital signal processor, may operate in accordance with one or more signals provided by the monitor module 114 or one or more signals provided on the respective control inputs of the various processor cores to scale or suspend the performance of the respective processor core. In this regard, the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, current leakage, etc., in response to control signals originating from the PDP module 101. [0073] The interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of, one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to apply a desired throttling algorithm when the calculated dynamic power budget is beneath a certain value. [0074] The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100.
When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280, data in a database or information in the embedded file system 290 may be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280, data in a database and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time. [0075] The embedded file system 290 includes a hierarchically arranged peak dynamic power management store 292. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various parameters 298 and peak dynamic power management algorithms 297 used by the PCD 100. [0076] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", "subsequently", etc. are not intended to limit the order of the steps.
These words are simply used to guide the reader through the description of the exemplary method. [0077] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows. [0078] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. [0079] Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. [0080] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0081] Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
A memory cell design is disclosed. In an embodiment, the memory cell structure includes at least one memory bit layer stacked between top and bottom electrodes. The memory bit layer provides a storage element for a corresponding memory cell. One or more additional conductive layers may be included between the memory bit layer and either, or both, of the top or bottom electrodes to provide better ohmic contact. In any case, a dielectric liner structure is provided on sidewalls of the memory bit layer. The liner structure includes a dielectric layer, and may also include a second dielectric layer on a first dielectric layer. Either or both of the first dielectric layer and the second dielectric layer comprises a high-k dielectric material. As will be appreciated, the dielectric liner structure effectively protects the memory bit layer from lateral erosion and contamination during the etching of subsequent layers beneath the memory bit layer. |
1. A storage device, including: a plurality of conductive bit lines; a plurality of conductive word lines; and a group of memory cells included in a memory cell array, each of the memory cells being located between a corresponding bit line of the plurality of conductive bit lines and a corresponding word line of the plurality of conductive word lines, each of the memory cells including: a layer stack including a memory bit layer, and a dielectric layer on one or more sidewalls of only a portion of the total thickness of the layer stack, such that the dielectric layer is on one or more sidewalls of the memory bit layer. 2. The memory device according to claim 1, wherein the memory cell array is arranged in three dimensions, wherein the memory cells are placed in rows and columns along a plurality of X-Y planes stacked in the Z direction. 3. The memory device of claim 2, wherein the size of one or more of the memory cells in the X direction is larger than the size in the Y direction. 4. The memory device according to claim 1, wherein the layer stack consists of only a first conductive layer, the memory bit layer on the first conductive layer, and a second conductive layer on the memory bit layer, and wherein only the layer stack is present between the corresponding bit line of the plurality of conductive bit lines and the corresponding word line of the plurality of conductive word lines. 5.
The memory device of claim 4, wherein the dielectric layer is on one or more sidewalls of the second conductive layer, and the dielectric layer is on a top surface of the first dielectric layer. 6. The memory device of claim 1, wherein the plurality of conductive bit lines extend orthogonal to the plurality of conductive word lines. 7. The memory device of claim 1, wherein one or more of the memory cells have a height between about 60 nm and about 80 nm. 8. The memory device of claim 1, wherein the dielectric layer includes a high-k material. 9. The memory device according to claim 1, wherein the dielectric layer is a first dielectric layer, and the device further comprises a second dielectric layer on the first dielectric layer, wherein the second dielectric layer is on one or more sidewalls of the total thickness of the layer stack, such that the first dielectric layer is not present between the second dielectric layer and the layer stack in at least one location. 10. The memory device of claim 1, wherein the memory bit layer includes a chalcogenide. 11.
The memory device according to any one of claims 1-10, wherein the plurality of conductive bit lines and the plurality of conductive word lines comprise one or both of tungsten and carbon. 12. An integrated circuit comprising the storage device according to any one of claims 1-10. 13. A printed circuit board comprising the integrated circuit according to claim 12. 14. A memory chip comprising the memory device according to any one of claims 1-10. 15. An electronic device including: a chip package including one or more dies, at least one of the one or more dies including: a layer stack between a word line and a bit line, the layer stack including a memory bit layer, and a dielectric layer on one or more sidewalls of only a portion of the total thickness of the layer stack, such that the dielectric layer is on one or more sidewalls of the memory bit layer. 16. The electronic device according to claim 15, wherein the dielectric layer comprises a high-k material. 17. The electronic device according to claim 15, wherein the dielectric layer is a first dielectric layer, and the device further comprises a second dielectric layer on the first dielectric layer, wherein the second dielectric layer is on one or more sidewalls of the total thickness of the layer stack, such that the first dielectric layer is not present between the second dielectric layer and the layer stack in at least one location. 18. The electronic device according to claim 15, wherein the memory bit layer includes a chalcogenide. 19. The electronic device according to any one of claims 15-18, wherein the layer stack consists of only a first conductive layer, the memory bit layer on the first conductive layer, and a second conductive layer on the memory bit layer, and wherein only the layer stack is present between the word line and the bit line. 20. A method of manufacturing a storage device, including: depositing a conductive layer on a substrate; depositing a layer stack on the conductive layer, the layer stack
including a memory bit layer; etching through only a portion of the total thickness of the layer stack, such that the etching passes through the entire thickness of the memory bit layer; depositing a dielectric layer on at least one or more sidewalls of the memory bit layer; and etching through the remainder of the layer stack and through the thickness of the conductive layer. |
Storage Device with Double-Layer Protective Liner

Background

As electronic devices continue to become smaller and more complex, the need to store more data and to quickly access that data has grown in kind. New memory architectures have been developed that use an array of memory cells made with special materials having a variable bulk resistance, allowing the resistance value to indicate whether a given memory cell stores a logic "0" or a logic "1". There are many challenges in manufacturing such a memory architecture.

Brief Description of the Drawings

Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the accompanying drawings, in which:

Figure 1A shows a cross-sectional view of a portion of a stacked array of memory cells according to some embodiments of the present disclosure.

Figures 1B and 1C show orthogonal cross-sectional views of stacked arrays of memory cells according to some embodiments of the present disclosure.

Figure 2 illustrates a cross-sectional view of a chip package containing one or more memory dies according to some embodiments of the present disclosure.

FIG.
3 shows a cross-sectional view of a portion of a memory device during the manufacturing process of the memory device according to some embodiments of the present disclosure.

FIGS. 4A and 4B show orthogonal cross-sectional views of a state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 5A and 5B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 6A and 6B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 7A and 7B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 8A and 8B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 9A and 9B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 10A and 10B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 11A and 11B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 12A and 12B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 13A and 13B show orthogonal cross-sectional views of another state of the memory device during the manufacturing process according to some embodiments of the present disclosure.

FIGS. 14A and 14B show orthogonal cross-sectional views of the
memory device during the manufacturing process according to some embodiments of the present disclosure.

FIG. 15 is a flowchart of a manufacturing process for a memory device according to an embodiment of the present disclosure.

Figure 16 shows an exemplary electronic device that may include one or more of the embodiments of the present disclosure.

Although the following Detailed Description will proceed with reference to illustrative examples, many alternatives, modifications, and variations thereof will be apparent in light of this disclosure. As will be further appreciated, the figures are not necessarily drawn to scale or intended to limit the present disclosure to the specific configurations shown. For instance, while some figures generally indicate perfectly straight lines, right angles, and smooth surfaces, an actual implementation of an integrated circuit structure may have less-than-perfect straight lines and right angles, and some features may have surface topography or otherwise be non-smooth, given real-world limitations of the processing equipment and techniques used.

Detailed Description

A memory cell design is disclosed. The design is particularly well suited to 3D X-point memory structures, although other memory applications can benefit from it significantly as well. In an embodiment, the memory cell structure includes at least one memory bit layer stacked between top and bottom electrodes. The memory bit layer provides a storage element for the corresponding memory cell. One or more additional conductive layers can be included between the memory bit layer and either or both of the top or bottom electrodes to provide better ohmic contact. In any case, a dielectric liner structure is provided on the sidewalls of the memory bit layer. In an embodiment, the liner structure includes a dielectric layer, and in another embodiment, the liner structure includes a second dielectric layer on a first dielectric layer.
Either or both of the first dielectric layer and the second dielectric layer include a high-k dielectric material. As will be appreciated in light of this disclosure, the dielectric liner structure effectively protects the memory bit layer from lateral erosion and contamination during etching of subsequent layers below the memory bit layer. Numerous configurations and embodiments will be apparent from this disclosure.

General Overview

As mentioned above, there are several non-trivial problems associated with manufacturing memory arrays that rely on changes in the bulk resistance of a memory bit material. For example, in some cases, the memory bit material is included as a layer in a multilayer stack that also includes electrode material layers. The multilayer stack is then etched into an array of smaller individual stacks, each of which can serve as a memory cell in the overall array. A problem that arises during the etching process is that, as the etch continues through the stack, it exposes the various material layers, and if these materials are exposed together in the same etch chamber, some materials may contaminate others. For example, etching through a metal material while the memory bit material is exposed may cause sidewall damage to the memory bit material. In some cases, etching through tungsten while portions of the memory bit material remain exposed results in undesirable lateral etching of the memory bit material, giving the memory bit material a tapered profile. This tapered profile leads to poor electrical characteristics and degrades the performance of the memory cell. Therefore, techniques and designs are provided herein that help eliminate or otherwise reduce this problem.
In an exemplary embodiment, a manufacturing method is provided that uses a dielectric liner layer or structure to protect the memory bit layer before any further etching is performed through the metal layers. The method includes depositing a first conductive layer over a substrate and depositing a first layer stack over the first conductive layer, the first layer stack including a first memory bit layer. The method includes etching through only a portion of the total thickness of the first layer stack, such that the entire thickness of the first memory bit layer is etched through but the underlying metal layer is not etched (or is otherwise minimally etched), and then depositing one or more dielectric layers on at least one or more sidewalls of the memory bit layer. The method also includes etching through the remainder of the first layer stack and through the thickness of the conductive layer after the dielectric liner is in place on the memory bit layer.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful for understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order-dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.

This description uses the phrase "in an embodiment," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. When used to describe a range of dimensions, the phrase "between X and Y" represents a range that includes X and Y.
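The key constraint in the two-step etch just described is depth control: the first etch must clear the full thickness of the memory bit layer while stopping before (or only minimally into) the bottom conductive layer. That bookkeeping can be sketched as follows; the layer names and thickness values are purely illustrative assumptions, not figures from this disclosure:

```python
# Hypothetical layer stack, listed top to bottom with thicknesses in nm.
# Values are illustrative only.
stack = [
    ("top_electrode", 10),
    ("second_conductive", 5),
    ("memory_bit", 25),
    ("first_conductive", 5),
]

def first_etch_depth(layers, stop_after="memory_bit"):
    # Accumulate thickness down to and including the memory bit layer,
    # leaving the bottom conductive layer for the second etch (performed
    # after the dielectric liner protects the memory bit sidewalls).
    depth = 0
    for name, thickness in layers:
        depth += thickness
        if name == stop_after:
            return depth
    raise ValueError(f"{stop_after} not found in stack")

depth = first_etch_depth(stack)               # depth of the partial etch
remaining = sum(t for _, t in stack) - depth  # left for the second etch
```

With these example numbers, the first etch removes 40 nm and the second etch cuts the remaining 5 nm of the first conductive layer.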
For convenience, the phrase "Figure 1" may be used to refer to the collection of drawings of Figures 1A-1B, the phrase "Figure 4" may be used to refer to the collection of drawings of Figures 4A-4B, and so on.

It should be readily understood that the meanings of "over" and "above" in this disclosure are to be interpreted in the broadest manner, such that "over" and "above" not only mean "directly on" something but also include the meaning of being on something with intermediate features or layers in between. Furthermore, the meaning of "on" in the present disclosure should be understood as being directly on something (that is, with no intermediate features or layers in between).

In addition, for ease of description, spatially relative terms such as "below," "beneath," "lower," "above," "upper," and the like may be used herein to describe one element's or feature's relationship to another element or feature as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientations depicted in the figures. The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein may be interpreted accordingly.

As used herein, the term "substrate" refers to a material onto which subsequent layers of material are added. The substrate itself may be patterned. Materials added on top of the substrate may be patterned or may remain unpatterned. Furthermore, the substrate may include a variety of semiconductor materials, such as silicon, germanium, gallium arsenide, indium phosphide, and the like. Alternatively, the substrate may be made of a non-conductive material, such as glass, plastic, or a sapphire wafer.

As used herein, the term "layer" refers to a portion of material that includes a region with a thickness. A monolayer is a layer consisting of a single layer of atoms of a given material.
A layer may extend over the entirety of an underlying or overlying structure, or may have an extent less than the extent of the underlying or overlying structure. Furthermore, a layer may be a region of a homogeneous or inhomogeneous continuous structure, the region having a thickness less than the thickness of the continuous structure. For example, a layer may be located between the top and bottom surfaces of the continuous structure, or between any pair of horizontal planes at the top and bottom surfaces of the continuous structure. A layer may extend horizontally, vertically, and/or along a tapered surface. A layer may conform to a given surface (whether flat or curved) with a relatively uniform thickness throughout the layer. A substrate may be a layer, may include one or more layers therein, and/or may have one or more layers thereupon, thereabove, and/or therebelow.

Memory Array Architecture

FIG. 1A shows a cross-sectional view of a portion 100 of a memory cell array according to an embodiment. According to some embodiments, the portion 100 includes adjacent memory cells 102, each of which includes a material layer stack 108 sandwiched between a particular word line 104 and a particular bit line 106. A potential is applied to the particular word line 104 and the particular bit line 106 to read or program data from or to the memory cell 102 at the intersection of the selected word line 104 and the selected bit line 106. As such, the word lines 104 and the bit lines 106 provide the top and bottom electrodes for the memory cells 102. As depicted in this example, the word lines 104 extend orthogonal to the bit lines 106. The word lines 104 and the bit lines 106 may be made of any conductive material, such as a metal, a metal alloy, or polysilicon.
In some examples, the word lines 104 and the bit lines 106 are made of tungsten, silver, aluminum, gold, carbon, copper, or multilayer structures including such materials (e.g., tungsten and carbon layers). According to an embodiment, each memory cell 102 includes a layer stack 108 having at least one memory bit layer 112. As used herein, the term "memory bit layer" carries the standard meaning of the phrase in the context of memory devices, and in some cases refers to one or more layers that include metalloid alloys. Metalloids include, for example, boron (B), silicon (Si), germanium (Ge), arsenic (As), antimony (Sb), tellurium (Te), selenium (Se), and polonium (Po). In some embodiments, the memory bit layer 112 includes a chalcogenide, including alloys of germanium, arsenic, antimony, and tellurium, such as GeTe, GeSbTe, GeBiTe (GeTe alloyed with bismuth), GeAsSe, GeSiAsSe, or GeInSbTe (GeSbTe alloyed with indium), to name a few non-limiting examples. Also, note that the stoichiometry of such chalcogenide compounds may differ from one embodiment to the next, and a compound written without stoichiometric coefficients or values is intended to represent all forms of that compound. The memory bit layer 112 includes a material that changes its threshold voltage based on the polarity of the potential applied across it, so as to represent a logic "0" or a logic "1" for a given memory cell 102. In some example embodiments, a chalcogenide is used as the memory bit material, and its threshold voltage may be changed based on the polarity of the potential applied across the chalcogenide. The layer stack 108 may include one or more other conductive layers. For example, the layer stack 108 may include a first conductive layer 110 and a second conductive layer 114 to provide enhanced ohmic contact to the memory bit layer 112.
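The polarity-dependent threshold voltage described above can be captured in a toy behavioral model: program pulses of opposite polarity leave the cell with different thresholds, and a read bias chosen between the two thresholds distinguishes the states. The class name, voltage values, and polarity convention below are illustrative assumptions only, not device parameters from this disclosure:

```python
# Toy model of a memory cell whose threshold voltage depends on the
# polarity of the last programming pulse. All values are hypothetical.

VT_LOW = 1.2   # threshold after one programming polarity -> logic "1"
VT_HIGH = 2.4  # threshold after the opposite polarity    -> logic "0"
V_READ = 1.8   # read bias chosen between the two thresholds

class MemoryCell:
    def __init__(self):
        # Assume cells start in the high-threshold ("0") state.
        self.vt = VT_HIGH

    def program(self, polarity):
        # In this toy convention, positive polarity lowers the threshold
        # and negative polarity raises it.
        self.vt = VT_LOW if polarity > 0 else VT_HIGH

    def read(self):
        # The cell conducts only if the read bias exceeds its threshold,
        # which distinguishes the two stored states.
        return 1 if V_READ > self.vt else 0

cell = MemoryCell()
cell.program(+1)
bit_after_positive = cell.read()  # -> 1
cell.program(-1)
bit_after_negative = cell.read()  # -> 0
```

The essential design point is that V_READ sits strictly between the two thresholds, so a single read bias suffices to resolve both states.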
In an embodiment, the conductive layers 110 and 114 include tungsten and/or carbon. As noted above, at least the sidewalls of the memory bit layer 112 are protected by the dielectric liner structure 109, which in this example embodiment includes a dielectric liner 116. According to some embodiments, the dielectric liner 116 includes a high-k dielectric material. As can be seen, the dielectric liner 116 may be on one or more sidewalls of only a portion of the total thickness of the layer stack 108 (i.e., not the entire thickness of the layer stack 108). For example, in the example embodiment shown, the dielectric liner 116 is not present on the sidewalls of the first conductive layer 110. The dielectric liner 116 effectively protects the memory bit layer 112 from lateral erosion and contamination during the etching of underlying material layers (e.g., layer 110). Further example details of the dielectric liner 116 and its manufacturing process are discussed with reference to FIGS. 5-14. It is further noted that the dielectric liner structure 109 may include additional layers. For example, in the example embodiment shown, an additional dielectric layer 118 is provided over the sidewalls of each memory cell 102 to serve as a barrier layer between the various material layers of the memory cell 102 and an oxide fill material 120. In some embodiments, the oxide fill material 120 fills the remaining area between adjacent memory cells 102. Note that the additional dielectric layer 118 may cover a larger portion of the memory cell sidewalls. For example, in the example embodiment shown, the dielectric layer 118 is present on the sidewalls of the first conductive layer 110 in addition to the sidewalls of the dielectric liner 116.
In other embodiments, an initial dielectric layer (such as a nitride layer) is deposited before the dielectric liner 116 to improve adhesion between the liner 116 and the memory cell materials. Figures 1B and 1C show cross-sectional views of a memory array 122 according to some embodiments. The cross-sectional views are taken orthogonally to one another through the memory array 122. The memory array 122 includes a plurality of memory cells 102 arranged in arrays A, B, and C stacked in the Z direction to form a 3D memory structure. Each array includes an ordered arrangement of rows and columns of memory cells 102 in the X-Y plane, as shown in Figures 1B and 1C. Other ordered arrangements are possible as well. Each memory cell 102 generally includes a layer stack 108 that includes one or more memory bit layers. In addition, a portion of the sidewalls of the layer stack 108 is protected by the dielectric liner structure 109, such that at least the one or more memory bit layers of the layer stack 108 are protected. The memory array 122 also includes a plurality of word lines 104 and bit lines 106 for addressing a particular memory cell 102. As depicted in this example, the word lines 104 extend orthogonal to the bit lines 106, and the memory array alternates between word lines 104 and bit lines 106 in the Z direction. As shown in Figures 1B and 1C, the word lines 104 extend in the Y direction (into and out of the page in Figure 1B), and the bit lines 106 extend in the X direction (into and out of the page in Figure 1C). It should be understood that the number of memory cells 102 shown is only an example; any number of memory cells 102 can be used on each level, and any number of levels in the Z direction can be used as well. According to some embodiments, the height of a given memory cell 102 in the Z direction is between about 30 nm and about 50 nm. According to some embodiments, the width of a given memory cell 102 in the X direction or the Y direction is between about 10 nm and about 20 nm.
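The word-line/bit-line addressing scheme above, applied across stacked decks, can be sketched as simple index arithmetic. Note the sketch assumes dedicated word and bit lines per deck for simplicity, whereas the architecture described here alternates (shares) lines between adjacent decks in the Z direction; the deck/row/column naming is an illustrative assumption:

```python
# Sketch of addressing one cell in a stacked crosspoint-style array.
# Dimensions and the per-deck line assignment are hypothetical.

DECKS, ROWS, COLS = 3, 4, 4  # decks stacked in Z; rows x cols per X-Y plane

def select_cell(deck, row, col):
    # A cell is read or programmed by biasing the word line and bit line
    # whose intersection it occupies; this returns those line indices.
    if not (0 <= deck < DECKS and 0 <= row < ROWS and 0 <= col < COLS):
        raise IndexError("address out of range")
    word_line = deck * ROWS + row  # word lines extend in the Y direction
    bit_line = deck * COLS + col   # bit lines extend in the X direction
    return word_line, bit_line

wl, bl = select_cell(deck=1, row=2, col=3)
```

In a shared-line arrangement the deck-to-line mapping would differ (roughly halving the line count), but the intersection-selects-cell principle is the same.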
The width can be the same in the X direction and the Y direction. In some embodiments, the width of a given memory cell differs between the X direction and the Y direction. As will be appreciated, any number of memory cell geometries can be utilized. FIG. 2 shows an exemplary embodiment of a chip package 200. As can be seen, the chip package 200 includes one or more dies 202. When the one or more dies 202 include one or more memory dies, the chip package 200 may be a memory device, whether with a dedicated memory die or some other die that has a memory portion co-located with other functional circuitry of that die (e.g., a processor with on-board memory). In some example configurations, a die 202 may include any number of memory arrays 122 and any other circuitry for interfacing with the memory arrays. In other embodiments, the memory arrays 122 may be present on one die 202, while other circuitry for interfacing with that die 202 (e.g., cell selection circuitry, readout circuitry, and programming circuitry) is located on another die within the chip package 200. As can be further seen, the chip package 200 includes a housing 204 bonded to a package substrate 206. The housing 204 may be any standard or proprietary housing and provides, for example, electromagnetic shielding and environmental protection for the components of the chip package 200. The one or more dies 202 may be conductively coupled to the package substrate 206 using connections 208, which may be implemented using any number of standard or proprietary connection mechanisms, such as solder bumps, a ball grid array (BGA), pins, or wire bonds, to name a few.
The package substrate 206 can be any standard or proprietary package substrate, but in some cases includes a dielectric material having conductive pathways (for example, including conductive vias and wires) extending between its surfaces, or between different locations on each surface. In some embodiments, the thickness of the package substrate 206 can be less than 1 millimeter (e.g., between 0.1 millimeter and 0.5 millimeter), although any number of package geometries can be used. Additional conductive contacts 212 may be provided at the opposite surface of the package substrate 206 for conductively contacting, for example, a printed circuit board. One or more vias 210 extend through the thickness of the package substrate 206 to provide a conductive path between one or more of the connections 208 and one or more of the contacts 212. For ease of illustration, the vias 210 are shown as single straight paths through the package substrate 206, although other configurations (e.g., damascene, dual damascene, through-silicon vias) may be used. In other embodiments, the vias 210 are made of multiple smaller stacked vias, or are staggered at different locations across the package substrate 206. In the illustrated embodiment, the contacts 212 are solder balls (e.g., for bump-based connections or ball grid array arrangements), but any suitable package bonding mechanism can be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). In some embodiments, a solder resist is provided between the contacts 212 to prevent short circuits.

In some embodiments, the molding material 214 may be disposed around one or more dies 202 included in the housing 204 (e.g., disposed between the die 202 and the package substrate 206 as an underfill material, and between the die 202 and the housing 204 as an overfill material).
Although the size and quality of the molding material 214 may differ from one embodiment to another, in some embodiments the thickness of the molding material 214 is less than 1 millimeter. Exemplary materials that can be used for the molding material 214 include epoxy molding materials. In some cases, the molding material 214 is thermally conductive in addition to being electrically insulating.

Manufacturing process

FIG. 3 shows a cross-sectional view at an early stage of a manufacturing process for a portion of the memory array 100 according to an embodiment. The subsequent FIGS. 4 to 14 show other stages of the manufacturing process, each showing a cross-sectional view taken along the X axis (A) and a cross-sectional view taken along the Y axis (B). For example, FIG. 4A shows a cross-sectional view of a portion of the memory array 122 taken along the X axis at the dashed cross-sectional line A-A shown in FIG. 4B, and FIG. 4B shows a cross-sectional view of a portion of the memory array 122 taken along the Y axis at the dashed cross-sectional line B-B shown in FIG. 4A. For all figures, the Z axis is the vertical axis on the page (i.e., along the thickness of each deposited layer). The various layers and structures shown in FIGS. 3-14 are not intended to be drawn to scale, but are shown in a specific manner for visual clarity. Note further that some intermediate processes that are not explicitly shown (for example, polishing and cleaning processes or other standard processes) may be performed. In other embodiments, not all illustrated layers are used and/or additional layers may be included.

As shown in FIG. 3, a first conductive layer 304 is deposited on the substrate 301, and then a layer stack 308 is deposited. The substrate 301 may be any suitable substrate material on which additional material layers are formed.
In some embodiments, the substrate 301 includes a bulk semiconductor material, such as silicon, germanium, silicon germanium (SiGe), gallium arsenide, or indium phosphide. The substrate 301 may include one or more insulating layers (such as silicon oxide or silicon nitride) at its top surface or buried beneath a top semiconductor layer, as in, for example, a semiconductor-on-insulator substrate construction.

The first conductive layer 304 may be a metal, such as tungsten, silver, aluminum, titanium, cobalt, or an alloy. In some embodiments, after the first conductive layer 304 is patterned into word lines or bit lines, the first conductive layer 304 has a thickness sufficient (for example, 1 to 50 nm) to propagate signals.

According to some embodiments, the layer stack 308 may include one or more conductive layers 310 and 314 with a memory bit layer 312 sandwiched therebetween. Each of the conductive layers 310 and 314 may include any conductive material that enhances ohmic contact with the memory bit layer 312. In one example, the conductive layers 310 and 314 include carbon. The first layer stack 308 may include any number of deposited layers having at least one memory bit layer 312. In some embodiments, the layer stack 308 includes two or more layers with memory bit materials. The various layers can be deposited using standard deposition techniques such as chemical vapor deposition (CVD), physical vapor deposition (PVD), and atomic layer deposition (ALD) techniques.

According to some embodiments, the thickness of the first conductive layer 304 is between about 30 nm and 50 nm, the thickness of each of the conductive layers 310 and 314 is between about 10 nm and 15 nm, and the thickness of the memory bit layer 312 is between about 15 nm and about 25 nm.
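The exemplary thickness ranges above imply a simple bound on the total thickness t of the layer stack 308, which the later partial etch only partly traverses. The following is a minimal sketch under the assumption of a three-layer stack (conductive layer 310, memory bit layer 312, conductive layer 314); the dictionary keys and function name are illustrative, not from this document:

```python
# Approximate per-layer (min, max) thickness ranges in nm,
# taken from the exemplary embodiment described above.
LAYERS_308 = {
    "conductive_310": (10, 15),
    "memory_bit_312": (15, 25),
    "conductive_314": (10, 15),
}

def stack_thickness_range(layers):
    """Sum the per-layer (min, max) thicknesses to bound the total
    thickness t of the layer stack, in nm."""
    low = sum(mn for mn, _ in layers.values())
    high = sum(mx for _, mx in layers.values())
    return low, high
```

Under these assumed ranges, t falls between 35 nm and 55 nm, which is broadly consistent with the 30-50 nm memory cell height stated earlier for some embodiments.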
Any standard deposition technique can be used to deposit all layers, such as metal sputtering or evaporation for the first conductive layer 304, and plasma-enhanced chemical vapor deposition (PECVD) for the conductive layers 310 and 314 and the memory bit layer 312.

FIGS. 4A and 4B show a first etching process (generally indicated by arrows) according to some embodiments, which etches through a portion of the total thickness t of the layer stack 308. According to some embodiments, the etching forms strips of the material layers, including the memory bit layer 312, extending in the Y direction. The etching process etches through the thickness of the memory bit layer 312, thereby exposing the sidewalls of the memory bit layer 312. However, according to some embodiments, the etching does not extend into any part of the first conductive layer 304. In one example, the etching stops at the conductive layer 310. In some other examples, the etching extends into a portion of the thickness of the conductive layer 310. Due to the anisotropic nature of the etching process, lateral etching of the material layers is minimized. According to an embodiment, anisotropic etching is performed using standard dry etching techniques, by placing the substrate 301 in a vacuum chamber and introducing various gas chemistries and bias potentials to etch through the various material layers. In addition, standard photolithography techniques are performed to pattern a hard mask layer (not shown) to mask portions of the layers from being etched. Exemplary hard mask layers include silicon oxide or silicon nitride.

FIGS. 5A and 5B illustrate the deposition of the first dielectric layer 502 over at least the sidewalls of the memory bit layer 312 according to some embodiments.
Although not explicitly shown for the sake of clarity, the deposition of the first dielectric layer 502 covers all areas of the device, and an etching process will be performed to remove the planar portions of the first dielectric layer 502, thereby leaving the sidewall portions of the deposited film. In the example shown, the dielectric layer 502 is deposited on the sidewalls of the memory bit layer 312, over the sidewalls of the conductive layer 314, and on the top surface of the conductive layer 310.

Due to the good adhesion of silicon nitride to most other materials, the first dielectric layer 502 may include, for example, silicon nitride. In some other examples, the first dielectric layer 502 includes a high-k dielectric material, which may be used with or without the nitride layer. Examples of high-k materials include oxides of one or more of the following elements: lithium (Li), boron (B), magnesium (Mg), aluminum (Al), silicon (Si), calcium (Ca), scandium (Sc), titanium (Ti), vanadium (V), chromium (Cr), manganese (Mn), iron (Fe), cobalt (Co), nickel (Ni), copper (Cu), zinc (Zn), gallium (Ga), germanium (Ge), strontium (Sr), yttrium (Y), zirconium (Zr), niobium (Nb), molybdenum (Mo), ruthenium (Ru), rhodium (Rh), indium (In), tin (Sn), antimony (Sb), barium (Ba), lanthanum (La), cerium (Ce), praseodymium (Pr), neodymium (Nd), samarium (Sm), europium (Eu), gadolinium (Gd), dysprosium (Dy), holmium (Ho), erbium (Er), thulium (Tm), ytterbium (Yb), lutetium (Lu), hafnium (Hf), tantalum (Ta), iridium (Ir), platinum (Pt), lead (Pb), and bismuth (Bi). In some specific exemplary embodiments, the first dielectric layer 502 includes hafnium oxide (HfO), zirconium oxide (ZrO), or aluminum oxide (AlO).
Other examples of high-k dielectrics include hafnium silicon oxide, lanthanum aluminum oxide, zirconium silicon oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, lead scandium tantalum oxide, and lead zinc niobate. Generally, a high-k dielectric is a dielectric material having a dielectric constant greater than that of silicon dioxide. According to some embodiments, the first dielectric layer 502 may be deposited using, for example, a low-temperature (e.g., less than 350° C.) atomic layer deposition (ALD) process. According to some embodiments, the first dielectric layer 502 may be deposited to a thickness between about and about .

FIGS. 6A and 6B show a second etching process (generally indicated by arrows) according to some embodiments, which etches through the remainder of the thickness of the layer stack 308 and through the thickness of the first conductive layer 304. The second etching process may be similar to the etching process shown in FIG. 4, such as anisotropic dry etching, to provide the required degree of directionality. Less directional etching (wet and/or dry) can also be used, but note that lateral etching can occur, which may or may not be acceptable in a given application. During the second etching process, the exposed sidewalls of the memory bit layer 312 are protected by the first dielectric layer 502. Specifically, according to some embodiments, no part of the memory bit layer 312 is exposed while the first conductive layer 304 is being etched. The etching process patterns the strips of the first conductive layer 304 extending in the Y direction, thereby forming a plurality of word lines or bit lines.

FIGS. 7A and 7B illustrate the deposition of additional material layers between adjacent layer stacks according to some embodiments. According to some such embodiments, the second dielectric layer 702 is deposited on the first dielectric layer 502.
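The high-k criterion noted above (a dielectric constant greater than that of silicon dioxide) can be expressed as a simple check. In the sketch below, the numeric constants are approximate, commonly cited literature values supplied as assumptions for illustration; they are not taken from this document:

```python
K_SIO2 = 3.9  # approximate dielectric constant of silicon dioxide (reference)

def is_high_k(k):
    """Per the definition above: a material is high-k if its dielectric
    constant exceeds that of silicon dioxide."""
    return k > K_SIO2

# Approximate, typical literature values (assumed for illustration only).
APPROX_K = {
    "HfO2": 25.0,   # hafnium oxide
    "ZrO2": 25.0,   # zirconium oxide
    "Al2O3": 9.0,   # aluminum oxide
    "Si3N4": 7.5,   # silicon nitride
    "SiO2": 3.9,    # silicon dioxide (the reference, not high-k)
}
```

Under these assumed values, hafnium oxide, zirconium oxide, aluminum oxide, and silicon nitride all satisfy the criterion, while silicon dioxide itself does not.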
According to some embodiments, the second dielectric layer 702 is also deposited on the sidewalls of each layer stack and the sidewalls of the etched first conductive layer 304. In some embodiments, the second dielectric layer 702 may include any of the materials discussed above for the first dielectric layer 502, but may be different in other embodiments. For example, in some embodiments, the first layer 502 is a nitride and the second layer 702 is a high-k dielectric, while in other embodiments, the first layer 502 is a high-k dielectric and the second layer 702 is a conventional dielectric (for example, oxide, nitride, or oxynitride). Any standard deposition technique (e.g., the same low-temperature ALD process used to deposit the first dielectric layer 502) can be used to deposit the second dielectric layer 702 to a thickness of, for example, between about and about .

A filling material 704 is deposited to fill the remaining area between adjacent layer stacks. In some embodiments, the filling material 704 is silicon oxide and is deposited using a PECVD process. After depositing the filling material 704, a chemical mechanical polishing (CMP) process may be used to planarize the top surface of the structure.

FIGS. 8A and 8B illustrate the deposition of a second conductive layer 806 and another layer stack 804 including at least one memory bit layer 802 according to some embodiments. The layer stack 804 may have substantially the same structure as the layer stack 308, although in other embodiments one or more aspects may be changed, as will be understood. Similarly, the second conductive layer 806 may be the same material as the first conductive layer 304, but not necessarily. In some embodiments, the second conductive layer 806 has a greater thickness than the first conductive layer 304.

FIGS. 9A and 9B show another etching process (generally indicated by arrows) according to some embodiments, which etches through a portion of the total thickness t of the layer stack 804.
According to some embodiments, the etching forms strips of the material layers of the layer stack 804, including the memory bit layer 802, extending in the X direction. As can be seen in this exemplary case, the etching process etches through the thickness of the memory bit layer 802, thereby exposing the sidewalls of the memory bit layer 802. However, according to some embodiments, the etching does not extend into any part of the second conductive layer 806. Due to the anisotropic nature of the etching process, lateral etching of the material layers is minimized. According to an embodiment, anisotropic etching is performed using standard dry etching techniques, by placing the substrate 301 in a vacuum chamber and introducing various gas chemistries and bias potentials, thereby etching through the various material layers. In addition, standard photolithography techniques are performed to pattern a hard mask layer (not shown) to mask portions of the layers from being etched. Exemplary hard mask layers include silicon oxide or silicon nitride.

FIGS. 10A and 10B illustrate the deposition of a third dielectric layer 1002 over at least the sidewalls of the memory bit layer 802 according to some embodiments. Although not explicitly shown for clarity, the deposition of the third dielectric layer 1002 covers all areas of the device, and an etching process will be performed to remove the planar portions of the third dielectric layer 1002, thereby leaving the sidewall portions of the deposited film. The third dielectric layer 1002 may include any of the same materials discussed above for the first dielectric layer 502. The third dielectric layer 1002 may be deposited using a low-temperature (for example, less than 350° C.) ALD process.
The third dielectric layer 1002 may be deposited to a thickness between about and about .

FIGS. 11A and 11B show another etching process (generally indicated by arrows) according to some embodiments, which etches through the remainder of the thickness of the layer stack 804 and through the thickness of the second conductive layer 806. According to some embodiments, the etching process may be similar to the etching process shown in FIG. 6. During the etching process, the memory bit layer 802 is protected by the third dielectric layer 1002. Specifically, according to some embodiments, no part of the memory bit layer 802 is exposed while the second conductive layer 806 is being etched. The etching process patterns the strips of the second conductive layer 806 extending in the X direction, thereby forming a plurality of word lines or bit lines.

In some embodiments, the etching process continues further through the second conductive layer 806 and etches through a portion of the thickness of the layer stack 308. The etching process of FIG. 11 thus begins to form individual memory cells from the layer stack 308. The etching process etches through the thickness of the memory bit layer 312, thereby exposing the sidewalls of the memory bit layer 312. However, according to some embodiments, the etching does not extend into any part of the first conductive layer 304.

FIGS. 12A and 12B illustrate the deposition of a fourth dielectric layer 1216 on top of the third dielectric layer 1002, and also on at least the exposed sidewalls of the memory bit layer 312, according to some embodiments. The fourth dielectric layer 1216 may include any of the exemplary materials discussed above for the first dielectric layer 502, for example.
Any standard deposition technique (e.g., the same low-temperature ALD process used to deposit the first dielectric layer 502) can be used to deposit the fourth dielectric layer to a thickness between about and .

FIGS. 13A and 13B show another etching process (generally indicated by arrows) according to some embodiments, which etches through the remainder of the thickness of the layer stack 308 but does not etch (or only minimally etches) into the first conductive layer 304. According to some embodiments, the etching process may be similar to the etching process shown in FIG. 6. To the extent any part of the memory bit layer 312 would be exposed during the etching process, it is protected by the fourth dielectric layer 1216.

FIGS. 14A and 14B illustrate the deposition of additional material layers between adjacent layer stacks according to some embodiments. According to some such embodiments, a fifth dielectric layer 1418 is deposited over the fourth dielectric layer 1216, and a filling material 1420 is deposited to fill the remaining area between adjacent layer stacks. The fifth dielectric layer 1418 and the filling material 1420 may be substantially similar to the second dielectric layer 702 and the filling material 704, respectively, as described above with reference to FIG. 7.

According to some embodiments, the first level of memory cells 102 is formed using the manufacturing process shown in FIGS. 3-14. The manufacturing process can be repeated any number of times to form additional levels of memory cells 102 and thus a three-dimensional memory device. During any etching process through a metal layer (e.g., conductive layers 304 and 806), one or more dielectric layers are present over the sidewalls of one or more of the memory bit layers to protect those memory bit layers during the etching process.

In some embodiments, the fourth dielectric layer 1216 is not deposited and the etching of FIG. 13 is not performed.
In these examples, the bottom portion of the layer stack 308 (for example, the conductive layer 310) remains attached to the plurality of memory cells extending in the Y direction.

FIG. 15 is a flowchart of a method 1500 for manufacturing a memory device that includes an array of memory cells with memory bit material, according to an embodiment. Various operations of the method 1500 are shown in FIGS. 3-14. However, the correlation between the various operations of the method 1500 and the specific components shown in FIGS. 3-14 is not intended to imply any structural and/or usage restrictions. Rather, FIGS. 3-14 provide an exemplary embodiment of the method 1500. Other operations may be performed before, during, or after any operation of the method 1500.

The method 1500 begins with operation 1502, where a conductive layer is deposited over the substrate. The conductive layer may be, for example, a tungsten layer, and may later be patterned into multiple word lines or bit lines. It will be understood that other suitable conductor materials may also be used.

The method 1500 continues with operation 1504, where a layer stack is deposited over the conductive layer. The layer stack may include any number of layers, including a plurality of conductive layers and at least one memory bit layer. In the layer stack, the memory bit layer may be sandwiched between conductive layers. The conductive layers in the layer stack may be or otherwise include carbon. The memory bit layer may include, for example, a chalcogenide or other exemplary materials provided throughout this document.

The method 1500 continues with operation 1506, where an etching process is performed through only a portion of the thickness of the layer stack. According to some embodiments, the etching passes through at least the thickness of the memory bit layer so that the sidewalls of the memory bit layer are exposed.
By etching through only a part of the layer stack, the etching does not expose the underlying conductive layer.

The method 1500 continues with operation 1508, where a dielectric layer is deposited over the exposed sidewalls of the memory bit layer. Due to the good adhesion of silicon nitride to most other materials, the dielectric layer may include silicon nitride. In some other examples, the dielectric layer includes a high-k dielectric material. Examples of high-k materials include oxides of one or more of the following elements: Li, B, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, Sr, Y, Zr, Nb, Mo, Ru, Rh, In, Sn, Sb, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Tm, Yb, Lu, Hf, Ta, Ir, Pt, Pb, and Bi. A low-temperature ALD process can be used to deposit the dielectric layer. The dielectric layer can be deposited to a thickness between about and about . In some embodiments, multiple dielectric layers are deposited over the exposed sidewalls of the memory bit layer. The multiple dielectric layers can have different thicknesses and material compositions.

The method 1500 continues with operation 1510, where a second etch is performed through the remainder of the layer stack and also through the thickness of the conductive layer. The second etching process may be similar to the first etching process of operation 1506. During the second etching process, the memory bit layer is protected by the dielectric layer deposited on its sidewalls. Specifically, according to some embodiments, no part of the memory bit layer is exposed while the conductive layer is being etched. The etching process patterns the strips of the conductive layer, thereby forming a plurality of word lines or bit lines.

According to some embodiments, the operations of method 1500 are generally repeated to form each level of memory cells in a three-dimensional memory device.

Exemplary electronic equipment

FIG.
16 shows an exemplary electronic device 1600, which may include one or more storage devices, such as the embodiments disclosed herein. In some embodiments, the electronic device 1600 can be used as a host for, or incorporated into, a personal computer, workstation, server system, laptop, ultra-laptop, tablet, touchpad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), mobile phone, combination mobile phone/PDA, smart device (for example, smart phone or smart tablet), mobile Internet device (MID), messaging device, data communication device, imaging device, wearable device, embedded system, and so forth. In some embodiments, any combination of different devices can be used.

In some embodiments, the electronic device 1600 may include any combination of a processor 1602, a memory 1604, a network interface 1606, an input/output (I/O) system 1608, a user interface 1610, and a storage system 1612. As can be further seen, a bus and/or interconnect is also provided to allow communication between the various components listed above and/or other components not shown. The electronic device 1600 can be coupled to a network 1616 through the network interface 1606 to allow communication with other computing devices, platforms, or resources. Other components and functions not reflected in the block diagram of FIG. 16 will be apparent in light of the present disclosure, and it will be understood that other embodiments are not limited to any specific hardware configuration.

The processor 1602 may be any suitable processor, and may include one or more coprocessors or controllers to assist in controlling and processing operations associated with the electronic device 1600. In some embodiments, the processor 1602 may be implemented as any number of processor cores.
The processor (or processor cores) can be any type of processor, such as a microprocessor, embedded processor, digital signal processor (DSP), graphics processing unit (GPU), network processor, field-programmable gate array, or other device configured to execute code. The processors can be multi-threaded cores in that each may include multiple hardware thread contexts (or "logical processors").

The memory 1604 may be implemented using any suitable type of digital storage, including, for example, flash memory and/or random access memory (RAM). In some embodiments, the memory 1604 may include a memory hierarchy and/or various layers of memory cache. The memory 1604 may be implemented as a volatile memory device, such as but not limited to a RAM, dynamic RAM (DRAM), or static RAM (SRAM) device. The storage system 1612 may be implemented as a non-volatile storage device, such as but not limited to one or more of hard disk drives (HDD), solid state drives (SSD), universal serial bus (USB) drives, optical disk drives, tape drives, internal storage devices, attached storage devices, flash memory, battery-backed synchronous DRAM (SDRAM), and/or network-accessible storage devices. In some embodiments, the storage system 1612 may include technology to provide increased storage performance and enhanced protection of valuable digital media when multiple hard drives are included. According to some embodiments of the present disclosure, either or both of the memory 1604 and the storage system 1612 include one or more memory arrays 122 having memory cells 102 manufactured using the processes discussed herein.
According to some embodiments of the present disclosure, either or both of the memory 1604 and the storage system 1612 may be incorporated in the chip package 200 and bonded to a printed circuit board (PCB) together with one or more other devices.

The processor 1602 may be configured to execute an operating system (OS) 1614, which may include any suitable operating system, such as Google Android (Google Inc., Mountain View, CA), Microsoft Windows (Microsoft Corp., Redmond, WA), Apple OS X (Apple Inc., Cupertino, CA), Linux, or a real-time operating system (RTOS).

The network interface 1606 may be any suitable network chip or chipset that allows wired and/or wireless connections between other components of the electronic device 1600 and/or the network 1616, thereby enabling the electronic device 1600 to communicate with other local and/or remote computing systems, servers, cloud-based servers, and/or other resources. Wired communication can comply with existing (or yet to be developed) standards, such as Ethernet. Wireless communication may comply with existing (or yet to be developed) standards, such as cellular communication including LTE (Long Term Evolution), Wireless Fidelity (Wi-Fi), Bluetooth, and/or Near Field Communication (NFC). Exemplary wireless networks include, but are not limited to, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, cellular networks, and satellite networks.

The I/O system 1608 may be configured to interface between various I/O devices and other components of the electronic device 1600. I/O devices may include, but are not limited to, a user interface 1610. The user interface 1610 may include devices (not shown) such as a display element, a touch panel, a keyboard, a mouse, and a speaker. The I/O system 1608 may include a graphics subsystem configured to perform image processing for rendering on the display element.
The graphics subsystem may be, for example, a graphics processing unit or a visual processing unit (VPU). An analog or digital interface can be used to communicatively couple the graphics subsystem and the display element. For example, the interface may be any of a high-definition multimedia interface (HDMI), DisplayPort, wireless HDMI, and/or any other suitable interface using wireless high-definition compliant techniques. In some embodiments, the graphics subsystem may be integrated into the processor 1602 or any chipset of the electronic device 1600.

It will be understood that, in some embodiments, the various components of the electronic device 1600 may be combined or integrated in a system-on-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components, or any suitable combination of hardware, firmware, or software.

In various embodiments, the electronic device 1600 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, the electronic device 1600 may include components and interfaces suitable for communicating over a wireless shared medium, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. Examples of wireless shared media may include portions of a wireless spectrum, such as the radio frequency spectrum. When implemented as a wired system, the electronic device 1600 may include components and interfaces suitable for communicating over a wired communication medium, such as an input/output adapter, a physical connector that connects the input/output adapter with the corresponding wired communication medium, a network interface card (NIC), an optical disc controller, a video controller, an audio controller, and so forth.
Examples of wired communication media may include wires, cables, metal leads, printed circuit boards (PCBs), backplanes, switch fabrics, semiconductor materials, twisted pairs, coaxial cables, optical fibers, and so forth.

Unless specifically stated otherwise, it can be understood that terms such as "processing", "computing", "calculating", and "determining" refer to the actions and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical quantities (for example, electronic) within the registers and/or memory units of the computer system into other data similarly represented as physical quantities within the registers, memory units, or other such information storage, transmission, or display devices of the computer system. The embodiments are not limited in this context.

Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. However, as will be understood in light of the present disclosure, the embodiments can be practiced without these specific details. In other instances, well-known operations, components, and circuits have not been described in detail so as not to obscure the embodiments. It can be understood that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments. In addition, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described herein.
Rather, the specific features and acts described herein are disclosed as exemplary forms of implementing the claims.

Other exemplary embodiments

The following examples pertain to further embodiments, from which numerous arrangements and configurations will be apparent.

Example 1 is a memory device that includes a plurality of conductive bit lines, a plurality of conductive word lines, and a group of memory cells included in a memory cell array. Each of the memory cells is located between a corresponding bit line of the plurality of conductive bit lines and a corresponding word line of the plurality of conductive word lines. Each of the memory cells includes a layer stack having a memory bit layer, and a dielectric layer on one or more sidewalls of only a portion of the total thickness of the layer stack, such that the dielectric layer is on one or more sidewalls of the memory bit layer.

Example 2 includes the subject matter of Example 1, where the memory cell array is arranged in three dimensions, with the memory cells placed in rows and columns along a plurality of XY planes stacked in the Z direction.

Example 3 includes the subject matter of Example 2, wherein the size of one or more of the memory cells in the X direction is greater than the size in the Y direction.

Example 4 includes the subject matter of any of Examples 1-3, wherein the layer stack consists of only a first conductive layer, the memory bit layer on the first conductive layer, and a second conductive layer on the memory bit layer, and wherein there is only the layer stack between the corresponding one of the plurality of conductive bit lines and the corresponding one of the plurality of conductive word lines.

Example 5 includes the subject matter of Example 4, wherein the dielectric layer is on one or more sidewalls of the second conductive layer, and the dielectric layer is on the top surface of the first dielectric layer.

Example 6 includes the subject matter of any of Examples
1-5, wherein the plurality of conductive bit lines extend orthogonally to the plurality of conductive word lines.

Example 7 includes the subject matter of any of Examples 1-6, wherein the height of one or more of the memory cells is between about 60 nm and about 80 nm.

Example 8 includes the subject matter of any of Examples 1-7, wherein the dielectric layer includes a high-k material.

Example 9 includes the subject matter of any of Examples 1-8, wherein the dielectric layer is a first dielectric layer, the device further including a second dielectric layer over the first dielectric layer, wherein the second dielectric layer is over one or more sidewalls of the total thickness of the layer stack, such that the first dielectric layer is not present between the second dielectric layer and the layer stack in at least one location.

Example 10 includes the subject matter of any of Examples 1-9, wherein the memory bit layer includes a chalcogenide.

Example 11 includes the subject matter of any of Examples 1-10, wherein the plurality of conductive bit lines and the plurality of conductive word lines include one or both of tungsten and carbon.

Example 12 is an integrated circuit including the memory device of any of Examples 1-11.

Example 13 is a printed circuit board including the integrated circuit of Example 12.

Example 14 is a memory chip including the memory device of any of Examples 1-11.

Example 15 is an electronic device that includes a chip package, the chip package including one or more dies, wherein one or more of the dies includes: a layer stack between a word line and a bit line, the layer stack including a memory bit layer; and a dielectric layer on one or more sidewalls of only a portion of the total thickness of the layer stack, such that the dielectric layer is on one or more sidewalls of the memory bit layer.

Example 16 includes the subject matter of Example 15, wherein the bit line extends orthogonally to the word line.

Example 17 includes the subject matter of Example 15 or 16, wherein the dielectric layer includes a high-k material.

Example 18 includes the subject matter of any of Examples 15-17, wherein the dielectric layer is a first dielectric layer, the device further including a second dielectric layer over the first dielectric layer, wherein the second dielectric layer is over one or more sidewalls of the total thickness of the layer stack, such that the first dielectric layer is not present between the second dielectric layer and the layer stack in at least one location.

Example 19 includes the subject matter of any of Examples 15-18, wherein the memory bit layer includes a chalcogenide.

Example 20 includes the subject matter of any of Examples 15-19, wherein the layer stack consists of only a first conductive layer, the memory bit layer on the first conductive layer, and a second conductive layer on the memory bit layer, and wherein only the layer stack is between the word line and the bit line.

Example 21 is a method of manufacturing a memory device.
The method includes: depositing a conductive layer on a substrate; depositing a layer stack on the conductive layer, the layer stack including a memory bit layer; etching through only a portion of the total thickness of the layer stack, such that the etching passes through the entire thickness of the memory bit layer; depositing a dielectric layer on at least one or more sidewalls of the memory bit layer; and etching through the remainder of the layer stack and through the thickness of the conductive layer.

Example 22 includes the subject matter of Example 21, wherein the dielectric layer includes a high-k material.

Example 23 includes the subject matter of Example 21 or 22, wherein depositing the dielectric layer includes depositing a first dielectric layer to a thickness between and .

Example 24 includes the subject matter of any of Examples 21-23, further including depositing a second dielectric layer over the first dielectric layer.

Example 25 includes the subject matter of Example 24, wherein depositing the second dielectric layer includes depositing silicon nitride.

Example 26 includes the subject matter of any of Examples 21-25, wherein depositing the first dielectric layer includes depositing the first dielectric layer using atomic layer deposition (ALD).

Example 27 includes the subject matter of any of Examples 21-26, wherein the layer stack is a first layer stack, the conductive layer is a first conductive layer, and the memory bit layer is a first memory bit layer, the method further including depositing a second conductive layer on the first layer stack, and depositing a second layer stack on the second conductive layer, the second layer stack including a second memory bit layer.

Example 28 includes the subject matter of Example 27, further including: etching through only a portion of the total thickness of the second layer stack, such that the etching passes through the entire thickness of the second memory bit layer; depositing a second dielectric layer on at least one or more sidewalls of the second memory bit layer; and etching through the remainder of the second layer stack, through the thickness of the conductive layer, and through the thickness of the first layer stack.

Example 29 includes the subject matter of Example 28, further including depositing a third dielectric layer over one or more sidewalls of at least the second memory bit layer.

Example 30 includes the subject matter of any of Examples 21-29, wherein the memory bit layer includes a chalcogenide.

Example 31 includes the subject matter of any of Examples 21-30, wherein depositing the conductive layer includes depositing one or both of tungsten and carbon.
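The single-deck flow of Example 21 — a partial etch that stops only after passing through the entire memory bit layer, a sidewall dielectric deposition, then an etch through the remainder of the stack — can be sketched as a minimal model. The layer names and thicknesses below are illustrative assumptions, not values from the examples.

```python
# Illustrative model of the Example 21 manufacturing flow; layer names and
# thicknesses are assumptions for the sketch, not values from the examples.

def partial_etch_depth(stack, stop_layer):
    """Return the depth etched when etching from the top of the stack through
    the entire thickness of stop_layer, but no further (a partial etch)."""
    depth = 0
    for name, thickness in reversed(stack):  # etch proceeds top-down
        depth += thickness
        if name == stop_layer:
            return depth
    raise ValueError(f"{stop_layer} not found in stack")

# Bottom-up layer stack on the conductive layer: electrode / memory bit / electrode.
stack = [("bottom electrode", 10), ("memory bit layer", 40), ("top electrode", 10)]
total = sum(t for _, t in stack)

etched = partial_etch_depth(stack, "memory bit layer")
assert etched < total   # only a portion of the total thickness is etched
assert etched == 50     # top electrode (10) + full memory bit layer (40)
# A dielectric deposited at this point lands on the exposed memory bit layer
# sidewalls; the remainder of the stack and the conductive layer are etched next.
```

The key property the sketch captures is that the partial etch exposes the memory bit layer's sidewalls while leaving part of the stack intact, so the subsequently deposited dielectric covers only a portion of the stack's total thickness.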
An interface couples a controller (605) to a physical layer (PHY) block (610), where the interface includes a set of data pins comprising transmit data pins to send data to the PHY block (610) and receive data pins to receive data from the PHY block (610). The interface further includes a particular set of pins to implement a message bus interface, where the controller (605) is to send a write command to the PHY block (610) over the message bus interface to write a value to at least one particular bit of a PHY message bus register (620), bits of the PHY message bus register (620) are mapped to a set of control and status signals, and the particular bit is mapped to a recalibration request signal to request that the PHY block perform a recalibration. |
1. An apparatus comprising:
physical layer (PHY) circuitry;
a memory to implement a message bus register, wherein a set of control and status signals are mapped to bits of the message bus register, and the set of control and status signals comprises a recalibration request signal mapped to a particular one of the bits of the message bus register; and
an interface to couple to a controller, wherein the interface comprises a PHY Interface for the PCI Express (PIPE)-based interface, and the interface comprises:
a set of data pins comprising transmit data pins to send data to the controller and receive data pins to receive data from the controller;
a particular set of pins to implement a message bus interface, wherein a write command is to be received from the controller over the message bus interface to write a value to the particular bit; and
recalibration circuitry to perform a recalibration of the PHY circuitry based on the value written to the particular bit.
2. The apparatus of Claim 1, wherein the write command comprises a committed write.
3. The apparatus of any one of Claims 1-2, wherein the value of the particular bit is to be reset automatically after a number of clock cycles.
4. The apparatus of any one of Claims 1-3, further comprising detection circuitry to:
detect one or more attributes of the PHY circuitry; and
determine that the recalibration should be performed based on the one or more attributes.
5. The apparatus of Claim 4, wherein the PHY circuitry is to send a write command to the controller over the message bus interface to write a value to a message bus register of the controller to indicate to the controller a request to perform the recalibration.
6. The apparatus of Claim 5, wherein the write command from the controller is received based on the request to perform the recalibration.
7. The apparatus of Claim 6, wherein the PHY circuitry is to implement a link and the recalibration is to be performed while the link is in recovery, wherein the controller is to initiate the recovery.
8. The apparatus of any one of Claims 1-7, wherein the PHY circuitry is to send a write command to the controller over the message bus interface to write a value to a message bus register of the controller to indicate to the controller that the recalibration is complete.
9. The apparatus of any one of Claims 1-8, wherein the PIPE-based interface comprises a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.
10. The apparatus of any one of Claims 1-9, further comprising the controller.
11. The apparatus of Claim 10, wherein the controller comprises a media access controller (MAC).
This application claims benefit to U.S. Provisional Patent Application Serial No. 62/802,946, filed February 8, 2019 and incorporated by reference herein in its entirety.

FIELD

This disclosure pertains to computing systems, and in particular (but not exclusively) to computer interfaces.

BACKGROUND

Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a corollary, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple cores, multiple hardware threads, and multiple logical processors present on individual integrated circuits, as well as other interfaces integrated within such processors. A processor or integrated circuit typically comprises a single physical processor die, where the processor die may include any number of cores, hardware threads, logical processors, interfaces, memory, controller hubs, etc. As processing power grows along with the number of devices in a computing system, the communication between sockets and other devices becomes more critical. Accordingly, interconnects have grown from more traditional multi-drop buses that primarily handled electrical communications to full-blown interconnect architectures that facilitate fast communication. Unfortunately, as future processors are expected to consume data at even higher rates, corresponding demand is placed on the capabilities of existing interconnect architectures. Interconnect architectures may be based on a variety of technologies, including Peripheral Component Interconnect Express (PCIe), Universal Serial Bus, and others.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an embodiment of a computing system including an interconnect architecture.

FIG. 2 illustrates an embodiment of an interconnect architecture including a layered stack.

FIG.
3 illustrates an embodiment of a request or packet to be generated or received within an interconnect architecture.

FIG. 4 illustrates an embodiment of a transmitter and receiver pair for an interconnect architecture.

FIGS. 5A-5C illustrate example implementations of a PHY/MAC interface.

FIG. 6 illustrates a representation of a PIPE PHY/MAC interface.

FIG. 7 illustrates a representation of a portion of an example status and control register of an example PHY/MAC interface.

FIG. 8 is a signaling diagram illustrating an example transaction involving a register of an example PHY/MAC interface.

FIG. 9A illustrates use of a message bus interface of an example PHY/MAC interface to perform a controller-initiated recalibration.

FIG. 9B illustrates use of a message bus interface of an example PHY/MAC interface to perform a physical-layer-initiated recalibration.

FIGS. 10A-10B are flowcharts illustrating example techniques involving an example PHY/MAC interface.

FIG. 11 illustrates an embodiment of a block diagram for a computing system including a multicore processor.

FIG. 12 illustrates another embodiment of a block diagram for a computing system.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention.
In other instances, well-known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of computer systems have not been described in detail in order to avoid unnecessarily obscuring the present invention.

Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below.
Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency.

As computing systems advance, the components therein are becoming more complex. As a result, the interconnect architecture to couple and communicate between the components is also increasing in complexity to ensure bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, the singular purpose of most fabrics is to provide the highest possible performance with maximum power saving. Below, a number of interconnects are discussed, which would potentially benefit from aspects of the invention described herein.

One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to inter-operate in an open architecture, spanning multiple market segments: Clients (Desktops and Mobile), Servers (Standard and Enterprise), and Embedded and Communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocol to deliver new levels of performance and features.
Power Management, Quality of Service (QoS), Hot-Plug/Hot-Swap support, Data Integrity, and Error Handling are among some of the advanced features supported by PCI Express.

Referring to FIG. 1, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. System 100 includes processor 105 and system memory 110 coupled to controller hub 115. Processor 105 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor 105 is coupled to controller hub 115 through front-side bus (FSB) 106. In one embodiment, FSB 106 is a serial point-to-point interconnect as described below. In another embodiment, link 106 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard.

System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100. System memory 110 is coupled to controller hub 115 through memory interface 116. Examples of a memory interface include a double-data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

In one embodiment, controller hub 115 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy. Examples of controller hub 115 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often the term chipset refers to two physically separate controller hubs, i.e. a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 105, while controller 115 is to communicate with I/O devices, in a similar manner as described below.
In some embodiments, peer-to-peer routing is optionally supported through root complex 115.

Here, controller hub 115 is coupled to switch/bridge 120 through serial link 119. Input/output modules 117 and 121, which may also be referred to as interfaces/ports 117 and 121, include/implement a layered protocol stack to provide communication between controller hub 115 and switch 120. In one embodiment, multiple devices are capable of being coupled to switch 120.

Switch/bridge 120 routes packets/messages from device 125 upstream, i.e. up a hierarchy towards a root complex, to controller hub 115 and downstream, i.e. down a hierarchy away from a root controller, from processor 105 or system memory 110 to device 125. Switch 120, in one embodiment, is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device 125 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a Network Interface Controller (NIC), an add-in card, an audio processor, a network processor, a hard-drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. Often in the PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device 125 may include a PCIe to PCI/PCI-X bridge to support legacy or other-version PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root complex integrated endpoints.

Graphics accelerator 130 is also coupled to controller hub 115 through serial link 132. In one embodiment, graphics accelerator 130 is coupled to an MCH, which is coupled to an ICH. Switch 120, and accordingly I/O device 125, is then coupled to the ICH. I/O modules 131 and 118 are also to implement a layered protocol stack to communicate between graphics accelerator 130 and controller hub 115.
Similar to the MCH discussion above, a graphics controller or the graphics accelerator 130 itself may be integrated in processor 105. It should be appreciated that one or more of the components (e.g., 105, 110, 115, 120, 125, 130) illustrated in FIG. 1 can be enhanced to execute, store, and/or embody logic to implement one or more of the features described herein.

Turning to FIG. 2, an embodiment of a layered protocol stack is illustrated. Layered protocol stack 200 includes any form of a layered communication stack, such as a Quick Path Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or other layered stack. Although the discussion immediately below in reference to FIGS. 1-4 is in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, protocol stack 200 is a PCIe protocol stack including transaction layer 205, link layer 210, and physical layer 220. An interface, such as interfaces 117, 118, 121, 122, 126, and 131 in FIG. 1, may be represented as communication protocol stack 200. Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCI Express uses packets to communicate information between components. Packets are formed in the Transaction Layer 205 and Data Link Layer 210 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers.
At the receiving side the reverse process occurs and packets get transformed from their Physical Layer 220 representation to the Data Link Layer 210 representation and finally (for Transaction Layer Packets) to the form that can be processed by the Transaction Layer 205 of the receiving device.

Transaction Layer

In one embodiment, transaction layer 205 is to provide an interface between a device's processing core and the interconnect architecture, such as data link layer 210 and physical layer 220. In this regard, a primary responsibility of the transaction layer 205 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer 205 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e. transactions with request and response separated by time, allowing a link to carry other traffic while the target device gathers data for the response.

In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in Transaction Layer 205. An external device at the opposite end of the link, such as controller hub 115 in FIG. 1, counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed a credit limit. Upon receiving a response, an amount of credit is restored. An advantage of a credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered.

In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location.
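The credit-based flow control scheme described above can be sketched as follows. The class and method names are illustrative only and are not taken from the PCIe specification; real PCIe tracks credits separately per credit type (posted/non-posted/completion, header vs. data).

```python
# Hypothetical sketch of credit-based flow control for one receive buffer;
# names and the single-credit-pool simplification are assumptions.

class CreditChannel:
    """Models one receive buffer whose capacity is advertised as credits."""

    def __init__(self, advertised_credits):
        self.limit = advertised_credits   # initial advertisement by the receiver
        self.consumed = 0                 # credits consumed by transmitted TLPs

    def can_transmit(self, tlp_credits):
        # A transaction may be transmitted only if it does not exceed the limit.
        return self.consumed + tlp_credits <= self.limit

    def transmit(self, tlp_credits):
        if not self.can_transmit(tlp_credits):
            return False                  # transmitter must stall
        self.consumed += tlp_credits
        return True

    def restore(self, tlp_credits):
        # Upon the receiver freeing buffer space, credit is returned.
        self.consumed -= tlp_credits


channel = CreditChannel(advertised_credits=8)
assert channel.transmit(5)        # 5 of 8 credits consumed
assert not channel.transmit(4)    # would exceed the limit: stall
channel.restore(5)                # credit return restores headroom
assert channel.transmit(4)        # now permitted
```

The sketch illustrates the stated advantage of the scheme: as long as `can_transmit` stays true, credit-return latency never stalls the transmitter.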
In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access configuration space of the PCIe devices. Transactions to the configuration space include read requests and write requests. Message transactions are defined to support in-band communication between PCIe agents.

Therefore, in one embodiment, transaction layer 205 assembles packet header/payload 156. The format for current packet headers/payloads may be found in the PCIe specification at the PCIe specification website.

Quickly referring to FIG. 3, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, transaction descriptor 300 is a mechanism for carrying transaction information. In this regard, transaction descriptor 300 supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and association of transactions with channels.

Transaction descriptor 300 includes global identifier field 302, attributes field 304, and channel identifier field 306. In the illustrated example, global identifier field 302 is depicted comprising local transaction identifier field 308 and source identifier field 310. In one embodiment, global transaction identifier 302 is unique for all outstanding requests.

According to one implementation, local transaction identifier field 308 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, source identifier 310 uniquely identifies the requestor agent within a PCIe hierarchy.
Accordingly, together with source ID 310, local transaction identifier 308 field provides global identification of a transaction within a hierarchy domain.

Attributes field 304 specifies characteristics and relationships of the transaction. In this regard, attributes field 304 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, attributes field 304 includes priority field 312, reserved field 314, ordering field 316, and no-snoop field 318. Here, priority sub-field 312 may be modified by an initiator to assign a priority to the transaction. Reserved attribute field 314 is left reserved for future or vendor-defined usage. Possible usage models using priority or security attributes may be implemented using the reserved attribute field.

In this example, ordering attribute field 316 is used to supply optional information conveying the type of ordering that may modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. Snoop attribute field 318 is utilized to determine if transactions are snooped. As shown, channel ID field 306 identifies a channel that a transaction is associated with.

Link Layer

Link layer 210, also referred to as data link layer 210, acts as an intermediate stage between transaction layer 205 and the physical layer 220. In one embodiment, a responsibility of the data link layer 210 is providing a reliable mechanism for exchanging Transaction Layer Packets (TLPs) between two components of a link. One side of the Data Link Layer 210 accepts TLPs assembled by the Transaction Layer 205, applies packet sequence identifier 211, i.e.
an identification number or packet number, calculates and applies an error detection code, i.e. CRC 212, and submits the modified TLPs to the Physical Layer 220 for transmission across a physical medium to an external device.

Physical Layer

In one embodiment, physical layer 220 includes logical sub-block 221 and electrical sub-block 222 to physically transmit a packet to an external device. Here, logical sub-block 221 is responsible for the "digital" functions of Physical Layer 220. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by physical sub-block 222, and a receiver section to identify and prepare received information before passing it to the Link Layer 210.

Physical block 222 includes a transmitter and a receiver. The transmitter is supplied by logical sub-block 221 with symbols, which the transmitter serializes and transmits to an external device. The receiver is supplied with serialized symbols from an external device and transforms the received signals into a bit-stream. The bit-stream is de-serialized and supplied to logical sub-block 221. In one embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames 223. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

As stated above, although transaction layer 205, link layer 210, and physical layer 220 are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e. a transaction layer; (2) a second layer to sequence packets, i.e. a link layer; and (3) a third layer to transmit the packets, i.e. a physical layer.
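The Data Link Layer framing described above — prepending a sequence identifier and appending an error detection code to each TLP — can be sketched as follows. Note that actual PCIe uses a specific 32-bit LCRC polynomial and DLLP-based acknowledgment; the use of `zlib.crc32` and the two-byte sequence field here are stand-in assumptions for illustration.

```python
# Illustrative TLP framing: sequence identifier + CRC, loosely modeling the
# Data Link Layer behavior described above. zlib.crc32 is a stand-in for the
# actual PCIe LCRC; field widths are simplified assumptions.
import zlib

def frame_tlp(payload: bytes, seq_num: int) -> bytes:
    """Prepend a 12-bit sequence number (carried in two bytes here for
    simplicity) and append a 32-bit CRC over sequence number + payload."""
    header = (seq_num & 0x0FFF).to_bytes(2, "big")
    body = header + payload
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def check_tlp(frame: bytes) -> bool:
    """Receiver-side check: recompute the CRC and compare."""
    body, crc = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

frame = frame_tlp(b"example TLP payload", seq_num=7)
assert check_tlp(frame)
corrupted = frame[:-1] + bytes([frame[-1] ^ 0xFF])   # flip bits in transit
assert not check_tlp(corrupted)
```

In the real protocol a failed check triggers a retry from a replay buffer keyed by the sequence number, which is what makes the link layer's TLP exchange "reliable."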
As a specific example, a common standard interface (CSI) layered protocol is utilized.

Referring next to FIG. 4, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two low-voltage, differentially driven signal pairs: a transmit pair 406/412 and a receive pair 411/407. Accordingly, device 405 includes transmission logic 406 to transmit data to device 410 and receiving logic 407 to receive data from device 410. In other words, two transmitting paths, i.e. paths 416 and 417, and two receiving paths, i.e. paths 418 and 419, are included in a PCIe link.

A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. A connection between two devices, such as device 405 and device 410, is referred to as a link, such as link 415. A link may support one lane - each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider. In some implementations, each symmetric lane contains one transmit differential pair and one receive differential pair. Asymmetric lanes can contain unequal ratios of transmit and receive pairs. Some technologies can utilize symmetric lanes (e.g., PCIe), while others (e.g., DisplayPort) may not, and may even include only transmit or only receive pairs, among other examples.

A differential pair refers to two transmission paths, such as lines 416 and 417, to transmit differential signals.
As an example, when line 416 toggles from a low voltage level to a high voltage level, i.e. a rising edge, line 417 drives from a high logic level to a low logic level, i.e. a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e. cross-coupling, voltage overshoot/undershoot, ringing, etc. This allows for a better timing window, which enables faster transmission frequencies.

In some implementations, a data link layer or logical physical layer can include a controller or embody a media access control (MAC) layer. In some implementations, the physical (PHY) layer (e.g., its logic and/or physical fabric) can be provided as a separate intellectual property (IP), or computing, block, which can be coupled with other computing blocks providing other portions of the hardware logic to implement an interconnect stack. To enable such implementations, an interface can be provided to connect the computing blocks while still supporting a particular interconnect protocol (or potentially multiple different interconnect protocols) over the resulting interconnect (e.g., provided by the interconnected computing blocks). As an example, the PHY Interface for the PCI Express architecture (PIPE) has been developed to define such interfaces. Indeed, PIPE has been extended to enable interfaces between controllers (referred to also as "media access controllers" or "MACs" herein) and PHYs in now multiple different interconnect technologies, including not only PCIe, but also SATA, USB, DisplayPort, Thunderbolt, and Converged IO architectures. Accordingly, PIPE is also sometimes referred to, alternatively, as the PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures. PIPE is intended to enable the development of functionally equivalent PCI Express, SATA, and USB PHYs.
Accordingly, PHYs can be delivered as discrete integrated circuit (IC) packages or as macrocells for inclusion in ASIC designs or other systems. The specification defines a set of PHY functions which must be incorporated in a PIPE-compliant PHY. PIPE is defined to provide a standard interface between such a PHY and a Media Access Layer (MAC) and/or Link Layer ASIC. A standardized PHY interface, such as PIPE, can provide an interface to which ASIC and endpoint device vendors can develop.

FIGS. 5A-5C are simplified block diagrams 500a-c illustrating a defined interface 505 between a PHY and a MAC layer (e.g., implemented as two or more distinct computing blocks (e.g., integrated circuits (ICs), macrocells, intellectual property (IP) blocks, etc.)). In some implementations, the interface may be implemented according to a PIPE-based protocol. The interface may assist in defining a partition of the physical layer and other layers of a system according to respective architectures. For instance, FIG. 5A illustrates a partitioning for PCIe using the interface, FIG. 5B illustrates a partitioning for USB using the interface, and FIG. 5C illustrates a partitioning for Converged IO using the interface, among other examples.

In the examples of FIGS. 5A-5C, data transmitted or received over a physical channel 510 is processed by PHY layer logic. In one example, such as in PCIe architectures (e.g., as illustrated in FIG. 5A), the physical layer may be considered to include the physical media attachment (PMA) layer 515, the physical coding sublayer (PCS) 520, and the media access layer (MAC) 525. In other examples, such as USB architectures (e.g., as illustrated in FIG. 5B), the physical layer may be defined to include the PMA layer 515 and the PCS 520, with the MAC implementing at least a portion of the link layer of the architecture. In yet another example, such as a Converged IO architecture (e.g., as illustrated in FIG.
5C), the PMA layer 515 and the PCS 520 implement the physical layer, while the MAC implements a logical layer of the architecture, among other example partitionings of layers.

Generally, an example PMA 515 may include analog buffers, a serializer/deserializer (SERDES), and an interface (to the channel 510) (e.g., a 10-bit or 130-bit interface), among other example logic and elements. The PCS 520 can include coding/decoding logic (e.g., 8b/10b encode/decode, 64b/66b encode/decode, 128b/130b encode/decode, 128b/132b encode/decode, etc., depending on the architecture), an elastic buffer, and receiver detection logic, among other example logic and elements. In one example, the MAC layer 525 can include state machines for link training, flow control, elastic buffering, lane-to-lane deskew, and status, as well as scrambling and descrambling logic, among other example logic and elements. The MAC layer 525 may provide or enable an interface 530 between the PHY layer (and/or link layer, depending on the architecture) and higher protocol layers of the architecture, such as a data link layer, transaction layer, transport layer, etc.

In some implementations, a PIPE-based PHY/MAC interface 505 may include additional features (e.g., while allowing the interface to be backward compatible with earlier versions of PIPE). For instance, to address the issue of increasing signal count, a message bus interface may be adopted in some implementations of the interface 505. The message bus interface may map legacy PIPE signals without critical timing requirements so that their associated functionality can be accessed via the message bus interface (e.g., implemented on control and status pins of the interface) instead of implementing dedicated signals.
Additionally, in some instances, to further facilitate the design of general-purpose PHYs implemented as hard IP blocks and to provide the MAC layer with more freedom to do latency optimizations, a SerDes architecture may be provided to simplify the PHY and shift further protocol-specific logic into the block implementing the MAC layer, among other example features and enhancements.

In some implementations, a PIPE message bus interface, such as introduced above, may be implemented as part of a defined interface between a controller and PHY, which attempts to standardize the interface between the controller and PHY, including the definition of control and status signals for transmission between the computing blocks implementing the controller and PHY in connection with management of the interface and support of one or more interconnect protocols on a link. For instance, PIPE defines an interface between a MAC and PHY, which may be implemented using dedicated control and status signal wires for each operation involving communication between the MAC and the PHY. As the number of PIPE signals has grown over time as each of the protocol specifications PIPE supports (e.g., PCIe, SATA, USB) evolves (and as additional protocols are added for support through PIPE (e.g., USB Type-C, DisplayPort, Thunderbolt, etc.)), implementing control and status signaling in PIPE using dedicated signal wires results in a problematic increase in the pin count demanded of the PIPE interface. Indeed, escalating pin count can threaten the future scalability and usability of interfaces such as PIPE, among other example issues.

A message bus interface, such as utilized in some of the implementations discussed herein, may assist in addressing at least some of the issues above. For instance, a message bus interface may utilize a register-based status and control interface. In some example interfaces, a set of datapath signals and control and status signals can be defined.
For instance, assuming a PIPE interface with defined datapath signals and control and status signals per Rx/Tx lane pair (and other interfaces may additionally support configurable pairs where pairs are configured either as {Rx, Tx}, {Rx, Rx}, {Tx, Tx}, or {Tx, Rx}, etc.), in one embodiment, a low pin count version of a traditional PIPE interface can be implemented using a message bus interface, for instance, by providing an interface that maintains dedicated wires for datapath signals, asynchronous control and status signals, and latency-sensitive control and status signals, but that maps the remaining control and status signals defined for the interface to registers (e.g., 8-bit, 16-bit, or 32-bit registers), which can be accessed over a small number of additional pins/wires (e.g., the message bus interface), such as wires facilitating data transmission of 4 bits, 8 bits, etc. per direction. To support messaging of these control and status signals using the registers (also referred to herein as "message bus registers"), an address space can be provided (e.g., 12 address bits), into which the defined registers are mapped. In some implementations, this address space can be designed to be deliberately large to accommodate expansion of the set of operations and control and status signals that are to use these defined registers. This allows plenty of headroom for future expansion as well as room to house vendor-specific registers that PHY designs can use to expose useful status information to the controller or to provide additional configurability.

Continuing with the above example, to facilitate messaging of these control and status signals using the registers, read, write, completion, and other commands may be defined for accessing the registers. Included is a mechanism for grouping multiple writes together so that they take effect in the same cycle.
Also included is a mechanism for distinguishing between 1-cycle assertion type signals and signals that are held to a constant value. A transaction involving these registers may include command, address, and data, or any subset of these three elements, which may be transferred over the small set of wires in a time-multiplexed manner (e.g., over multiple unit intervals or clock cycles). A framing scheme can also be defined in connection with the interface, by which a corresponding computing block may identify the boundaries (e.g., start and end) of potentially multiple sequential (or contemporaneous) register transactions, each transaction serving to communicate one or more control or status signals in lieu of these same signals being driven over dedicated wires, as is done, for instance, in traditional PIPE interfaces, among other example features. Accordingly, a message bus interface may offload some signals of a MAC-PHY interface (e.g., PIPE) to specialized registers and thereby enable more interface operations in the future, as the protocols supported by the interface (e.g., PIPE) evolve to add new features, all while saving the interface from further increases in interface signal count.

Turning to FIG. 6, a simplified block diagram 600 is shown of an example PIPE interface utilizing a register-based, low pin count PIPE control and status interface (e.g., a message bus interface). The PIPE interface may couple a MAC computing block 605 with a PHY computing block 610, and at least a subset of the control and status signals generally defined for the interface may be categorized as asynchronous signals, timing-critical signals, or regular control and status signals, among other example categories. In this example, the asynchronous and timing-critical control and status signals may be assigned dedicated wires on the improved interface, such as shown in FIG. 6.
The regular control and status signals, however, may be mapped into and replaced by the bits of registers (e.g., 615, 620), which are accessed over a small set of wires (e.g., four or eight bits) as shown in the present example. Register commands (e.g., reads and writes), register addresses, and register data may be transmitted in a time-multiplexed manner across this small serial interface to cause values to be written to the message bus registers. Further, the datapath-related signals of the interface may be separate from the control and status signals and may, in effect, be the same or very similar to those provided in implementations where all control and status signals are implemented using dedicated pins (e.g., implementations not supporting message bus signaling).

In one example implementation of message bus registers, a set of PIPE control and status signals can be mapped into 8-bit PIPE registers. In some cases, only a subset of the numerous control and status signals defined for an interface (e.g., in a PIPE-based specification) may be mapped to register bits in a computing block, while, in practice, potentially all of the control and status signals of a defined link layer-physical layer interface (e.g., PIPE) may be mapped to register bits (e.g., with exceptions for the asynchronous and timing-critical control signals, which may remain implemented through dedicated wires), among other examples. Further, while some implementations may use 8-bit registers to implement message bus registers, other register widths can just as easily be used, including 16- or 32-bit registers, etc. In one example implementation, MAC→PHY control and status signals can be mapped to a first address space corresponding to the registers of the PHY computing block, while PHY→MAC control and status signals can be mapped to a second address space corresponding to the registers of the MAC computing block.
In some cases, the first and second address spaces can utilize independent address spaces, such that the same address may potentially be used in each of the PHY's and MAC's registers. In other examples, a common, or shared, address space can be utilized such that the first and second address spaces are nonoverlapping, with each register in the PHY and MAC having a unique address. In one example, MAC→PHY control and status signals can be mapped into an address space starting at address zero, while the PHY→MAC control and status signals can be mapped into another address space starting at address zero. As an example, a 12-bit address space may be implemented, which may be considered large enough to accommodate the currently defined PIPE signals with plenty of headroom for future signal growth; however, other address space sizes can be chosen in other examples. A large address space may be utilized in connection with the registers to enable room for a dedicated address range for vendor-specific registers that can be used to expose useful PHY status information and/or to provide additional configurability. In still other examples, different sized address spaces can be provided that can be accessed via different commands, depending on latency requirements of transmitting the full command plus address bits across the serial interface, among other example implementations.

Bits within a set of status/control registers of an example PHY/MAC interface can be mapped to defined signals in a set of signals defined or extended in the PHY/MAC interface (e.g., the signals defined in the PIPE specification). In one implementation, when a "1" is written to a bit mapped to a particular signal, this value is interpreted the same as if the particular signal were received in an implementation of the interface that provides dedicated wires to each of the signals.
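The bit-to-signal mapping just described can be sketched in software as follows; the `MessageBusRegisters` class and `SIGNAL_MAP` structure are illustrative assumptions (only the TxDetectRx/Loopback mapping at address 12'h000, bit 6, is taken from the example elsewhere in this description).

```python
# Minimal sketch: message bus registers whose bits are mapped to named
# control/status signals. Writing a "1" to a mapped bit is interpreted
# as receipt of the corresponding signal.
SIGNAL_MAP = {
    # (register address, bit position) -> signal name
    (0x000, 6): "TxDetectRx/Loopback",
}

class MessageBusRegisters:
    def __init__(self):
        self.regs = {}  # address -> 8-bit register value

    def write(self, addr: int, value: int) -> list[str]:
        """Write a register; return the mapped signals newly asserted."""
        old = self.regs.get(addr, 0)
        self.regs[addr] = value & 0xFF
        asserted = []
        for bit in range(8):
            mask = 1 << bit
            # A 0 -> 1 transition on a mapped bit emulates signal receipt.
            if value & mask and not old & mask:
                name = SIGNAL_MAP.get((addr, bit))
                if name:
                    asserted.append(name)
        return asserted
```

Under these assumptions, writing 0x40 (bit 6 set) to address 0x000 would be interpreted by the receiving block as the TxDetectRx/Loopback signal.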
As an example, a first computing block may determine that a TxDetect state should be entered and can message this to the other computing block by preparing a write (to be sent over a subset of the pins of the interface designated as the status and control interface of the PHY/MAC interface), which causes a "1" to be written to a corresponding bit (e.g., 12'h000[6]) to indicate the signal "TxDetectRx/Loopback" in this particular example. The receiving, second computing block can detect that the "1" has been written to bit 6 of the register at address 12'h000 and interpret this value as the receipt of the PIPE TxDetectRx/Loopback signal, among other potential examples.

TABLE 1: Example of Register Commands

  4'b0000  NOP                Used during idle periods.
  4'b0001  write_committed    Indicates that the current write as well as any previously uncommitted writes should be committed, e.g. their values should be updated in the PIPE registers. Contains address and data.
  4'b0010  write_uncommitted  Indicates that the current write should be saved off and its associated values updated in the PIPE registers at a future time when a write_committed is received. Contains address and data.
  4'b0011  read               Contains address.
  4'b0100  read completion    This is the data response to a read. Contains data only.
  Others   Reserved           Reserved.

Table 1 provides examples of some register commands for use in accessing registers maintained in connection with control and status signals defined for a MAC-PHY interface, such as PIPE. For instance, a no operation (or "NOP") command can be utilized to indicate that there is no operation being requested (e.g., for use during idle states). Write operations can be used to replace transmission of one or more of a set of control and status signals defined for the interface. For instance, a write can write a value to a particular bit of a particular register mapped to a particular one of the set of control and status signals.
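The 4-bit command encodings of Table 1 can be captured in a short sketch; the enum and helper names below are illustrative assumptions, while the encodings and which command types carry an address or data follow Table 1.

```python
from enum import IntEnum

class MsgBusCmd(IntEnum):
    """4-bit message bus register commands per Table 1."""
    NOP = 0b0000
    WRITE_COMMITTED = 0b0001
    WRITE_UNCOMMITTED = 0b0010
    READ = 0b0011
    READ_COMPLETION = 0b0100

def carries_address(cmd: MsgBusCmd) -> bool:
    # Writes and reads name a register; NOP and completions do not.
    return cmd in (MsgBusCmd.WRITE_COMMITTED,
                   MsgBusCmd.WRITE_UNCOMMITTED, MsgBusCmd.READ)

def carries_data(cmd: MsgBusCmd) -> bool:
    # Writes carry the value to store; a completion returns the read data.
    return cmd in (MsgBusCmd.WRITE_COMMITTED,
                   MsgBusCmd.WRITE_UNCOMMITTED, MsgBusCmd.READ_COMPLETION)
```

These two predicates also reflect why transaction length can be deduced from the command type, as discussed later in connection with FIG. 8.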
The value of the particular bit can be interpreted as the receipt of the particular signal (even though the particular signal was not actually sent (e.g., as the dedicated wire has been omitted in the improved interface design)).

In some instances, an interface can provide for a combination of signals in the set of control and status signals to be sent at the same time. For instance, certain PIPE signals may need to be aligned so that their values take effect during the same cycle. In a conventional version of the interface, this combination of signals can be transmitted concurrently, each on its respective wire. In an improved implementation based on registers, it may not be feasible to concurrently write to each of the register bits corresponding to the combination of signals (e.g., the bits may be scattered across multiple registers with multiple different addresses). In one example, write commands can include committed and uncommitted writes. For example, an uncommitted command can be used to provisionally write, or queue a write, to an identified register address corresponding to the command. Uncommitted writes can be held until the next committed write is received, at which point the values requested in the intervening uncommitted writes (e.g., since the last committed write) are written to their respective register bits together with the write to the register requested in the committed write. For instance, an uncommitted write can be written to a buffer (that is flushed on a committed write) or to a shadow register to store the write until the next committed write is received and the status and control register is updated, while committed writes are written directly to the status and control register.
In this manner, one or more uncommitted writes can be requested, followed by a committed write, to simultaneously write values to multiple different registers and bits so as to achieve alignment of the signals mapped to these bits.

As an example, in an implementation with 8-bit registers, 24 different signals (from a defined interface) can be mapped across three or more registers, such as registers A, B, and C. In one example, three signals mapped to three respective bits in register A may need to be aligned with another signal mapped to a respective bit in register B, and two signals mapped to two respective bits in register C. In this particular illustrative example, to emulate the alignment of these signals, values can be written to the three bits in register A in a first write_uncommitted command, followed by a second write_uncommitted command to write the value to the bit in register B. Thereafter, a write_committed command can be utilized to not only write the values of the two bits in register C, but also to "commit" and cause the uncommitted writes to registers A and B to be performed simultaneously with the writes to register C, thereby causing all the values associated with the writes to registers A, B, and C to take effect in the same cycle.

Additional operations can be provided in connection with the status and control registers of an improved interface. For instance, read and read completion commands can be provided for accessing values written to particular status registers. Acknowledgement (ACK) commands can also be defined, for instance, to indicate acknowledgement (i.e., to the requesting computing block) that a committed or uncommitted write has been successfully performed at a particular register.

Some implementations may omit support of a write_uncommitted command.
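The committed/uncommitted write mechanism described above can be sketched as a register file with a shadow buffer; the class and method names are illustrative assumptions.

```python
# Sketch of committed/uncommitted writes: uncommitted writes are queued
# in a shadow buffer and take effect atomically (in the same "cycle")
# when the next committed write arrives.
class ShadowedRegisterFile:
    def __init__(self):
        self.regs = {}      # live registers visible to the other block
        self.pending = {}   # shadow buffer of uncommitted writes

    def write_uncommitted(self, addr: int, value: int) -> None:
        # Queued only; not yet visible in the live registers.
        self.pending[addr] = value & 0xFF

    def write_committed(self, addr: int, value: int) -> None:
        # Flush all queued writes together with this one so that the
        # signal values mapped to these bits take effect simultaneously.
        self.pending[addr] = value & 0xFF
        self.regs.update(self.pending)
        self.pending.clear()
```

Following the register A/B/C example above, two uncommitted writes (to A and B) followed by one committed write (to C) would make all three updates visible at once.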
For instance, in one implementation, the registers of a particular computing block can be defined in such a way, with width and signal assignments, that signals understood to need alignment are mapped to bits in the same register or adjacent registers, thereby making it possible to write to each of the corresponding bits in a single committed write. Other potentially useful commands may include (but are not limited to) writes that span multiple adjacent registers, among other examples.

In one example implementation of a message bus, the specification of a PIPE-based interface may define 12-bit address spaces to enable the message bus interface, with the MAC and the PHY each implementing unique 12-bit address spaces. For instance, FIG. 7 is a block diagram illustrating an example address space 700. Such an address space may be used to host message bus registers associated with various interface operations (e.g., PIPE operations). For instance, in some implementations of a PIPE message bus, the MAC and PHY may access specific bits in the registers to initiate operations, to participate in handshakes, or to indicate status. The MAC initiates requests on the message bus interface to access message bus registers hosted in the PHY address space. Similarly, the PHY initiates requests on the message bus interface to access similar registers hosted in the MAC address space. As shown in the representation of FIG. 7, in some examples, each 12-bit address space (e.g., 700) may be divided into four main regions: receiver address region 705, transmitter address region 710, common address region 715, and vendor-specific address region 720. For instance, a receiver address region 705 may be used to configure and report status related to receiver operation (e.g., spanning the 1024-address region from 12'h000 to 12'h3FF and supporting up to two receivers, with 512 addresses allocated to each).
A transmitter address region 710 may be used to configure and report status related to transmitter operation (e.g., spanning the 1024-address region from 12'h400 to 12'h7FF and supporting up to two transmitters, TX1 and TX2, with a 512-address region associated with each). The common address region 715 may host registers relevant to both receiver and transmitter operation (e.g., spanning the 1024-address region from 12'h800 to 12'hBFF and supporting up to two sets of Rx/Tx pairs, with 512 addresses allocated toward the common registers for each pair). The vendor-specific address region 720 may be implemented as a 1024-address region from 12'hC00 to 12'hFFF and may enable individual vendors to define registers as needed outside of those defined in a particular version of a corresponding PIPE-based specification, among other example implementations. As noted above, the address space may be defined to support configurable Rx/Tx pairs. Up to two differential pairs may be assumed to be operational at any one time. Supported combinations are one Rx and one Tx pair, two Tx pairs, or two Rx pairs, among other example implementations.

Tables 2 and 3 show example detailed implementations of the example PIPE message bus address space illustrated in FIG. 7. For instance, PCIe RX margining operations and elastic buffer depth may be controlled via message bus registers hosted in these address spaces. Additionally, several legacy PIPE control and status signals have been mapped into registers hosted in these address spaces. The following subsections define the PHY registers and the MAC registers. Individual register fields are specified as required or optional. In addition (as illustrated in the examples of Tables 2 and 3), each field may have an attribute description of either level or 1-cycle assertion. When a level field is written, the value written is maintained by the hardware until the next write to that field or until a reset occurs.
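The four-region partitioning of the 12-bit address space in FIG. 7 can be sketched as a simple address decoder; the function name and region labels are illustrative, while the boundaries follow the ranges given in the text (RX: 12'h000-12'h3FF, TX: 12'h400-12'h7FF, CMN: 12'h800-12'hBFF, VDR: 12'hC00-12'hFFF).

```python
# Sketch: decode a 12-bit message bus address into one of the four
# main regions of the example address space of FIG. 7.
def decode_region(addr: int) -> str:
    if not 0 <= addr <= 0xFFF:
        raise ValueError("address outside 12-bit space")
    if addr <= 0x3FF:
        return "RX"    # receiver address region (705)
    if addr <= 0x7FF:
        return "TX"    # transmitter address region (710)
    if addr <= 0xBFF:
        return "CMN"   # common address region (715)
    return "VDR"       # vendor-specific address region (720)
```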
When a 1-cycle field is written to assert the value high, the hardware maintains the assertion for only a single cycle and then automatically resets the value to zero on the next cycle.

Table 2 lists the PHY registers and their associated addresses. The details of each register are provided in the subsections below. To support configurable pairs, the same registers defined for RX1 are also defined for RX2, the same registers defined for TX1 are defined for TX2, and the same registers defined for CMN1 are defined for CMN2. In this example, only two differential pairs are active at a time based on configuration; for instance, valid combinations correspond to registers defined in RX1+TX1+CMN1, RX1+RX2+CMN1+CMN2, or TX1+TX2+CMN1+CMN2. In this example, a PHY that does not support configurable pairs only implements registers defined for RX1, TX1, and CMN1. In one example, PHY registers may be implemented such as set forth in the particular example of Table 2, listed below:

TABLE 2: Representation of example PHY Message Bus Registers

  12'h0            RX1: RX Margin Control0
  12'h1            RX1: RX Margin Control1
  12'h2            RX1: Elastic Buffer Control                      N/A for SerDes Architecture
  12'h3            RX1: PHY RX Control0                             N/A for SerDes Architecture
  12'h4            RX1: PHY RX Control1
  12'h5            RX1: PHY RX Control2
  12'h6            RX1: PHY RX Control3
  12'h7            RX1: Elastic Buffer Location Update Frequency    N/A for SerDes Architecture
  12'h8            RX1: PHY RX Control4                             Some fields N/A for SerDes Architecture
  12'h9-12'h1FF    RX1: Reserved
  12'h200-12'h3FF  RX2: Same registers are defined in this region for RX2 as for RX1 above
  12'h400          TX1: PHY TX Control0                             N/A for SerDes Architecture
  12'h401          TX1: PHY TX Control1                             N/A for SerDes Architecture
  12'h402          TX1: PHY TX Control2
  12'h403          TX1: PHY TX Control3
  12'h404          TX1: PHY TX Control4
  12'h405          TX1: PHY TX Control5
  12'h406          TX1: PHY TX Control6
  12'h407          TX1: PHY TX Control7
  12'h408          TX1: PHY TX Control8
  12'h409-12'h5FF  TX1: Reserved
  12'h600-12'h7FF  TX2: Same registers are defined in this region for TX2 as for TX1 above
  12'h800          CMN1: PHY Common Control0                        N/A for SerDes Architecture
  12'h801-12'h9FF  CMN1: Reserved
  12'hA00-12'hBFF  CMN2: Same registers are defined in this region for CMN2 as for CMN1 above
  12'hC00-12'hFFF  VDR: Reserved

Similarly, Table 3 lists an example implementation of MAC registers, their characteristics, and their associated addresses. For instance:

TABLE 3: Representation of example MAC Message Bus Registers

  12'h0            RX1: RX Margin Status0
  12'h1            RX1: RX Margin Status1
  12'h2            RX1: RX Margin Status2
  12'h3            RX1: Elastic Buffer Status                       N/A for SerDes Architecture
  12'h4            RX1: Elastic Buffer Location                     N/A for SerDes Architecture
  12'h5            RX1: RX Status0
  12'h6            RX1: RX Control0
  12'h7-12'h9      RX1: Reserved
  12'hA            RX1: RX Link Evaluation Status0
  12'hB            RX1: RX Link Evaluation Status1
  12'hC            RX1: RX Status 4
  12'hD            RX1: RX Status 5
  12'hE-12'h1FF    RX1: Reserved
  12'h200-12'h3FF  RX2: Same registers are defined in this region for RX2 as for RX1 above
  12'h400          TX1: TX Status0
  12'h401          TX1: TX Status1
  12'h402          TX1: TX Status2
  12'h403          TX1: TX Status3
  12'h404          TX1: TX Status4
  12'h405          TX1: TX Status5
  12'h406          TX1: TX Status6
  12'h407-12'h5FF  TX1: Reserved
  12'h600-12'h7FF  TX2: Same registers are defined in this region for TX2 as for TX1 above
  12'h800-12'h9FF  CMN1: Reserved
  12'hA00-12'hBFF  CMN2: Reserved
  12'hC00-12'hFFF  VDR: Reserved

It should be appreciated that the example registers enumerated in Tables 2 and 3 are presented as illustrative examples only. Indeed, one of the example benefits of a message bus interface is the easy extensibility of the control and status signals supported by an example implementation of a MAC-PHY interface, such as PIPE.

Turning to FIG. 8, a signal diagram 800 is shown illustrating example signaling on an 8-bit status and control interface 830 of a MAC-PHY interface. 8 bits of data can be sent during each clock (PCLK) 835 cycle, or unit interval (UI). At startup, or following an idle state, zeros can be transmitted, as no control or status signals are being sent between the MAC and PHY blocks.
When non-zero data is sent following an idle, the data can be interpreted as the beginning of a status/control transaction on the interface. For instance, in the example of FIG. 8, a first one of the computing blocks can determine that a particular one of the defined status and control signals is to be sent to the other computing block as defined by the interface. In a register-based implementation, the dedicated signaling pins have been omitted, and the first computing block instead sends data over the status and control interface. For instance, the transaction can begin (at 810) with a four-bit register command (e.g., "4'd1") followed by the first four bits of the register address to which the command applies being transmitted in a first UI. In the next UI, the remaining 8 bits of the register's address are sent (at 815), followed by four UIs of data (32 bits) containing the values to be written to the 32-bit register (beginning at 820).

In some implementations, all status and control register transactions may contain a command. For write and read commands, the transaction can further include the associated register address. For writes and read completions, the transaction can also contain data (identifying contents of the register). As a result, the number of cycles it takes to transfer a transaction across the interface can be deduced from the command type. For instance, the example transaction shown in FIG. 8 involves a write command 805 transferred across an 8-bit serial interface, assuming a 4-bit command, 32-bit registers, and a 12-bit address space, that is completed in 6 cycles (or UIs). Other transactions in this configuration will be expected to take a respective number of UIs to complete. For instance, a read may take two UIs (e.g., for a 4-bit command and 12-bit address) and a read completion may take five UIs (e.g., for a 4-bit command and 32 bits of read data), among other examples.
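The 6-UI write transaction of FIG. 8 can be sketched as a serializer packing the 4-bit command, 12-bit address, and 32-bit data into six 8-bit unit intervals; the function name is an illustrative assumption, while the field widths and UI count follow the text.

```python
# Sketch: serialize a committed write into six 8-bit unit intervals,
# matching the 6-cycle write transaction described for FIG. 8.
def frame_write(addr: int, data: int) -> list[int]:
    cmd = 0b0001  # write_committed per Table 1
    # 4-bit command + 12-bit address + 32-bit data = 48 bits = 6 UIs.
    bits = (cmd << 44) | ((addr & 0xFFF) << 32) | (data & 0xFFFFFFFF)
    # First UI carries the command nibble plus the top 4 address bits.
    return [(bits >> (8 * i)) & 0xFF for i in range(5, -1, -1)]
```

For example, `frame_write(0xABC, 0x12345678)` would yield UIs `[0x1A, 0xBC, 0x12, 0x34, 0x56, 0x78]`: command-plus-address in the first two UIs, followed by four UIs of data.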
Given the predictability of the length of these various transactions, the end of a transaction can be detected based on the transaction type. Consequently, the beginning of another transaction can likewise be detected, for instance, when non-zero data immediately follows the UI or bits detected to represent the end of a preceding transaction. This can allow the omission of a transaction identifier in some implementations. Further, a start of transaction may likewise be detected when a valid command is received following an idle or null signal (e.g., 825), among other examples.

In some defined interfaces, such as PIPE, some existing status and control signals are defined based not only on the designated wire on which they are transmitted but also on the duration for which the signal is held on the corresponding wire. Accordingly, in an implementation that replaces at least some of these dedicated signaling wires with a register mapping (such as described above), it can be desirable to enable distinguishing signals that require 1-cycle assertions from signals that need to be held over multiple UIs (e.g., at a static value). For instance, particular register bits or registers can be configured such that a value written to the bit is held at that value but then automatically returned to a default or un-asserted value (i.e., without requiring an explicit write transaction to return the value to the default (e.g., from "1" back to "0")). For instance, a particular bit may be mapped to a particular signal that has a 1-cycle assertion, such that when a "1" is written to the particular bit, the "1" is interpreted as an instance of the particular signal. However, rather than keeping the value of the particular bit at "1", after the expiration of the corresponding single UI, or cycle, the value can be automatically returned to "0".
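The auto-returning 1-cycle assertion behavior just described can be sketched as follows; the class name and explicit cycle-tick method are illustrative assumptions standing in for the hardware's automatic behavior.

```python
# Sketch: a register in which designated bits are 1-cycle assertions
# (auto-cleared after one cycle) while the remaining bits are level
# fields that hold their value until rewritten or reset.
class AssertionRegister:
    def __init__(self, one_cycle_bits: set[int]):
        self.value = 0
        self.one_cycle_bits = one_cycle_bits

    def write(self, value: int) -> None:
        self.value = value & 0xFF

    def tick(self) -> None:
        # Emulates hardware returning 1-cycle bits to their default
        # after a single UI; level bits are left untouched.
        for bit in self.one_cycle_bits:
            self.value &= ~(1 << bit)
```

Writing both a 1-cycle bit and a level bit in the same write would, after one tick, leave only the level bit asserted.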
Likewise, signals that are to be held at a value for more than one UI can be mapped to register bits that are configured to be held at that value until the expiration of a defined number of cycles or until the bit is overwritten, among other examples. In some instances, bits with similar configurations can be grouped within the same register or consecutively addressed registers. For instance, the bits within a given register can all be mapped to respective single-cycle signal assertions, such that the register is processed to always return values back to a default for any bit in the register. Other registers can be used to group other bits mapped to other signals with similarly equal signal assertion lengths, among other examples. In another implementation, 1-cycle assertion type signals and static value type signals can be distinguished simply by grouping the two different signal types into different registers that are located in different address spaces or different address ranges of the same address space, and interpreting their values based on their respective addresses. In still another implementation, different signal types can be mapped to different registers and accessed using different write command types (e.g., a static-type write and a single-cycle write, etc.), among other examples.

In some implementations, a message bus interface signal may be defined (e.g., through the definitions of corresponding message bus registers) to support recalibration of a PHY (e.g., recalibration of a PHY receiver) utilizing an implementation of a MAC-PHY interface. For instance, as I/O interconnect data rates increase, the absolute margins that transmitters and receivers must meet become smaller. As a result, proper PHY operation becomes more sensitive to nonidealities in the silicon circuits and to noise terms (e.g., thermal noise, cross talk, etc.).
Additionally, support for wider operating temperature ranges introduces yet another factor that can impact proper PHY operation. In general, PHYs can mitigate this situation by adding monitoring circuitry, which senses variation of circuit parameters (e.g., offset, gain) and corrects deviations on the fly. However, in some implementations, such corrections may involve the addition of a redundant path, which may be very expensive to implement at higher data rates, among other example disadvantages.
To enable a more cost-efficient implementation, in some examples, a PIPE interface may be enhanced to allow the PHY to notify the MAC that it has detected a situation that requires recalibration of the PHY. For instance, in certain situations, the PHY may identify conditions at the PHY and determine that it needs to be recalibrated. These conditions may include changes in operating conditions (e.g., Vref changes) or detection of certain error conditions, among other examples. In response to receiving a signal from the PHY that recalibration is desired, the MAC may quiesce the link by forcing the link to go into recovery and then signal to the PHY to perform the incremental correction needed to address the problem, among other example implementations.
In some implementations, an example interface may be provided, which supports PHY- or controller-initiated recalibration. For instance, a PIPE-based interface may be enhanced to support PHY-initiated and controller-initiated PHY recalibration in a cost-effective manner (e.g., without redundant paths). For instance, in certain situations, the PHY may identify conditions at the PHY and determine that it needs to be recalibrated. These conditions may include changes in operating conditions (e.g., Vref changes) or detection of certain error conditions, among other examples. Further, the MAC-PHY interface may be so defined that either the controller or the PHY may initiate recalibration.
In some implementations, recalibration (e.g., PHY Rx recalibration) may be required to take place during a recovery state. In some implementations, recovery may be initiated only by the controller (and not the PHY directly); that is, only the MAC may be defined as capable of initiating recovery. In such implementations, if the PHY determines that a recalibration is necessary (and initiates recalibration), it may first notify the MAC (e.g., through a message bus interface) that the MAC should initiate recovery and then (or at the same time) request a recalibration. On the other hand, if recalibration is initiated by the MAC, in such examples, it may unilaterally cause the link to enter recovery in connection with a recalibration request (sent over a message bus interface). In both cases, the PHY may signal the controller when the recalibration is complete so that the controller can exit recovery and resume normal operation, among other example implementations.
In some implementations, a message bus interface implemented on a PIPE-based specification may be implemented to enable signaling in a defined sequence to implement both PHY-initiated and controller-initiated PHY recalibration. In some implementations, the PHY recalibration operation (whether PHY- or MAC-initiated) is to occur while the link is in recovery (e.g., Recovery.RcvrLock for PCIe). The controller can initiate a link's transition into recovery. Accordingly, the controller may notify the PHY when the link has entered recovery and the link is prepared for performance of the recalibration operation. Additionally, the PHY may be responsible for notifying the controller when the recalibration process has completed and it is okay to exit recovery. In one example implementation, these signals may be facilitated on a PIPE-based interface through a PIPE message bus.
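The division of responsibility just described (only the controller moves the link into recovery; the PHY recalibrates only while the link is there) can be sketched as follows. The class, method, and state names are illustrative assumptions, not a real PIPE implementation:

```python
# Hypothetical sketch: the PHY may only perform recalibration while the
# link is in a recovery state, and only the controller (MAC) may move
# the link into recovery. State names are illustrative.
class Link:
    def __init__(self):
        self.state = "L0"          # active link state
        self.recal_done = False

    def mac_enter_recovery(self):  # only the MAC initiates recovery
        self.state = "Recovery.RcvrLock"

    def phy_recalibrate(self):
        if not self.state.startswith("Recovery"):
            raise RuntimeError("recalibration requires recovery state")
        # ... perform Rx recalibration ...
        self.recal_done = True

    def mac_exit_recovery(self):
        if self.recal_done:        # PHY has signaled completion
            self.state = "L0"

link = Link()
try:
    link.phy_recalibrate()         # not yet in recovery: rejected
except RuntimeError:
    pass
link.mac_enter_recovery()
link.phy_recalibrate()
link.mac_exit_recovery()
assert link.state == "L0" and link.recal_done
```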
For instance, two bits in the MAC message bus address space may be designated for signals to be sent by the PHY to the MAC and at least one bit may be dedicated within the PHY's message bus address space for signals to be sent by the MAC to the PHY in connection with a message bus implementation of PHY- and controller-initiated recalibration requests.
Turning to FIGS. 9A-9B, simplified diagrams 900a-b are shown illustrating example signaling on an implementation of a message bus interface of a PHY-MAC interface to implement recalibration requests, which cause one or more blocks of the PHY (e.g., the PHY receiver ports) to be recalibrated.
For instance, as shown in FIG. 9A, a signaling sequence may be defined, using a message bus interface, to implement controller-initiated recalibration requests. For instance, pins of a MAC-PHY interface (e.g., a PIPE-based interface) may include, for each data lane, 8 pins to implement an 8-bit MAC-to-PHY (M2P) message bus 905 and 8 pins to implement a corresponding 8-bit PHY-to-MAC (P2M) message bus 910. Messages sent on the message buses 905, 910 may be in accordance with an interface clock 915 (e.g., PIPE interface clock (pclk)). In the example of FIG. 9A, a MAC may determine (or may receive a signal from higher-layer logic) to identify that a recalibration of the PHY receiver should be performed. Accordingly, the MAC may initiate 920 a receiver recalibration request by writing to a bit in the PHY's message bus register mapped to receive recalibration requests. For instance, the MAC may use the M2P message bus 905 to send a write command 925a (e.g., a committed write command) to cause a "1" to be written to the receiver recalibration bit (e.g., IORecal bit) in the PHY's message bus register. In some implementations, the receiver recalibration bit may be configured as a 1-cycle register value.
Continuing with the example of FIG.
9A, in response to reading the receiver recalibration bit written to by the MAC, the PHY may send, over P2M message bus 910, an acknowledgement signal 930. As noted above, in some implementations, a protocol or other rule may be defined to require that PHY recalibration be performed while the link is in a recovery or other particular state. In some cases, such a state or state transition may only be initiated by the MAC (or higher layer protocol logic). For instance, in the example of FIG. 9A, the MAC may initiate a recovery state in connection with writing 925a to the receiver recalibration bit. For instance, the MAC may initiate the recovery state substantially in concert with or automatically after writing the recalibration request to the PHY message bus register.
When a PHY identifies a recalibration request written to a particular one of its message bus registers, as in the example of FIG. 9A, the PHY may begin performing the recalibration. Upon completion of the recalibration, the PHY may utilize the P2M message bus 910 to signal to the MAC that the recalibration is complete (at 935). For instance, the PHY may send a write request 940a on the P2M message bus 910 to cause a particular bit of the MAC message bus register (e.g., IORecalDone) to be written to identify to the MAC that the recalibration is complete. In some implementations, this particular bit may be configured as a 1-cycle register value. Further, as with the recalibration request, an acknowledgement (e.g., 945) may be defined to be sent (by the MAC) in response to an indication from the PHY that recalibration is complete. In some implementations, the MAC may cause a link to transition from recovery into another state based upon receiving an indication (on its message bus register) that recalibration is complete, among other example actions based on the completion of recalibration.
As a summary, in some implementations, such as shown in the example of FIG.
9A, a sequence of transactions may be seen across the PIPE message bus interface for a controller-initiated PHY recalibration. For instance, when a controller determines a PHY recalibration is needed (e.g., due to detection of an excessive number of errors), it may force the link into recovery and initiate a PHY recalibration by writing to the IORecal bit in the PHY's message bus address space. The PHY may acknowledge receipt of the recalibration request by returning a write_ack. After completing the recalibration process, the PHY may signal completion to the controller by writing to the IORecalDone bit in the controller's message bus address space, which the controller acknowledges by return of a write ack. Upon notification that the recalibration process has completed, the controller can subsequently exit recovery and resume normal operation on the link.
Turning to FIG. 9B, an example of a PHY-initiated receiver recalibration is shown. For instance, tools may be provided at a PHY block implementation to monitor attributes of the PHY and a link established using the PHY to detect instances where recalibration of the PHY (e.g., a receiver of the PHY) is desirable (e.g., based on detecting that the signal quality has dropped below an acceptable level). For instance, in certain situations, the PHY may identify conditions at the PHY and determine that it needs to be recalibrated, such as changes in operating conditions (e.g., Vref changes) or detection of certain error conditions, among other examples. In response to identifying that a recalibration is desired, the PHY may send a write command 955 on the P2M message bus 910 to cause a value to be written to a recalibration request bit (e.g., PhyIORecalRequest) defined in a message bus register of the MAC and thereby initiate (at 950) receiver recalibration.
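The controller-initiated handshake of FIG. 9A can be sketched as a sequence of message bus transactions. The simplified MAC/PHY objects below are an illustrative model only (bit names follow the description; the classes and methods are assumptions, not a real PIPE implementation):

```python
# Sketch of the FIG. 9A controller-initiated recalibration handshake.
# Each trace entry records (bus, transaction); numbers refer to FIG. 9A.
class PhyModel:
    def __init__(self, trace):
        self.trace = trace

    def on_iorecal_write(self):
        self.trace.append(("P2M", "write_ack"))          # 930
        # ... perform receiver recalibration ...
        self.trace.append(("P2M", "write IORecalDone"))  # 940a

class MacModel:
    def __init__(self):
        self.trace = []
        self.in_recovery = False
        self.phy = PhyModel(self.trace)

    def initiate_recalibration(self):                    # 920
        self.in_recovery = True                          # force recovery
        self.trace.append(("M2P", "write IORecal"))      # 925a
        self.phy.on_iorecal_write()
        self.trace.append(("M2P", "write_ack"))          # 945
        self.in_recovery = False                         # exit recovery

mac = MacModel()
mac.initiate_recalibration()
assert [t for _, t in mac.trace] == [
    "write IORecal", "write_ack", "write IORecalDone", "write_ack"]
assert not mac.in_recovery
```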
The MAC may detect that the value has been written to this bit (e.g., a 1-cycle register bit) and send an acknowledgement 960 to the PHY using the M2P message bus 905. In some implementations, the PHY may first write this request (at 955) before performing the recalibration, because recalibration is to be performed during a particular link state (e.g., recovery), which is to be initiated using the MAC. For instance, the MAC may initiate the state upon receiving the recalibration request at its message bus register. From this point, in some implementations, the signaling sequence may correspond to that in the controller-initiated recalibration shown and described in FIG. 9A. For instance, the MAC may request recalibration to continue (at 965) by writing a recalibration request 925b to the PHY message bus register (using M2P message bus 905) to indicate that the link is ready for recalibration (e.g., that recovery has been entered). The PHY may acknowledge (through message bus signal 970 on P2M message bus 910) the MAC's recalibration signal 925b and respond by performing the recalibration at the PHY. Upon completion of the recalibration, the PHY may signal the same (at 975) by writing 940b to the MAC message bus register using the P2M message bus 910 to indicate that recalibration is completed. The MAC may cause the link to exit recovery based on the PHY's signal and may further send an acknowledgement signal 980 on the M2P message bus 905 to complete the signaling sequence, among other example features and implementations.
To summarize, the example of FIG. 9B illustrates the sequence of transactions seen across the PIPE message bus interface for a PHY-initiated PHY recalibration.
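The PHY-initiated ordering of FIG. 9B, as just described, can be captured as a simple transaction trace. Signal names mirror the description; the tuple encoding itself is only an illustrative assumption:

```python
# Illustrative trace of the FIG. 9B PHY-initiated sequence, in the
# order described above (numbers refer to FIG. 9B).
sequence = [
    ("P2M", "write PhyIORecalRequest"),  # 955: PHY requests recal
    ("M2P", "write_ack"),                # 960: MAC acknowledges
    ("M2P", "write IORecal"),            # 925b: recovery entered, proceed
    ("P2M", "write_ack"),                # 970: PHY acks, recalibrates
    ("P2M", "write IORecalDone"),        # 940b: recalibration complete
    ("M2P", "write_ack"),                # 980: MAC acks, exits recovery
]

order = [t for _, t in sequence]
# The PHY's request precedes the MAC's IORecal, which precedes done.
assert order.index("write PhyIORecalRequest") < order.index("write IORecal")
assert order.index("write IORecal") < order.index("write IORecalDone")
```

Note that from transaction 925b onward the trace is identical to the controller-initiated case; only the initial PhyIORecalRequest/ack pair is added.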
For instance, when the PHY detects variations in circuit parameters that warrant a recalibration, it signals to the controller that it should enter recovery and request a PHY recalibration; this is done by writing to the PhyIORecalRequest bit in the controller's message bus address space. The controller acknowledges receipt of this request by returning a write ack response. The remaining sequence of actions may follow exactly the sequence in the controller-initiated PHY recalibration described above, among other example implementations.
Tables 4 and 5 illustrate an example implementation of message bus registers used to implement signaling on a message bus of a PIPE-based interface to facilitate PHY recalibration requests. For instance, Table 4 illustrates an example implementation of a portion of a MAC-based register utilized for messaging from a PHY to the MAC using a PIPE message bus interface:

TABLE 4: Representation of example MAC Message Bus Registers

Register     Bit    Type     Description
RX Status0   Bit 1  1-cycle  IORecalDone - This field is set to '1' to indicate that an IORecal operation has successfully completed. The PHY advertises in its datasheet via the PhyRecalRequirement parameter whether this functionality is required.
RX Control0  Bit 0  1-cycle  PhyIORecalRequest - The PHY sets this to '1' to indicate that the controller should enter Recovery and request an Rx recalibration via the IORecal bit. The PHY advertises in its datasheet via the PhyRecalRequirement parameter whether this functionality is required.

In the particular example of Table 4, two fields in a MAC message bus register may be defined in order to enable recalibration requests over a message bus interface of a PIPE-based interface. For instance, a PHY-initiated recalibration request signal (PhyIORecalRequest) may be mapped to bit 0 of a receiver control (e.g., Rx Control0) register of the MAC message bus register.
When a value of "1" is written to the bit (through a committed write request received from the PHY over the message bus interface), the MAC may read the register and interpret the written value as an Rx recalibration request signal from the PHY. The PhyIORecalRequest bit may be configured to be a 1-cycle assertion type signal. A recalibration done signal (IORecalDone) may also be defined and mapped, in this example, to bit 1 of an Rx Status0 register of the MAC message bus register. A PHY may write a "1" to this bit using the message bus interface (e.g., through a committed write) to indicate to the MAC that Rx recalibration has been completed by the PHY. This signal may also be a 1-cycle assertion type signal field.
In some implementations, a PhyRecalRequirement parameter may be utilized, which may be advertised by a PHY block (e.g., in its datasheet, in a capability register or other data structure, from an online source, etc.) to indicate whether the corresponding controller (connecting to the PHY over a PIPE-based interface) is required to support the recalibration functionality signaling sequence and related message bus register values (e.g., PhyIORecalRequest, IORecal, and IORecalDone, etc.), among other example information pertaining to requesting and implementing PHY recalibrations.
Table 5 illustrates an example implementation of a portion of a PHY-based register utilized for messaging from a MAC to the PHY using a PIPE message bus interface:

TABLE 5: Representation of example PHY Message Bus Registers

Register        Bit    Type     Description
PHY RX Control1 Bit 1  1-cycle  IORecal - This field is set to '1' to request the PHY to do an RX recalibration followed immediately by a retrain. The controller asserts this signal either in response to PhyIORecalRequest or autonomously if it determines a recalibration is needed.
The PHY indicates completion of recalibration via the IORecalDone bit. The PHY advertises in its datasheet via the PhyRecalRequirement parameter whether this functionality is required.
Table 5 represents a register bit of an example PHY message bus register defined to enable recalibration requests using a PIPE message bus interface. Specifically, in this example, a recalibration request signal (IORecal) may be defined and mapped to bit 1 of a PHY RX Control register of the PHY message bus register. When a value of "1" is written to the bit (through a committed write request received from the MAC over the message bus interface), the PHY may read the register and interpret the written value as an Rx recalibration request signal from the MAC. The IORecal signal may be so sent, for instance, in a MAC-initiated Rx recalibration request or as a response to a PHY-initiated Rx recalibration request (e.g., to confirm that the link is in a state (e.g., recovery) suitable for Rx recalibration to be performed), among other examples. The IORecal bit may be configured to be a 1-cycle assertion type signal.
FIGS. 10A-10B are flowcharts illustrating example procedures corresponding to initiating recalibration through a message bus of an example MAC-PHY interface. For instance, in the example of FIG. 10A, an example of a PHY-initiated recalibration is illustrated. For instance, attributes of the PHY may be detected 1005 (e.g., by detection circuitry on the PHY) and it may be determined 1010 (e.g., at the PHY), based on these attributes or events, that the PHY (e.g., the PHY receiver(s)) should be recalibrated. Accordingly, the PHY may utilize a message bus interface to send a write command 1015 to the controller and write a value to a particular bit in the controller's message bus register mapped to a recalibration request signal.
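The register fields of Tables 4 and 5 above can be captured in a small lookup table. The register and bit assignments follow the tables; the dictionary layout itself is just an illustrative encoding:

```python
# Message bus fields from Tables 4 and 5, keyed by
# (address space, register, bit) -> (field name, assertion type).
MESSAGE_BUS_FIELDS = {
    ("MAC", "RX Status0", 1):  ("IORecalDone", "1-cycle"),
    ("MAC", "RX Control0", 0): ("PhyIORecalRequest", "1-cycle"),
    ("PHY", "RX Control1", 1): ("IORecal", "1-cycle"),
}

def field_for(space, register, bit):
    # Resolve a written bit to the signal it is mapped to.
    return MESSAGE_BUS_FIELDS[(space, register, bit)]

assert field_for("PHY", "RX Control1", 1) == ("IORecal", "1-cycle")
assert field_for("MAC", "RX Status0", 1)[0] == "IORecalDone"
```

A receiver of a committed write would consult such a map to interpret the written bit as the corresponding recalibration signal, then apply the 1-cycle auto-clear behavior described earlier.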
In some implementations, the controller may be responsible for ensuring that a link implemented by the PHY is in a recovery state before the recalibration can continue. Accordingly, a write command may be received 1020 from the MAC over the message bus interface to write a value to a particular bit of a PHY message bus register to indicate a recalibration request/confirmation by the MAC. The PHY may perform 1025 the recalibration (e.g., a PHY Rx recalibration) based on detecting the write to the particular bit in the PHY message bus register. Upon completion of the recalibration, the PHY block may again use the message bus interface to write 1030 a value to another bit of the MAC message bus register mapped to a recalibration complete signal, to indicate to the MAC that the recalibration is complete.
Turning to FIG. 10B, an example of a MAC-initiated recalibration is illustrated. For instance, attributes of a PHY block may be detected 1040 (e.g., by the MAC or higher layer logic) and, based on the attributes, it may be determined 1045 that at least a portion (e.g., receiver(s)) of the PHY block should be recalibrated. The MAC may also initiate the transitioning 1050 of a link, implemented using the PHY block, to a recovery state in order to facilitate completion of the recalibration. A message bus interface may be used to write a value 1055 to a particular bit of a PHY message bus register mapped to a recalibration request signal to indicate a recalibration request to the PHY block. The MAC may receive a write request from the PHY block over the message bus interface to write a value to a particular bit of a MAC message bus register mapped to a recalibration complete signal to indicate that the recalibration has been completed.
The MAC may then cause the link to exit 1065 the recovery state and resume communication (e.g., in an active link state) following the recalibration, among other example implementations and features.
Note that the apparatus', methods', and systems described above may be implemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems for utilizing the invention as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures.
Referring to FIG. 11, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 1100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code. Processor 1100, in one embodiment, includes at least two cores, core 1101 and core 1102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1100 may include any number of processing elements that may be symmetric or asymmetric.
In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state.
In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.
A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.
Physical processor 1100, as illustrated in FIG. 11, includes two cores, core 1101 and core 1102. Here, cores 1101 and 1102 are considered symmetric cores, i.e. cores with the same configurations, functional units, and/or logic. In another embodiment, core 1101 includes an out-of-order processor core, while core 1102 includes an in-order processor core. However, cores 1101 and 1102 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native Instruction Set Architecture (ISA), a core adapted to execute a translated Instruction Set Architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e.
asymmetric cores), some form of translation, such as a binary translation, may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core 1101 are described in further detail below, as the units in core 1102 operate in a similar manner in the depicted embodiment.
As depicted, core 1101 includes two hardware threads 1101a and 1101b, which may also be referred to as hardware thread slots 1101a and 1101b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 1100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 1101a, a second thread is associated with architecture state registers 1101b, a third thread may be associated with architecture state registers 1102a, and a fourth thread may be associated with architecture state registers 1102b. Here, each of the architecture state registers (1101a, 1101b, 1102a, and 1102b) may be referred to as processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers 1101a are replicated in architecture state registers 1101b, so individual architecture states/contexts are capable of being stored for logical processor 1101a and logical processor 1101b. In core 1101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1130 may also be replicated for threads 1101a and 1101b. Some resources, such as re-order buffers in reorder/retirement unit 1135, ILTB 1120, load/store buffers, and queues may be shared through partitioning.
Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 1115, execution unit(s) 1140, and portions of out-of-order unit 1135 are potentially fully shared.
Processor 1100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 11, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1101 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1120 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 1120 to store address translation entries for instructions.
Core 1101 further includes decode module 1125 coupled to fetch unit 1120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1101a, 1101b, respectively. Usually core 1101 is associated with a first ISA, which defines/specifies instructions executable on processor 1100. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 1125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below decoders 1125, in one embodiment, include logic designed or adapted to recognize specific instructions, such as a transactional instruction.
As a result of the recognition by decoders 1125, the architecture or core 1101 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions; some of which may be new or old instructions. Note decoders 1126, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders 1126 recognize a second ISA (either a subset of the first ISA or a distinct ISA).
In one example, allocator and renamer block 1130 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 1101a and 1101b are potentially capable of out-of-order execution, where allocator and renamer block 1130 also reserves other resources, such as reorder buffers to track instruction results. Unit 1130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1100. Reorder/retirement unit 1135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.
Scheduler and execution unit(s) block 1140, in one embodiment, includes a scheduler unit to schedule instructions/operation on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results.
Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.
Lower level data cache and data translation buffer (D-TLB) 1150 are coupled to execution unit(s) 1140. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.
Here, cores 1101 and 1102 share access to higher-level or further-out cache, such as a second level cache associated with on-chip interface 1110. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache is a last-level data cache-last cache in the memory hierarchy on processor 1100-such as a second or third level data cache. However, higher level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache-a type of instruction cache-instead may be coupled after decoder 1125 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e. a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).
In the depicted configuration, processor 1100 also includes on-chip interface module 1110. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor 1100.
In this scenario, on-chip interface 1110 is to communicate with devices external to processor 1100, such as system memory 1175, a chipset (often including a memory controller hub to connect to memory 1175 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 1105 may include any known interconnect, such as multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g. cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.
Memory 1175 may be dedicated to processor 1100 or shared with other devices in a system. Common examples of types of memory 1175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that device 1180 may include a graphic accelerator, processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.
Recently however, as more logic and devices are being integrated on a single die, such as SOC, each of these devices may be incorporated on processor 1100. For example, in one embodiment, a memory controller hub is on the same package and/or die with processor 1100. Here, a portion of the core (an on-core portion) 1110 includes one or more controller(s) for interfacing with other devices such as memory 1175 or a graphics device 1180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 1110 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 1105 for off-chip communication.
Yet, in the SOC environment, even more devices, such as the network interface, co-processors, memory 1175, graphics processor 1180, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide small form factor with high functionality and low power consumption.
In one embodiment, processor 1100 is capable of executing a compiler, optimization, and/or translator code 1177 to compile, translate, and/or optimize application code 1176 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.
Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front-end, i.e. generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back-end, i.e. generally where analysis, transformations, optimizations, and code generation takes place. Some compilers refer to a middle end, which illustrates the blurring of delineation between a front-end and back end of a compiler. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc.
in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.

Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), an optimization code optimizer, or a translator, either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software related operations, or to optimize code; or (4) a combination thereof.

Referring now to FIG. 12, shown is a block diagram of a second system 1200 in accordance with an embodiment of the present invention. As shown in FIG. 12, multiprocessor system 1200 is a point-to-point interconnect system, and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. Each of processors 1270 and 1280 may be some version of a processor.
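The front-end/back-end compiler split discussed above can be sketched in miniature. The sketch below is illustrative only and not taken from the text: a hypothetical front-end lexes and parses a digit-sum expression into a flat IR, and a hypothetical back-end "generates code" by constant-folding that IR.

```c
#include <ctype.h>
#include <stddef.h>

/* Tiny illustrative IR: a list of integer operands for a sum. */
enum { MAX_OPS = 16 };

/* Front-end: lexical analysis and parsing of a digit-sum expression
 * such as "1+2+3", lowering it into the flat IR. */
static size_t front_end(const char *src, int ops[MAX_OPS]) {
    size_t n = 0;
    for (const char *p = src; *p && n < MAX_OPS; p++) {
        if (isdigit((unsigned char)*p))
            ops[n++] = *p - '0';   /* each digit becomes one operand */
        /* '+' tokens carry no extra information in this tiny IR */
    }
    return n;
}

/* Back-end: transformation and "code generation", here reduced to a
 * constant-folding optimization over the IR. */
static int back_end(const int ops[], size_t n) {
    int acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += ops[i];
    return acc;
}

/* Whole "compiler": front-end phase followed by back-end phase. */
int compile(const char *src) {
    int ops[MAX_OPS];
    size_t n = front_end(src, ops);
    return back_end(ops, n);
}
```

A real compiler would emit target code rather than a value, but the two-phase structure, and the point that an operation (here, folding) may be inserted in either phase, is the same.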
In one embodiment, P-P interfaces 1252 and 1254 are part of a serial, point-to-point coherent interconnect fabric, such as Intel's Quick Path Interconnect (QPI) architecture. As a result, the invention may be implemented within the QPI architecture.

While shown with only two processors 1270, 1280, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.

Processors 1270 and 1280 are shown including integrated memory controller units 1272 and 1282, respectively. Processor 1270 also includes as part of its bus controller units point-to-point (P-P) interfaces 1276 and 1278; similarly, second processor 1280 includes P-P interfaces 1286 and 1288. Processors 1270, 1280 may exchange information via a point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in FIG. 12, IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

Processors 1270, 1280 each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point-to-point interface circuits 1276, 1294, 1286, 1298. Chipset 1290 also exchanges information with a high-performance graphics circuit 1238 via an interface circuit 1292 along a high-performance graphics interconnect 1239.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1290 may be coupled to a first bus 1216 via an interface 1296.
In one embodiment, first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 12, various I/O devices 1214 are coupled to first bus 1216, along with a bus bridge 1218 which couples first bus 1216 to a second bus 1220. In one embodiment, second bus 1220 includes a low pin count (LPC) bus. Various devices are coupled to second bus 1220 including, for example, a keyboard and/or mouse 1222, communication devices 1227 and a storage unit 1228 such as a disk drive or other mass storage device which often includes instructions/code and data 1230, in one embodiment. Further, an audio I/O 1224 is shown coupled to second bus 1220. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 12, a system may implement a multi-drop bus or other such architecture.

Computing systems can include various combinations of components. These components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in a computer system, or as components otherwise incorporated within a chassis of the computer system. However, it is to be understood that some of the components shown may be omitted, additional components may be present, and different arrangements of the components shown may occur in other implementations. As a result, the invention described above may be implemented in any portion of one or more of the interconnects illustrated or described below.

A processor, in one embodiment, includes a microprocessor, multi-core processor, multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing element.
In the illustrated implementation, processor acts as a main processing unit and central hub for communication with many of the various components of the system. As one example, processor is implemented as a system on a chip (SoC). As a specific illustrative example, processor includes an Intel® Architecture Core™-based processor such as an i3, i5, i7 or another such processor available from Intel Corporation, Santa Clara, CA. However, understand that other low power processors such as those available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, CA, an ARM-based design licensed from ARM Holdings, Ltd. or customer thereof, or their licensees or adopters may instead be present in other embodiments, such as an Apple A5/A6 processor, a Qualcomm Snapdragon processor, or a TI OMAP processor. Note that many of the customer versions of such processors are modified and varied; however, they may support or recognize a specific instruction set that performs defined algorithms as set forth by the processor licensor. Here, the microarchitectural implementation may vary, but the architectural function of the processor is usually consistent. Certain details regarding the architecture and operation of processor in one implementation will be discussed further below to provide an illustrative example.

Processor, in one embodiment, communicates with a system memory. As an illustrative example, the system memory in an embodiment can be implemented via multiple memory devices to provide for a given amount of system memory. As examples, the memory can be in accordance with a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR)-based design such as the current LPDDR2 standard according to JEDEC JESD 209-2E (published April 2009), or a next generation LPDDR standard to be referred to as LPDDR3 or LPDDR4 that will offer extensions to LPDDR2 to increase bandwidth.
In various implementations the individual memory devices may be of different package types such as single die package (SDP), dual die package (DDP) or quad die package (QDP). These devices, in some embodiments, are directly soldered onto a motherboard to provide a lower profile solution, while in other embodiments the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. And of course, other memory implementations are possible, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs and MiniDIMMs. In a particular illustrative embodiment, memory is sized between 2GB and 16GB, and may be configured as a DDR3LM package or an LPDDR2 or LPDDR3 memory that is soldered onto a motherboard via a ball grid array (BGA).

To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage may also couple to processor. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via an SSD. However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as an SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. A flash device may be coupled to processor, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output software (BIOS) as well as other firmware of the system.

In various embodiments, mass storage of the system is implemented by an SSD alone or as a disk, optical or other drive with an SSD cache.
In some embodiments, the mass storage is implemented as an SSD or as an HDD along with a restore (RST) cache module. In various implementations, the HDD provides for storage of between 320GB-4 terabytes (TB) and upward while the RST cache is implemented with an SSD having a capacity of 24GB-256GB. Note that such SSD cache may be configured as a single level cell (SLC) or multi-level cell (MLC) option to provide an appropriate level of responsiveness. In an SSD-only option, the module may be accommodated in various locations such as in a mSATA or NGFF slot. As an example, an SSD has a capacity ranging from 120GB-1TB.

Various peripheral devices may couple to processor via a low pin count (LPC) interconnect. In the embodiment shown, various components can be coupled through an embedded controller (EC). Such components can include a keyboard (e.g., coupled via a PS2 interface), a fan, and a thermal sensor. In some embodiments, a touch pad may also couple to the EC via a PS2 interface. In addition, a security processor such as a trusted platform module (TPM) in accordance with the Trusted Computing Group (TCG) TPM Specification Version 1.2, dated Oct. 2, 2003, may also couple to processor via this LPC interconnect.
However, understand the scope of the present invention is not limited in this regard, and secure processing and storage of secure information may be in another protected location such as a static random access memory (SRAM) in a security coprocessor, or as encrypted data blobs that are only decrypted when protected by a secure enclave (SE) processor mode.

In a particular implementation, peripheral ports may include a high definition media interface (HDMI) connector (which can be of different form factors such as full size, mini or micro); one or more USB ports, such as full-size external ports in accordance with the Universal Serial Bus Revision 3.0 Specification (November 2008), with at least one powered for charging of USB devices (such as smartphones) when the system is in Connected Standby state and is plugged into AC wall power. In addition, one or more Thunderbolt™ ports can be provided. Other ports may include an externally accessible card reader such as a full-size SD-XC card reader and/or a SIM card reader for WWAN (e.g., an 8-pin card reader). For audio, a 3.5mm jack with stereo sound and microphone capability (e.g., combination functionality) can be present, with support for jack detection (e.g., headphone only support using microphone in the lid or headphone with microphone in cable). In some embodiments, this jack can be re-taskable between stereo headphone and stereo microphone input. Also, a power jack can be provided for coupling to an AC brick.

System can communicate with external devices in a variety of manners, including wirelessly. In some instances, various wireless modules, each of which can correspond to a radio configured for a particular wireless communication protocol, are present. One manner for wireless communication in a short range such as a near field may be via a near field communication (NFC) unit which may communicate, in one embodiment, with processor via an SMBus.
Note that via this NFC unit, devices in close proximity to each other can communicate. For example, a user can enable system to communicate with another portable device such as a smartphone of the user by bringing the two devices together in close relation and enabling transfer of information such as identification information, payment information, data such as image data, or so forth. Wireless power transfer may also be performed using an NFC system.

Using the NFC unit described herein, users can bump devices side-to-side and place devices side-by-side for near field coupling functions (such as near field communication and wireless power transfer (WPT)) by leveraging the coupling between coils of one or more of such devices. More specifically, embodiments provide devices with strategically shaped, and placed, ferrite materials to provide for better coupling of the coils. Each coil has an inductance associated with it, which can be chosen in conjunction with the resistive, capacitive, and other features of the system to enable a common resonant frequency for the system.

Further, additional wireless units can include other short-range wireless engines including a WLAN unit and a Bluetooth unit. Using the WLAN unit, Wi-Fi™ communications in accordance with a given Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard can be realized, while via the Bluetooth unit, short range communications via a Bluetooth protocol can occur. These units may communicate with processor via, e.g., a USB link or a universal asynchronous receiver transmitter (UART) link. Or these units may couple to processor via an interconnect according to a Peripheral Component Interconnect Express™ (PCIe™) protocol, e.g., in accordance with the PCI Express™ Base Specification version 3.0 (published January 17, 2007), or another such protocol such as a serial data input/output (SDIO) standard.
Of course, the actual physical connection between these peripheral devices, which may be configured on one or more add-in cards, can be by way of the NGFF connectors adapted to a motherboard.

In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit which in turn may couple to a subscriber identity module (SIM). In addition, to enable receipt and use of location information, a GPS module may also be present. The WWAN unit and an integrated capture device such as a camera module may communicate via a given USB protocol such as a USB 2.0 or 3.0 link, or a UART or I2C protocol. Again, the actual physical connection of these units can be via adaptation of an NGFF add-in card to an NGFF connector configured on the motherboard.

In a particular embodiment, wireless functionality can be provided modularly, e.g., with a WiFi™ 802.11ac solution (e.g., add-in card that is backward compatible with IEEE 802.11abgn) with support for Windows 8 CS. This card can be configured in an internal slot (e.g., via an NGFF adapter). An additional module may provide for Bluetooth capability (e.g., Bluetooth 4.0 with backwards compatibility) as well as Intel® Wireless Display functionality. In addition, NFC support may be provided via a separate device or multi-function device, and can be positioned, as an example, in a front right portion of the chassis for easy access. A still additional module may be a WWAN device that can provide support for 3G/4G/LTE and GPS. This module can be implemented in an internal (e.g., NGFF) slot. Integrated antenna support can be provided for WiFi™, Bluetooth, WWAN, NFC and GPS, enabling seamless transition from WiFi™ to WWAN radios and vice versa, as well as wireless gigabit (WiGig) in accordance with the Wireless Gigabit Specification (July 2010).

As described above, an integrated camera can be incorporated in the lid.
As one example, this camera can be a high-resolution camera, e.g., having a resolution of at least 2.0 megapixels (MP) and extending to 6.0 MP and beyond.

To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP), which may couple to processor via a high definition audio (HDA) link. Similarly, the DSP may communicate with an integrated coder/decoder (CODEC) and amplifier that in turn may couple to output speakers which may be implemented within the chassis. Similarly, the amplifier and CODEC can be coupled to receive audio inputs from a microphone which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high quality audio inputs to enable voice-activated control of various operations within the system. Note also that audio outputs can be provided from the amplifier/CODEC to a headphone jack.

In a particular embodiment, the digital audio codec and amplifier are capable of driving the stereo headphone jack, stereo microphone jack, an internal microphone array and stereo speakers. In different implementations, the codec can be integrated into an audio DSP or coupled via an HD audio path to a peripheral controller hub (PCH). In some implementations, in addition to integrated stereo speakers, one or more bass speakers can be provided, and the speaker solution can support DTS audio.

In some embodiments, processor may be powered by an external voltage regulator (VR) and multiple internal voltage regulators that are integrated inside the processor die, referred to as fully integrated voltage regulators (FIVRs). The use of multiple FIVRs in the processor enables the grouping of components into separate power planes, such that power is regulated and supplied by the FIVR to only those components in the group.
During power management, a given power plane of one FIVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another FIVR remains active, or fully powered.

In one embodiment, a sustain power plane can be used during some deep sleep states to power on the I/O pins for several I/O signals, such as the interface between the processor and a PCH, the interface with the external VR and the interface with the EC. This sustain power plane also powers an on-die voltage regulator that supports the on-board SRAM or other cache memory in which the processor context is stored during the sleep state. The sustain power plane is also used to power on the processor's wakeup logic that monitors and processes the various wakeup source signals.

During power management, while other power planes are powered down or off when the processor enters certain deep sleep states, the sustain power plane remains powered on to support the above-referenced components. However, this can lead to unnecessary power consumption or dissipation when those components are not needed. To this end, embodiments may provide a connected standby sleep state to maintain processor context using a dedicated power plane. In one embodiment, the connected standby sleep state facilitates processor wakeup using resources of a PCH which itself may be present in a package with the processor. In one embodiment, the connected standby sleep state facilitates sustaining processor architectural functions in the PCH until processor wakeup, thus enabling turning off all of the unnecessary processor components that were previously left powered on during deep sleep states, including turning off all of the clocks. In one embodiment, the PCH contains a time stamp counter (TSC) and connected standby logic for controlling the system during the connected standby state.
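The per-plane gating behavior described above, where all planes except the sustain plane are powered down for a deep sleep state, can be sketched as a bitmask. The plane names and the bit layout are hypothetical, purely for illustration.

```c
#include <stdint.h>

/* Hypothetical plane IDs for a FIVR-style grouping: each bit gates
 * one power plane supplied by its own integrated regulator. */
enum {
    PLANE_CORE     = 1u << 0,
    PLANE_GRAPHICS = 1u << 1,
    PLANE_IO       = 1u << 2,
    PLANE_SUSTAIN  = 1u << 3,   /* stays on: wakeup logic, context SRAM */
};

/* Entering a deep sleep state: gate every plane except the sustain
 * plane, which keeps the wakeup sources and saved context powered. */
uint32_t enter_deep_sleep(uint32_t active_planes) {
    return active_planes & PLANE_SUSTAIN;
}

/* Wakeup: restore the previously active planes (sustain stays on). */
uint32_t wake(uint32_t saved_planes) {
    return saved_planes | PLANE_SUSTAIN;
}
```

The connected standby refinement in the text would correspond to gating even more of these bits while a dedicated plane (or the PCH) holds the context.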
The integrated voltage regulator for the sustain power plane may reside on the PCH as well.

In an embodiment, during the connected standby state, an integrated voltage regulator may function as a dedicated power plane that remains powered on to support the dedicated cache memory in which the processor context, such as critical state variables, is stored when the processor enters the deep sleep states and connected standby state. This critical state may include state variables associated with the architectural, micro-architectural, debug state, and/or similar state variables associated with the processor.

The wakeup source signals from the EC may be sent to the PCH instead of the processor during the connected standby state so that the PCH can manage the wakeup processing instead of the processor. In addition, the TSC is maintained in the PCH to facilitate sustaining processor architectural functions.

Power control in the processor can lead to enhanced power savings. For example, power can be dynamically allocated between cores, individual cores can change frequency/voltage, and multiple deep low power states can be provided to enable very low power consumption. In addition, dynamic control of the cores or independent core portions can provide for reduced power consumption by powering off components when they are not being used.

In different implementations, a security module such as a TPM can be integrated into a processor or can be a discrete device such as a TPM 2.0 device.
With an integrated security module, also referred to as Platform Trust Technology (PTT), BIOS/firmware can be enabled to expose certain hardware features for certain security features, including secure instructions, secure boot, Intel® Anti-Theft Technology, Intel® Identity Protection Technology, Intel® Trusted Execution Technology (TXT), and Intel® Manageability Engine Technology along with secure user interfaces such as a secure keyboard and display.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information.
When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase 'to' or 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task.
In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases 'capable of/to' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values.
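The point that one stored value admits several textual representations, elaborated in the following paragraph with decimal ten, can be made concrete in code. The names below are illustrative only; C has no portable binary literal, so the binary form is built from shifted bits.

```c
#include <stdbool.h>

/* One number, three notations: decimal 10, hexadecimal 0xA, and
 * binary 1010 assembled bit by bit. All three denote the same value. */
static const int ten_decimal = 10;
static const int ten_hex     = 0xA;
static const int ten_binary  = (1 << 3) | (1 << 1);   /* 1010 */

/* Reset/set convention as used in this text: a high logical value
 * denotes the default (reset) state, a low value the updated (set)
 * state. This mapping is one choice among many. */
bool is_reset(int v) { return v == 1; }
```

Whichever notation appears in source text, the stored bits, and hence the represented state, are identical.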
However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.The following examples pertain to embodiments in accordance with this Specification. Example 1 is an apparatus including: physical layer (PHY) circuitry; a memory to implement a message bus register, where a set of control and status signals are mapped to bits of the message bus register, and the set of control and status signals includes a recalibration request signal mapped to a particular one of the bits of the message bus register; and an interface to couple to a controller, where the interface includes a PHY Interface for the PCI Express (PIPE)-based interface, and the interface includes: a set of data pins including transmit data pins to send data to the controller and receive data pins to receive data from the controller; a particular set of pins to implement a message bus interface, where a write command is to be received from the controller over the message bus interface to write a value to the particular bit; and recalibration circuitry to perform a recalibration of the PHY circuitry based on the value written to the particular bit.Example 2 includes the subject 
matter of example 1, where the write command includes a committed write.Example 3 includes the subject matter of any one of examples 1-2, where the value of the particular bit is to be reset automatically after a number of clock cycles.Example 4 includes the subject matter of any one of examples 1-3, further including detection circuitry to: detect one or more attributes of the PHY circuitry; and determine that the recalibration should be performed based on the one or more attributes.Example 5 includes the subject matter of example 4, where the PHY circuitry is to send a write command to the controller over the message bus interface to write a value to a message bus register of the controller to indicate to the controller a request to perform the recalibration.Example 6 includes the subject matter of example 5, where the write command from the controller is received based on the request to perform the recalibration.Example 7 includes the subject matter of example 6, where the PHY circuitry is to implement a link and the recalibration is to be performed while the link is in recovery, where the controller is to initiate the recovery.Example 8 includes the subject matter of any one of examples 1-7, where the PHY circuitry is to send a write command to the controller over the message bus interface to write a value to a message bus register of the controller to indicate to the controller that the recalibration is complete.Example 9 includes the subject matter of any one of examples 1-8, where the PIPE-based interface includes a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.Example 10 is an apparatus including: a controller; and an interface to couple the controller to a physical layer (PHY) block, where the interface includes: a set of data pins including transmit data pins to send data to the PHY block and receive data pins to receive data from the PHY block; and a particular set of pins to implement a message bus interface, where the 
controller is to send a write command to the PHY block over the message bus interface to write a value to at least one particular bit of a PHY message bus register, bits of the PHY message bus register are mapped to a set of control and status signals, and the particular bit is mapped to a recalibration request signal to request that the PHY block perform a recalibration.Example 11 includes the subject matter of example 10, where the set of control and status signals includes a first set of control and status signals, and the apparatus further includes a memory to implement a controller message bus register, where bits of the controller message bus register are mapped to a second set of control and status signals, and the controller is to receive commands from the PHY block over the message bus interface to set values of bits in the controller message bus register to indicate particular signals in the second set of control and status signals.Example 12 includes the subject matter of example 11, where the second set of control and status signals includes a recalibration complete signal mapped to a particular one of the bits of the controller message bus register to indicate completion of the recalibration by the PHY block.Example 13 includes the subject matter of example 11, where the second set of control and status signals includes a PHY-initiated recalibration request signal mapped to a particular one of the bits of the controller message bus register to initiate the recalibration by the PHY block.Example 14 includes the subject matter of example 13, where the recalibration is to be performed while a link is in recovery, the controller is to initiate transition of the link to recovery, and the PHY-initiated recalibration request signal is to request the controller to initiate the transition of the link to recovery.Example 15 includes the subject matter of example 13, where the second set of control and status signals further includes a recalibration complete 
signal mapped to another one of the bits of the controller message bus register to indicate completion of the recalibration by the PHY block.

Example 16 includes the subject matter of any one of examples 10-15, where the interface includes a PHY Interface for the PCI Express (PIPE)-based interface.

Example 17 includes the subject matter of any one of examples 10-16, where the controller includes a media access controller (MAC).

Example 18 includes the subject matter of any one of examples 10-17, where the write command includes a committed write.

Example 19 includes the subject matter of any one of examples 1-18, where the value of the particular bit is to be reset automatically after a number of clock cycles.

Example 20 includes the subject matter of any one of examples 1-19, where the PIPE-based interface includes a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.

Example 21 is a system including: a first device including media access controller (MAC) circuitry; a second device including physical layer (PHY) circuitry, where the second device includes a PHY message bus register, bits of the PHY message bus register are mapped to a set of control and status signals, and a particular one of the bits of the PHY message bus register is mapped to a recalibration request signal; and an interface to couple the first device to the second device, where the interface includes: first pins to enable signaling of data from the first device to the second device; second pins to enable signaling of data from the second device to the first device; and third pins to implement a message bus interface, where the first device is to send a write request to the second device over the message bus interface to write a value to the particular bit and indicate a request to perform a recalibration of the PHY circuitry, where the second device further includes recalibration circuitry to perform the recalibration based on writing the value to the particular bit.

Example
22 includes the subject matter of example 21, where the set of control and status signals includes a second set of control and status signals, the first device includes a controller message bus register, bits of the controller message bus register are mapped to a first set of control and status signals, a particular bit of the controller message bus register is mapped to a PHY-initiated recalibration request signal, and another bit of the controller message bus register is mapped to a recalibration complete signal.

Example 23 includes the subject matter of example 21, where the interface includes a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.

Example 24 is a method including: receiving, over a message bus of a Physical Layer (PHY) Interface for the Peripheral Component Interconnect Express (PIPE)-based interface, a first write request from a controller, where the first write request is to set a value of a particular bit of a PHY message bus register, bits of the PHY message bus register are mapped to a first set of control and status signals, the PIPE-based interface couples the controller to a PHY block, and the particular bit is mapped to a recalibration request signal in the first set of control and status signals; performing a recalibration of at least a portion of the PHY block based on setting the value of the particular bit; and sending, over the message bus interface, a second write request to the controller, where the second write request is to set a value of a particular bit of a controller message bus register to indicate completion of the recalibration, where bits in the controller message bus register are mapped to a second set of control and status signals.

Example 25 includes the subject matter of example 24, where at least one of the first write and the second write includes a committed write.

Example 26 includes the subject matter of any one of examples 24-25, where the value of the particular bit of at least one of the PHY
message bus register and the controller message bus register is to be reset automatically after a number of clock cycles.

Example 27 includes the subject matter of any one of examples 24-26, further including: detecting one or more attributes of the PHY block; and determining that the recalibration should be performed based on the one or more attributes.

Example 28 includes the subject matter of example 27, further including sending, over the message bus interface, another write request to the controller to write a value to another bit of the controller message bus register to indicate to the controller a request to perform the recalibration.

Example 29 includes the subject matter of example 28, where the first write command is received from the controller based on the request to perform the recalibration.

Example 30 includes the subject matter of any one of examples 24-29, where the PHY block is to implement a link and the recalibration is to be performed while the link is in recovery, where the controller is to initiate the recovery.

Example 31 includes the subject matter of any one of examples 24-30, where the PIPE-based interface includes a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.

Example 32 is a system including means to perform the method of any one of examples 24-31.

Example 33 is a method including: detecting one or more attributes of a physical layer (PHY) block; determining that at least a portion of the PHY block should be recalibrated based on the one or more attributes; and sending a first write request over a message bus of a PHY Interface for the Peripheral Component Interconnect Express (PIPE)-based interface, where the first write request is to set a value of a particular bit of a PHY message bus register, bits of the PHY message bus register are mapped to a first set of control and status signals, the PIPE-based interface couples a controller to the PHY block, and the particular bit is mapped to a recalibration request
signal in the first set of control and status signals.

Example 34 includes the subject matter of example 33, further including receiving, over the message bus interface, a second write request to the controller, where the second write request is to set a value of a particular bit of a controller message bus register to indicate completion of the recalibration, where bits in the controller message bus register are mapped to a second set of control and status signals.

Example 35 includes the subject matter of any one of examples 33-34, where the recalibration is to be performed while a link is in recovery, and the method further includes initiating transition of the link to recovery.

Example 36 includes the subject matter of any one of examples 33-35, further including receiving, over the message bus interface, another write request to the controller, where the other write request is to set a value of a defined bit of a controller message bus register to indicate a recalibration request, where the first write request is sent based on identifying that the value of the defined bit is set.

Example 37 includes the subject matter of any one of examples 33-36, where the interface includes a PHY Interface for the PCI Express (PIPE)-based interface.

Example 38 includes the subject matter of any one of examples 33-37, where the controller includes a media access controller (MAC).

Example 39 includes the subject matter of any one of examples 33-38, where the write command includes a committed write.

Example 40 includes the subject matter of any one of examples 33-39, where the value of the particular bit is to be reset automatically after a number of clock cycles.

Example 41 includes the subject matter of any one of examples 33-40, where the PIPE-based interface includes a PHY Interface for PCI Express, SATA, DisplayPort, and Converged IO Architectures.

Example 42 is a system including means to perform the method of any one of examples 33-41.

The embodiments of methods, hardware,
software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media that may receive information therefrom.

Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs) and magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
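The register-mapped recalibration handshake walked through in the examples above can be sketched in code. This is a minimal illustrative model, not the PIPE message bus itself: the bit positions, class names, and one-cycle auto-clear interval are assumptions made for illustration, and the real register layout is defined by the PIPE specification.

```python
# Illustrative model of the message-bus recalibration handshake.
# Bit positions below are assumed, not taken from the PIPE specification.
RECAL_REQUEST_BIT = 0   # in the PHY message bus register (assumed position)
RECAL_COMPLETE_BIT = 0  # in the controller message bus register (assumed position)

class MessageBusRegister:
    """A register whose bits map to control/status signals and
    self-clear a fixed number of clock cycles after being set."""
    def __init__(self, auto_clear_cycles=1):
        self.value = 0
        self.auto_clear_cycles = auto_clear_cycles
        self._pending = {}  # bit -> cycles remaining before auto-clear

    def write_bit(self, bit):
        # Models a "committed write" received over the message bus.
        self.value |= (1 << bit)
        self._pending[bit] = self.auto_clear_cycles

    def read_bit(self, bit):
        return (self.value >> bit) & 1

    def tick(self):
        # Advance one clock cycle; auto-clear expired bits.
        for bit in list(self._pending):
            self._pending[bit] -= 1
            if self._pending[bit] <= 0:
                self.value &= ~(1 << bit)
                del self._pending[bit]

class Phy:
    def __init__(self):
        self.register = MessageBusRegister()
        self.recalibrated = False

    def poll(self, controller):
        # React to a controller-initiated recalibration request.
        if self.register.read_bit(RECAL_REQUEST_BIT):
            self.recalibrated = True  # stand-in for the actual recalibration
            controller.register.write_bit(RECAL_COMPLETE_BIT)

class Controller:
    def __init__(self):
        self.register = MessageBusRegister()

    def request_recalibration(self, phy):
        phy.register.write_bit(RECAL_REQUEST_BIT)

# Handshake: controller requests, PHY recalibrates and reports completion.
phy, mac = Phy(), Controller()
mac.request_recalibration(phy)
phy.poll(mac)
assert phy.recalibrated
assert mac.register.read_bit(RECAL_COMPLETE_BIT) == 1
phy.register.tick()
assert phy.register.read_bit(RECAL_REQUEST_BIT) == 0  # auto-cleared
```

The PHY-initiated variant (Examples 13-15) would be the mirror image: the PHY writes a request bit in the controller message bus register, the controller drives the link to recovery and then issues the write modeled above.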
A microelectronic device (100) has a bump bond structure (108) including an electrically conductive pillar (109) with an expanded head (111), and solder (112) on the expanded head (111). The electrically conductive pillar (109) includes a column (110) extending from an I/O pad (103) to the expanded head (111). The expanded head (111) extends laterally past the column (110) on at least one side of the electrically conductive pillar (109). In one aspect, the expanded head (111) may have a rounded side profile with a radius approximately equal to a thickness of the expanded head (111), and a flat top surface. In another aspect, the expanded head (111) may extend past the column (110) by different lateral distances in different lateral directions. In a further aspect, the expanded head (111) may have two connection areas for making electrical connections to two separate nodes.
1. A microelectronic device, comprising:

a substrate having an input/output (I/O) surface;

an I/O pad on the I/O surface; and

a bump bonding structure on the I/O pad, the bump bonding structure including:

a conductive pillar including:

a column on the I/O pad; and

an enlarged head on the column, wherein: the enlarged head extends beyond the column on all lateral sides of the column by a lateral distance approximately equal to a vertical thickness of the enlarged head; the enlarged head has a rounded side profile with a radius approximately equal to the thickness of the enlarged head; and the enlarged head has a flat contact surface positioned opposite from the column; and

solder on the contact surface of the enlarged head.

2. The microelectronic device of claim 1, further comprising a seed layer between the column and the I/O pad.

3. The microelectronic device of claim 1, wherein the column and the enlarged head comprise primarily copper.

4. A method of forming a microelectronic device, the method comprising:

providing a substrate having an input/output (I/O) surface and an I/O pad on the I/O surface;

forming a seed layer on the I/O surface so that the seed layer contacts the I/O pad, the seed layer providing a conductive layer;

forming a plating mask on the seed layer so that the plating mask exposes the seed layer on the I/O pad;

electroplating metal on the seed layer exposed by the plating mask so that the metal extends through the plating mask to form a conductive pillar, the conductive pillar including:

a column on the I/O pad, the column extending from the seed layer to a top surface of the plating mask, the top surface of the plating mask being positioned opposite from the seed layer; and

an enlarged head on the column, wherein: the enlarged head extends beyond the column on all lateral sides of the column by a lateral distance approximately equal to a vertical thickness of the enlarged head; the enlarged head has a rounded side profile with a radius approximately equal to the thickness of the enlarged head; and the enlarged head has a flat contact surface positioned opposite from the column;

forming solder on the contact surface of the enlarged head;

removing the plating mask; and

removing the seed layer where exposed by the column.

5. The method of claim 4, wherein forming the plating mask includes a photolithography process using a photoresist.

6. The method of claim 4, wherein the column and the enlarged head comprise primarily copper.

7. A microelectronic device, comprising:

a substrate having an input/output (I/O) surface;

an I/O pad on the I/O surface; and

a bump bonding structure on the I/O pad, the bump bonding structure including:

a conductive pillar including:

a column on the I/O pad; and

an enlarged head on the column, wherein: the enlarged head has a flat contact surface positioned opposite from the column; the enlarged head extends laterally beyond the column in a first lateral direction by a first lateral distance and extends laterally beyond the column in a second lateral direction by a second lateral distance; and the first lateral distance is greater than the second lateral distance; and

solder on the contact surface of the enlarged head.

8. The microelectronic device of claim 7, wherein the column and the enlarged head comprise primarily copper.

9. 
The microelectronic device of claim 7, wherein the flat contact surface extends to a lateral periphery of the enlarged head.

10. A method of forming a microelectronic device, the method comprising:

providing a substrate having an input/output (I/O) surface and an I/O pad on the I/O surface;

forming a conductive pillar on the I/O pad, the conductive pillar including:

a column on the I/O pad; and

an enlarged head on the column, wherein: the enlarged head has a flat contact surface positioned opposite from the column; the enlarged head extends laterally beyond the column in a first lateral direction by a first lateral distance and extends laterally beyond the column in a second lateral direction by a second lateral distance; and the first lateral distance is greater than the second lateral distance; and

forming solder on the contact surface of the enlarged head.

11. The method of claim 10, wherein forming the conductive pillar comprises:

forming a seed layer on the I/O surface so that the seed layer contacts the I/O pad, the seed layer providing a conductive layer;

forming a pillar plating mask on the seed layer so that the pillar plating mask has a pillar opening exposing the seed layer on the I/O pad;

forming a head plating mask on the pillar plating mask so that the head plating mask has a head opening exposing the pillar opening in the pillar plating mask;

electroplating copper on the seed layer exposed by the pillar plating mask so that the copper extends through the pillar opening to form the column and extends into the head opening to form the enlarged head;

removing the head plating mask;

removing the pillar plating mask; and

removing the seed layer where exposed by the column.

12. The method of claim 10, wherein forming the conductive pillar comprises:

forming a seed layer on the I/O surface so that the seed layer contacts the I/O pad, the seed layer providing a conductive layer;

forming a pillar plating mask on the
seed layer so that the pillar plating mask has a pillar opening exposing the seed layer on the I/O pad and a head opening exposing the pillar opening;

electroplating metal on the seed layer exposed by the pillar plating mask so that the metal extends through the pillar opening to form the column and extends into the head opening to form the enlarged head;

removing the pillar plating mask; and

removing the seed layer where exposed by the column.

13. The method of claim 10, wherein forming the conductive pillar comprises forming the column through a first additive process and forming the enlarged head through a second additive process.

14. A microelectronic device, comprising:

a substrate having an input/output (I/O) surface;

a first I/O pad on the I/O surface; and

a bump bonding structure on the first I/O pad, the bump bonding structure including:

a conductive pillar including:

a first column on the first I/O pad; and

an enlarged head on the first column, wherein: the enlarged head has a contact surface positioned opposite from the first column; the enlarged head extends laterally beyond the first column in at least one lateral direction; the bump bonding structure has a first connection area on the contact surface; and the bump bonding structure has a second connection area on the enlarged head; and

solder on the contact surface of the enlarged head.

15. The microelectronic device of claim 14, wherein the second connection area is on the contact surface of the enlarged head.

16. The microelectronic device of claim 14, wherein:

the microelectronic device includes a second I/O pad on the I/O surface;

the conductive pillar includes a second column on the second I/O pad; and

the second column contacts the enlarged head at the second connection area.

17. A method of forming a microelectronic device, the method comprising:

providing a substrate having an input/output (I/O) surface and an I/O pad on the I/O surface; A
conductive pillar is formed on the I/O pad, the conductive pillar including:

a column on the I/O pad; and

an enlarged head on the column, wherein: the enlarged head has a contact surface positioned opposite from the column; the enlarged head extends laterally beyond the column in at least one lateral direction; the bump bonding structure has a first connection area on the contact surface; and the bump bonding structure has a second connection area on the enlarged head; and

a solder is formed on the contact surface of the enlarged head.

18. The method of claim 17, wherein forming the conductive pillar comprises:

forming a first seed layer on the I/O surface such that the first seed layer contacts the I/O pad, the first seed layer providing a conductive layer;

forming a pillar plating mask on the first seed layer so that the pillar plating mask has a pillar opening exposing the first seed layer on the I/O pad;

electroplating metal on the first seed layer to form the column in the pillar opening;

forming a second seed layer on the pillar plating mask so that the second seed layer contacts the column;

forming a head plating mask on the second seed layer so that the head plating mask has a head opening exposing the second seed layer on the column;

electroplating metal on the second seed layer exposed by the head plating mask to form the enlarged head;

removing the head plating mask;

removing the second seed layer where exposed by the enlarged head;

removing the pillar plating mask; and

removing the first seed layer where exposed by the column.

19. The method of claim 17, wherein forming the conductive pillar comprises:

forming a first seed layer on the I/O surface such that the first seed layer contacts the I/O pad, the first seed layer providing a conductive layer;

forming a pillar plating mask on the first seed layer so that the pillar plating mask has a pillar opening exposing the first seed layer on the I/O pad;

electroplating copper on the
first seed layer to form the column in the pillar opening;

forming a head trench layer on the pillar plating mask so that the head trench layer has a head trench exposing the column;

forming a second seed layer on the head trench layer so that the second seed layer contacts the column;

electroplating copper on the second seed layer to form a copper head layer in the head trench;

removing the copper head layer and the second seed layer from over the head trench layer adjacent to the head trench, leaving the copper head layer in the head trench to provide the enlarged head;

removing the head trench layer;

removing the pillar plating mask; and

removing the first seed layer where exposed by the column.

20. The method of claim 19, wherein removing the copper head layer and the second seed layer from over the head trench layer adjacent to the head trench is performed by a copper chemical mechanical polish (CMP) process.
Enlarged-Head Pillar for Bump Bonding

Technical Field

This relates generally to microelectronic devices, and more specifically to bump bonding in microelectronic devices.

Background

Some microelectronic devices have bump bonding structures that contain conductive pillars for input/output (I/O) connections. As component sizes decrease and circuit density increases in successive process nodes, the current density through bump bonding structures has increased in many cases, which increases electromigration and other degradation mechanisms.

Summary

In described examples, a microelectronic device has a bump bonding structure that includes a conductive pillar with an enlarged head and solder on the enlarged head. The conductive pillar includes a column extending from an input/output (I/O) pad to the enlarged head. The enlarged head extends laterally beyond the column on at least one side of the conductive pillar. In one aspect, the enlarged head may have a rounded side profile and a flat top surface, the rounded side profile having a radius approximately equal to the thickness of the enlarged head. In another aspect, the enlarged head may extend beyond the column in a first lateral direction by a first lateral distance and in a second lateral direction by a second lateral distance, wherein the first lateral distance is greater than the second lateral distance.
In a further aspect, the enlarged head may have two connection areas for forming electrical connections to two separate nodes.

Description of the Drawings

Figures 1A to 1C are cross sections of example microelectronic devices including conductive pillars with enlarged heads.

Figures 2A to 2G are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of an example method of formation.

Figure 3 is a cross section of another example microelectronic device including conductive pillars with enlarged heads.

Figures 4A to 4F are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figures 5A to 5C are cross sections of another example microelectronic device including conductive pillars with enlarged heads.

Figures 6A to 6E are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figures 7A to 7D are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figure 8 is a cross section of another example microelectronic device including conductive pillars with enlarged heads.

Figures 9A to 9D are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figures 10A and 10B are cross sections of another example microelectronic device including conductive pillars with enlarged heads.

Figures 11A to 11K are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figures 12A and 12B are cross sections of another example microelectronic device including a pair of conductive pillars with enlarged heads.

Figures 13A to 13I are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Figures 14A and
14B are cross sections of another example microelectronic device including conductive pillars with enlarged heads.

Figures 15A to 15F are cross sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation.

Detailed Description

The drawings are not drawn to scale. This description is not limited by the order of the illustrated actions or events, as some actions or events can occur in a different order and/or concurrently with other actions or events. In addition, some of the described actions or events are optional to implement the methods described herein.

The microelectronic device has a bump bonding structure on an input/output (I/O) pad of the microelectronic device. The I/O pad can be a bond pad that is electrically coupled to an interconnect of the microelectronic device. The I/O pad can be part of a redistribution layer (RDL) above the interconnects of the microelectronic device. The I/O pad may be a bump pad in a BOAC structure of the microelectronic device. Other manifestations of the I/O pad are within the scope of this description. A seed layer can be disposed on the I/O pad. The seed layer is sometimes referred to as an under-bump metallization (UBM) layer.

The bump bonding structure includes a conductive pillar. In some versions of the microelectronic device, the conductive pillar may consist essentially of copper, or may consist primarily of copper together with other materials (e.g., gold, silver, or nickel). In other versions, the conductive pillar may include one or more metals, such as nickel or tungsten. In further versions, the conductive pillar may include conductive nanoparticles, graphene, carbon nanotubes, or conductive organic polymers. The conductive pillar has a column on the I/O pad, which contacts the seed layer (if present).
In this description, if the column is described as "on the I/O pad," it can be directly on the I/O pad, or an intervening element (such as the seed layer) may be present. If the column is described as "directly on the I/O pad," then no other intervening elements are intentionally disposed between them.

The conductive pillar includes an enlarged head on the column. The enlarged head is positioned opposite from the I/O pad, so that the column extends from the I/O pad to the enlarged head. The enlarged head extends laterally beyond the column on at least one side of the conductive pillar. In this description, the terms "lateral" and "laterally" refer to directions parallel to the I/O surface of the microelectronic device on which the I/O pad is positioned.

The bump bonding structure includes solder on the enlarged head. The solder can be directly on the enlarged head, or a barrier layer can be positioned between the enlarged head and the solder. The area of the interface between the solder and the enlarged head is larger than the cross-sectional area of the column parallel to the I/O surface. During operation of the microelectronic device, current passing through the column may be spread out through the enlarged head, so that the current density through the interface between the solder and the enlarged head is lower than the current density through the column. The lower current density through the interface between the solder and the enlarged head can advantageously provide lower electromigration and less void formation in the bump bonding structure.

In one aspect, the enlarged head may have a rounded side profile and a flat contact surface, the rounded side profile having a radius approximately equal to the thickness of the enlarged head. The term "side profile" refers to the boundary of the enlarged head along a plane perpendicular to the I/O surface of the microelectronic device.
The term "contact surface" refers to the surface of the enlarged head positioned opposite the cylinder and parallel to the I/O surface of the microelectronic device. The enlarged head may extend laterally beyond the cylinder by approximately equal distances on all sides of the cylinder. In one sense, the term "about" can be understood to mean within 10%. In another sense, the term "about" can be understood to mean within manufacturing tolerances encountered during the manufacture of the microelectronic device. In another sense, the term "about" can be understood to mean within the measurement tolerance encountered when measuring the structure of the microelectronic device.In another aspect, the enlarged head may extend laterally beyond the column in a first lateral direction by a first lateral distance and in a second lateral direction extend beyond the column by a second lateral distance, wherein the first lateral distance is greater than the first lateral distance. Two horizontal distance. The enlarged head has a flat contact surface. The flat contact surface may extend to the lateral periphery of the enlarged head. Alternatively, the enlarged head may have a curved profile surrounding at least a part of the lateral periphery of the enlarged head.In another aspect, the enlarged head may have two connection areas for forming electrical connections to two separate nodes. The two connection areas can be positioned to form connections to package electrodes, such as leads. Alternatively, one connection area may be positioned to form a connection to the package electrode, and another connection area may form a connection through another pillar of the conductive pillar to another I/O pad of the microelectronic device.A method for forming a microelectronic device is disclosed. In some methods, conductive pillars may be formed by electroplating using a plating mask. 
The plating mask can be formed by a photolithography process, an additive process, or a combination thereof. In other methods, the conductive pillar can be formed by an additive process. The term "additive process" refers to a process of placing material (such as a conductive nanoparticle ink or a plating mask material) in desired areas to produce the final desired shape of the conductive pillar. Additive processes can form the conductive pillar or the plating mask so as to advantageously reduce fabrication cost and complexity. Examples of additive processes include binder jetting, material jetting, directed energy deposition, material extrusion, powder bed fusion, sheet lamination, vat photopolymerization, direct laser deposition, electrostatic deposition, laser sintering, electrochemical deposition, and photopolymer extrusion.

Figures 1A to 1C are cross sections of example microelectronic devices including conductive pillars with enlarged heads. Referring to Figure 1A, the microelectronic device 100 has a substrate 101 with an I/O surface 102. The microelectronic device 100 has I/O pads 103 on the I/O surface 102. The substrate 101 may be, for example, a semiconductor wafer having components (such as transistors) and a dielectric layer extending to the I/O surface 102. Alternatively, the substrate 101 may be a wafer containing a microelectromechanical system (MEMS) device. A seed layer 104 can be disposed on each I/O pad 103. The I/O pads 103 may include, for example, aluminum or copper. The seed layer 104 may include titanium, nickel, palladium, or other metals suitable for providing a surface for electroplating a metal such as copper. The I/O pads 103 may be electrically coupled to interconnects 105 in the substrate 101, for example through vias 106. A protective overcoat (PO) layer 107 may optionally be disposed on the I/O surface 102, with openings exposing the I/O pads 103.
The PO layer 107 may include, for example, silicon dioxide, silicon nitride, silicon oxynitride, aluminum oxide, polyimide, or other dielectric materials that reduce the diffusion of water vapor and contaminants. The seed layer 104 contacts the I/O pad 103 in the opening, and may extend partially over the PO layer 107 adjacent to the opening, as depicted in FIG. 1A.

The microelectronic device 100 includes a bump bonding structure 108 on the I/O pad 103. Each bump bonding structure 108 includes a conductive pillar 109 having a pillar 110 and an enlarged head 111. The pillar 110 extends from the corresponding I/O pad 103 to the enlarged head 111. The pillar 110 may have a substantially circular cross-section in a plane parallel to the I/O surface 102, as indicated in FIG. 1A. Alternatively, the pillar 110 may have a rounded square cross-section, an elliptical cross-section, a rounded rectangular cross-section, or a cross-section of another shape. The pillar 110 includes a conductive material, such as copper, tungsten, gold, nickel, metal nanoparticles, carbon nanotubes, graphene, or a conductive organic polymer.

The enlarged head 111 is positioned on the pillar 110, opposite the I/O pad 103. The enlarged head 111 includes a conductive material, for example, any of the materials disclosed with reference to the pillar 110. The pillar 110 may be continuous with the enlarged head 111, as indicated in FIG. 1A.

Each bump bonding structure 108 includes solder 112 on the enlarged head 111. The solder 112 may include, for example, tin, silver, bismuth, or other metals. An optional barrier layer (not shown in FIG. 1A) may be disposed between the solder 112 and the enlarged head 111.

Referring to FIG. 1B, the enlarged head 111 of this example has a rounded side profile 113 with a radius 114 approximately equal to the vertical thickness 115 of the enlarged head 111.
The radius 114 extends from the intersection of the pillar 110 and the enlarged head 111 to the lateral surface of the enlarged head 111. The enlarged head 111 has a flat contact surface 116 positioned opposite the pillar 110. The flat contact surface 116 may extend to the rounded side profile 113, as depicted in FIG. 1B. The enlarged head 111 of this example extends laterally beyond the pillar 110, on all lateral sides of the pillar 110, by a lateral distance 117 approximately equal to the vertical thickness 115 of the enlarged head 111.

FIG. 1C depicts the microelectronic device 100 assembled into a package structure, such as a lead frame or a chip carrier. The bump bonding structure 108 is soldered to a package electrode 118 of the package structure. The package electrode 118 may appear as a lead 118 of the package structure, as indicated in FIG. 1C. The solder 112 couples each conductive pillar 109 to the corresponding lead 118. The solder 112 covers the rounded side profile 113 and the flat contact surface 116.

Modeling has indicated that the current passing through the conductive pillar 109 spreads across the rounded side profile 113, because the radius 114 of FIG. 1B is approximately equal to the vertical thickness 115 of the enlarged head 111 of FIG. 1B. Distributing the current across the rounded side profile 113 reduces the current density across the interface between the enlarged head 111 and the solder 112, which can advantageously reduce electromigration and void formation, and thus improve the reliability of the microelectronic device 100. Making the contact surface 116 flat provides a uniform thickness of the solder 112 between the contact surface 116 and the lead 118.
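The current-spreading effect described above can be put in rough numbers. The sketch below is illustrative only (all dimensions are assumed, not taken from the disclosure) and counts only the enlarged flat contact surface, ignoring the additional area of the rounded side profile, so it understates the improvement:

```python
import math

# Assumed, illustrative dimensions for a bump joint of the FIG. 1B type:
r_pillar = 25.0   # um, radius of pillar 110
t_head = 15.0     # um, vertical thickness 115 of enlarged head 111
d_lat = t_head    # lateral distance 117 ~ equal to the vertical thickness
current = 0.5     # A carried by the bump bonding structure

# Solder-interface area of a bare pillar top vs. the flat contact
# surface 116 of the enlarged head (uniform-current-density model).
area_bare = math.pi * r_pillar ** 2
area_head = math.pi * (r_pillar + d_lat) ** 2

j_bare = current / area_bare   # A/um^2
j_head = current / area_head

print(f"current density reduced to {j_head / j_bare:.0%} of the bare-pillar value")
```

With these assumed dimensions the interface current density falls to (25/40)^2 of the bare-pillar value, consistent with the qualitative electromigration argument in the text.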
Because the solder 112 has a higher resistivity than the enlarged head 111, the uniform thickness of the solder 112 can advantageously provide a more uniform current density through the contact surface 116 into the solder 112, to further reduce electromigration and void formation.

FIGS. 2A to 2G are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of an example method of formation. Referring to FIG. 2A, the microelectronic device 200 has a substrate 201 and an I/O pad 203 on an I/O surface 202. The substrate 201 may have the properties disclosed with reference to the substrate 101 of FIG. 1A. The I/O pad 203 may be electrically coupled to an interconnect 205 in the substrate 201 through a via 206. A PO layer 207 may optionally be disposed on the I/O surface 202, with openings exposing the I/O pad 203.

A seed layer 204 is formed on the I/O surface 202 and on the PO layer 207 (if present). The seed layer 204 contacts the I/O pad 203 through the opening in the PO layer 207. The seed layer 204 provides a conductive layer for subsequent electroplating processes. The seed layer 204 may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The seed layer 204 may be formed, for example, by a sputtering process, an evaporation process, or a combination thereof.

A pillar plating mask 219 is formed on the seed layer 204. The pillar plating mask 219 exposes the seed layer 204 in the area above the I/O pad 203. The pillar plating mask 219 may include a photoresist and may be formed through a photolithography process. Alternatively, the pillar plating mask 219 may be formed by an additive process, such as material injection or material extrusion.

Referring to FIG. 2B, an electroplating process using a metal electroplating bath 220 forms conductive pillars 209 on the seed layer 204 exposed by the pillar plating mask 219.
The metal electroplating bath 220 may include copper (for example, in the form of copper sulfate), and may include additives such as levelers, suppressors (sometimes called inhibitors), and accelerators (sometimes called brighteners). The metal electroplating bath 220 may include other metals (such as silver or nickel) to improve the electrical or mechanical properties of the conductive pillars 209.

Each conductive pillar 209 includes a pillar 210 extending from the seed layer 204 to the top surface of the pillar plating mask 219. The top surface of the pillar plating mask 219 is positioned opposite the seed layer 204. FIG. 2B shows a partially completed conductive pillar 209.

Referring to FIG. 2C, the electroplating process is continued using the metal electroplating bath 220 to complete the conductive pillars 209. Each conductive pillar 209 includes a pillar 210 and an enlarged head 211 on the pillar 210. The electroplating process is performed with sufficient mixing of the metal electroplating bath 220 to provide isotropic metal plating on the conductive pillars 209. The isotropic nature of the pillar electroplating process results in the enlarged head 211 extending laterally beyond the pillar 210 and having a rounded side profile 213 and a flat contact surface 216 with the properties disclosed with reference to FIG. 1B. FIG. 2C shows the completed conductive pillars 209.

Referring to FIG. 2D, a solder electroplating process using a solder electroplating bath 221 forms solder 212 on the enlarged head 211. The solder 212 may include the metals disclosed with reference to the solder 112 of FIG. 1A. The pillar plating mask 219 may be left in place during the solder plating process to prevent solder from forming on the sides of the pillars 210. An optional barrier layer (not shown in FIG. 2D) may be formed on the conductive pillar 209 before the solder 212 is formed.
The barrier layer can reduce the formation of intermetallic compounds between copper from the conductive pillar 209 and metals (such as tin) from the solder 212. The solder 212 and the conductive pillar 209 together provide a bump bonding structure 208.

Referring to FIG. 2E, the pillar plating mask 219 is removed after the solder 212 is formed on the enlarged head 211. The pillar plating mask 219 can be removed using, for example, oxygen radicals 222 from a downstream oxygen asher or an ozone generator. FIG. 2E shows the pillar plating mask 219 partially removed. The pillar plating mask 219 may alternatively be removed by a wet cleaning process, or by a process using oxygen radicals followed by a wet cleaning process.

Referring to FIG. 2F, the seed layer 204 exposed by the conductive pillars 209 is removed, leaving the seed layer 204 between the pillars 210 and the I/O pads 203. The seed layer 204 may be removed by a wet etching process using an acid bath 223. The wet etching process can be timed to remove the seed layer 204 while keeping the etching of the solder 212 and the conductive pillars 209 within acceptable limits. FIG. 2F shows the partially completed removal of the seed layer 204.

FIG. 2G depicts the microelectronic device 200 after the formation of the bump bonding structures 208 is completed. The microelectronic device 200 of this example may have a structure similar to that of the microelectronic device 100 of FIG. 1A, and may obtain advantages similar to those disclosed with reference to FIGS. 1A to 1C.

FIG. 3 is a cross-section of another example microelectronic device including conductive pillars with enlarged heads. The microelectronic device 300 has a substrate 301 with an I/O surface 302 and an I/O pad 303 on the I/O surface 302. The substrate 301 may be, for example, a semiconductor wafer containing integrated circuits or discrete semiconductor components, or a MEMS wafer containing MEMS devices.
A PO layer 307 may optionally be disposed on the I/O surface 302, with openings exposing the I/O pad 303. A seed layer 304 can be disposed on each I/O pad 303.

The microelectronic device 300 includes a bump bonding structure 308 on the I/O pad 303. Each bump bonding structure 308 includes a conductive pillar 309 having a pillar 310 and an enlarged head 311. The pillar 310 extends from the corresponding I/O pad 303 to the enlarged head 311. The pillar 310 may have an elongated cross-section in a plane parallel to the I/O surface 302, as indicated in FIG. 3. The enlarged head 311 is positioned on the pillar 310, opposite the I/O pad 303. The pillar 310 can be continuous with the enlarged head 311. The pillar 310 and the enlarged head 311 may include any of the conductive materials disclosed with reference to FIG. 1A. In a variation of this example, the pillar 310 and the enlarged head 311 may comprise primarily copper.

The pillar 310 may have a tapered vertical profile, wherein the width of the pillar 310 adjacent to the I/O pad 303 is smaller than the width of the pillar 310 adjacent to the enlarged head 311, as depicted in FIG. 3. The term "vertical" refers to a direction perpendicular to the I/O surface 302. Compared to a similar pillar with a constant-width vertical profile, the tapered vertical profile can advantageously spread the current passing through the pillar 310. The enlarged head 311 of this example extends laterally beyond the pillar 310, and has a rounded side profile 313 and a flat contact surface 316 with the properties disclosed with reference to FIG. 1B.

An optional barrier layer 324 can be disposed on the enlarged head 311. The barrier layer 324 may include, for example, nickel, tungsten, cobalt, molybdenum, or other metals that reduce copper diffusion.

Each bump bonding structure 308 includes solder 312 on the barrier layer 324. The solder 312 may include any of the metals disclosed with reference to the solder 112 of FIG. 1A.
The barrier layer 324 can reduce the formation of intermetallic compounds between copper from the conductive pillar 309 and metals (such as tin) from the solder 312.

The bump bonding structure 308 can obtain the advantages disclosed with reference to the bump bonding structure 108 of FIG. 1A. The elongated cross-section of the pillar 310 can carry a higher current at a given lateral spacing than a similar bump bonding structure having pillars with rounded cross-sections.

FIGS. 4A to 4F are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation. Referring to FIG. 4A, a microelectronic device 400 has a substrate 401 and an I/O pad 403 on an I/O surface 402. The substrate 401 may have the properties disclosed with reference to the substrate 101 of FIG. 1A. A PO layer 407 can be disposed on the I/O surface 402, with openings exposing the I/O pad 403.

A seed layer 404 is formed on the I/O surface 402 and on the PO layer 407 (if present). The seed layer 404 contacts the I/O pad 403 through the opening in the PO layer 407. The seed layer 404 provides a conductive layer for subsequent electroplating processes. The seed layer 404 may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The seed layer 404 may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

A pillar plating mask 419 is formed on the seed layer 404. The pillar plating mask 419 exposes the seed layer 404 in the area above the I/O pad 403. The pillar plating mask 419 may include a photoresist, and may be formed through a photolithography process. In this example, the pillar plating mask 419 may have a tapered vertical profile, which may provide greater process latitude for the photolithography process than a similar plating mask with a constant-width vertical profile, advantageously reducing manufacturing costs and complexity. Referring to FIG.
4B, a pillar electroplating process using a metal electroplating bath 420 forms conductive pillars 409 on the seed layer 404 exposed by the pillar plating mask 419. The metal electroplating bath 420 may have a formulation similar to the metal electroplating bath 220 of FIG. 2B. Each conductive pillar 409 includes a pillar 410 extending from the seed layer 404 to the top surface of the pillar plating mask 419, and includes an enlarged head 411 on the pillar 410. The top surface of the pillar plating mask 419 is positioned opposite the seed layer 404. The pillar electroplating process is configured to provide isotropic metal plating on the conductive pillars 409. The isotropic nature of the pillar electroplating process causes the enlarged head 411 to extend laterally beyond the pillar 410 and to have a rounded side profile 413 and a flat contact surface 416 with the properties disclosed with reference to FIG. 1B. FIG. 4B shows the completed conductive pillars 409.

Referring to FIG. 4C, a barrier plating process using a barrier electroplating bath 425 forms a barrier layer 424 on the enlarged head 411. The barrier electroplating bath 425 may include, for example, nickel, cobalt, tungsten, molybdenum, or combinations thereof, and optionally other metals. The barrier plating process may use a pulse plating process or a reverse pulse plating process to form the barrier layer 424 with a desired composition and structure. The barrier layer 424 may have the composition disclosed with reference to the barrier layer 324 of FIG. 3.

Referring to FIG. 4D, a solder plating process using a solder electroplating bath 421 forms solder 412 on the barrier layer 424. The solder 412 may include the metals disclosed with reference to the solder 112 of FIG. 1A. The pillar plating mask 419 may be left in place during the solder plating process. The barrier layer 424 can reduce the formation of intermetallic compounds between copper from the conductive pillar 409 and metals (such as tin) from the solder 412.
The solder 412, the barrier layer 424, and the conductive pillar 409 together provide a bump bonding structure 408.

Referring to FIG. 4E, the pillar plating mask 419 is removed after the solder 412 is formed on the enlarged head 411. The pillar plating mask 419 may be removed using a wet cleaning solvent 426. The wet cleaning solvent 426 may include a solvent such as N-methyl-2-pyrrolidone (NMP) or dimethyl sulfoxide (DMSO). Proprietary formulations of resist removal chemicals suitable for the wet cleaning solvent 426 are commercially available from several suppliers. FIG. 4E shows the partially completed removal of the pillar plating mask 419. The wet cleaning solvent 426 may be used in combination with other processes (such as an ashing process) to remove the pillar plating mask 419.

Referring to FIG. 4F, the seed layer 404 exposed by the conductive pillars 409 is removed, leaving the seed layer 404 between the pillars 410 and the I/O pads 403. The seed layer 404 may be removed as disclosed with reference to FIG. 2F.

FIGS. 5A to 5C are cross-sections of another example microelectronic device including conductive pillars with enlarged heads. The microelectronic device 500 has a substrate 501 with an I/O surface 502 and an I/O pad 503 on the I/O surface 502. The substrate 501 may be, for example, a semiconductor wafer containing integrated circuits or discrete semiconductor components, or a MEMS wafer containing MEMS devices. A PO layer 507 may optionally be disposed on the I/O surface 502, with openings exposing the I/O pad 503. A seed layer 504 can be disposed on each I/O pad 503. The seed layer 504 may have a composition similar to that of the seed layer 104 of FIG. 1A.

The microelectronic device 500 includes a bump bonding structure 508 on the I/O pad 503. Each bump bonding structure 508 includes a conductive pillar 509 having a pillar 510 and an enlarged head 511. The pillar 510 extends from the corresponding I/O pad 503 to the enlarged head 511.
The enlarged head 511 is positioned on the pillar 510, opposite the I/O pad 503. The pillar 510 can be continuous with the enlarged head 511. The pillar 510 and the enlarged head 511 may include any of the conductive materials disclosed with reference to FIG. 1A. In a variation of this example, the pillar 510 and the enlarged head 511 may comprise primarily copper. Each bump bonding structure 508 further includes solder 512 disposed on the enlarged head 511.

Each enlarged head 511 has a flat contact surface 516. In this example, the flat contact surface 516 may extend to the lateral periphery of the enlarged head 511, as depicted in FIG. 5A. The solder 512 may similarly extend to the lateral periphery of the enlarged head 511.

FIG. 5B is a top view of the bump bonding structure 508. Each enlarged head 511 of this example extends laterally beyond the corresponding pillar 510 in a first lateral direction 528 by a first lateral distance 527, and in a second lateral direction 530 by a second lateral distance 529, wherein the first lateral distance 527 is greater than the second lateral distance 529. The second lateral distance 529 may optionally be about zero in some cases, so that the lateral surface of the enlarged head 511 in the second lateral direction 530 is substantially flush with the lateral surface of the pillar 510 in the second lateral direction 530. The second lateral direction 530 extends in a direction different from the first lateral direction 528. In one example, the first lateral direction 528 and the second lateral direction 530 may be positioned at right angles to each other. In another example, the first lateral direction 528 and the second lateral direction 530 may be oriented in opposite directions.
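The asymmetric overhang just described can be put in numbers. The sketch below is an illustrative model only (all dimensions and the clearance rule are assumed, not from the disclosure): when the head overhangs the pillar on only one side, neighboring bump joints can sit closer together along that axis than with a symmetric head:

```python
def min_pitch(pillar_w, d_lead_side, d_back_side, clearance):
    """Minimum center-to-center pitch of neighboring bump joints along
    the overhang axis, modeled simply as the head width in that
    direction (pillar width plus the two overhangs, cf. distances
    527/529) plus an assumed clearance gap."""
    head_w = pillar_w + d_lead_side + d_back_side
    return head_w + clearance

# Symmetric head (FIG. 1B style): equal overhang on both sides.
sym = min_pitch(pillar_w=50.0, d_lead_side=20.0, d_back_side=20.0, clearance=10.0)

# Asymmetric head (FIG. 5B style): overhang only toward the lead,
# second lateral distance about zero (head flush with the pillar).
asym = min_pitch(pillar_w=50.0, d_lead_side=20.0, d_back_side=0.0, clearance=10.0)

print(sym, asym)  # the asymmetric head permits a tighter pitch
```

Under these assumed dimensions the one-sided head reduces the achievable pitch from 100 um to 80 um, consistent with the space-saving arrangement the text describes for FIG. 5C.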
The relative orientation of the first lateral direction 528 and the second lateral direction 530 may vary between instances of the bump bonding structure 508 on the same microelectronic device 500.

FIG. 5C depicts the microelectronic device 500 assembled into a package structure, such as a lead frame or a chip carrier. The bump bonding structures 508 are soldered to package electrodes 518 of the package structure, which may appear as leads 518. The solder 512 couples each conductive pillar 509 to the corresponding lead 518. The asymmetric configuration of the enlarged heads 511 relative to the corresponding pillars 510 may provide a more space-efficient arrangement of the I/O pads 503 with respect to the leads 518, advantageously enabling a smaller package structure for a given size of the microelectronic device 500.

FIGS. 6A to 6E are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation. Referring to FIG. 6A, a microelectronic device 600 has a substrate 601 with an I/O surface 602 and an I/O pad 603 on the I/O surface 602. The substrate 601 may be similar to the substrate 101 of FIG. 1A. A PO layer 607 may optionally be disposed on the I/O surface 602, with openings exposing the I/O pad 603.

A seed layer 604 is formed on the I/O surface 602 and on the PO layer 607 (if present). The seed layer 604 contacts the I/O pad 603 through the opening in the PO layer 607. The seed layer 604 provides a conductive layer for subsequent electroplating processes, and may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The seed layer 604 may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

A pillar plating mask 619 is formed on the seed layer 604. The pillar plating mask 619 has pillar openings 631 exposing the seed layer 604 in the regions above the I/O pads 603.
The pillar plating mask 619 may include a first photoresist and may be formed through a first photolithography process. In one case, the first photoresist may be a negative photoresist, which becomes insoluble in the developer after being exposed to ultraviolet (UV) light, and is therefore substantially insensitive to a second exposure to UV light. In another case, the first photoresist can be a positive photoresist, which becomes soluble in the developer after being exposed to UV light; in this case, exposure to UV light followed by a baking operation can make the pillar plating mask 619 insensitive to further UV light.

Referring to FIG. 6B, a head plating mask 632 is formed on the pillar plating mask 619. The head plating mask 632 has head openings 633 exposing the pillar openings 631 in the pillar plating mask 619 and exposing areas on the top surface of the pillar plating mask 619 around each pillar opening 631, as shown in FIG. 6B. The head plating mask 632 may include a second photoresist and may be formed by a second photolithography process. The first photoresist and the first photolithography process can be selected for compatibility with the second photoresist and the second photolithography process. For example, both the first photoresist and the second photoresist can be negative photoresists, or both can be positive photoresists. The use of photolithography processes to form the head plating mask 632 and the pillar plating mask 619 can advantageously be compatible with existing conductive pillar processes in the manufacturing facility where the microelectronic device 600 is made.

Referring to FIG. 6C, a pillar plating process using a metal electroplating bath 620 forms conductive pillars 609 on the seed layer 604 in the pillar openings 631 of the pillar plating mask 619 and the head openings 633 of the head plating mask 632. The metal electroplating bath 620 may have a formulation similar to the metal electroplating bath 220 of FIG. 2B.
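As a rough sense of how the plating steps above relate current density and time to deposited copper thickness, the following is a generic Faraday's-law estimate; it is not specific to the disclosed baths, and the current density, time, and efficiency values are assumed for illustration:

```python
# Generic Faraday's-law estimate of electroplated copper thickness.
M_CU = 63.55    # g/mol, molar mass of copper
N_E = 2         # electrons per Cu(2+) ion reduced
F = 96485.0     # C/mol, Faraday constant
RHO_CU = 8.96   # g/cm^3, density of copper

def plated_thickness_um(current_density_a_dm2, minutes, efficiency=0.95):
    """Copper thickness (um) plated at a given current density (A/dm^2),
    assuming a uniform current-efficiency factor."""
    j = current_density_a_dm2 / 100.0             # A/dm^2 -> A/cm^2
    charge = j * minutes * 60.0                   # C per cm^2
    mass = efficiency * charge * M_CU / (N_E * F) # g per cm^2 deposited
    return mass / RHO_CU * 1e4                    # cm -> um

# e.g. roughly 63 um of copper in 30 minutes at 10 A/dm^2
print(round(plated_thickness_um(10.0, 30), 1))
```

Thickness scales linearly with both current density and time, which is why the FIG. 2B/2C sequence shows the same bath first filling the mask opening and then, with continued plating, forming the laterally grown enlarged head.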
Each conductive pillar 609 includes a pillar 610 extending from the seed layer 604 to the top surface of the pillar plating mask 619, and includes an enlarged head 611 on the pillar 610 in the head opening 633 of the head plating mask 632. The top surface of the pillar plating mask 619 is positioned opposite the seed layer 604. The pillar electroplating process can be configured to provide a flat contact surface 616 on the enlarged head 611. The enlarged head 611 extends laterally beyond the pillar 610 and has the configuration disclosed with reference to FIG. 5B. FIG. 6C shows the completed conductive pillars 609.

Referring to FIG. 6D, a solder plating process using a solder electroplating bath 621 forms solder 612 on the contact surface 616. The solder 612 may include the metals disclosed with reference to the solder 112 of FIG. 1A. The pillar plating mask 619 and the head plating mask 632 may be left in place during the solder electroplating process so that the solder 612 extends to the lateral periphery of the enlarged head 611. The solder 612 and the conductive pillars 609 together provide bump bonding structures 608.

Referring to FIG. 6E, after the solder 612 is formed on the enlarged head 611, the pillar plating mask 619 and the head plating mask 632 of FIG. 6D are removed. The pillar plating mask 619 and the head plating mask 632 may be removed, for example, using any of the methods disclosed with reference to FIG. 2E or FIG. 4E. Subsequently, the seed layer 604 exposed by the conductive pillars 609 is removed, leaving the seed layer 604 between the pillars 610 and the I/O pads 603. The seed layer 604 can be removed as disclosed with reference to FIG. 2F.

FIGS. 7A to 7D are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation. Referring to FIG. 7A, a microelectronic device 700 has a substrate 701 including an I/O surface 702 and an I/O pad 703 on the I/O surface 702.
The substrate 701 may be similar to the substrate 101 of FIG. 1A. The I/O pad 703 may be disposed on a top-level interconnect 705, as depicted in FIG. 7A. A PO layer 707 may optionally be disposed on the I/O surface 702, with openings exposing the I/O pad 703. A seed layer 704 is formed on the I/O surface 702 and on the PO layer 707 (if present). The seed layer 704 contacts the I/O pad 703 through the opening in the PO layer 707. The seed layer 704 provides a conductive layer for subsequent electroplating processes, and may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The seed layer 704 may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

A pillar plating mask 734 is formed on the seed layer 704. The pillar plating mask 734 has pillar openings 731 that expose the seed layer 704 in the areas above the I/O pads 703, and has head openings 733 that expose the pillar openings 731 and the areas around each pillar opening 731, as shown in FIG. 7A. The pillar plating mask 734 may be formed by an additive process, such as a material injection process using an inkjet device 735 that dispenses a mask material 736. For example, the mask material 736 may include an organic polymer (such as a novolak resin) and a solvent to improve flow characteristics. The pillar plating mask 734 may be baked after the additive process is completed, to remove volatile materials (such as solvents) or to cross-link the mask material 736. Compared with using two photolithography processes, the use of an additive process to form the pillar plating mask 734 can advantageously reduce manufacturing costs.

Referring to FIG. 7B, a pillar electroplating process using a metal electroplating bath 720 forms conductive pillars 709 on the seed layer 704 in the pillar openings 731 and the head openings 733 of the pillar plating mask 734. The metal electroplating bath 720 may have a formulation similar to the metal electroplating bath 220 of FIG. 2B.
Each conductive pillar 709 includes a pillar 710 in the pillar opening 731, and includes an enlarged head 711 on the pillar 710 in the head opening 733. In this example, the pillar electroplating process may be configured to provide, on the enlarged head 711, a flat contact surface 716 with rounded edges, as depicted in FIG. 7B. The enlarged head 711 can extend laterally beyond the pillar 710 and has the configuration disclosed with reference to FIG. 5B. FIG. 7B shows the completed conductive pillars 709.

Referring to FIG. 7C, a solder plating process using a solder electroplating bath 721 forms solder 712 on the contact surface 716. The solder 712 may include the metals disclosed with reference to the solder 112 of FIG. 1A. The pillar plating mask 734 may be left in place during the solder plating process so that the solder 712 extends to the lateral periphery of the enlarged head 711. The solder 712 and the conductive pillars 709 together provide bump bonding structures 708.

Referring to FIG. 7D, the pillar plating mask 734 of FIG. 7C is removed after the solder 712 is formed on the enlarged head 711. The pillar plating mask 734 may be removed, for example, using any of the methods disclosed with reference to FIG. 2E or FIG. 4E. Subsequently, the seed layer 704 exposed by the conductive pillars 709 is removed, leaving the seed layer 704 between the pillars 710 and the I/O pads 703. The seed layer 704 can be removed as disclosed with reference to FIG. 2F.

FIG. 8 is a cross-section of another example microelectronic device including conductive pillars with enlarged heads. The microelectronic device 800 has a substrate 801 with an I/O surface 802 and an I/O pad 803 on the I/O surface 802. The substrate 801 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pad 803 may be electrically coupled to an interconnect 805 through a via 806. A PO layer 807 may optionally be disposed on the I/O surface 802, with openings exposing the I/O pad 803.
In this example, the I/O pad 803 may include a base pad of aluminum or copper and a cap layer of a protective metal (for example, nickel, palladium, platinum, or gold).

The microelectronic device 800 includes a bump bonding structure 808 on the I/O pad 803. Each bump bonding structure 808 includes a conductive pillar 809 having a pillar 810 and an enlarged head 811. In this example, the conductive pillars 809 can be disposed directly on the corresponding I/O pads 803 without an intervening seed layer. The pillar 810 extends from the corresponding I/O pad 803 to the enlarged head 811. The enlarged head 811 is positioned on the pillar 810, opposite the I/O pad 803. The pillar 810 can be continuous with the enlarged head 811. The pillar 810 and the enlarged head 811 may include any of the conductive materials disclosed with reference to FIG. 1A. In a variation of this example, the pillar 810 and the enlarged head 811 may comprise primarily copper.

The enlarged head 811 has a flat contact surface 816 positioned opposite the pillar 810. The pillar 810 may have a tapered vertical profile, as depicted in FIG. 8. Alternatively, the pillar 810 may have a constant-width vertical profile.

The enlarged head 811 may have the configuration disclosed with reference to the enlarged head 511 of FIGS. 5A and 5B. Alternatively, the enlarged head 811 may have the configuration disclosed with reference to the enlarged head 111 of FIGS. 1A and 1B.

In this example, the conductive pillars 809 may include conductive nanoparticles adhered together. The nanoparticles can be melted together so that the conductive pillars 809 are substantially void-free.

Each bump bonding structure 808 further includes solder 812 disposed on the enlarged head 811. The bump bonding structure 808 may further include a barrier layer (not shown in FIG. 8) disposed between the solder 812 and the conductive pillar 809. The barrier layer may have the properties disclosed with reference to the barrier layer 324 of FIG.
3.

FIGS. 9A to 9D are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example method of formation. Referring to FIG. 9A, the microelectronic device 900 has a substrate 901 with an I/O surface 902 and an I/O pad 903 on the I/O surface 902. The substrate 901 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pad 903 may be electrically coupled to an interconnect 905 through a via 906. A PO layer 907 may optionally be disposed on the I/O surface 902, with openings exposing the I/O pad 903. In this example, the I/O pad 903 may include a base pad of aluminum or copper and a cap layer of a protective metal, for example as disclosed with reference to FIG. 8.

The pillar 910 of the conductive pillar 909 is formed directly on the I/O pad 903 through a first additive process. The first additive process may include an electrostatic deposition process using an electrostatic distribution device 937 that distributes conductive nanoparticles 938, as depicted in FIG. 9A. The conductive nanoparticles 938 may include, for example, metal nanoparticles, carbon nanoparticles, graphene nanoparticles, or carbon nanotube nanoparticles. Alternatively, the first additive process may include, for example, a material injection process, a laser sintering process, or an electrochemical deposition process. FIG. 9A shows a partially completed pillar 910. Compared with electroplating using a plating mask, using the first additive process to form the pillar 910 can reduce the manufacturing cost and complexity of the microelectronic device 900.

Referring to FIG. 9B, the enlarged head 911 of the conductive pillar 909 is formed on the pillar 910 through a second additive process. The enlarged head 911 is formed to have a flat contact surface 916. The flat contact surface 916 may extend to the lateral periphery of the enlarged head 911, as depicted in FIG. 9B.
Alternatively, the flat contact surface 916 may be recessed from the lateral periphery of the enlarged head 911, and the enlarged head 911 may be formed with rounded corners or polygonal contours around the lateral periphery. The second additive process may include an electrochemical deposition process using an electrochemical deposition device 939 that uses an electrolytic fluid 940 to plate a metal (e.g., copper) onto the conductive pillar 909, as depicted in FIG. 9B. Alternatively, the second additive process may include, for example, a material jetting process, a laser sintering process, or an electrostatic deposition process. The second additive process may be a continuation of the first additive process described with reference to FIG. 9A. The enlarged head 911 may include any of the conductive materials disclosed with reference to the conductive pillar 109 of FIG. 1A. Using the second additive process to form the enlarged head 911 can further reduce the manufacturing cost and complexity of the microelectronic device 900.

Referring to FIG. 9C, the conductive pillar 909 may optionally be heated, for example, by a radiant heating process 941. The conductive pillar 909 may be heated to remove volatile materials from the conductive pillar 909, melt the nanoparticles in the conductive pillar 909, or increase the density of the conductive pillar 909. As an alternative to the radiant heating process 941, the conductive pillar 909 may be heated by a hot plate process, a forced air heating process, or a furnace process. Using nanoparticles to form the conductive pillar 909 enables melting of the nanoparticles at a temperature lower than the melting point of the corresponding bulk metal. For example, copper nanoparticles smaller than 30 nanometers have been reported to melt at less than 500°C.

Referring to FIG. 9D, solder 912 is formed on the enlarged head 911 to cover the flat contact surface 916.
The solder 912 may be formed by a third additive process. The third additive process may include a material extrusion process using a material extrusion device 942 that dispenses the solder paste 943 onto the enlarged head 911, as depicted in FIG. 9D. Alternatively, the third additive process may include, for example, a binder jetting process, a material jetting process, an electrostatic deposition process, or an electrochemical deposition process. If needed, the solder 912 may be heated to remove volatile materials. The solder 912 may be heated at a temperature (for example, 100° C. to 300° C.) lower than the temperature required to melt the nanoparticles in the conductive pillar 909.

The conductive pillar 909 and the solder 912 provide the bump bonding structure 908 of the microelectronic device 900. The pillar 910 and the enlarged head 911 may have shapes other than those depicted in FIGS. 9A to 9D. For example, the pillar 910 and the enlarged head 911 may have the shapes depicted in FIGS. 1A and 1B.

FIGS. 10A and 10B are cross-sections of another example microelectronic device including conductive pillars with enlarged heads. Referring to FIG. 10A, the microelectronic device 1000 has a substrate 1001 with an I/O surface 1002 and an I/O pad 1003 on the I/O surface 1002. The substrate 1001 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pad 1003 may be electrically coupled to the interconnect 1005 through a via 1006. The PO layer 1007 may optionally be disposed on the I/O surface 1002, with openings exposing the I/O pad 1003. The first seed layer 1004 may be disposed on the I/O pad 1003. The first seed layer 1004 may have a composition similar to that of the seed layer 104 of FIG. 1A.

The microelectronic device 1000 includes a bump bonding structure 1008 on the I/O pad 1003. The bump bonding structure 1008 includes a conductive pillar 1009 having a pillar 1010 and an enlarged head 1011.
In this example, the bump bonding structure 1008 may include a second seed layer 1044 between the pillar 1010 and the enlarged head 1011. The second seed layer 1044 may have a composition of a conductive material suitable for an electroplating process. The pillar 1010 and the enlarged head 1011 may include any of the conductive materials disclosed with reference to FIG. 1A. In a variation of this example, the pillar 1010 and the enlarged head 1011 may comprise primarily copper. In one case, the enlarged head 1011 may have the same composition as the pillar 1010. In another case, the enlarged head 1011 may have a composition different from that of the pillar 1010. The pillar 1010 extends from the first seed layer 1004 to the second seed layer 1044. The enlarged head 1011 is positioned on the second seed layer 1044, opposite the pillar 1010. The enlarged head 1011 extends laterally beyond the pillar 1010 in at least one lateral direction. The enlarged head 1011 has a flat contact surface 1016. The contact surface 1016 is positioned opposite the pillar 1010.

The bump bonding structure 1008 of this example has a first connection area 1045 on the contact surface 1016 and a second connection area 1046 on the contact surface 1016. The bump bonding structure 1008 includes solder 1012 on the enlarged head 1011 in the first connection area 1045 and the second connection area 1046. The bump bonding structure 1008 may optionally include an insulator layer 1047 on the enlarged head 1011 between the first connection area 1045 and the second connection area 1046.

Referring to FIG. 10B, the microelectronic device 1000 is assembled into a package structure (such as a lead frame or a chip carrier). The bump bonding structure 1008 is connected to the first package electrode 1018a and the second package electrode 1018b of the package structure, which may be a first lead 1018a and a second lead 1018b.
The solder 1012 couples the conductive pillar 1009 in the first connection area 1045 to the first lead 1018a and couples the conductive pillar 1009 in the second connection area 1046 to the second lead 1018b. Connecting the enlarged head 1011 to the first lead 1018a and the second lead 1018b can advantageously enable a smaller package structure for a microelectronic device 1000 of a given size.

FIGS. 11A to 11K are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example forming method. Referring to FIG. 11A, a microelectronic device 1100 has a substrate 1101 with an I/O surface 1102 and an I/O pad 1103 on the I/O surface 1102. The substrate 1101 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pad 1103 may be electrically coupled to the interconnect 1105 through one or more vias 1106. The PO layer 1107 may optionally be disposed on the I/O surface 1102, with openings exposing the I/O pad 1103.

The first seed layer 1104 is formed on the I/O surface 1102 and on the PO layer 1107 (if present). The first seed layer 1104 contacts the I/O pad 1103 through the opening in the PO layer 1107. The first seed layer 1104 provides a conductive layer for the subsequent first electroplating process, and may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The first seed layer 1104 may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

The pillar plating mask 1119 is formed on the first seed layer 1104. The pillar plating mask 1119 has a pillar opening 1131 that exposes the first seed layer 1104 in the region above the I/O pad 1103. The pillar plating mask 1119 may be formed, for example, by the photolithography process disclosed with reference to FIG. 6A or by the additive process disclosed with reference to FIG. 7A. Other methods of forming the pillar plating mask 1119 are within the scope of this example.

Referring to FIG.
11B, the pillar electroplating process forms the pillar 1110 on the first seed layer 1104 in the pillar opening 1131 (i.e., where the first seed layer 1104 is exposed by the pillar plating mask 1119). The pillar electroplating process may be similar to the pillar electroplating process disclosed with reference to FIG. 2B. The pillar 1110 extends from the first seed layer 1104 to near the top surface of the pillar plating mask 1119. The top surface of the pillar plating mask 1119 is positioned opposite the first seed layer 1104. In a variation of this example, the pillar 1110 may extend to a few microns below the top surface of the pillar plating mask 1119, as depicted in FIG. 11B. In another variation, the pillar 1110 may extend to the top surface of the pillar plating mask 1119. In a further variation, the pillar 1110 may extend several microns above the top surface of the pillar plating mask 1119. The pillar 1110 is part of the conductive pillar 1109.

Referring to FIG. 11C, a second seed layer 1144 is formed on the pillar 1110 and on the pillar plating mask 1119. The second seed layer 1144 provides a conductive layer for the subsequent second electroplating process. The second seed layer 1144 may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A. The second seed layer 1144 may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

Referring to FIG. 11D, a head plating mask 1132 is formed on the second seed layer 1144. The head plating mask 1132 has a head opening 1133 exposing the second seed layer 1144 around the pillar 1110, as shown in FIG. 11D. The head plating mask 1132 may be formed, for example, by a process similar to the process used to form the pillar plating mask 1119. Other methods of forming the head plating mask 1132 are within the scope of this example.

Referring to FIG.
11E, the head electroplating process forms an enlarged head 1111 on the second seed layer 1144 in the head opening 1133 (i.e., where the second seed layer 1144 is exposed by the head plating mask 1132). The head electroplating process may be similar to the metal electroplating process disclosed with reference to FIG. 2B. The enlarged head 1111 extends from the second seed layer 1144 to near the top surface of the head plating mask 1132. The top surface of the head plating mask 1132 is positioned opposite the second seed layer 1144. In a variation of this example, the enlarged head 1111 may extend to a few microns below the top surface of the head plating mask 1132, as depicted in FIG. 11E. In another variation, the enlarged head 1111 may extend to the top surface of the head plating mask 1132. In a further variation, the enlarged head 1111 may extend above the top surface of the head plating mask 1132. The enlarged head 1111 is part of the conductive pillar 1109. The enlarged head 1111 has a contact surface 1116 positioned opposite the second seed layer 1144. The enlarged head 1111 of this example has a first connection area 1145 on the contact surface 1116 and a second connection area 1146 on the contact surface 1116.

Referring to FIG. 11F, an insulator layer 1147 may be formed on the enlarged head 1111. The insulator layer 1147 may be positioned on the contact surface 1116 between the first connection area 1145 and the second connection area 1146. The insulator layer 1147 may include an organic dielectric material (for example, a polymer material or a silicone material) or an inorganic dielectric material (for example, silicon dioxide or aluminum oxide). The insulator layer 1147 may be formed by a photolithography process using a photosensitive polymer material (for example, photosensitive polyimide). Alternatively, the insulator layer 1147 may be formed by depositing a layer of dielectric material followed by masking and etching processes.
In another example, the insulator layer 1147 may be formed by an additive process.

Referring to FIG. 11G, solder 1112 is formed on the contact surface 1116 in the first connection area 1145 and the second connection area 1146. The solder 1112 may be formed, for example, by an electroplating process or an additive process. The solder 1112 may include any of the metals disclosed with reference to the solder 112 of FIG. 1A.

Referring to FIG. 11H, the head plating mask 1132 of FIG. 11G is removed. The head plating mask 1132 may be removed, for example, using any of the methods disclosed with reference to FIG. 2E or FIG. 4E. In this example, the second seed layer 1144 can protect the pillar plating mask 1119 from removal, as indicated in FIG. 11H.

Referring to FIG. 11I, the second seed layer 1144 exposed by the enlarged head 1111 is removed, so that the second seed layer 1144 remains at least in place between the pillar 1110 and the enlarged head 1111. The second seed layer 1144 may be removed by, for example, a wet etching process or a plasma process. Removing the second seed layer 1144 leaves the pillar plating mask 1119 substantially intact, as indicated in FIG. 11I.

Referring to FIG. 11J, the pillar plating mask 1119 of FIG. 11I is removed. The pillar plating mask 1119 may be removed, for example, using any of the methods disclosed with reference to FIG. 2E or FIG. 4E. In this example, removing the pillar plating mask 1119 can leave the first seed layer 1104 and the second seed layer 1144 substantially intact, as indicated in FIG. 11J.

Referring to FIG. 11K, the first seed layer 1104 exposed by the pillar 1110 is removed to leave the first seed layer 1104 between the pillar 1110 and the I/O pad 1103. The first seed layer 1104 may be removed by a wet etching process or a plasma process that leaves at least a part of the PO layer 1107 in place on the substrate 1101.
Removing the first seed layer 1104 may also remove any portion of the second seed layer 1144 exposed to the reagent used to remove the first seed layer 1104, leaving the second seed layer 1144 in place between the pillar 1110 and the enlarged head 1111. The pillar 1110, the second seed layer 1144, the enlarged head 1111, and the solder 1112 provide the bump bonding structure 1108 of the microelectronic device 1100.

FIGS. 12A and 12B are cross-sections of another example microelectronic device including a pair of conductive pillars with an enlarged head. Referring to FIG. 12A, a microelectronic device 1200 has a substrate 1201 with an I/O surface 1202. The microelectronic device 1200 includes a first I/O pad 1203a and a second I/O pad 1203b on the I/O surface 1202. The substrate 1201 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pads 1203a and 1203b may be electrically coupled to the interconnect 1205 through vias 1206. The PO layer 1207 may optionally be disposed on the I/O surface 1202, with openings exposing the I/O pads 1203a and 1203b. The first seed layer 1204 may be disposed on the I/O pads 1203a and 1203b. The first seed layer 1204 may have a composition similar to that of the seed layer 104 of FIG. 1A.

The microelectronic device 1200 includes bump bonding structures 1208 on the I/O pads 1203a and 1203b. The bump bonding structure 1208 includes a conductive pillar 1209 having a first pillar 1210a on the first I/O pad 1203a and a second pillar 1210b on the second I/O pad 1203b. The bump bonding structure 1208 further includes an enlarged head 1211 on the first pillar 1210a and the second pillar 1210b. In this example, the bump bonding structure 1208 may include a second seed layer 1244 between the first pillar 1210a and the enlarged head 1211 and between the second pillar 1210b and the enlarged head 1211. The second seed layer 1244 may have a composition suitable for an electroplating process.
In this example, the second seed layer 1244 may extend onto the lateral surface of the enlarged head 1211, as depicted in FIG. 12A. The pillars 1210a and 1210b and the enlarged head 1211 may include any of the conductive materials disclosed with reference to FIG. 1A. In a variation of this example, the pillars 1210a and 1210b and the enlarged head 1211 may comprise primarily copper. The first pillar 1210a extends from the first seed layer 1204 to the second seed layer 1244, as does the second pillar 1210b. The enlarged head 1211 is positioned on the second seed layer 1244, opposite the first pillar 1210a and the second pillar 1210b. The enlarged head 1211 has a flat contact surface 1216. The contact surface 1216 is positioned opposite the first pillar 1210a and the second pillar 1210b.

The bump bonding structure 1208 of this example has a first connection area 1245 on the contact surface 1216. The bump bonding structure 1208 of this example has a second connection area 1246 at the boundary between the enlarged head 1211 and the second pillar 1210b, where the enlarged head 1211 contacts the second pillar 1210b through the second seed layer 1244. The bump bonding structure 1208 includes solder 1212 on the contact surface 1216 of the enlarged head 1211. The solder 1212 may have the composition disclosed with reference to the solder 112 of FIG. 1A.

FIG. 12B depicts the microelectronic device 1200 after being assembled into a package structure, such as a lead frame or a chip carrier. The bump bonding structure 1208 is connected to the package electrode 1218 of the package structure, which may be a lead 1218. The solder 1212 couples the conductive pillar 1209 to the package electrode 1218 at the first connection area 1245.
Therefore, the bump bonding structure 1208 electrically connects the first I/O pad 1203a to the package electrode 1218 through the first connection area 1245, and electrically connects the second I/O pad 1203b to the package electrode 1218 through the second connection area 1246. The bump bonding structure 1208 can advantageously enable a smaller package structure for a microelectronic device 1200 of a given size.

FIGS. 13A to 13I are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example forming method. Referring to FIG. 13A, a microelectronic device 1300 has a substrate 1301 with an I/O surface 1302. The microelectronic device 1300 includes a first I/O pad 1303a and a second I/O pad 1303b on the I/O surface 1302. The substrate 1301 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pads 1303a and 1303b may be electrically coupled to the interconnect 1305 through vias 1306. The PO layer 1307 may optionally be disposed on the I/O surface 1302, with openings exposing the I/O pads 1303a and 1303b.

The first seed layer 1304 may be formed on the I/O surface 1302 and on the PO layer 1307 (if present). The first seed layer 1304 contacts the I/O pads 1303a and 1303b through the openings in the PO layer 1307. The first seed layer 1304 may have a composition similar to that of the first seed layer 1104 of FIG. 11A and may be formed by a similar process.

The pillar plating mask 1319 is formed on the first seed layer 1304. The pillar plating mask 1319 has a first pillar opening 1331a that exposes the first seed layer 1304 in a first region above the first I/O pad 1303a, and a second pillar opening 1331b that exposes the first seed layer 1304 in a second region above the second I/O pad 1303b. The first pillar opening 1331a and the second pillar opening 1331b may have a tapered vertical profile, as depicted in FIG. 13A.
Compared with the process of forming a similar plating mask having openings with a constant-width vertical profile, the pillar plating mask 1319 can be formed by a photolithography process with relaxed specifications. Using a photolithography process with relaxed specifications can advantageously reduce the manufacturing cost of the microelectronic device 1300.

Referring to FIG. 13B, the pillar electroplating process forms a first pillar 1310a on the first seed layer 1304 in the first pillar opening 1331a and a second pillar 1310b on the first seed layer 1304 in the second pillar opening 1331b. The pillar electroplating process may be similar to the pillar electroplating process disclosed with reference to FIG. 2B. The first pillar 1310a and the second pillar 1310b are part of the conductive pillar 1309.

Referring to FIG. 13C, a head trench layer 1332 is formed on the pillar plating mask 1319. The head trench layer 1332 has a head trench 1333 that exposes the first pillar 1310a and the second pillar 1310b and exposes the pillar plating mask 1319 around the first pillar 1310a and the second pillar 1310b, as shown in FIG. 13C. The head trench layer 1332 may be formed, for example, by a process compatible with the pillar plating mask 1319.

Referring to FIG. 13D, a second seed layer 1344 is formed on the head trench layer 1332 to extend into the head trench 1333 and contact the first pillar 1310a and the second pillar 1310b. The second seed layer 1344 provides a conductive layer for the subsequent head electroplating process. The second seed layer 1344 may include any of the metals disclosed with reference to the seed layer 104 of FIG. 1A, and may be formed as disclosed with reference to the seed layer 204 of FIG. 2A.

Referring to FIG. 13E, the head electroplating process forms a copper head layer 1348 on the second seed layer 1344. The copper head layer 1348 fills the head trench 1333 and extends over the head trench layer 1332 adjacent to the head trench 1333.
The head electroplating process may use additives (such as levelers, suppressors (sometimes referred to as inhibitors), and accelerators (sometimes referred to as brighteners)) to form the copper head layer 1348 with a greater thickness in the head trench 1333 than over the head trench layer 1332 adjacent to the head trench 1333.

Referring to FIG. 13F, the copper head layer 1348 above the head trench layer 1332 adjacent to the head trench 1333 is removed, leaving the copper head layer 1348 in the head trench 1333 to provide an enlarged head 1311 of the conductive pillar 1309. The second seed layer 1344 on the head trench layer 1332 adjacent to the head trench 1333 is also removed. The copper head layer 1348 above the head trench layer 1332 may be removed, for example, by a copper chemical mechanical polishing (CMP) process, which uses a polishing pad and a copper removal slurry. The second seed layer 1344 on the head trench layer 1332 may also be removed by the copper CMP process, or may be removed by a selective wet etching process. The enlarged head 1311 has a contact surface 1316 positioned opposite the first pillar 1310a and the second pillar 1310b. The method of forming the enlarged head 1311 disclosed with reference to FIGS. 13C to 13F is sometimes referred to as a damascene process, specifically, a copper damascene process.

Referring to FIG. 13G, solder 1312 is formed on the contact surface 1316. The solder 1312 may be formed, for example, by an electroplating process or an additive process. The solder 1312 may include any of the metals disclosed with reference to the solder 112 of FIG. 1A. The first pillar 1310a, the second pillar 1310b, the second seed layer 1344, the enlarged head 1311, and the solder 1312 provide the bump bonding structure 1308 of the microelectronic device 1300.

Referring to FIG. 13H, the head trench layer 1332 and the pillar plating mask 1319 of FIG.
13G are removed to leave the bump bonding structure 1308 in place on the first I/O pad 1303a and the second I/O pad 1303b. The head trench layer 1332 and the pillar plating mask 1319 may be removed by a single process, such as an oxygen plasma process or a downstream asher process. Alternatively, the head trench layer 1332 may be removed by a first removal process suitable for the material of the head trench layer 1332, and the pillar plating mask 1319 may subsequently be removed by a second removal process suitable for the material of the pillar plating mask 1319. For example, the first removal process may include a dry process using oxygen radicals, and the second removal process may include a wet removal process using one or more organic solvents.

Referring to FIG. 13I, the first seed layer 1304 exposed by the first pillar 1310a and the second pillar 1310b is removed to leave the first seed layer 1304 between the first pillar 1310a and the first I/O pad 1303a and between the second pillar 1310b and the second I/O pad 1303b. The first seed layer 1304 may be removed by a wet etching process or a plasma process that leaves at least a part of the PO layer 1307 in place on the substrate 1301. In a variation of this example in which the second seed layer 1344 has a composition different from that of the first seed layer 1304, removing the first seed layer 1304 can leave the second seed layer 1344 substantially intact, as indicated in FIG. 13I. The first pillar 1310a and the second pillar 1310b, the second seed layer 1344, the enlarged head 1311, and the solder 1312 provide the bump bonding structure 1308 of the microelectronic device 1300.

FIGS. 14A and 14B are cross-sections of another example microelectronic device including conductive pillars with enlarged heads. Referring to FIG. 14A, a microelectronic device 1400 has a substrate 1401 with an I/O surface 1402.
The microelectronic device 1400 includes a first I/O pad 1403a and a second I/O pad 1403b on the I/O surface 1402. The substrate 1401 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pads 1403a and 1403b may be electrically coupled to the interconnect 1405 through vias 1406. The PO layer 1407 may optionally be disposed on the I/O surface 1402, with openings exposing the I/O pads 1403a and 1403b.

The microelectronic device 1400 of this example may further include an auxiliary pad 1449. The auxiliary pad 1449 may optionally have no electrical connection through one of the vias 1406 to one of the interconnects 1405, as depicted in FIG. 14A. The auxiliary pad 1449 may have the same composition and structure as the first I/O pad 1403a and the second I/O pad 1403b. The PO layer 1407 has an opening that exposes the auxiliary pad 1449.

The first seed layer 1404 may be disposed on the I/O pads 1403a and 1403b and on the auxiliary pad 1449. The first seed layer 1404 may have a composition similar to that of the seed layer 104 of FIG. 1A.

The microelectronic device 1400 includes a bump bonding structure 1408. The bump bonding structure 1408 includes a conductive pillar 1409. The conductive pillar 1409 includes a pillar 1410 on the first seed layer 1404; the pillar 1410 extends over the first I/O pad 1403a and the second I/O pad 1403b. The conductive pillar 1409 of this example also includes an auxiliary pillar 1450 on the first seed layer 1404 on the auxiliary pad 1449. The auxiliary pillar 1450 is separate from the pillar 1410. In this example, the first seed layer 1404 may extend onto the lateral surfaces of the pillar 1410 and the auxiliary pillar 1450, as depicted in FIG. 14A.

The conductive pillar 1409 further includes a second seed layer 1444 on the pillar 1410 and the auxiliary pillar 1450. The conductive pillar 1409 includes an enlarged head 1411 on the second seed layer 1444.
The enlarged head 1411 of this example extends over the pillar 1410 and the auxiliary pillar 1450. The second seed layer 1444 may extend onto the lateral surface of the enlarged head 1411, as depicted in FIG. 14A. The enlarged head 1411 has a flat contact surface 1416. The contact surface 1416 is positioned opposite the pillar 1410 and the auxiliary pillar 1450. The auxiliary pillar 1450 can provide mechanical support for the enlarged head 1411.

The bump bonding structure 1408 of this example has a first connection area 1445 on the contact surface 1416 and a second connection area 1446 on the contact surface 1416. The bump bonding structure 1408 includes solder 1412 on the enlarged head 1411 in the first connection area 1445 and the second connection area 1446. The bump bonding structure 1408 may optionally include an insulator layer 1447 on the enlarged head 1411 between the first connection area 1445 and the second connection area 1446.

Referring to FIG. 14B, the microelectronic device 1400 is assembled into a package structure (such as a lead frame or a chip carrier). The package structure has a first package electrode 1418a, a second package electrode 1418b, and a third package electrode 1418c positioned between the first package electrode 1418a and the second package electrode 1418b. The first package electrode 1418a, the second package electrode 1418b, and the third package electrode 1418c may be referred to as a first lead 1418a, a second lead 1418b, and a third lead 1418c. The bump bonding structure 1408 is connected to the first lead 1418a and the second lead 1418b. The solder 1412 couples the conductive pillar 1409 in the first connection area 1445 to the first lead 1418a and couples the conductive pillar 1409 in the second connection area 1446 to the second lead 1418b.
The third lead 1418c may be isolated from the enlarged head 1411 by the insulator layer 1447 to advantageously prevent undesired electrical contact between the third lead 1418c and the bump bonding structure 1408. Compared with bump bonding structures having one connection area per structure, connecting the enlarged head 1411 to the first lead 1418a and the second lead 1418b can enable a more efficient layout of the microelectronic device 1400 or of the leads 1418a, 1418b, and 1418c.

FIGS. 15A to 15F are cross-sections of a microelectronic device including conductive pillars with enlarged heads, depicted in stages of another example forming method. Referring to FIG. 15A, a microelectronic device 1500 has a substrate 1501 with an I/O surface 1502. The microelectronic device 1500 includes a first I/O pad 1503a, a second I/O pad 1503b, and an auxiliary pad 1549 on the I/O surface 1502. The substrate 1501 may be similar to the substrate 101 disclosed with reference to FIG. 1A. The I/O pads 1503a and 1503b may be electrically coupled to the interconnect 1505 through vias 1506. The auxiliary pad 1549 may optionally have no electrical connection through one of the vias 1506 to one of the interconnects 1505, as depicted in FIG. 15A. The PO layer 1507 may optionally be disposed on the I/O surface 1502, with openings exposing the I/O pads 1503a and 1503b and the auxiliary pad 1549.

The pillar trench layer 1519 is formed on the substrate 1501 and the PO layer 1507 (if present). The pillar trench layer 1519 has a pillar trench 1531 exposing the first I/O pad 1503a and the second I/O pad 1503b. The pillar trench layer 1519 has an auxiliary pillar trench 1551 exposing the auxiliary pad 1549. The pillar trench layer 1519 may include an organic polymer or other materials suitable for the copper CMP process and subsequent removal processes.
The pillar trench layer 1519 may be formed, for example, by a photolithography process or an additive process.

The first seed layer 1504 is subsequently formed on the pillar trench layer 1519. The first seed layer 1504 extends into the pillar trench 1531 and contacts the first I/O pad 1503a and the second I/O pad 1503b. The first seed layer 1504 also extends into the auxiliary pillar trench 1551 and contacts the auxiliary pad 1549. The first seed layer 1504 provides a conductive layer for the subsequent pillar electroplating process.

The pillar electroplating process forms a copper pillar layer 1552 on the first seed layer 1504. The copper pillar layer 1552 fills the pillar trench 1531 and the auxiliary pillar trench 1551 and extends over the pillar trench layer 1519 adjacent to the pillar trench 1531 and the auxiliary pillar trench 1551. The pillar electroplating process may use additives to form the copper pillar layer 1552 with a greater thickness in the pillar trench 1531 and the auxiliary pillar trench 1551 than on the pillar trench layer 1519 adjacent to the pillar trench 1531 and the auxiliary pillar trench 1551.

Referring to FIG. 15B, the copper pillar layer 1552 on the pillar trench layer 1519 is removed, for example, by a copper CMP process, to leave the copper pillar layer 1552 in the pillar trench 1531 and the auxiliary pillar trench 1551. The copper pillar layer 1552 in the pillar trench 1531 provides the pillar 1510 of the conductive pillar 1509. The copper pillar layer 1552 in the auxiliary pillar trench 1551 provides the auxiliary pillar 1550 of the conductive pillar 1509. The first seed layer 1504 above the pillar trench layer 1519 is also removed, as indicated in FIG. 15B. The first seed layer 1504 may be removed from the pillar trench layer 1519 by the process used to remove the copper pillar layer 1552 from the pillar trench layer 1519.
Alternatively, the first seed layer 1504 may be removed from the pillar trench layer 1519 by a separate process (for example, a wet etching process).

Referring to FIG. 15C, a head trench layer 1532 is formed on the pillar trench layer 1519. The head trench layer 1532 has a head trench 1533 exposing the pillar 1510 and the auxiliary pillar 1550, and exposing the pillar trench layer 1519 around the pillar 1510 and the auxiliary pillar 1550, as shown in FIG. 15C. The head trench layer 1532 may be formed, for example, by a photolithography process or an additive process. The head trench layer 1532 may be formed by a process similar to the process used to form the pillar trench layer 1519.

The second seed layer 1544 is formed on the head trench layer 1532, extending into the head trench 1533 and contacting the pillar 1510 and the auxiliary pillar 1550. The second seed layer 1544 provides a conductive layer for the subsequent head electroplating process.

The head electroplating process forms a copper head layer 1548 on the second seed layer 1544. The copper head layer 1548 fills the head trench 1533 and extends above the head trench layer 1532 adjacent to the head trench 1533. The head electroplating process may use additives similar to those used in the pillar electroplating process to form a copper head layer 1548 having a greater thickness in the head trench 1533 than over the head trench layer 1532 adjacent to the head trench 1533.

Referring to FIG. 15D, the copper head layer 1548 on the head trench layer 1532 adjacent to the head trench 1533 is removed, for example, by a copper CMP process, leaving the copper head layer 1548 in the head trench 1533 to provide the enlarged head 1511 of the conductive pillar 1509. The second seed layer 1544 on the head trench layer 1532 adjacent to the head trench 1533 is also removed. The second seed layer 1544 on the head trench layer 1532 may be removed by the copper CMP process, or may be removed by a separate process.
The enlarged head 1511 has a contact surface 1516 positioned opposite the pillar 1510 and the auxiliary pillar 1550.

Referring to FIG. 15E, the contact surface 1516 includes a first connection area 1545 and a second connection area 1546. In this example, the first connection area 1545 and the second connection area 1546 are laterally separated. The insulator layer 1547 may be formed on the contact surface 1516 between the first connection area 1545 and the second connection area 1546. The insulator layer 1547 may include any of the materials disclosed with reference to the insulator layer 1347 of FIG. 13F, and may be formed by any of the processes disclosed with reference to the insulator layer 1347 of FIG. 13F.

The solder 1512 is formed on the contact surface 1516 in the first connection area 1545 and the second connection area 1546. The solder 1512 may be formed, for example, by an electroplating process or an additive process. The solder 1512 may include any of the metals disclosed with reference to the solder 112 of FIG. 1A. The solder 1512 may be formed after the insulator layer 1547 is formed, or may be formed before the insulator layer 1547 is formed. The first seed layer 1504, the pillar 1510, the auxiliary pillar 1550, the second seed layer 1544, the enlarged head 1511, the solder 1512, and the insulator layer 1547 provide a bump bonding structure 1508.

Referring to FIG. 15F, the head trench layer 1532 and the pillar trench layer 1519 of FIG. 15E are removed, leaving the bump bonding structure 1508 in place on the first I/O pad 1503a, the second I/O pad 1503b, and the auxiliary pad 1549. The head trench layer 1532 and the pillar trench layer 1519 may be removed by a single process, which is facilitated by the direct contact between the head trench layer 1532 and the pillar trench layer 1519.
The head trench layer 1532 and the pillar trench layer 1519 may be removed, for example, by a dry removal process using oxygen radicals or by a wet removal process using one or more organic solvents.

The various features of the examples disclosed herein may be combined in other manifestations of the example microelectronic devices. For example, the bump bonding structures disclosed with reference to FIG. 1A and FIG. 1B, FIG. 3, FIG. 5A and FIG. 5B, FIG. 8, FIG. 10A, FIG. 12A, and FIG. 14A may be formed by any of the methods disclosed with reference to FIGS. 4A to 4F, 6A to 6E, 7A to 7D, 9A to 9D, 11A to 11K, 13A to 13I, and 15A to 15F. The plating masks disclosed with reference to FIGS. 2A, 4A, 6A, 6B, 7A, 11A, 11D, and 13A may be formed by any of the methods of forming a plating mask or a trench layer disclosed herein. Similarly, the trench layers disclosed with reference to FIGS. 13C, 15A, and 15C may be formed by any of the methods of forming a trench layer or a plating mask disclosed herein. The bump bonding structures disclosed in FIG. 1A and FIG. 1B, FIG. 3, FIG. 5A and FIG. 5B, FIG. 8, FIG. 10A, FIG. 12A, and FIG. 14A may include the barrier layer disclosed with reference to FIG. 3.

Within the scope of the claims, it is possible to modify the described embodiments, and other embodiments are possible.
Methods and apparatus for accelerating OpenCL (Open Computing Language) applications by utilizing a virtual OpenCL device as an interface to compute clouds are described. In one embodiment, one or more computing operations may be offloaded from a local processor to a virtual device that represents available resources of a cloud. Other embodiments are also described.
1. A method for accelerating OpenCL applications, the method including:
in response to selecting a virtual device from among multiple devices available to an application, offloading one or more computing operations to the virtual device,
wherein the selection of the virtual device is based on a comparison of one or more properties of the virtual device with one or more requirements determined by the application, and
the one or more properties of the virtual device represent available resources of a cloud.
2. The method of claim 1, wherein the plurality of devices include a processor.
3. The method of claim 2, further comprising the processor determining whether to offload the one or more computing operations from the processor to the virtual device.
4. The method of claim 2, further comprising the processor executing the application.
5. The method of claim 1, wherein the offloading is performed according to OpenCL (Open Computing Language).
6. The method of claim 1, further comprising generating a device context of the virtual device in response to selecting the virtual device from among the plurality of devices.
7. The method of claim 6, further comprising interacting with the virtual device based on the generated device context.
8. The method of claim 1, further comprising receiving one or more properties of the plurality of devices in response to a request from the application.
9. An apparatus for accelerating OpenCL applications, the apparatus including:
a memory for storing data corresponding to a virtual device, wherein the virtual device represents available resources of a cloud; and
a processor for determining whether to offload one or more computing operations from the processor to the virtual device.
10. The apparatus of claim 9, wherein the memory is to store one or more of the following: an OpenCL client application, an OpenCL API (application programming interface), and an OpenCL driver.
11. The apparatus of claim 10, wherein the OpenCL driver includes the virtual device.
12. The apparatus of claim 9, further comprising one or more links for coupling network services of the virtual device to network services of available resources of the cloud.
13. The apparatus of claim 9, wherein the cloud is coupled to the processor via a network.
14. The apparatus of claim 9, wherein the processor includes one or more processor cores.
15. The apparatus of claim 9, further comprising a resource agent for determining which available resources of the cloud will be used to service the one or more offloaded computing operations.
Accelerating OpenCL Applications by Utilizing a Virtual OpenCL Device as an Interface to the Computing Cloud

Field

The invention generally relates to the field of computing. More specifically, embodiments of the present invention generally relate to techniques for accelerating OpenCL applications by using a virtual OpenCL device as an interface to a computing cloud.

Background

OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. OpenCL gives software developers a uniform programming environment for writing efficient, portable code for high-performance computing servers, desktop computer systems, and handheld devices, using any mix of multi-core CPUs (central processing units), GPUs (graphics processing units), Cell-type architectures, and other parallel processors such as DSPs (digital signal processors). The standard was developed by the Khronos Group.

Brief Description of the Drawings

The detailed description is provided with reference to the drawings. In the drawings, the left-most digit of a reference number identifies the drawing in which the reference number first appears. The use of the same reference numbers in different drawings indicates similar or identical items.

FIGS. 1 and 3-4 show block diagrams of embodiments of computing systems that may be used to implement some embodiments discussed herein.

FIG. 2 shows a flowchart according to an embodiment of the invention.

Detailed Description

In the following description, numerous specific details are set forth to provide a thorough understanding of various embodiments. However, various embodiments of the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the specific embodiments of the present invention.
In addition, various aspects of embodiments of the present invention may be implemented using various means, such as integrated semiconductor circuits ("hardware", also known as "HW"), computer-readable instructions organized into one or more programs ("software", also known as "SW"), or some combination of hardware and software. For the purposes of this disclosure, a reference to "logic" shall mean hardware, software (including, for example, microcode that controls the operation of a processor), or some combination thereof.

Reference in the specification to "one embodiment" or "an embodiment" means that a specific feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation. The appearances of the phrase "in one embodiment" throughout this specification may or may not all refer to the same embodiment.

In addition, in the description and claims, the terms "coupled" and "connected", along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.

In OpenCL, parallel compute kernels can be offloaded from a host (usually a CPU) to an accelerator device (e.g., a GPU, CPU, or FPGA (field-programmable gate array)) in the same system. In addition, OpenCL explicitly targets mobile and embedded devices to facilitate the development of portable computing-intensive applications. However, for the foreseeable future, the parallel computing capabilities of mobile devices are likely to remain quite limited.
Although this may be sufficient for small, low-latency graphics workloads, attempting to run computationally intensive OpenCL applications (such as simulations in scientific, engineering, and commercial computing, complex data analysis, and the like) will lead to a disappointing user experience. In addition, there are likely to be ultra-lightweight or embedded platforms that contain no OpenCL-capable devices at all and have very limited CPU performance. Complex OpenCL applications will simply not run on these systems.

Even on standard desktops and workstations, compute-intensive OpenCL applications could be accelerated by offloading OpenCL workloads to server farms in a computing cloud. However, existing interfaces for running workloads in the cloud may require significant modifications to the application itself. These modifications may also be specific to particular cloud computing systems, which further hinders the adoption of cloud computing in the industry.

To this end, some of the embodiments discussed herein provide techniques for accelerating OpenCL applications by utilizing a virtual OpenCL device as an interface to the computing cloud. In an embodiment, a computationally intensive OpenCL application is accelerated by offloading one or more of its compute kernels to a computing cloud over a network (such as the Internet or an intranet). In one embodiment, the offloading may be performed such that it is transparent to the application; therefore, there is no need to modify the application code. This allows OpenCL applications to run on lightweight systems while accessing the performance potential of large servers in the back-end cloud.

FIG. 1 shows a computing system 100 including a virtual OpenCL device according to an embodiment.
As shown, one or more clients 102 may include an OpenCL client application 104 (which may be an application compatible with OpenCL), an OpenCL API (application programming interface) 106, an OpenCL driver 108, a virtual OpenCL device 110, and a client network service 112.

The network service 112 is coupled to the network 120 via a link (e.g., operating according to SOAP (Simple Object Access Protocol)). In one embodiment, the network 120 may include a computer network (including, for example, the Internet, an intranet, or a combination thereof) that allows agents (such as computing devices) to communicate data. In an embodiment, the network 120 may include one or more interconnects (or interconnected networks) and/or shared communication networks that communicate via serial (e.g., point-to-point) links.

In one embodiment, the system 100 may support a layered protocol scheme, which may include a physical layer, a link layer, a routing layer, a transport layer, and/or a protocol layer. For a point-to-point or shared network, the network 120 may also facilitate the transfer of data (e.g., in the form of packets) from one agent (e.g., a caching processor or a cache-aware memory controller) to another agent.
Additionally, in some embodiments, the network 120 may provide communications that follow one or more cache coherency protocols.

In addition, the network 120 may utilize any type of communication protocol, such as Ethernet, Fast Ethernet, Gigabit Ethernet, wide area network (WAN), fiber distributed data interface (FDDI), Token Ring, leased line, analog modem, digital subscriber line (DSL and its variants, such as high-bit-rate DSL (HDSL), integrated services digital network DSL (IDSL), and so on), asynchronous transfer mode (ATM), cable modem, and/or FireWire.

Wireless communication in the network 120 may be based on one or more of the following: wireless local area network (WLAN), wireless wide area network (WWAN), code division multiple access (CDMA) cellular radiotelephone communication systems, Global System for Mobile Communications (GSM) cellular radiotelephone systems, North American Digital Cellular (NADC) cellular radiotelephone systems, time division multiple access (TDMA) systems, extended TDMA (E-TDMA) cellular radiotelephone systems, third-generation partnership project (3G) systems such as wideband CDMA (WCDMA), and so on. In addition, network communication may be performed through an internal network interface device (e.g., located in the same physical enclosure as the computing system) or through an external network interface device such as a network interface card or controller (NIC) (e.g., having a separate physical enclosure and/or power supply).

As shown in FIG. 1, the network 120 may be coupled to resource agent logic 122, which determines which of one or more available servers (or computing resources) 126-1 to 126-Z of the cloud 130 will provide compute offload services to the client 102. Links 131-1 to 131-Z (operating, for example, according to SOAP) may couple the servers 126-1 to 126-Z to the resource agent 122.
Each of the servers 126-1 to 126-Z may include a network service (132-1 to 132-Z), an OpenCL API (134-1 to 134-Z), and an OpenCL driver (136-1 to 136-Z).

In an embodiment, the virtual OpenCL device 110 integrates the computing cloud into the OpenCL framework. The virtual device 110 may be implemented within the OpenCL driver 108, which handles communication with the cloud 130 infrastructure. The OpenCL driver 108 may be installed separately on the client system or may serve as an extension of an existing OpenCL driver. The driver 108 appears to the application 104 as a standard OpenCL driver and, in an embodiment, transparently handles all communication with the cloud 130 infrastructure. Users can turn cloud support on and off at the driver or system level. Moreover, the application itself may not notice any difference, except for the new device that appears in the list of available devices when cloud support is enabled, for example.

In an embodiment, the virtual OpenCL device 110 may represent the resources available to the client 102 in the cloud 130. If the application 104 is looking for the device with the highest performance, for example, it can select the virtual device 110 from the list and use it in the same OpenCL functions as a local device. In an embodiment, a special property of the virtual device is that it does not execute OpenCL functions locally, but forwards them to the computing cloud 130 over the network. The OpenCL driver 108 on the host/client platform 102 may act as a client (e.g., via the network service 112) that communicates with the network service interfaces (e.g., services 132-1 to 132-Z) provided by the cloud.

In order to transparently handle kernel offload and data transfer between the client 102 and the cloud 130, the API calls defined in the OpenCL runtime may be implemented as Web/network services.
For example, each time the application 104 calls an API function, the virtual device 110 may detect this and invoke the corresponding Web/network service in the cloud 130. In some embodiments, the cloud 130 may consist of a heterogeneous collection of computing systems. The only requirement may be that these computing systems support OpenCL. Each system may run network services corresponding to the OpenCL runtime calls. A network service may in turn execute OpenCL functions on the OpenCL devices locally available on the server (e.g., locally available on one or more of the servers 126-1 through 126-Z).

FIG. 2 shows a method 200 for accelerating OpenCL applications via a virtual device according to an embodiment. In some embodiments, one or more components discussed herein (e.g., with reference to FIGS. 1 or 3-4) may be used to perform one or more operations of method 200.

Referring to FIGS. 1-2, at operation 202, it is determined whether an application (e.g., application 104) has requested the available devices of the platform, for example, via the API call clGetDeviceIDs(). At operation 204, the properties of the available devices may be queried (e.g., on a processor such as those discussed with reference to FIGS. 3 or 4), for example, by calling clGetDeviceInfo(). At operation 206, the application may compare the device properties against the application's requirements. Based on the comparison result, the application may then select a device at operation 208. At operation 210, the application may create a context on the device, for example, by calling clCreateContext(). This context may then be used for further interaction with the device at operation 212. In one embodiment, the cloud-enhanced driver 108 adds the virtual device 110 to the returned list of available devices, for example, in response to the clGetDeviceIDs() call.
The virtual device represents the available resources in the cloud, and its properties describe the hardware characteristics of the corresponding systems.

In some embodiments, the cloud 130 consists of a server farm with powerful and/or multi-core CPUs, so the CL_DEVICE_TYPE property of the virtual device will be set to CL_DEVICE_TYPE_CPU. However, the cloud systems may also include GPUs (graphics processing units), accelerators, and so on; in these cases, the device type will be CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_ACCELERATOR, respectively. This means that each virtual device can represent a set of heterogeneous physical systems in the cloud of the same type and with the same properties. In some embodiments, the cloud may implement a virtual device of type CL_DEVICE_TYPE_CPU by deploying the same virtual machine on heterogeneous physical systems. In that case, the properties of the virtual device actually reflect the configuration of the virtual machine that will be deployed on the physical systems in the cloud. To use the virtual device, the application selects the device from the list and uses it in the same OpenCL functions as a local device. Accordingly, applications can query these properties to determine whether it makes sense to run a given OpenCL kernel on a cloud system or locally. In some embodiments, the application code does not need to be modified to take advantage of the cloud. Instead, the cloud can be seamlessly integrated into the OpenCL framework and selected by the application based solely on its OpenCL properties.

Therefore, some embodiments utilize both local compute offload and cloud computing. For example, the resource abstraction/management and data transfer capabilities and protocols provided by cloud computing (such as Web/network services) can be leveraged and integrated into the OpenCL framework via the virtual OpenCL device 110.
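The device-selection flow of method 200 (operations 202-212) can be sketched as follows. This is a minimal Python mock, not the OpenCL API itself: the function names only mirror clGetDeviceIDs(), clGetDeviceInfo(), and clCreateContext(), and the device list, property names, and requirement check are invented for illustration.

```python
# Hypothetical mock of the device-selection flow of method 200.
# Device names, properties, and the selection rule are illustrative only;
# a real application would call clGetDeviceIDs(), clGetDeviceInfo(),
# and clCreateContext() through the OpenCL runtime.

DEVICES = [
    # A local CPU and the cloud-backed virtual device appended by the driver.
    {"name": "local-cpu",   "type": "CL_DEVICE_TYPE_CPU", "compute_units": 4},
    {"name": "cloud-proxy", "type": "CL_DEVICE_TYPE_CPU", "compute_units": 256},
]

def get_device_ids():
    """Operation 202: return the available devices (the cloud-enhanced
    driver adds the virtual device to this list)."""
    return list(range(len(DEVICES)))

def get_device_info(device_id):
    """Operation 204: query the properties of one device."""
    return DEVICES[device_id]

def select_device(min_compute_units):
    """Operations 206-208: compare device properties against the
    application's requirements and pick the first match."""
    for dev_id in get_device_ids():
        if get_device_info(dev_id)["compute_units"] >= min_compute_units:
            return dev_id
    return None

def create_context(device_id):
    """Operation 210: create a context used for further interaction
    with the device (operation 212)."""
    return {"device": DEVICES[device_id]["name"]}

# A demanding kernel: only the cloud-backed virtual device qualifies,
# so the application transparently ends up using the cloud.
dev = select_device(min_compute_units=64)
ctx = create_context(dev)
```

Note that the application's logic is unchanged whether the selected device is local or virtual; the cloud shows up only as one more entry in the device list, which is the transparency property the text above describes.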
Therefore, the potential of the cloud becomes available to OpenCL applications 104, with little or no need to adapt these applications to cloud usage in general or to specific cloud implementations. Moreover, the interaction with the cloud interface can be encapsulated in the virtual OpenCL device 110 and handled by the OpenCL driver 108. In addition, the "cloud-enabled" OpenCL framework can allow OpenCL applications to take advantage of the computing power available on server platforms, enabling new capabilities and/or user experiences across a wide range of client form factors. For example, the computing power of thin devices can be expanded to include the capabilities typically provided by server farms. In addition, the OpenCL cloud service could be offered as a new commercial service; for example, the OpenCL driver could be provided free of charge, with charges based on usage.

FIG. 3 shows a block diagram of an embodiment of a computing system 300. In various embodiments, one or more components of system 300 may be provided in various electronic devices capable of performing one or more operations discussed herein with reference to some embodiments of the invention. For example, one or more components of the system 300 may be used to perform the operations discussed with reference to FIGS. 1-2, such as accelerating OpenCL applications by using a virtual OpenCL device as an interface to a computing cloud. In addition, the various storage devices discussed herein (for example, with reference to FIGS. 3 and/or 4) may be used to store data, operation results, and the like. In one embodiment, data associated with the operations discussed herein, including the sequences of instructions executed by the processor 302, may be stored in a memory device (such as the memory 312 of FIG. 3) or in one or more caches present in the processor 302 of FIG. 3 or the processors 402/404 of FIG.
4 (e.g., an L1 cache in one embodiment).

In addition, the computing system 300 may include one or more central processing units (CPUs) 302, or processors, that communicate via an interconnection network (or bus) 304. The processor 302 may include a general-purpose processor, a network processor (that processes data communicated over the computer network 120), or another type of processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processor 302 may have a single-core or multi-core design. A processor 302 with a multi-core design may integrate different types of processor cores on the same integrated circuit (IC) die. In addition, a processor 302 with a multi-core design may be implemented as a symmetric or asymmetric multiprocessor. The processor 302 may also utilize a SIMD (single instruction, multiple data) architecture.

The chipset 306 may also communicate with the interconnection network 304. The chipset 306 may include a memory control hub (MCH) 308. The MCH 308 may include a memory controller 310 in communication with a memory 312 (where the system 300 is a client, the memory 312 may store one or more of items 104-112 of FIG. 1; where the system 300 is a cloud resource/server, the memory 312 may store one or more of items 132-136 of FIG. 1). The memory 312 may store data, including sequences of instructions executed by the processor 302 or any other device included in the computing system 300. In one embodiment of the invention, the memory 312 may include one or more volatile storage (or memory) devices, such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Non-volatile memory, such as a hard disk, may also be used.
Other devices, such as multiple CPUs and/or multiple system memories, may communicate via the interconnection network 304.

The MCH 308 may also include a graphics interface 314 that communicates with a display 316. The display 316 may be used to display the results of the operations discussed herein to a user. In an embodiment of the present invention, the display 316 may be a flat panel display that communicates with the graphics interface 314 through, for example, a signal converter that converts a digital representation of an image stored in a storage device, such as video memory or system memory, into display signals that are interpreted and displayed by the display 316. The display signals generated by the interface 314 may pass through various control devices before being interpreted by, and subsequently displayed on, the display 316.

A hub interface 318 may allow the MCH 308 to communicate with an input/output control hub (ICH) 320. The ICH 320 may provide an interface to I/O devices that communicate with the computing system 300. The ICH 320 may communicate with a bus 322 through a peripheral bridge (or controller) 324, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or another type of peripheral bridge or controller. The bridge 324 may provide a data path between the CPU 302 and peripheral devices. Other types of topologies may be used. In addition, multiple buses may communicate with the ICH 320, for example, through multiple bridges or controllers.
Moreover, in various embodiments of the present invention, other peripherals that communicate with the ICH 320 may include an integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive, a USB port, a keyboard, a mouse, a parallel port, a serial port, a floppy disk drive, digital output support (e.g., a digital video interface (DVI)), or other devices.

The bus 322 may communicate with an audio device 326, one or more disk drives 328, and a network interface device 330 that may communicate with the computer network 120. In an embodiment, the device 330 may be a NIC capable of wireless communication. Other devices may communicate via the bus 322. In addition, in some embodiments of the invention, various components (such as the network interface device 330) may communicate with the MCH 308. Furthermore, the processor 302 and the MCH 308 may be combined to form a single chip. In addition, in other embodiments of the present invention, the graphics interface 314 may be included within the MCH 308.

In addition, the computing system 300 may include volatile and/or non-volatile memory (or storage). For example, non-volatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a disk drive (such as 328), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of non-volatile machine-readable media capable of storing electronic data (e.g., including instructions).

In an embodiment, the components of system 300 may be arranged in a point-to-point (PtP) configuration, such as discussed with reference to FIG. 4. For example, processors, memory, and/or input/output devices may be interconnected through multiple point-to-point interfaces.

FIG. 4 shows a computing system 400 arranged in a point-to-point (PtP) configuration according to an embodiment of the invention. Specifically, FIG.
4 shows a system in which processors, memory, and input/output devices are interconnected through multiple point-to-point interfaces. The operations discussed with reference to FIGS. 1-3 may be performed by one or more components of system 400.

As shown in FIG. 4, the system 400 may include several processors, only two of which, processors 402 and 404, are shown for clarity. The processors 402 and 404 may each include a local memory controller hub (MCH) 406 and 408 to couple with memories 410 and 412. The memories 410 and/or 412 may store various data, such as those discussed with reference to the memory 312 of FIG. 3 (where the system 400 is a client, the memories may store one or more of items 104-112 of FIG. 1; where the system 400 is a cloud resource/server, the memories may store one or more of items 132-136 of FIG. 1).

The processors 402 and 404 may be any suitable processors, such as those discussed with reference to the processor 302 of FIG. 3. The processors 402 and 404 may exchange data via a PtP interface 414 using point-to-point (PtP) interface circuits 416 and 418, respectively. The processors 402 and 404 may each exchange data with a chipset 420 via respective PtP interfaces 422 and 424, using point-to-point interface circuits 426, 428, 430, and 432. The chipset 420 may also exchange data with a high-performance graphics circuit 434 via a high-performance graphics interface 436, using a PtP interface circuit 437.

At least one embodiment of the present invention may be provided by utilizing the processors 402 and 404. For example, the processors 402 and/or 404 may perform one or more of the operations of FIGS. 1-3. However, other embodiments of the invention may exist in other circuits, logic units, or devices within the system 400 of FIG. 4. In addition, other embodiments of the present invention may be distributed among the several circuits, logic units, or devices shown in FIG.
4.

The chipset 420 may be coupled to a bus 440 using a PtP interface circuit 441. The bus 440 may have one or more devices coupled to it, such as a bus bridge 442 and I/O devices 443. The bus bridge 442 may be coupled via a bus 444 to other devices, such as a keyboard/mouse 445, the network interface device 430 discussed with reference to FIG. 4 (such as a modem, network interface card (NIC), or similar device that may be coupled to the computer network 120), an audio I/O device, and/or a data storage device 448. The data storage device 448 may store code 449 executable by the processors 402 and/or 404.

In various embodiments of the present invention, the operations discussed herein, for example with reference to FIGS. 1-4, may be implemented as hardware (e.g., logic circuits), software (including, for example, microcode that controls the operation of a processor such as those discussed with reference to FIGS. 1-4), firmware, or a combination thereof, which may be provided as a computer program product, e.g., including a tangible machine-readable or computer-readable medium having stored thereon instructions (or software programs) used to program a computer (e.g., a processor or other logic of a computing device) to perform the operations discussed herein. The machine-readable medium may include storage devices such as those discussed herein.

In addition, such a tangible computer-readable medium may be downloaded as a computer program product, where the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a tangible propagation medium via a communication link (e.g., a bus, modem, or network connection).

Thus, although embodiments of the invention have been described in language specific to structural features and/or method actions, it should be understood that the claimed subject matter may not be limited to the specific features or actions described.
Rather, these specific features and actions are disclosed as sample forms of implementing the claimed subject matter. |
Methods and apparatuses relating to balanced transmittal of data are described. In one embodiment, an apparatus includes an encoder to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and a decoder to decode the at least one data group into output data. In another embodiment, a method includes encoding input data with an encoder into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and decoding the at least one data group into output data with a decoder. |
CLAIMS

What is claimed is:

1. An apparatus comprising: an encoder to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel; and a decoder to decode the at least one data group into output data.

2. The apparatus of claim 1, wherein the first level signal is a positive voltage and the second, lower level signal is a negative voltage.

3. The apparatus of claim 2, wherein the positive voltage and the negative voltage are equal and opposite voltages.

4. The apparatus of claim 1, wherein the first level signal is a positive voltage and the second, lower level signal is about zero volts.

5. The apparatus of claim 1, wherein each single conductor comprises a transmitter on a first end and a receiver on a second, opposing end.

6. The apparatus of any one of claims 1-5, wherein the at least one data group is a plurality of data groups.

7. The apparatus of claim 1, wherein the input data is a byte and each single conductor is a conductor of a twelve conductor parallel bus.

8. A method comprising: encoding input data with an encoder into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel; and decoding the at least one data group into output data with a decoder.

9. The method of claim 8, wherein the encoding comprises providing the first level signal as a positive voltage and the second, lower level signal as a negative voltage.

10. The method of claim 9, wherein the positive voltage and the negative voltage are equal and opposite voltages.

11. The method of claim 8, wherein the encoding comprises providing the first level signal as a positive voltage and the second, lower level signal at about zero volts.

12.
The method of claim 8, wherein each single conductor comprises a transmitter on a first end and a receiver on a second, opposing end.

13. The method of any one of claims 8-12, wherein the at least one data group is a plurality of data groups.

14. The method of claim 8, wherein the input data is a byte and each single conductor is a conductor of a twelve conductor parallel bus.

15. A system comprising: a processor comprising an encoder to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel; and a hardware component comprising a decoder to decode the at least one data group into output data.

16. The system of claim 15, wherein the hardware component is memory.

17. The system of claim 15, wherein the hardware component is an application-specific integrated circuit.

18. The system of claim 15, wherein the first level signal is a positive voltage and the second, lower level signal is a negative voltage.

19. The system of claim 18, wherein the positive voltage and the negative voltage are equal and opposite voltages.

20. The system of claim 15, wherein the first level signal is a positive voltage and the second, lower level signal is about zero volts.

21. The system of claim 15, wherein each single conductor comprises a transmitter on a first end and a receiver on a second, opposing end.

22. The system of any one of claims 15-21, wherein the at least one data group is a plurality of data groups.

23. The system of claim 15, wherein the input data is a byte and each single conductor is a conductor of a twelve conductor parallel bus.

24.
An apparatus comprising: means to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel; and means to decode the at least one data group into output data.
APPARATUSES AND METHODS FOR BALANCED TRANSMITTAL OF DATA

TECHNICAL FIELD

[0001] The disclosure relates generally to electronics, and, more specifically, an embodiment of the disclosure relates to an encoder and decoder for balanced transmittal of data.

BACKGROUND

[0002] Electronics (e.g., computer systems) generally employ one or more electrical connections to facilitate the transmittal of data (e.g., communication) between system components, such as between a processor and memory. Electrical connections may also be used to facilitate the transmittal of data between on-die and/or off-die components, such as input and output (I/O) devices, peripherals, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The present disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

[0004] Figure 1 illustrates a hardware system having an encoder and a decoder according to embodiments of the disclosure.

[0005] Figure 2 illustrates a schematic diagram of a two conductors per group encoding according to embodiments of the disclosure.

[0006] Figure 3 illustrates a schematic diagram of a four conductors per group encoding according to embodiments of the disclosure.

[0007] Figure 4 illustrates a schematic diagram of a six conductors per group encoding according to embodiments of the disclosure.

[0008] Figure 5 illustrates a schematic diagram of a twelve conductors per group encoding according to embodiments of the disclosure.

[0009] Figure 6 illustrates a block diagram of a system according to embodiments of the disclosure.

[0010] Figure 7 illustrates a flow diagram according to embodiments of the disclosure.

DETAILED DESCRIPTION

[0011] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details.
In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. [0012] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. [0013] Electronics (e.g., computer systems) generally employ one or more electrical connections (e.g., an interconnect or bus) to facilitate the transmittal of data (e.g., communication) between components, such as, but not limited to, between a processor and (e.g., random-access) memory, a first processor and a second processor, a first semiconductor chip and a second semiconductor chip (e.g., chip-to-chip), a processor (e.g., a central processing unit (CPU)) and an application-specific integrated circuit (ASIC), a processor (e.g., a central processing unit (CPU)) and a field-programmable gate array (FPGA), a processor and a peripheral, etc. Electrical connections may also be used to facilitate the transmittal of data between on-die and/or off-die components, such as input and output (I/O) devices, peripherals, etc. Certain electrical connections include parallel conductors (e.g., parallel wires, trenches, vias, or other electrically conductive paths).
One embodiment of an electrical connection is a multiple conductor parallel bus, for example, where the conductors allow parallel (e.g., concurrent) transmittal of data thereon. The term electrical connection (e.g., interconnect or bus) may generally refer to one or more separate physical connections, communication lines and/or interfaces, shared connections, and/or point-to-point connections, which may be connected by appropriate bridges, adapters, and/or controllers. [0014] However, in certain embodiments a conductor of a bundle of conductors of an electrical connection operating in parallel may experience interference, e.g., noise, caused by one or more of the other conductors. Interference may be electromagnetic interference, for example, crosstalk. Crosstalk may generally refer to the inductive coupling between two or more adjacent conductors (e.g., lines, lanes, or channels), for example, where a data signal from one or more conductors interferes with the data signal on a nearby conductor, for example, that changes the signal (e.g., voltage) on the conductor sufficiently to cause an error. Interference may be from a current fluctuation in power delivery, for example, the change in current (i) over a change in time (t), which may be referred to as (di/dt) or simultaneous switching noise. In certain embodiments, current fluctuations associated with rapid changes in power (e.g., current) levels may cause an error (e.g., an incorrect bit value). In one embodiment, the encoder, transmitter(s), receiver(s), and/or decoder are powered in the same power domain (e.g., the same local area power grid). [0015] Certain embodiments may utilize differential signaling.
Differential signaling may include having two conductors (e.g., a differential pair) for each signal to be transmitted, for example, such that for each signal the transmitting component sends on a first conductor, a complement of that signal is sent on a second conductor (e.g., such that the two components are 180 degrees out of phase with each other), e.g., having a coding efficiency of 0.5 bits per conductor (1 bit / 2 conductors). Doubling the conductors used (e.g., pin count) may cause increased die and/or system size and larger routing real estate on a die and/or system. [0016] In contrast with differential signaling, single ended signaling transmits data over a single conductor. For example, a first level signal (e.g., voltage or current) may represent one of a logic (binary) value of zero and one, and a second, lower level signal may represent the other of the zero and the one. In one embodiment, only two levels of signals are utilized on each conductor to represent data, which may be generally referred to as two-level signaling. Each component of a signal may be transitioned between two particular voltages (e.g., from a power supply or amplifier) that represent logical (e.g., digital) values of zero and one. In one embodiment, each (e.g., first and second level) signal to be transmitted has its own conductor. Although not depicted, a data buffer or buffers may be utilized herein. [0017] Certain embodiments disclosed herein include hardware apparatuses (e.g., a hardware encoder and/or a hardware decoder) and methods to encode (e.g., convert) input data (e.g., with an encoder) into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group (e.g., over single conductors in parallel), and/or decode the at least one data group into output data (e.g., with a decoder). For example, each group has an even number of signals therein.
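As a concrete sketch of the two-level, single ended signaling described above, the snippet below maps each bit onto one of two signal levels per conductor, for both an NRZ-style scheme (equal and opposite voltages) and an RZ-style scheme (lower level at about zero volts). The voltage values and function name are illustrative assumptions, not values taken from the disclosure.

```python
# Illustrative two-level, single ended signaling: one conductor per bit,
# only two signal levels ever driven. Voltage values are assumed.
V_HIGH = 0.4       # first level signal (volts)
V_LOW_NRZ = -0.4   # second, lower level for NRZ (equal and opposite)
V_LOW_RZ = 0.0     # second, lower level for RZ (about zero volts)

def drive_levels(bits, scheme="NRZ"):
    """Return the voltage driven on each conductor for the given bits."""
    low = V_LOW_NRZ if scheme == "NRZ" else V_LOW_RZ
    return [V_HIGH if b else low for b in bits]

print(drive_levels([1, 0, 1, 1], "NRZ"))  # [0.4, -0.4, 0.4, 0.4]
print(drive_levels([1, 0, 1, 1], "RZ"))   # [0.4, 0.0, 0.4, 0.4]
```

Note that with the NRZ mapping, a balanced code word (equal numbers of ones and zeros) sums to zero volts across its group, which is what keeps the net di/dt near zero.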
A set of groups may transmit the entire data input. Certain embodiments herein provide a balanced data coding scheme (e.g., for parallel conductors, including, but not limited to a bus), for example, to maximize the signaling bandwidth per conductor. [0018] Figure 1 illustrates a hardware system 100 having an encoder 104 and a decoder 112 according to embodiments of the disclosure. Data input is provided at 102 and data output at 114 (which may be the same format or a different format from the data input at 102). For example, a first computing component (e.g., a processor) may provide the data input at 102 and a second computing component (e.g., a memory) may receive the data output at 114, e.g., as a control signal. Data input at 102 and/or data output at 114 may include multiple discrete elements (e.g., each element being a bit). The number of discrete elements may be any number, e.g., represented as Min (total number of input elements) and Mout (total number of output elements) in Figure 1. In one embodiment, Min is equal to Mout. In one embodiment, Min is different than Mout. Encoder and/or decoder may be integrated into a hardware component (e.g., processor, memory, network resources, etc.). [0019] Encoder 104 may take the data input at 102 (e.g., which in one embodiment may be a single or multiple bits or bytes of data) and convert the data input at 102 into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal, e.g., with examples discussed below in reference to Figures 2-5. In one embodiment, the first level signal is a positive voltage and the second, lower level signal is a negative voltage. The positive voltage and the negative voltage may be equal and opposite voltages, for example, according to a non-return to zero (NRZ) signaling scheme, e.g., with no neutral or rest voltages.
In one embodiment, the first level signal is a positive voltage and the second, lower level signal is zero or about zero volts, e.g., according to a return to zero (RZ) signaling scheme. [0020] The data group(s) from the encoder 104 may be output to a decoder 112, for example, over single conductors with transmission in parallel, e.g., as single ended signaling. As depicted in Figure 1, the data group(s) from the encoder 104 are sent to the transmitter 106 to transmit the signals of each data group across the conductors 108 to a receiver 110. Figure 1 depicts each single conductor (e.g., conductor 1 through conductor N, with N being the total number of signals to be transmitted) having a transmitter on a first end and a receiver on a second, opposing end. In one embodiment, a single transmitter and/or receiver may be utilized for a subset or all of the conductors 108. In one embodiment, a transmitter is part of an encoder. In one embodiment, a receiver is part of a decoder. In one embodiment, a transmitter is an amplifier or other driver. In one embodiment, a receiver is a sensor detecting a (e.g., voltage) value and providing an according output. In one embodiment, each component of a hardware system includes a decoder and an encoder, for example, in a decoder and encoder unit (DEU). [0021] The receiver 110 may output the received signals of the data group(s) to the decoder, e.g., to convert the data group(s) back into the form of the data input at 102. In one embodiment, the transmitter and/or receiver may include a ground connection (not depicted), for example, where binary digits are represented as two different voltage (or current) levels on a single wire and the transmitter and/or receiver may include a built-in or external reference voltage (or current) to compare the received signal against to determine the binary value.
In one embodiment, if there are N signals to transmit, a system may include N+1 conductors, e.g., with one conductor for each signal and the plus one being the common ground. Encoding and/or decoding according to this disclosure may be achieved with hardware, software, and/or firmware. [0022] In one embodiment, an encoder is a hardware component (e.g., a finite state machine, a linear feedback shift register, or a mapping table) that includes a plurality of (e.g., digital) inputs for receiving (e.g., digital) data from an electronic component. The output of an encoder may be connected (e.g., electrically coupled) to a plurality of transmitters, e.g., each of which receives a signal from the encoder and transmits a corresponding voltage (or current) signal on its respective conductor (e.g., signal line). The encoder may encode the input data for balanced transmittal over the conductors (e.g., bus). The conductors may include receivers coupled to each of the respective conductors. Each receiver may receive the (e.g., analog) signal transmitted by a respective transmitter and may provide an input signal to a decoder. A decoder may decode the data transmitted over the conductors (e.g., bus) and transmit (e.g., digital) output data to a receiving electronic component. In one embodiment, each decoder of a plurality of decoders used is paired with a respective encoder.
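One way to picture the mapping-table style of encoder and decoder described above is as a pair of inverse lookup tables over the balanced code words of a single group. The sketch below builds such a pair for one four-conductor group; the particular assignment of input symbols to code words is an arbitrary illustrative assumption, not a mapping taken from the disclosure.

```python
from itertools import combinations

def balanced_words(n):
    """All n-signal code words with an equal number of ones and zeros."""
    return sorted(tuple(1 if i in ones else 0 for i in range(n))
                  for ones in combinations(range(n), n // 2))

# Mapping-table encoder/decoder pair for one 4-conductor group
# (symbol-to-code-word assignment is an arbitrary illustrative choice).
TABLE = dict(enumerate(balanced_words(4)))   # encode: symbol -> code word
INVERSE = {w: s for s, w in TABLE.items()}   # decode: code word -> symbol

def encode(symbol):
    return TABLE[symbol]

def decode(word):
    return INVERSE[word]

for s in range(6):                 # six balanced words per 4-conductor group
    assert decode(encode(s)) == s  # the decoder inverts the encoder
```

A hardware mapping table would behave the same way, with each code word then driven across the group's conductors as first and second level signals.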
In one embodiment, a decoder is a hardware component (e.g., a finite state machine, a linear feedback shift register, or a mapping table) that includes a plurality of (e.g., digital) outputs for sending (e.g., digital) data to an electronic component. [0023] In one embodiment, a decoder and/or encoder may switch between a first mode to encode input data and/or decode output data according to this disclosure and a second mode, for example, without encoding input data and/or decoding output data according to this disclosure (e.g., data may pass through an encoder without being encoded and/or pass through a decoder without being decoded in the second mode). [0024] Turning now to Figures 2-5, examples of encoding (e.g., converting) input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal are discussed, for example, where all groups have an equal number of logical ones and logical zeros within each group, e.g., at all transmission and/or read times. Although twelve signals (e.g., one for each conductor 1-12 of a twelve conductor electrical connection) are depicted, any number of signals and/or conductors may be utilized. The number of groups may be given by the total number of conductors (e.g., 12 in Figures 2-5) divided by the number of conductors in each group. In one embodiment, the total number of conductors divided by the number of conductors in each group does not produce a remainder (e.g., having a modulo of zero). [0025] Figure 2 illustrates a schematic diagram of a two conductors per group encoding 200 according to embodiments of the disclosure. In this embodiment, encoding produces six groups A-F of two conductors per group. This embodiment provides for 64 unique code words (e.g., data elements that may be represented by the different combinations of balanced signals that are to be transmitted across the conductors).
In this encoding, each group may have a single logical one and a single logical zero, for example, (i) a logical one for conductor 1 and a logical zero for conductor 2 in group A and (ii) a logical zero for conductor 1 and a logical one for conductor 2 in group A in Figure 2. Thus each group here has two combinations of signals and as there are 6 groups, 2 (combinations per group)^6 (groups) = 64 code words (e.g., unique combinations). The equivalent number of conductors for a parallel electrical connection to transmit 64 code words without the encoding disclosed herein is log2(64) = 6 conductors (e.g., pins). Thus the conductor (e.g., pin) efficiency is the number of conductors for a parallel electrical connection without encoding (6) divided by the number of pins used with the encoding of this embodiment (12), i.e., 6/12 = 0.5 conductor efficiency. This data is also shown in Table 1 below. [0026] Figure 3 illustrates a schematic diagram of a four conductors per group encoding 300 according to embodiments of the disclosure. In this embodiment, encoding produces three groups A-C of four conductors per group. This embodiment provides for 216 unique code words (e.g., data elements that may be represented by the different combinations of balanced signals that are to be transmitted across the conductors). In this encoding, each group may have six different balanced combinations, i.e., 1100, 1010, 1001, 0101, 0110, and 0011, and as there are 3 groups, 6^3 = 216 code words (e.g., unique combinations). The equivalent number of conductors for a parallel electrical connection to transmit 216 code words without the encoding disclosed herein is log2(216) = 7.8 conductors (e.g., pins). Thus the conductor (e.g., pin) efficiency is the number of conductors for a parallel electrical connection without encoding (7.8) divided by the number of pins used with the encoding of this embodiment (12), i.e., 7.8/12 = 0.65 conductor efficiency.
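The code-word and efficiency arithmetic quoted in these paragraphs is standard combinatorics: a balanced group of n conductors has C(n, n/2) code words, and independent groups multiply. A small sketch checking the figures above (the twelve-conductor total is taken from the figures; the function name is an illustrative assumption):

```python
from math import comb, log2

TOTAL = 12  # twelve-conductor electrical connection, as in Figures 2-5

def scheme(group_size):
    """Balanced combinations per group, total code words, and efficiency."""
    per_group = comb(group_size, group_size // 2)  # C(n, n/2) balanced words
    groups = TOTAL // group_size
    code_words = per_group ** groups
    equivalent = log2(code_words)  # conductors needed without the encoding
    return per_group, code_words, equivalent / TOTAL

for size in (2, 4, 6, 12):
    per_group, words, eff = scheme(size)
    print(size, per_group, words, round(eff, 2))
```

Running this reproduces the 64, 216, 400, and 924 code-word counts and the 0.5 through 0.82 conductor efficiencies quoted in the text.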
This data is also shown in Table 1 below. [0027] Figure 4 illustrates a schematic diagram of a six conductors per group encoding 400 according to embodiments of the disclosure. In this embodiment, encoding produces two groups A-B of six conductors per group. This embodiment provides for 400 unique code words (e.g., data elements that may be represented by the different combinations of balanced signals that are to be transmitted across the conductors). In this encoding, each group may have twenty different balanced combinations and as there are 2 groups, 20^2 = 400 code words (e.g., unique combinations). The equivalent number of conductors for a parallel electrical connection to transmit 400 code words without the encoding disclosed herein is log2(400) = 8.64 conductors (e.g., pins). Thus the conductor (e.g., pin) efficiency is the number of conductors for a parallel electrical connection without encoding (8.64) divided by the number of pins used with the encoding of this embodiment (12), i.e., 8.64/12 = 0.72 conductor efficiency. This data is also shown in Table 1 below. [0028] Figure 5 illustrates a schematic diagram of a twelve conductors per group encoding 500 according to embodiments of the disclosure. In this embodiment, encoding produces one group A of all of the twelve conductors in the group. This embodiment provides for 924 unique code words (e.g., data elements that may be represented by the different combinations of balanced signals that are to be transmitted across the conductors). In this encoding, each group may have 924 different balanced combinations and as there is 1 group, 924^1 = 924 code words (e.g., unique combinations). The equivalent number of conductors for a parallel electrical connection to transmit 924 code words without the encoding disclosed herein is log2(924) = 9.85 conductors (e.g., pins).
Thus the conductor (e.g., pin) efficiency is the number of conductors for a parallel electrical connection without encoding (9.85) divided by the number of pins used with the encoding of this embodiment (12), i.e., 9.85/12 = 0.82 conductor efficiency. This data is also shown in Table 1 below.

Table 1

Figure  Conductors per group  Code words  Equivalent conductors  Conductor efficiency
2       2                     64          6                      0.5
3       4                     216         7.8                    0.65
4       6                     400         8.64                   0.72
5       12                    924         9.85                   0.82

[0029] In one embodiment, conductor efficiency may be maximized. In one embodiment, each code word represents a (e.g., unique) combination of information, for example, such that more combinations of code words allow a higher bandwidth (e.g., either data signal group or command address signal group) for the conductors to transmit information. In one embodiment, each code word represents a (e.g., unique) request or command (e.g., a load or a store). In one embodiment of double data rate (DDR) synchronous dynamic random-access memory (SDRAM) architecture, a byte of data without the encoding disclosed herein may be transmitted by 8 data signal (DQ) conductors (e.g., wires), 2 data strobe signal (DQS) conductors, and 1 data masking signal (DM) conductor for 11 conductors (e.g., wires) in total. Certain embodiments herein may utilize 12 conductors (e.g., 1 added to the 11 conductors to form a byte of data without the balanced transmission encoding discussed herein). Other balanced encoding schemes may be utilized according to this disclosure, e.g., for a specific application. [0030] The conductors are illustrated as extending longitudinally in the same plane in Figures 2-5. In another embodiment, one or more conductors (e.g., in a group) may be grouped together (e.g., in an equally distributed manner), for example, extending longitudinally along the periphery of a circle, square, rectangle, or other polygon. The conductors may be spaced adjacent to one or more other conductors, e.g., less than about one or two diameters of a conductor apart.
The depicted conductors have a circular cross-sectional profile, although others (e.g., a square or a rectangle) may be utilized. [0031] Referring now to Figure 6, shown is a block diagram of a system 600 according to embodiments of the disclosure. The system (e.g., any, multiple, or all components thereof) may be a single system-on-a-chip (SoC). Any component may transmit data to one or more of the other components according to this disclosure. A component may transmit data with a built-in or separate encoder to a built-in or separate decoder of one or more other components, e.g., via an electrical connection (e.g., conductor) therebetween. Although certain interconnects (e.g., 695) are depicted in Figure 6, any component (depicted or not) that is to communicate with another component may utilize an electrical connection with the balanced transmission encoding discussed herein. For example, processor 610 may communicate with co-processor (e.g., core) 615 via a conductor extending directly therebetween (not depicted). Each depicted component in Figure 6 includes an optional decoder and encoder unit (DEU). In one embodiment, a DEU includes only a decoder or only an encoder. In one embodiment, a DEU includes a decoder and an encoder, e.g., to transmit and receive data. In one embodiment, a DEU is not utilized for each component and a single, centralized decoder and encoder unit may perform any encoding and/or decoding. In one embodiment, a decoder is optional. [0032] System 600 may include one or more processors (or cores) 610, 615, which are coupled to an electrical connection unit (e.g., having parallel conductors). In Figure 6, the electrical connection unit is an interconnect unit 620. In one embodiment, the interconnect unit 620 includes controller hub(s) for one or more components. In one embodiment, the interconnect unit 620 may include conductors, for example, and an encoder connected (e.g., in series) to a component that is to transmit data across the conductors.
Interconnect unit 620 may connect memory 640, co-processor 615, peripheral(s) 650, network 630 (e.g., internet), and/or input/output (I/O) devices 660. In one embodiment, peripheral 650 is an application-specific integrated circuit (ASIC) and/or a field-programmable gate array (FPGA). Additionally or alternatively, one or both of memory and graphics controllers may be integrated within the processor. Memory 640 and/or the co-processor 615 may be connected directly to the processor 610. Memory 640 may include a decoder/encoder module 640A, for example, to store code that when executed causes a processor to perform any method of this disclosure. Each processor 610, 615 may include one or more processing cores. The memory 640 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. In one embodiment, the interconnect unit 620 communicates with the processor(s) 610, 615 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface (e.g., QuickPath Interconnect (QPI)), or other conductor 695. In one embodiment, the processor(s) 610, 615 executes instructions that control data processing operations of a general type, e.g., to cause the balanced transmission encoding and/or decoding discussed herein. In certain embodiments, the balanced transmission encoding and/or decoding discussed herein may be utilized with memory interface (i/f), on-package I/O (OPIO), universal peripheral interface (UPI), Ethernet, Peripheral Component Interconnect express (PCIe), Universal Serial Bus (USB), display, and/or wireless links, e.g., a relay-link (r-link), data transmission. [0033] Figure 7 illustrates a flow diagram 700 according to embodiments of the disclosure.
Depicted flow 700 includes encoding input data with an encoder into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel 702, and decoding the at least one data group into output data with a decoder 704. [0034] In certain embodiments, a balanced encoding scheme causes the net di/dt for a conductor group or a set of conductor groups to be at or about zero at any given time (e.g., to the first order), for example, to minimize any (power delivery) simultaneous switch noise (SSN). In certain embodiments, a balanced encoding scheme may reduce crosstalk noise, for example, where the encoded data patterns (e.g., code words) are a (e.g., small) subset of all possible data patterns without encoding. In certain embodiments, a balanced, 2-level signaling (e.g., in contrast to 4-level signaling) encoding scheme may allow usage of existing circuits and/or use less power and die area. In certain embodiments, a balanced, single ended encoding scheme may have a performance advantage over non-balanced, single ended signaling, e.g., with both power delivery (PD) and crosstalk impacts considered. In certain embodiments, a balanced encoding scheme may be used on (e.g., high speed) memory I/O and other I/O interfaces for computing components. In one embodiment, a balanced, single ended encoding scheme may replace a differential interface. [0035] In one embodiment, an apparatus includes an encoder to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and a decoder to decode the at least one data group into output data. The first level signal may be a positive voltage and the second, lower level signal may be a negative voltage.
The positive voltage and the negative voltage may be equal and opposite voltages. The first level signal may be a positive voltage and the second, lower level signal may be about zero volts. Each single conductor may include a (e.g., its own) transmitter on a first end and a (e.g., its own) receiver on a second, opposing end. For example, each conductor may be powered (e.g., to send a signal) separately from the other conductors. The at least one data group may be a plurality of data groups. The input data may be a byte. Each single conductor may be a conductor of a twelve conductor parallel bus. [0036] In another embodiment, a method includes encoding input data with an encoder into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and decoding the at least one data group into output data with a decoder. The encoding may include providing the first level signal as a positive voltage and the second, lower level signal as a negative voltage. The positive voltage and the negative voltage may be equal and opposite voltages. The encoding may include providing the first level signal as a positive voltage and the second, lower level signal at about zero volts. Each single conductor may include a (e.g., its own) transmitter on a first end and a (e.g., its own) receiver on a second, opposing end. The at least one data group may be a plurality of data groups. The input data may be a byte.
Each single conductor may be a conductor of a twelve conductor parallel bus.

[0037] In yet another embodiment, a system includes a processor comprising an encoder to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and a hardware component comprising a decoder to decode the at least one data group into output data. The hardware component may be memory. The hardware component may be an application-specific integrated circuit. The first level signal may be a positive voltage and the second, lower level signal may be a negative voltage. The positive voltage and the negative voltage may be equal and opposite voltages. The first level signal may be a positive voltage and the second, lower level signal may be about zero volts. Each single conductor may include a (e.g., its own) transmitter on a first end and a (e.g., its own) receiver on a second, opposing end. The at least one data group may be a plurality of data groups. The input data may be a byte. Each single conductor may be a conductor of a twelve conductor parallel bus.

[0038] In another embodiment, an apparatus includes means to encode input data into at least one data group with each data group having an equal number of a first level signal and a second, lower level signal to transmit the at least one data group over single conductors in parallel, and means to decode the at least one data group into output data.

[0039] In yet another embodiment, an apparatus comprises a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform any method disclosed herein. An apparatus may be as described in the detailed description.
A method may be as described in the detailed description.

[0040] Embodiments (e.g., of the mechanisms) disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[0041] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[0042] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. The mechanisms described herein are not limited in scope to any particular programming language. The language may be a compiled or interpreted language.

[0043] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a non-transitory, machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, which may be generally referred to as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that make the logic or processor.

[0044] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[0045] Accordingly, embodiments of the disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
PROBLEM TO BE SOLVED: To provide a method and a system for user-level multithreading.
SOLUTION: A method according to the present techniques comprises receiving programming instructions to execute one or more shared resource threads (shreds) via an instruction set architecture (ISA). One or more instruction pointers are configured via the ISA, and the one or more shreds are executed simultaneously by a microprocessor, where the microprocessor includes a plurality of instruction sequencers.
SELECTED DRAWING: Figure 1
1. A system comprising: a microprocessor comprising a plurality of user-level multithreading registers, wherein each of the plurality of user-level multithreading registers is directly addressable by one or more application program instructions to support communication between a plurality of local threads that share application state that is not replicated, the local threads first being created by one or more user-level instructions, and wherein the user-level multithreading registers are private from other threads that do not share the application state; and a memory, coupled to the microprocessor, to store the one or more application program instructions, wherein the microprocessor is further to execute the plurality of local threads in parallel.
2. The system of claim 1, wherein the plurality of user-level multithreading registers further comprises a plurality of shared shred registers to facilitate communication between a plurality of shreds and to facilitate synchronization between the plurality of shreds.
3. The system of claim 1 or 2, wherein the plurality of user-level multithreading registers further comprises a plurality of shred control registers to manage the plurality of shreds.
4. The system of claim 1 or 2, wherein the microprocessor is to: receive programming instructions to execute one or more shreds in accordance with an instruction set architecture (ISA); configure one or more instruction sequencers via the ISA; and execute the one or more shreds in parallel.
5. An apparatus comprising a processor with user-level multithreading capabilities, the processor comprising: a first group of resources to hold per-shred application state for a first shared resource thread (shred) created by a first user-level instruction executable by a first thread; a second group of resources, replicating the resources of the first group, to hold per-shred application state for a second shred created by a second user-level instruction; a third group of resources, shared by the first shred and the second shred, to hold shared application state that is not duplicated, wherein at least part of the third group of resources is directly and atomically accessible from the first shred and the second shred by user instructions for communication between the first shred and the second shred, and wherein the resources of the first, second, and third groups are not accessible by any thread created by a privileged software entity other than the first thread; and execution resources to execute the first shred and the second shred in parallel.
6. The apparatus of claim 5, wherein the first, second, and third groups of resources are groups of registers, and wherein the first user-level instruction and the second user-level instruction are shred-creation instructions of an instruction set architecture, associated with a first user software entity and a second user software entity, respectively.
7. The apparatus of claim 6, further comprising hardware renaming means to allocate registers of the first group as private registers and registers of the third group as shared registers, based on a bit vector.
8. The apparatus of any one of claims 5 to 7, wherein the shared application state is shared by the first thread, and the first thread is created by a privileged software entity.
9. The apparatus of any one of claims 5 to 7, wherein the resources of the first group and the second group are selected from the group consisting of general purpose registers, floating point registers, SSE registers, an instruction pointer, flags, and combinations thereof, and wherein the resources of the third group are selected from the group consisting of general purpose registers, floating point registers, shared communication registers, flags, memory management registers, address translation tables, a privilege level, control registers, and combinations thereof.
Method and system for providing user-level multithreading
Embodiments of the present invention relate to the field of computer systems. In particular, embodiments herein relate to a method and system for providing user-level multithreading.
Multithreading is the ability of a program or an operating system (OS) to execute two or more sequences of instructions at a time. Each user request for a program or system service (here, a user can also be another program) is tracked as a thread with a separate identity. As programs work on behalf of the initial request for a thread and are interrupted by other requests, the state of the work for that thread is tracked until the work is completed.
Types of computer processing include single instruction stream, single data stream processing: the conventional serial von Neumann computer, which has a single stream of instructions. A second processing type is single instruction stream, multiple data stream (SIMD) processing. This scheme may include multiple arithmetic-logic processors and a single control processor. Each arithmetic-logic processor performs operations on the data in lock step, synchronized by the control processor. A third type is multiple instruction stream, single data stream (MISD) processing, which involves passing the same data stream through a linear array of processors executing different instruction streams. A fourth processing type is multiple instruction stream, multiple data stream (MIMD) processing, which uses multiple processors, each of which executes its own instruction stream to process the data stream supplied to it.
Some MIMD processors have instruction processing units with multiple instruction sequencers, and therefore multiple data streams.
The programming model employed by today's multithreaded microprocessors is the same as that of traditional shared-memory multiprocessors: multiple threads are programmed as though they ran on independent CPUs. Communication between threads is performed through main memory, and thread creation/destruction/scheduling is performed by the OS. Multithreading has not, so far, been provided to programmers in an architecturally visible manner in which threads can be accessed directly.
Embodiments of the present invention will be more fully understood from the following detailed description taken in conjunction with the drawings.
A method and system for providing user-level multithreading are disclosed. A method according to the present techniques includes receiving programming instructions to execute one or more shared resource threads (shreds) via an instruction set architecture (ISA). One or more instruction pointers are configured via the ISA, and the one or more shreds are executed simultaneously by a microprocessor.
Here, the microprocessor includes multiple instruction sequencers.
FIG. 1 is a block diagram of an exemplary computer system utilizing the present method and apparatus, according to an embodiment of the present invention. FIG. 2 illustrates an exemplary chip-level multiprocessor. FIG. 3 illustrates an exemplary simultaneous multithreaded processor. FIG. 4 illustrates an exemplary asymmetric multiprocessor. FIG. 5 illustrates an exemplary execution environment for providing user-level multithreading capabilities. FIG. 6 illustrates an exemplary relationship between shreds and shared-memory threads. FIG. 7 is a flow diagram of an exemplary process for user-level multithreading functionality.
In the following description, specific details are set forth for purposes of explanation. However, it will be apparent to those skilled in the art that these specific details are not required. Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Unless specifically stated otherwise, as apparent from the following discussion, throughout this description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," or "displaying" refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
Embodiments of the present invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium.
Such media include, but are not limited to, floppy disks, optical disks, CD-ROMs, magneto-optical disks of any kind, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The operations and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required operations. The required structure for a variety of these systems will appear from the description below. Furthermore, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the invention as described herein.
The term "user," as used throughout this specification, refers to user-level software such as application programs, non-privileged code, and similar software. User-level software is distinguished from the OS or similar privileged software. According to an embodiment of the present invention, the following description applies to the MIMD processors described above.
FIG. 1 illustrates a block diagram of an exemplary computer system 100 utilizing the present method and apparatus, according to an embodiment of the present invention. The computer system includes a processor 105. A chipset 110 provides memory and I/O functions for the system 100. More specifically, the chipset 110 includes a Graphics and Memory Controller Hub (GMCH) 115. The GMCH 115 acts as a host controller in communication with the processor 105, and further acts as a controller for the main memory 120.
According to an embodiment of the present invention, the processor 105 enables user-level multithreading extensions. The GMCH 115 also provides an Advanced Graphics Port (AGP) interface to a controller 125 coupled to it. The chipset 110 further includes an I/O Controller Hub (ICH) 135, which performs numerous I/O functions. The ICH 135 is coupled to a System Management Bus (SM Bus) 140.
The ICH 135 is coupled to a Peripheral Component Interconnect (PCI) bus 155. A super I/O (SIO) controller 170 is coupled to the ICH 135 to provide connectivity to input devices such as a keyboard and mouse 175. A general-purpose I/O (GPIO) bus 195 is coupled to the ICH 135. USB ports 200 are coupled to the ICH 135 as shown in the figure. USB devices such as printers, scanners, and joysticks can be added to the system configuration on this bus. An integrated drive electronics (IDE) bus 205 is coupled to the ICH 135 to connect IDE drives 210 to the computer system. Logically, the ICH 135 appears as multiple PCI devices within a single physical component.
The processor 105 contains an instruction set architecture. An instruction set architecture (ISA) is an abstract model of a microprocessor, such as the processor 105, consisting of state elements (registers) and instructions that operate on those state elements. By providing an abstract specification of the microprocessor's behavior, the instruction set architecture serves as a boundary between software and hardware for both the programmer and the microprocessor designer.
The increasing number of transistors available on a silicon chip has allowed the introduction of multithreading into general purpose microprocessors.
Multithreading can be implemented in two different ways: the chip-level multiprocessor (CMP) and the simultaneous multithreaded (SMT) processor. Either can be used as the processor 105.
FIG. 2 illustrates an exemplary chip-level multiprocessor, according to an embodiment of the present invention. In a chip-level multiprocessor such as processor 200, multiple CPU cores 210-213 are integrated on a single silicon chip 200. Each of the CPU cores 210-213 can execute an independent thread of execution 220-223, although some resources (such as caches) may be shared by two or more of the CPU cores 210-213.
FIG. 3 illustrates an exemplary simultaneous multithreaded processor 300, according to an embodiment of the present invention. The processor 105 can be a simultaneous multithreaded processor such as processor 300. In the simultaneous multithreaded processor 300, a single CPU core 310 can execute multiple threads of execution. The CPU core 310 appears to software as two or more processors by sharing CPU resources at an extremely fine granularity (often determining, for each resource on each clock, which thread to process).
FIG. 4 shows an exemplary asymmetric multiprocessor 400, according to an embodiment of the present invention. The processor 105 may be an asymmetric multiprocessor such as multiprocessor 400. It is possible to build a chip-level multiprocessor 400 in which the CPU cores 410-427 have different microarchitectures but the same ISA. For example, a small number of high-performance CPU cores 410-411 may be integrated together with a large number of low-power CPU cores 420-427. This kind of design can achieve high aggregate throughput together with high scalar performance. The two types of CPU cores appear to software as ordinary shared-memory threads, as shreds, or as some combination of the two.
An instruction set architecture (ISA) is an abstract model of a microprocessor, such as processor 105, consisting of state elements (registers) and instructions that act on those state elements. By providing an abstract specification of the microprocessor's behavior, the ISA serves as a boundary between software and hardware for both the programmer and the microprocessor designer. This programming model makes it possible for application programs to directly control multiple asymmetric CPU cores.
<Shared-memory programming model> Conventional multithreaded microprocessors employ the same programming model as traditional shared-memory multiprocessor systems. The programming model is as follows. The microprocessor provides multiple threads of execution to the OS. The OS uses these threads to run multiple applications ("processes") concurrently, and/or to run multiple threads from a single application ("multithreading") concurrently. In either case, the threads appear to software as independent CPUs. Main memory is shared by all threads, and communication between threads is performed through main memory. Hardware resources within the CPU may also be shared, but the sharing is hidden from software by the microarchitecture.
Although the traditional shared-memory multiprocessor programming model is widely understood and supported by many OSes and application programs, the model has several disadvantages: 1) Communication between threads is performed through main memory and is therefore quite slow. Caching can alleviate some of the delay, but cache lines must often be passed from one CPU core to another to facilitate sharing. 2) Synchronization between threads is performed using memory-based semaphores and is therefore quite slow. 3) Thread creation, destruction, suspension, and resumption require OS intervention and are therefore quite slow.
4) Because a CPU's improved multithreading capabilities would be diluted by the memory delays and OS delays mentioned above, microprocessor vendors have been unable to provide the most efficient multithreading capabilities.
<Multithreading architecture extensions> For the reasons stated above regarding conventional systems, the present method and system extend the processor architecture, through multithreading architecture extensions, to include architecturally visible multithreading capabilities. Multiple simultaneous threads of execution, multiple instruction pointers, and multiple copies of certain application state (registers) are provided within a single processing element. The multiple threads of execution are distinguishable from existing shared-memory threads and are referred to as shreds, i.e., shared resource threads.
The multithreading architecture extensions (referred to as MAX in the examples below) add support for multiple concurrent shreds to the existing architectural features. Each shred has its own instruction pointer, general registers, FP registers, branch registers, predicate registers, and certain application registers. Non-privileged instructions are provided to create and destroy shreds. Communication between shreds is performed through shared registers in addition to shared memory. Because the multithreading architecture extensions guarantee atomic access to the shared registers, the need for semaphores is reduced.
In addition, the present multithreading architecture extensions may be used with a 32-bit architecture, such as the 32-bit architecture from Intel (registered trademark), with a 64-bit architecture, such as the 64-bit architecture from Intel (registered trademark), or even with a 16-bit architecture.
According to an embodiment of the present invention, a comparison between traditional shared-memory multiprocessor threads and shreds is shown in the table below.
Table 1
Note that the multithreading architecture extensions are fundamentally different from conventional architecture extensions. Whereas all previous architecture extensions have provided more instructions and more registers (state), the multithreading architecture extensions provide more units of execution.
<Application and system state> The CPU state visible to the programmer can be divided into two categories: application state and system state. The application state is used and controlled by both the application program and the OS, whereas the system state is controlled only by the OS.
FIG. 5 illustrates an exemplary execution environment for providing user-level multithreading, according to an embodiment of the present invention. The application state held in the registers of the execution environment 600 can be summarized in the following table.
Table 2
The user-level multithreading registers 650-665 are described in more detail below.
The 32-bit architecture system state can be summarized as follows.
Table 3
For each shred, the application state is divided into two categories: per-shred application state and shared application state. The MAX programming model described here provides a unique instance of the per-shred application state for each shred, while the shared application state is shared among multiple shreds. There is only one copy of the system state; all shreds corresponding to a given thread share the same system state.
The approximate division of application and system state is presented in the following table.
Table 4
The multithreading architecture extensions provide programmable sharing or privacy of most of the application state, so that software can select the best division. Individual registers can be selected as either shared or private by means of a bit vector programmed by software. Hardware renaming means may allocate registers from either the shared pool or the private pool, as specified by the bit vector.
The overall storage requirement of MAX is less than that of a conventional simultaneous multithreaded processor or chip-level multiprocessor. While a simultaneous multithreaded processor or chip-level multiprocessor must replicate the entire application and system state to implement the traditional shared-memory multiprocessor programming model, MAX replicates only the private application state of each shred.
<Shred/thread hierarchy> Each shared-memory thread is composed of multiple shreds. Shreds and shared-memory threads form a two-level hierarchy. In another embodiment, a three-level hierarchy can be constructed from clusters of shared-memory MAX processors. The clusters communicate using message passing. The OS handles the scheduling of threads, whereas the application program handles the scheduling of shreds. Shreds are non-uniform in the sense that, viewed from any given shred, other shreds are either local or remote. The per-shred application state is replicated for each shred. The shared application and system state is common to the local shreds and is replicated for each shared-memory thread. The memory state has only one copy.
FIG. 6 illustrates an exemplary relationship between shreds and shared-memory threads, according to an embodiment of the present invention. The per-shred application state 510 is replicated for each shred.
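The bit-vector-controlled split between shared and private registers described above can be modeled in a few lines. The sketch below is purely illustrative (class and parameter names are invented, not part of the patent): bit i of a software-programmed mask decides whether architectural register i is backed by one shared copy visible to all local shreds or by a per-shred private copy.

```python
# Toy model of bit-vector-programmable register sharing (all names invented):
# bit i of share_mask = 1 -> register i is shared among local shreds,
# bit i of share_mask = 0 -> each shred gets its own private copy.

class ShredRegisterFile:
    def __init__(self, num_regs: int, num_shreds: int, share_mask: int):
        self.share_mask = share_mask
        self.shared = [0] * num_regs                                 # one shared pool
        self.private = [[0] * num_regs for _ in range(num_shreds)]   # per-shred pools

    def read(self, shred: int, reg: int) -> int:
        if (self.share_mask >> reg) & 1:
            return self.shared[reg]        # allocated from the shared pool
        return self.private[shred][reg]    # allocated from this shred's pool

    def write(self, shred: int, reg: int, value: int) -> None:
        if (self.share_mask >> reg) & 1:
            self.shared[reg] = value       # visible to all local shreds
        else:
            self.private[shred][reg] = value

rf = ShredRegisterFile(num_regs=8, num_shreds=2, share_mask=0b00000001)
rf.write(shred=0, reg=0, value=7)   # reg 0 is shared: shred 1 sees the write
rf.write(shred=0, reg=1, value=9)   # reg 1 is private: shred 1 does not
```

In hardware, the equivalent of the if/else above would be performed by the renaming logic, which picks a physical register from the shared or private pool as directed by the bit vector.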
The shared application and system state 520 is common to the local shreds and is replicated for each shared-memory thread. The memory state 530 has only one copy.
Because the system state 520 is shared among multiple shreds in the MAX programming model, those shreds belong to the same process. The multithreading architecture extensions are intended to be used by multithreaded applications, libraries, and virtual machines. For this type of software, the MAX programming model offers performance potential that cannot be achieved with conventional shared-memory threads, through an unprecedented degree of control and sharing.
Because all shreds run at the same privilege level and share the same address translation, no protection checks are required between shreds. Therefore, traditional protection mechanisms can be avoided for communication between shreds.
Because of the shared system state, the MAX programming model cannot be used to run different processes on the same thread. For this reason, the MAX programming model and the conventional shared-memory programming model coexist in the same system.
Given that a CPU provides a finite number of physical shreds, software can virtualize the number of available shreds in a manner similar to the virtualization of hardware threads. The result of virtualization is a potentially unbounded number of virtual shreds, together with a finite number of physical shreds running in parallel.
<System calls> OS calls may be processed in the conventional manner, by transferring control from the application program to the OS and performing a context switch. In the MAX architecture, one major difference is that a call to the OS by any shred suspends execution of all shreds associated with the given thread. It is the responsibility of the OS to save and restore the state of all shreds belonging to the same thread.
The additional state increases context switch overhead.
The memory footprint of a context switch increases in proportion to the number of shreds. However, the context switch time does not increase as much, because each shred saves/restores its own state in parallel with the other shreds. The context switch mechanism saves/restores state using multiple sequencers to permit such parallelism. The OS itself may use multiple shreds.
Since the cost of calling the OS increases, certain functionality that was performed by the OS migrates to the application program. This functionality includes thread maintenance and the handling of certain exceptions and interrupts.
An alternative embodiment for implementing system calls is based on the observation that while context switches are becoming more expensive, threads are becoming cheaper. In this embodiment, one thread is dedicated to running the OS and a second thread is dedicated to running the application program. When a shred of the application program executes a system call, it sends a message (via shared memory) to an OS shred and waits for a response message. In this way, a message-exchange-and-wait mechanism replaces the conventional control-transfer and context-switch mechanism. No change of address translation is required for either thread. The benefit is that a message sent to the OS by one shred does not disturb the other local shreds.
<Exceptions> In a conventional architecture, an exception suspends execution of the application program and invokes an OS exception handler. Under the MAX programming model, this behavior is undesirable, because calling the OS to suspend a given shred would result in suspending all shreds (associated with the given thread).
To solve this problem, a new user-level exception mechanism is introduced that gives the application program the first opportunity to handle many types of exceptions.
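The message-based system-call embodiment described above can be sketched with ordinary threads and a queue standing in for the shared-memory message area. This is a toy model under invented names, not the patent's mechanism: the point is only that a request/response message exchange replaces the control transfer and context switch.

```python
import queue
import threading

# Toy model of the alternative system-call embodiment (names invented): an
# application shred sends a request message to a dedicated OS thread and waits
# for a response, instead of performing a context switch.

requests = queue.Queue()   # stands in for the shared-memory message area

def os_thread():
    # Dedicated thread running the "OS": services one request, then returns.
    syscall, args, reply = requests.get()
    if syscall == "add":
        reply.put(sum(args))

def app_shred_syscall(syscall, *args):
    reply = queue.Queue()
    requests.put((syscall, args, reply))   # send the message to the OS thread
    return reply.get()                     # wait for the response message

t = threading.Thread(target=os_thread)
t.start()
result = app_shred_syscall("add", 2, 3)    # only the calling shred blocks
t.join()
```

Note that only the shred issuing the call blocks on `reply.get()`; any other shreds of the application could keep running, which is the stated benefit of this embodiment.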
The user-level exception mechanism is based on the observation that some existing exception types are ultimately serviced by the application itself. The mechanism distinguishes where an exception is reported from where it is serviced. Exceptions fall into three categories:
1. Exceptions reported to the application program and serviced by the application program. For example, a divide-by-zero exception is reported to the application that caused it and is also serviced by that application. No OS involvement is needed or desirable.
2. Exceptions reported to the application program that then require the application program to call the OS for service. A page fault caused by the application may be reported to the application, but the application program must call the OS to swap in the page.
3. Exceptions reported to the OS and serviced by the OS. For security reasons, hardware interrupts must be reported to the OS. System calls (software interrupts) clearly must be reported to the OS as well.
The following table shows exceptions in each of the three categories. The exception types "load miss in cache" and "fine-grained timer" are given as exception types relevant to an embodiment of the present invention.
Table 5
An exception reported to the application program is either handled within the application or passed to the OS for handling. In the latter case, the application program executes a system call to explicitly request service from the OS in response to the exception (such as a page fault). This contrasts with the traditional approach, in which the OS implicitly performs such handling on behalf of the application. To avoid nested exceptions, a special provision ensures that the application code relaying an exception to the OS does not itself cause an additional exception.
The user-level exception mechanism saves a minimal number of CPU registers in a shadow register set, and the processor vectors to a fixed location.

<Virtual Machines> Virtual machines and the multithreading architecture extensions of the embodiments herein impose constraints on each other. A virtual machine generates an exception whenever software attempts to access a virtualized resource, and such exception handling has a significant performance effect on shreds. In a virtual machine, execution of a privileged instruction or access to privileged processor state generates an exception. The exception must be reported to (and thereby serviced by) the virtual machine monitor. In MAX, an exception serviced by the OS (or virtual machine monitor) suspends all shreds associated with the given thread. The virtual machine monitor must therefore understand the presence of a plurality of shreds. The virtual machine architecture should minimize the number of exceptions generated for non-privileged instructions and processor resources.

<Deadlock> Because a shred can be suspended by another local shred, deadlock avoidance becomes more complicated in the MAX architecture. Application software must ensure that no deadlock results when one shred incurs an exception or system call serviced by the OS, thereby suspending all local shreds. Local (shred-to-shred) communication and synchronization are distinguished from remote (thread-to-thread) communication and synchronization. Local communication is performed using either shared memory or the shared registers 655 (shown in FIG. 5). Remote communication is performed using shared memory. Local data synchronization is performed using atomic register updates, register semaphores, or memory semaphores.
Remote data synchronization is performed using memory semaphores. Local and remote shred control (creation, destruction) is performed using the MAX instructions. Shred control cannot call the OS for wait() or yield(), because doing so could have the unintended effect of suspending all shreds on the given thread. OS calls used for thread maintenance are replaced by calls to a user-level shred library. The shred library in turn calls the OS to create and destroy threads as needed.

<Shreds and Fibers> A shred differs from a fiber as implemented in a conventional OS. The differences are summarized in the following table.
Table 6
<Hardware Implementation> A microprocessor implementation supporting the multithreading architecture extensions can take the form of a chip-level multiprocessor (CMP) or a simultaneous multithreaded (SMT) processor. Conventional CMP and SMT processors attempt to hide the sharing of CPU resources from software. In contrast, when the multithreading architecture extensions of the embodiments herein are implemented, the processor exposes the sharing as part of the architecture. To implement a MAX processor as a chip-level multiprocessor, a broadcast mechanism is used to keep the multiple copies of the system state synchronized across the CPU cores, and high-speed communication buses are introduced for the shared application and system state. Because on-chip communication is fast compared with off-chip memory access, these communication buses give the MAX processor a performance advantage over a shared-memory multiprocessor. Implementing a MAX processor as a simultaneous multithreaded processor is possible because the hardware already provides the necessary resource sharing.
A MAX implementation could also be carried out almost entirely in microcode on a multithreaded 32-bit processor. According to an embodiment, the method and system provide prioritization of system calls and exceptions (those reported to the OS) among a plurality of shreds, so that at any point in time only one shred's request is serviced. Because the system state permits the OS to handle only one service request at a time, prioritization and selection of one request are necessary. For example, suppose shred 1 and shred 2 make system calls at the same time. The prioritization means ensures that the system call made by shred 1 completes before the system call of shred 2 begins to run. For fairness, the prioritization means may employ a round-robin selection algorithm, or may use other selection algorithms.

<Scalability> The scalability of the MAX programming model is determined by: 1) the increase, as the number of shreds grows, in the amount of state that must be saved/restored during a context switch; 2) the potential parallelism lost as a result of suspending all shreds associated with a given thread; and 3) communication between shreds. The first two factors limit the practical number of shreds. Shred communication also limits scalability, because it is performed using on-chip resources. In contrast, the scalability of the traditional shared-memory multiprocessor model is limited by off-chip communication.

<Sharing Classification> The following table presents a classification of the various degrees of freedom in the architecture, implementation, and software use of shreds.
Table 7
Two different types of MAX architecture are distinguished: uniform and non-uniform.
With uniform shreds, all shreds execute the same instruction set, as in a homogeneous multiprocessor. In a manner similar to a heterogeneous multiprocessor, non-uniform shreds are also possible. For example, non-uniform shreds may be constructed between: a 32-bit processor and a network processor; or 32-bit and 64-bit processors. Similarly, the underlying microarchitecture may be either symmetric or asymmetric. An example of the latter is the chip-level multiprocessor shown in FIG., with a few large high-performance CPU cores and many small low-power CPU cores.

<Usage Models> The following table summarizes several usage models for the embodiments of the present multithreading architecture extensions.
Table 8
<Prefetch> In the prefetch usage model, the main thread spawns one or more helper threads that are used to prefetch cache lines from main memory. The helper threads are spawned in response to a cache miss on the main thread. Because an access to main memory requires several hundred to a thousand CPU clocks to complete, execution of scalar code will effectively stall for the duration of the main memory access unless architectural provisions allow execution to proceed past a main memory read that missed the cache.

<Replacement for Conventional Threads> Shreds may be used by multithreaded applications as a high-performance replacement for conventional threads. A user-level software library is provided to perform the shred management functions (creation, destruction, etc.) previously performed by the OS. The library uses the shred instructions, in addition to calling the OS when necessary to request additional threads. A call into the software library is much faster than an OS call because no context switch is needed.

<Dedicated Execution Resources for the Compiler> A compiler may use shreds in the same manner as it uses other processor resources, such as registers.
For example, the compiler may view the processor as having eight integer registers, eight floating-point registers, eight SSE registers, and four shreds. By treating shreds as a resource, the compiler allocates shreds in a manner similar to register allocation. Just as with registers, some mechanism is needed for spilling and filling shreds to and from backing storage if the application program requires more virtual shreds than the hardware provides. In conventional architectures, because there is only one flow of control, the flow of control is not usually considered a processor resource.

<Dedicated Shreds for Managed Runtime Environments> In a managed runtime environment, shreds may be dedicated to functions such as garbage collection, just-in-time compilation, and profiling. Shreds perform such functions essentially "for free," because shreds are provided as part of the instruction set architecture (ISA). The ISA is the portion of the processor visible to the programmer or compiler writer; it forms the boundary between software and hardware.

<Parallel Programming Languages> MAX provides direct support for parallel programming languages and hardware description languages. For example, an iHDL or Verilog compiler can generate code directly for multiple shreds, because the source code is explicitly parallel. The growth in threads made possible by chip-level multiprocessors leads to language support for multithreading. Such support is currently provided through OS and run-time library calls; language support for multithreading will migrate into mainstream general-purpose programming languages.

<CPU with Integrated I/O Functions> Shreds may be used to implement I/O functions such as a network coprocessor.
One important difference of a network coprocessor implemented as a shred, rather than as an I/O device, is that it appears as part of the CPU. In a conventional system, when the application program requests I/O, the application program calls the OS using an API (application program interface). The OS in turn calls a device driver, and the device driver sends the request to the I/O device. The OS bears the responsibility for queuing or serializing I/O requests from multiple application programs, ensuring that the I/O device processes only one (or a finite number of) requests at a time. This is required because the state of the I/O device is global to the system, whereas the CPU state is time-multiplexed among multiple applications. With an I/O device implemented as a heterogeneous shred, the state of the I/O device is treated as an extension of the CPU's application state. The application program directly controls both the application state of the CPU and the I/O device state. Both the application state and the I/O state are saved and restored by the OS on a context switch. The I/O device is thus time-multiplexed among several applications without adverse effects.

<Simultaneous Multi-ISA CPU> The 64-bit architecture is defined to include the 32-bit architecture application architecture, as well as a new 64-bit instruction set, through a mechanism known as "seamless". Compatibility with the 32-bit architecture instruction set enables a 64-bit architecture processor to run existing 32-bit architecture applications as well as applications of the new 64-bit architecture. Under the current definition, a 64-bit architecture CPU runs, at any given time, either a 64-bit architecture thread or a 32-bit architecture thread.
Switching between the two ISAs is accomplished via the 64-bit architecture br.ia (branch to 32-bit architecture) and the 32-bit architecture jmpe (jump to 64-bit architecture). Since the 32-bit architecture registers are mapped onto the 64-bit architecture registers, only one copy of the state is needed. It is possible to create a multi-ISA CPU that runs more than one instruction set architecture at the same time. This may be achieved by using one shred for the 64-bit architecture ISA and a second shred for the 32-bit architecture ISA. As in the case of uniform shreds, separate application state must be provided for the 64-bit architecture shred and the 32-bit architecture shred. The 64-bit architecture shred and the 32-bit architecture shred then run at the same time. Having described the features of the method and system for providing user-level multithreading through the multithreading architecture extensions above, an embodiment for a 32-bit system is now provided.

<Embodiment for a 32-bit Architecture> Although described with reference to the IA-32 architecture, the reader will understand that the methods and systems described herein may be applied to other architectures, such as the IA-64 architecture. Furthermore, the reader is directed back to FIG. for an exemplary execution environment in accordance with an embodiment of the present invention.
To bring user-level multithreading capabilities to IA-32, a small number of instructions, along with registers 650 to 660, are added to the IA-32 ISA. The multithreading architecture extensions consist of the following:
· a model-specific register 650 (MAX_SHRED_ENABLE) used by the OS or BIOS to enable/disable the extensions;
· 3 bits of CPUID extended function information indicating whether the processor implements the extensions and the number of physical shreds;
· replication of most of the application state (EAX, EBX, etc.), so that each shred has its own private copy;
· a set of shared registers SH0-SH7 655 that may be used for communication and synchronization between shreds;
· a set of shred control registers SC0-SC4 660 used for shred management.
The multithreading architecture extensions include the following instructions:
· shred creation/destruction: forkshred, haltshred, killshred, joinshred, getshred;
· communication: moves to/from the shared registers 655 (mov) and synchronizing moves to/from the shared registers 655;
· synchronization (semaphores): cmpxchgsh, xaddsh, xchgsh;
· signaling: signalshred;
· transitions to/from multi-shredded mode: entermsm, exitmsm;
· state management: shsave, shrestore;
· miscellaneous: moves to/from the shred control registers.
In addition, the following functionality is provided in the IA-32 mechanisms: the IA-32 exception mechanism saves the entire shred state and (where applicable) exits multi-shredded mode; the IA-32 IRET instruction restores the entire shred state and (where applicable) returns to multi-shredded mode; and a user-level exception mechanism is introduced.

<Configuration> A model-specific register (MSR) 650 is used to enable the multithreading architecture extensions.
The MSR is described below.
Table 9
The shred MSR 650, like other model-specific registers, can be written and read only at privilege level 0. If the multithreading architecture extensions are not enabled, execution of legacy code is restricted to shred number 0.
Table 10
<CPUID> The IA-32 CPUID instruction is modified to return an indication that the processor supports the multithreading architecture extensions, along with a count of the number of physical shreds. This is done by adding 3 bits (NSHRED) to the extended function information returned in ECX. The information returned by the CPUID instruction is given in the following tables.
Table 11
Table 12
If the multithreading architecture extensions have not been enabled (through the MAX_SHRED_ENABLE MSR), the extended function information for NSHRED returns 000.

<Architectural State> The multithreading architecture extensions place all state into one of three categories: state private to each shred, state shared among the local shreds, and state shared among all shreds. The breakdown of the IA-32 state into each category is shown in Table 2, supra. State private to a shred is replicated once per shred and is completely private to that shred. In particular, the architecture provides no instructions for individually reading or writing the private registers of one shred from another shred. The architecture provides the shsave and shrestore instructions for collectively writing and reading the private state of all shreds to and from memory; these instructions execute only in single-shredded mode. Shred-shared state is shown in Table 3, supra. The set of shared registers SH0-SH7 655 is used for communication and synchronization between shreds. These registers 655 are written and read through mov-to-shared-register and mov-from-shared-register instructions. The SH0-SH7 registers 655 store 32-bit integer values.
According to one embodiment, the 80-bit floating-point data 625 and the 128-bit SSE data 640 are shared through main memory. A set of shred control registers SC0-SC4 660 is provided. These registers are defined as follows.
Table 13
Table 14
A flag marked Y is replicated for each shred. A flag marked N has a single copy shared by all shreds. The 32-bit EFLAGS register 615 contains a group of status flags, one control flag, and a group of system flags. Immediately after initialization of the processor 105 (by asserting the RESET pin or the INIT pin), the EFLAGS register 615 is 00000002H. Bits 1, 3, 5, 15, and 22 through 31 of the register 615 are reserved, and software should not use or depend on the states of any of these bits. Some of the flags in the EFLAGS register 615 can be modified directly using dedicated instructions. There are no instructions that allow the whole register to be examined or modified directly. However, the following instructions can be used to move groups of flags to and from the procedure stack or the EAX register: LAHF, SAHF, PUSHF, PUSHFD, POPF, and POPFD. After the contents of the EFLAGS register 615 have been transferred to the procedure stack or the EAX register, the flags can be examined and modified using the processor's bit-manipulation instructions (BT, BTS, BTR, BTC). When suspending a task (using the processor's multitasking facilities), the processor automatically saves the state of the EFLAGS register 615 in the task state segment (TSS) for the task being suspended. When the processor binds itself to a new task, it loads the EFLAGS register 615 with data from the TSS of the new task. When a call to an interrupt or exception handler procedure is made, the processor automatically saves the state of the EFLAGS register 615 on the procedure stack.
When an interrupt or exception is handled using a task switch, the state of the EFLAGS register 615 is saved in the TSS for the task being suspended.

<Shred Creation/Destruction> A shred is created using the forkshred instruction. The format is as follows.
forkshred imm16, target_IP
forkshred r16, target_IP
Two forms are provided: one takes the shred number as an immediate operand, the second takes the shred number as a register operand. In either form, the target IP is specified as an immediate operand whose value is relative to the beginning of the code segment (nominally 0), not to the current IP. The encoding of forkshred imm16, target_IP is similar to that of a far jump instruction, except that a 16-bit shred number takes the place of the 16-bit selector and a 32-bit target IP offset takes the place of the 16-bit or 32-bit offset. The forkshred instruction sets the appropriate execution bit in SC0 and begins execution at the specified address. Unlike the Unix fork() system call, forkshred does not copy the state of the parent shred. The new shred begins execution with an updated EIP and with the current values of all other private registers. The new shred is expected to initialize its stack by loading ESP and to fetch its incoming parameters from the shared registers or from memory; forkshred does not automatically pass parameters. If the target shred is already running, forkshred generates a #SNA (shred not available) exception. This is a user-level exception, as described below. Software either ensures that it never attempts to start a shred that is already running, or alternatively provides a #SNA handler that stops the existing shred and then resumes execution of the forkshred. If a shred number greater than the maximum number of shreds supported by the hardware is given, a #GP(0) exception is raised. To terminate execution of the current shred, the haltshred instruction is used.
haltshred clears the current shred's execution bit in SC0 and ends execution of the current shred. The private state of the shred is maintained even while halted. Because no mechanism exists for one shred to access the private state of another shred, the private state of a halted shred is not visible; the state nevertheless persists, and becomes visible when the shred begins execution again via forkshred. To terminate execution of another shred prematurely, the killshred instruction is introduced. The format is as follows.
killshred imm16
killshred r16
According to one embodiment, the shred number is a 16-bit register or immediate operand. killshred clears the specified shred's execution bit in SC0 and terminates that shred's execution. While the shred is stopped, its private state is maintained. If the target shred is not running, killshred is silently ignored. This behavior is necessary to avoid a race between killshred and the normal termination of the target shred. After executing killshred, software is guaranteed that the target shred is no longer running. A shred may also kill itself instead of executing haltshred. If a shred number greater than the maximum number of shreds supported by the hardware is given, a #GP(0) exception is raised. To wait until a specified shred terminates (indicated by its SC0 bit being cleared), the joinshred instruction is introduced. The format is as follows.
joinshred imm16
joinshred r16
If the target shred is not running, joinshred returns immediately. This behavior avoids a race between joinshred and the normal termination of the target shred. After executing joinshred, software is guaranteed that the target shred is no longer running. A shred is also allowed to execute joinshred on itself (although this serves no purpose). If a shred number greater than the maximum number of shreds supported by the hardware is given, a #GP(0) exception is raised.
The joinshred instruction does not automatically pass a return value. So that a shred can determine its own shred number, the getshred instruction is introduced. The format is as follows.
getshred r32
getshred returns the number of the current shred. getshred may be used, for example, to access a memory array indexed by shred number. getshred zero-extends the 16-bit shred number, writing all bits of the destination register. For all of the shred creation/destruction instructions, the shred number may be given as either a register operand or an immediate operand. Execution of the immediate forms is expected to be faster than execution of the register forms, because the shred number is available at decode time rather than at execution time. With the immediate forms, the compiler allocates shred numbers; run-time allocation is used with the register forms. The following table presents a summary of the shred creation/destruction instructions.
Table 15
The forkshred, haltshred, killshred, joinshred, and getshred instructions may be executed at any privilege level. Whereas the existing IA-32 hlt instruction is privileged, haltshred is a non-privileged instruction. It is possible for the number of running shreds to become zero as a result of executing killshred or haltshred. This state (SC0 = 0) differs from the existing IA-32 halt state; it is an architecturally recognized state, but nothing useful occurs until a user-level timer interrupt is generated.

<Communication> Shreds communicate with one another through existing shared memory and through a set of registers specially introduced for that purpose. The shared registers SH0-SH7 655 are accessible by all local shreds belonging to the same thread. The SH0-SH7 registers 655 can be used to pass incoming parameters to a shred, to communicate return values from a shred, and to perform semaphore operations.
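The SC0 execution-vector semantics of forkshred, haltshred, killshred, and joinshred described above can be sketched as bit operations on an execution vector. This is an illustrative single-threaded model, not the hardware implementation; `MAX_SHREDS` and the exception classes are our stand-ins.

```python
# Illustrative model of the SC0 shred execution vector. Each bit position
# corresponds to one shred; MAX_SHREDS stands in for the hardware limit.
MAX_SHREDS = 4
sc0 = 0b0001  # only shred 0 running, as for legacy code

class ShredNotAvailable(Exception):  # models the #SNA user-level exception
    pass

def forkshred(n):
    global sc0
    if n >= MAX_SHREDS:
        raise ValueError("#GP(0): shred number exceeds hardware maximum")
    if sc0 & (1 << n):
        raise ShredNotAvailable("#SNA: target shred already running")
    sc0 |= 1 << n            # set execution bit; execution begins at target IP

def haltshred(current):
    global sc0
    sc0 &= ~(1 << current)   # clear own bit; private state persists

def killshred(n):
    global sc0
    if n >= MAX_SHREDS:
        raise ValueError("#GP(0)")
    sc0 &= ~(1 << n)         # silently ignored if target not running

def joinshred(n):
    # Real joinshred waits for the bit to clear; this single-threaded
    # model simply reports whether the target has terminated.
    return not (sc0 & (1 << n))

forkshred(2)
print(bin(sc0))      # 0b101
killshred(2)
print(joinshred(2))  # True
```

Note how the model captures the race-avoidance rule: killshred and joinshred on a non-running shred are harmless no-ops, while forkshred on a running shred faults.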
Specific shared registers 655 are assigned to each purpose by software convention. Each shared register 655 has a corresponding empty/full bit in SC3. To write and read the shared registers 655, mov-to-shared-register and mov-from-shared-register instructions are used. These can be summarized as follows.
mov r32, sh0-sh7
mov sh0-sh7, r32
The encoding of these instructions is the same as that of the existing mov instructions to/from the control registers 660 and to/from the debug registers. The mov instructions to/from the shared registers may be executed at any privilege level. With these instructions, software is expected to perform synchronization explicitly using additional instructions; mov to/from a shared register neither examines nor modifies the state of the empty/full bits in SC3. The latency of a mov to/from a shared register 655 is expected to be lower than the latency of a load from or store to shared memory. A hardware implementation is likely to be able to snoop writes by other shreds in anticipation of reads of a shared register 655. On a write to a shared register 655, the hardware must ensure the equivalent of strong ordering. In alternative embodiments, a barrier instruction could be created for shared register 655 accesses. One architectural feature is that the ordering of shared-register accesses and the ordering of memory accesses are kept separate from each other. Thus, if a shred writes to a shared register 655 and then writes to memory 120, there is no guarantee that the shared register 655 contents will become visible earlier than the shared memory contents. The reason for this definition is to enable fast access and update of, for example, a loop counter in a shared register 655 without creating unnecessary memory barriers. If software requires a barrier on both the shared registers 655 and memory, the software executes both a shared-register semaphore and a memory semaphore.
The memory semaphore also acts as a barrier, so a separate barrier would be redundant. To provide synchronization in addition to rapid communication, synchronizing mov instructions to/from the shared registers are provided. These instructions can be summarized as follows.
syncmov r32, sh0-sh7
syncmov sh0-sh7, r32
The instruction encodings parallel the existing mov instructions to/from the control registers 660 and to/from the debug registers. A synchronizing mov to a shared register 655 is the same as its asynchronous counterpart, except that before writing to the shared register 655 it waits until the empty/full bit indicates empty. After the write to the shared register 655, the empty/full bit is set to full. A synchronizing mov from a shared register 655 is the same as its asynchronous counterpart, except that before reading from the shared register 655 it waits until the empty/full bit indicates full. After the read from the shared register 655, the empty/full bit is cleared to empty. The empty/full bits can be initialized using a mov to SC3, as described below. The synchronizing mov instructions to/from the shared registers may be executed at any privilege level. The shared-register communication instructions can be summarized as follows.
Table 16
<Synchronization> A set of synchronization primitives is applied to the shared registers 655. The synchronization primitives are the same as the existing semaphore instructions, except that they act on the shared registers 655 instead of memory. The instructions are as follows.
Table 17
The synchronization primitives may be executed at any privilege level. These instructions neither examine nor modify the state of the empty/full bits in SC3.

<Starting/Ending Multi-Shredded Mode> The MAX architecture provides a mechanism for switching between multi-shredded mode and single-shredded mode.
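The empty/full handshake of the synchronizing moves just described behaves like a one-slot channel. A minimal sketch, assuming a condition-variable model of one shared register (the class and method names are ours, not the architecture's):

```python
import threading

# Illustrative model of one shared register SHx with its SC3 empty/full bit.
# syncmov-to-register waits for empty, then fills; syncmov-from-register
# waits for full, then empties.
class SharedReg:
    def __init__(self):
        self.value = 0
        self.full = False          # the SC3 empty/full bit
        self.cond = threading.Condition()

    def syncmov_to(self, value):   # models: syncmov shx, r32
        with self.cond:
            while self.full:       # wait until the bit shows empty
                self.cond.wait()
            self.value = value
            self.full = True       # bit set to full after the write
            self.cond.notify_all()

    def syncmov_from(self):        # models: syncmov r32, shx
        with self.cond:
            while not self.full:   # wait until the bit shows full
                self.cond.wait()
            self.full = False      # bit cleared to empty after the read
            self.cond.notify_all()
            return self.value

sh0 = SharedReg()
results = []
reader = threading.Thread(target=lambda: results.append(sh0.syncmov_from()))
reader.start()
sh0.syncmov_to(42)      # producer shred hands a parameter to the consumer
reader.join()
print(results)  # [42]
```

The handshake guarantees that each value written is consumed exactly once, which is why the architecture can use the same registers for both parameter passing and semaphore-style signaling.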
In single-shredded mode, the processor stops execution of all shreds except one, allowing context switches to be performed in the conventional ordered manner. SC0 indicates the current mode of operation as follows: an SC0 value containing exactly one 1 bit, in any bit position, implies single-shredded mode; an SC0 value containing any pattern other than a single 1 bit indicates multi-shredded mode. To perform a context switch, the following is required: 1) suspend all shreds except one by switching to single-shredded mode; 2) save the shred state; 3) load the new shred state; 4) resume execution of all shreds by switching to multi-shredded mode. To switch to multi-shredded mode and to single-shredded mode, entermsm and exitmsm, respectively, are used. entermsm is used to enter multi-shredded mode. Prior to executing this instruction, the state of all shreds must have been loaded. entermsm copies the new shred execution vector from SC1 into SC0 and then starts the indicated shreds. After execution of entermsm, it is possible that the contents of SC1 result in no additional shreds executing; in that case the processor remains in single-shredded mode. It is also possible, as a result of executing entermsm, that the shred that executed entermsm is no longer running. To end multi-shredded mode, exitmsm is used. exitmsm copies the current shred execution vector from SC0 into SC1. All SC0 execution bits other than the bit corresponding to the shred executing exitmsm are cleared, and all shreds other than the shred executing exitmsm are stopped. These operations are performed as an atomic sequence. The resulting SC0 state indicates single-shredded mode.
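The SC0/SC1 interaction of exitmsm and entermsm described above can be sketched with plain bit arithmetic; this is an illustrative model under our own naming, not the hardware definition.

```python
# Illustrative model of the exitmsm/entermsm mode transitions.
def is_single_shredded(sc0):
    # exactly one bit set in SC0 means single-shredded mode
    return sc0 != 0 and (sc0 & (sc0 - 1)) == 0

def exitmsm(sc0, sc1, current):
    # Atomically: copy the execution vector into SC1, then clear every
    # execution bit except the one for the shred running exitmsm.
    sc1 = sc0
    sc0 = 1 << current
    return sc0, sc1

def entermsm(sc0, sc1):
    # Copy the new execution vector from SC1 into SC0, starting those shreds.
    return sc1, sc1

sc0, sc1 = 0b1011, 0
sc0, sc1 = exitmsm(sc0, sc1, current=1)   # e.g. shred 1 takes the exception
print(is_single_shredded(sc0))            # True
sc0, sc1 = entermsm(sc0, sc1)             # resume all shreds
print(bin(sc0))                           # 0b1011
```

The round trip shows why SC1 must be preserved across the single-shredded interval: it is the only record of which shreds entermsm should restart.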
entermsm and exitmsm may be executed at any privilege level.

<State Management> The shsave and shrestore instructions are used to save and restore the collective shred state, respectively: that is, to write the private state of all shreds to memory, and to read the private state of all shreds from memory. The format is as follows.
shsave m16384
shrestore m16384
The address of the memory save area is specified by a displacement in the instruction and must be aligned on a 16-byte boundary. The memory save area is 16 kilobytes, to allow for future expansion. The memory save area extends the existing FXSAVE/FXRSTOR format by adding the integer registers. The memory save area for each shred is defined as follows.
Table 18
The state of each shred is saved/restored at the address given by the following formula: address = 512 × (shred number) + (base address). The memory save area includes the EIP and ESP of the currently running shreds. shsave writes the current EIP and ESP to memory. To avoid a branch, the shrestore instruction does not overwrite the EIP and ESP of the current shred. The shrestore function, when executed as part of IRET, does overwrite the EIP and ESP of the current shred. shsave and shrestore may be executed at any privilege level, but only in single-shredded mode. If shsave or shrestore is attempted in multi-shredded mode, a #GP(0) exception is raised. An implementation is free to use all available hardware resources to perform the save/load operations of shsave/shrestore. shrestore unconditionally loads the state of all shreds from memory. This behavior is necessary to ensure that the private state of shreds does not leak from one task to the next. shsave saves the state of all shreds to memory either unconditionally or conditionally.
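The per-shred record placement given by the formula above can be worked through directly. A small sketch (the function name is ours; the 512-byte record stride and 16 KB area size are taken from the text, with the record layout following Table 18):

```python
# Per-shred save-area address: address = 512 * shred_number + base_address.
def shred_save_address(base, shred):
    assert base % 16 == 0, "save area must be aligned on a 16-byte boundary"
    return 512 * shred + base

base = 0x10000
print(hex(shred_save_address(base, 0)))  # 0x10000
print(hex(shred_save_address(base, 3)))  # 0x10600
# The 16 KB area leaves room for 32 such 512-byte records.
print((16 * 1024) // 512)                # 32
```

This makes the "room for future expansion" point concrete: with 512 bytes per shred, a 16 KB area can describe far more shreds than the small physical counts discussed earlier.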
Some implementations may maintain non-architecturally visible dirty bits so that part or all of the shsave save operation can be skipped when the private state has not been modified. The shsave and shrestore instructions save and restore only the private shred state; saving and restoring the shared registers 655 is the responsibility of the OS. <Transfers to/from the shred control registers 660> Instructions are provided for writing to and reading from the shred control registers SC0-SC4 660. They can be summarized as follows:

mov r32, sc0-sc4
mov sc0-sc4, r32

The instruction encoding is the same as that of the existing MOV to/from control register and MOV to/from debug register instructions. The MOV to/from shred control register instructions may be executed at any privilege level. Safeguards are provided to ensure that a malicious application program cannot affect any process other than itself by writing to the shred control registers. An application program uses forkshred and joinshred rather than manipulating the contents of SC0 directly. exitmsm performs the transition from multi-shred mode to single-shred mode atomically. Using mov from SC0 to read the current shred execution state and then mov to SC0 to write a shred execution state does not yield the desired result, because the shred execution state may change between the read and the write. <OS exceptions> MAX has several implications for the IA-32 exception mechanism. First, with the user-level exception mechanism, several types of exceptions can be reported directly to the shred that caused them; this mechanism is described later. Second, the IA-32 exception mechanism is modified to properly handle multiple shreds when an exception requiring a context switch occurs. 
One problem with the conventional IA-32 exception mechanism is that it is defined to automatically save and restore the CS, EIP, SS, ESP, and EFLAGS of only one running thread. The existing IA-32 exception mechanism is therefore extended to include the functionality of the entermsm, exitmsm, shsave, and shrestore instructions. When an interrupt or exception requiring a context switch occurs, the exception mechanism does the following: 1) ends multi-shred mode by executing exitmsm, which halts all shreds other than the one causing the interrupt or exception; the OS is entered using the shred that caused the interrupt or exception; 2) saves the current state of all shreds to memory by executing shsave at the start address given by SC2; 3) performs the IA-32 context switch as currently defined. To return to a multi-shredded program, the modified IRET instruction does the following: 1) performs the IA-32 context switch as currently defined; 2) restores the current state of all shreds from memory by executing shrestore at the start address given by SC2, overwriting the EIP and ESP that were saved by the IA-32 context switch; 3) enters multi-shred mode by executing entermsm. Depending on the state of SC1, executing entermsm may leave the processor in single-shred mode. Before executing the IRET, the OS is required to load SC2 with the address of the shred-state save/restore area in memory. The OS is also required to save and restore the state of SC1, SC3, and SC4. It is possible for multiple shreds to encounter exceptions requiring OS service simultaneously. 
Because the MAX architecture can report only one OS exception at a time, the hardware must prioritize the OS exceptions across the multiple shreds, report only one, and set the state of all the other shreds such that their exception-raising instructions have not yet been executed. <User-level exceptions> MAX introduces a user-level exception mechanism that allows exceptions of certain types to be handled entirely within the application program. No OS involvement, privilege-level transition, or context switch is necessary. When a user-level exception occurs, the EIP of the next unexecuted instruction is pushed onto the stack and the processor is directed to the specified handler. The user-level exception handler performs its task and then returns via the existing RET instruction. According to one embodiment, no mechanism is provided for masking user-level exceptions, because it is assumed that an application raises a user-level exception only when it is prepared to handle it. Two instructions generate the first two user-level exceptions: signalshred and forkshred. These are described in the following sections. <Signaling> The signalshred instruction is used to send a signal to a specified shred. The format is as follows:

signalshred imm16, target IP
signalshred r16, target IP

The target shred may be specified as a register operand or an immediate operand. The encoding of the signalshred imm16, target IP instruction is the same as that of the existing far jump instruction, except that a 16-bit shred number takes the place of the 16-bit selector and the target IP takes the place of the 16-bit or 32-bit offset. 
As with a far jump, the target IP of signalshred is specified relative to the beginning of the code segment (nominally 0) rather than relative to the current IP. In response to signalshred, the target shred pushes the EIP of its next unexecuted instruction onto the stack and is directed to the specified address. A shred may send a signal to itself, in which case the effect is the same as executing a near call instruction. If the target shred is not running, the signalshred is silently ignored. If a shred number greater than the maximum number of shreds supported by the hardware is given, a #GP(0) exception occurs. The signalshred instruction may be executed at any privilege level. signalshred does not automatically pass parameters to the target shred, and no mechanism for blocking signalshred is provided. Software may therefore need to implement a blocking mechanism before issuing a signalshred, or provide a nestable signalshred handler. <Shred Not Available (#SNA)> forkshred generates a #SNA exception if the program attempts to start a shred that is already running. A software #SNA handler can execute killshred on the existing shred and then return to the forkshred instruction. The #SNA exception pushes the EIP of the forkshred instruction onto the stack and is handled by directing execution to the address given by SC4+0. The code at SC4+0 should branch to the actual handler. Exception vectors are placed at SC4+16, SC4+32, and so on. Software reserves memory up to SC4+4095 in order to cover the 256 possible user-level exceptions. The in-memory interrupt table/SC4 mechanism may be replaced with a cleaner mechanism at a later time. <Suspend/resume and shred virtualization> The multithreading architecture extensions allow user-level software to suspend or resume a shred using instruction sequences such as the following. To suspend a shred: 1) initialize the shred-state save area in memory. 
This memory area is set up by the application program for the suspend operation and is distinct from the context-switch shred-state area pointed to by SC2. 2) Send a signal to the shred, pointing it at the suspend handler. This is done with signalshred target_shred, suspend_handler_IP. 3) The suspend handler uses the existing mov, pusha, and fxsave instructions to save the private state of the shred to memory. 4) The suspend handler executes haltshred. 5) The original code uses joinshred to wait until the shred has stopped running. The shred may already be halted at the time of the suspend operation. In that case the signalshred is ignored, the suspend handler is not invoked, and joinshred does not wait. The shred-state save area in memory retains its initial value, and that initial value must point to a dummy shred that immediately executes haltshred. To resume a shred, the reverse operations are performed: 1) fork a shred that points to the resume handler, using forkshred target_shred, resume_handler_IP; 2) the resume handler uses the existing mov, popa, and fxrstor instructions to restore the private state of the shred from memory; 3) the resume handler returns to the shred via the existing RET instruction. If the resumed shred had already halted, the resume handler RETs to the dummy shred, which immediately executes haltshred. The suspend/resume capability opens the possibility of shred virtualization. Before executing forkshred, software can choose to suspend an existing shred with the same shred number; after executing joinshred, software can choose to resume an existing shred with the same shred number. The suspend/resume sequences are not re-entrant, so a critical section in software is required to ensure that only one suspend/resume operation is in progress for any given shred at any given time. 
Using these mechanisms, an application program can implement its own pre-emptive shred scheduler. In an alternative embodiment of MAX, an instruction exists to fork using the first available shred (allocforkshred r32), where r32 is written with the shred number that was allocated (whereas forkshred r32 specifies the shred number to fork). allocforkshred also returns a flag indicating whether a hardware shred is available. In another embodiment, a waitshred instruction provides wait synchronization with the shared registers (waitshred sh0-sh7, imm), providing the wait function as a single instruction. Without this instruction, a loop such as the following is needed:

loop: mov eax, sh0
and eax, mask
jz loop

In another embodiment, joinshred is given a bit mask in order to wait on multiple shreds. Without a bit mask, joinshred waits for one shred to finish, and multiple joinshreds are required in order to wait on multiple shreds. In an alternative embodiment, killshred is not used; a signalshred followed by a joinshred can be used in place of killshred, with the signalshred handler consisting of a haltshred instruction. In yet another embodiment, forkshred and signalshred can be combined; they differ only in their behavior depending on whether the shred is currently halted or running. If signalshred is allowed to start a halted shred, signalshred can potentially substitute for forkshred. FIG. 7 is a flow diagram of an exemplary user-level multithreading process according to one embodiment of the present invention. The process described next is assumed to have been initiated by an application or software program. It is described not in connection with any particular program, but as one embodiment of user-level multithreading achieved with the instructions and architecture described above. 
Furthermore, the process described below may be performed in conjunction with any microprocessor architecture, whether 16-bit, 32-bit, 64-bit, 128-bit or wider, such as a multiprocessor ISA. The multiprocessor (such as processor 105) initializes the values of the shared registers, for example as in Table 3 above (processing block 705). Processor 105 executes the forkshred instruction to create a shred (processing block 710). Multiple concurrent operations are then executed by processor 105. The main (parent) shred is executed by processor 105 (processing block 715). A joinshred operation is performed to wait for the new target shred to complete execution (processing block 730). Meanwhile, the new target shred initializes its stack, obtains incoming parameters from the shared registers and/or memory (processing block 720), and executes (processing block 721). Execution of the current target shred is terminated using the haltshred instruction (processing block 723). Processor 105 returns the execution results to the program or application from the registers in which the shred execution results are stored (processing block 735). Once all of the execution data has been returned, the process is complete (end block 799). A method and system for user-level multithreading have been disclosed. Although embodiments of the present invention have been described with reference to specific examples and subsystems, it will be apparent to those skilled in the art that the present invention is not limited to these specific examples or subsystems, but extends to other embodiments as well. Embodiments of the present invention are defined by the appended claims and are intended to include all such other embodiments. The claims of WO 2005/098624 are set forth below. 
[Claim 1] A method comprising: encountering a non-privileged user-level programming instruction; in response to the programming instruction, generating a first shred (shared-resource thread) that shares a virtual memory address space with one or more other shreds; and, in response to said programming instruction, executing the shred in parallel with at least one of said one or more other shreds; wherein the generation of the shred is implemented in hardware without operating-system intervention. [Claim 2] The method of claim 1, wherein the first shred and the one or more other shreds share a plurality of states associated with a first thread, while a second shred does not share state associated with a second thread. [Claim 3] The method of claim 1, wherein the shred and the one or more other shreds share a current privilege level and share a common address translation. [Claim 4] The method of claim 1, further comprising receiving a non-privileged user-level programming instruction that encodes a shred-destroy operation. [Claim 5] The method of claim 1, further comprising communicating between the first shred and at least one of the one or more shreds. [Claim 6] The method of claim 5, wherein the communication is performed via one or more shared registers. [Claim 7] The method of claim 5, wherein the communication is performed via a user-level shred signaling instruction. [Claim 8] The method of claim 1, wherein a user-level application schedules the first shred for execution without operating-system intervention. 
[Claim 9] The method of claim 1, further comprising, in response to receipt of a context-switch request, storing one or more shred states corresponding to the one or more shreds. [Claim 10] The method of claim 1, further comprising handling, with user-level exception handler code and without operating-system intervention, an exception that occurred during execution of the first shred. [Claim 11] An apparatus having execution resources, including a plurality of instruction sequencers, to execute a plurality of instructions, wherein the execution resources receive a non-privileged user-level instruction and, in response to the received instruction, initiate execution of a shred in parallel with one or more other shreds. [Claim 12] The apparatus of claim 11, further comprising one or more shared shred registers to facilitate communication among two or more of the shreds. [Claim 13] The apparatus of claim 12, wherein the one or more shared registers include a first register by which an operating system or BIOS can enable the multithreading architecture extensions for user-level multithreading capability. [Claim 14] The apparatus of claim 11, wherein the shred and the one or more other shreds share a current privilege level and share a common address translation. [Claim 15] The apparatus of claim 11, wherein the execution resources further, in response to the received instruction, initiate execution of the shred in parallel with the one or more other shreds without operating-system intervention. 
[Claim 16] The apparatus of claim 11, wherein the execution resources comprise one or more processor cores capable of executing a plurality of shreds in parallel. [Claim 17] A system comprising a microprocessor implementing an instruction set architecture (ISA) and capable of executing multiple concurrent shreds, and a memory, wherein the ISA comprises one or more instructions to enable user-level multithreading operation. [Claim 18] The system of claim 17, wherein the one or more instructions comprise an instruction to create a shred without operating-system intervention. [Claim 19] The system of claim 17, wherein the one or more instructions comprise an instruction to destroy a shred without operating-system intervention. [Claim 20] The system of claim 17, wherein the user-level multithreading operation comprises parallel execution of two or more shreds associated with the same thread.

Reference numerals: 701 start; 705 initialize the shared-register values; 710 execute forkshred; 715 execute the main shred; 720 read values from the shared registers; 721 execute the shred; 722 write the shred execution result to the shared registers; 723 execute haltshred; 730 execute the join operation; 735 read the values to be returned from the shred execution from the registers; 799 end |
An acoustic system, which may be ultrasonic, operates in a power-efficient idle mode, thereby reducing the power consumption required by high-frequency sampling and processing. While in idle mode, an acoustic receiver device operates with an idle sampling rate that is lower than the full sampling rate used during full operational mode, but is capable of receiving a wake-up signal from the associated acoustic transmitter. When the wake-up signal is received, the acoustic receiver switches to full operational mode by increasing the sampling rate and enables full processing. The acoustic system may be used in, e.g., an ultrasonic pointing device, location beacons, peer-to-peer communications between devices, and gesture detection. |
1. A method for operating an acoustic receiver device, comprising: in an idle mode, operating the acoustic receiver device using an idle sampling rate that is less than a full sampling rate used for sampling the full ultrasound spectrum and supporting a full operational mode of the acoustic receiver device, wherein the idle sampling rate supports only a minimum set of features used to wake the acoustic receiver device from the idle mode and is not sufficient to support the ultrasound data capture required during the full operational mode; and receiving, while in the idle mode, a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device using the idle sampling rate, wherein upon receiving the wake-up signal the acoustic receiver device switches to the full operational mode using the full sampling rate. 2. The method of claim 1, wherein the acoustic receiver device is an ultrasound receiver device, and wherein the acoustic transmitter device is an ultrasound transmitter device. 3. The method of claim 2, wherein said wake-up signal is below a minimum of the ultrasonic frequency range. 4. The method of claim 2, wherein said wake-up signal is in the ultrasonic frequency range. 5. The method of claim 1, wherein said frequency of said wake-up signal is lower than a Nyquist rate of said acoustic receiver device. 6. The method of claim 1, wherein said frequency of said wake-up signal exceeds a Nyquist rate of said acoustic receiver device and an aliasing artifact is generated at said acoustic receiver device, the aliasing artifact providing an indication to exit the idle mode. 7. The method of claim 1, wherein said idle sampling rate is 48 kHz. 8. The method of claim 1, wherein said acoustic receiver device is a mobile platform. 9. The method of claim 8, wherein said acoustic transmitter device is one of a pointing device, a location beacon, and a remote mobile platform. 10. The method of claim 8, wherein said acoustic transmitter device is co-located with 
said acoustic receiver device. 11. The method of claim 1, further comprising returning said acoustic receiver device to said idle mode. 12. An acoustic receiver device comprising: an acoustic receiver for receiving an acoustic signal from an acoustic transmitter; and a processor coupled to the acoustic receiver, the processor configured to: cause the acoustic receiver to operate in an idle mode using an idle sampling rate that is less than a full sampling rate used for sampling the full ultrasound spectrum and supporting a full operational mode of the acoustic receiver device, wherein the idle sampling rate supports only a minimum set of features used to wake the acoustic receiver device from the idle mode and is not sufficient to support the ultrasound data capture required during the full operational mode; detect, while in the idle mode, a wake-up signal received by the acoustic receiver using the idle sampling rate; and cause the acoustic receiver to operate in the full operational mode using the full sampling rate upon receipt of the wake-up signal. 13. The acoustic receiver device of claim 12, wherein the acoustic receiver is an ultrasound receiver. 14. The acoustic receiver device of claim 13, wherein said wake-up signal is below a minimum of the ultrasonic frequency range. 15. The acoustic receiver device of claim 13, wherein said wake-up signal is within an ultrasonic frequency range. 16. The acoustic receiver device of claim 12, wherein the wake-up signal has a frequency that is lower than a Nyquist rate of the acoustic receiver. 17. The acoustic receiver device of claim 12, wherein said wake-up signal has a frequency that exceeds a Nyquist rate of said acoustic receiver and produces aliasing artifacts at said acoustic receiver, the aliasing artifacts providing an indication to exit the idle mode. 18. The acoustic receiver device of claim 12, wherein said idle sampling rate is 48 kHz. 19. The acoustic receiver device of claim 12, wherein said acoustic receiver device is a mobile 
platform. 20. The acoustic receiver device of claim 19, wherein the acoustic transmitter is one of a pointing device, a location beacon, and a remote mobile platform. 21. The acoustic receiver device of claim 19, wherein the acoustic receiver device is included in an acoustic system including the acoustic transmitter. 22. The acoustic receiver device of claim 19, wherein the processor is further configured to cause the acoustic receiver to return to the idle mode. 23. An apparatus for operating an acoustic receiver device, comprising: means for operating the acoustic receiver device in an idle mode using an idle sampling rate that is less than a full sampling rate used for sampling the full ultrasound spectrum and supporting a full operational mode of the acoustic receiver device, wherein the idle sampling rate supports only a minimum set of features used to wake the acoustic receiver device from the idle mode and is not sufficient to support the ultrasound data capture required during the full operational mode; and means for receiving, while in the idle mode, a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device using the idle sampling rate, wherein upon receiving the wake-up signal the acoustic receiver device switches to the full operational mode using the full sampling rate. 24. The apparatus of claim 23, wherein the acoustic receiver device is an ultrasound receiver device, and wherein the acoustic transmitter device is an ultrasound transmitter device. 25. The apparatus of claim 23, wherein said frequency of said wake-up signal is lower than a Nyquist rate of said acoustic receiver device. 26. The apparatus of claim 23, wherein said frequency of said wake-up signal exceeds a Nyquist rate of said acoustic receiver device and aliasing artifacts are generated at said acoustic receiver device, the aliasing artifacts providing an indication to exit the idle mode. 27. The apparatus of claim 23, wherein said acoustic transmitter device is 
co-located with said acoustic receiver device. 28. A method for operating an acoustic transmitter device, comprising: transmitting a wake-up signal from the acoustic transmitter device, wherein the wake-up signal has a frequency lower than the full ultrasound spectrum transmitted during a full operational mode of the acoustic transmitter device, the frequency supporting only a minimum set of features used to wake an acoustic receiver device from an idle mode; determining that the acoustic receiver device has switched to a full operational mode upon receiving the wake-up signal; and transmitting acoustic data within the full ultrasound spectrum. 29. The method of claim 28, wherein said acoustic transmitter device is an ultrasound transmitter device, and wherein said acoustic receiver device is an ultrasound receiver device. 30. The method of claim 28, wherein determining that the acoustic receiver device is in the full operational mode comprises receiving, from the acoustic receiver device, a signal that the acoustic receiver device is in the full operational mode in response to the wake-up signal. 31. The method of claim 28, wherein said wake-up signal is transmitted in response to a received user input. 32. The method of claim 31, wherein said acoustic transmitter device is one of a pointing device, a location beacon, and a mobile platform. 33. An acoustic transmitter device comprising: an acoustic transmitter; and a processor coupled to the acoustic transmitter, the processor configured to: cause the acoustic transmitter to transmit a wake-up signal having a frequency lower than the full ultrasound spectrum transmitted during a full operational mode of the acoustic transmitter, the frequency supporting only a minimum set of features used to wake an acoustic receiver device from an idle mode; determine that the acoustic receiver has switched to the full operational mode upon receiving the wake-up signal; and cause the acoustic transmitter to transmit acoustic data in the full ultrasound 
spectrum. 34. The acoustic transmitter device of claim 33, wherein the acoustic transmitter is an ultrasound transmitter, and wherein the acoustic receiver is an ultrasound receiver device. 35. The acoustic transmitter device of claim 33, the acoustic transmitter device being included in an acoustic system including a receiver configured to receive signals from the acoustic receiver and coupled to the processor, wherein the processor is configured to determine that the acoustic receiver is in the full operational mode in response to the signal. 36. The acoustic transmitter device of claim 33, further comprising a user input element, wherein the processor is configured to cause the acoustic transmitter to transmit the wake-up signal in response to a user input received from the user input element. 37. The acoustic transmitter device of claim 33, wherein the acoustic transmitter is one of a pointing device, a location beacon, and a mobile platform. 38. An apparatus for operating an acoustic transmitter device, comprising: means for transmitting a wake-up signal from the acoustic transmitter device, wherein the wake-up signal has a frequency lower than the full ultrasound spectrum transmitted during a full operational mode of the acoustic transmitter device, the frequency supporting only a minimum set of features used to wake an acoustic receiver device from an idle mode; means for determining that the acoustic receiver device has switched to a full operational mode upon receiving the wake-up signal; and means for transmitting acoustic data within the full ultrasound spectrum. 39. The apparatus of claim 38, wherein said acoustic transmitter device is an ultrasound transmitter device, and wherein said acoustic receiver device is an ultrasound receiver device. |
Ultrasound-based mobile receiver in idle mode

Cross-reference to related applications

The present application claims priority to U.S. Patent Application Serial No. 13/290,797, entitled "…", which is assigned to the assignee of the present application and is hereby incorporated by reference in its entirety.

Background

Electronic pointing devices sometimes emit an acoustic signal, most typically an ultrasonic signal, from which the position of the pointing device can be determined. For example, a digital pen or stylus operates as a standard pen that allows a user to write on paper while transmitting predefined encoded ultrasound data that is received by a receiver and used to determine the position of the digital pen. The ultrasound data is sampled and decoded by the receiver, which may be a mobile device such as a smartphone, notebook computer, tablet PC, slate, e-reader, and the like. Based on a signal-processing algorithm, the mobile device can determine the precise location of the pointing device, and thus the digital pen can act as a data input device for the mobile device. Ultrasound-based digital pens can be used as touch-screen substitutes/supplements, high-resolution graphics input devices, navigation mice, 2D/3D game controllers, and the like. More generally, ultrasound techniques can also be used to enhance the user experience in applications such as gesture detection, finger hovering, and peer-to-peer positioning and communication. The audio digitizer (CODEC) in mobile devices used in conjunction with ultrasound technology is typically part of the audio subsystem, which has traditionally been used to sample and reproduce speech and music, all within the audible frequency range (up to about 25 kHz). The CODEC of a conventional audio system uses sampling frequencies up to 48 kHz. 
Recently, with the advent of various ultrasound technologies, CODECs are now commonly designed to support higher sampling rates, for example up to 200 kHz and above, and thus enable sampling of data from ultrasound-based devices. However, the sampling rate is directly related to power consumption, so using a higher sampling rate for ultrasound data results in increased power consumption. In addition, the processing of ultrasound samples adds to CPU utilization and therefore consumes more power.

Summary of the invention

An acoustic system, which may be ultrasonic, operates in a power-efficient idle mode, thereby reducing the power consumption required for high-frequency sampling and processing. While in the idle mode, the acoustic receiver device operates at an idle sampling rate that is lower than the full sampling rate used during the full operational mode, but is still capable of receiving a wake-up signal from the associated acoustic transmitter. Upon receiving the wake-up signal, the acoustic receiver switches to the full operational mode by increasing the sampling rate and enabling full processing. 
The acoustic system can be used, for example, in an ultrasound pointing device, in position beacons, in peer-to-peer communication between devices, and in gesture detection. In one embodiment, a method includes: operating an acoustic receiver device in an idle mode using an idle sampling rate less than a full sampling rate; receiving a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device while in the idle mode; and operating the acoustic receiver device at the full sampling rate in response to the wake-up signal. In another embodiment, an apparatus includes: an acoustic receiver for receiving an acoustic signal from an acoustic transmitter; and a processor coupled to the acoustic receiver, the processor configured to cause the acoustic receiver to operate in an idle mode using an idle sampling rate less than a full sampling rate, to detect a wake-up signal received by the acoustic receiver in the idle mode, and to cause the acoustic receiver device to operate at the full sampling rate in response to the wake-up signal. In another embodiment, an apparatus includes: means for operating an acoustic receiver device in an idle mode using an idle sampling rate less than a full sampling rate; means for receiving a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device in the idle mode; and means for operating the acoustic receiver device at the full sampling rate in response to the wake-up signal. In still another embodiment, a non-transitory computer-readable medium has stored thereon program code comprising: program code for operating an acoustic receiver device in an idle mode using an idle sampling rate less than a full sampling rate; program code for receiving a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device in the idle 
mode; and program code for operating the acoustic receiver device at the full sampling rate in response to the wake-up signal. In another embodiment, a method includes transmitting a wake-up signal from an acoustic transmitter device, wherein the wake-up signal has a frequency that is lower than a full frequency range transmitted during full operation of the acoustic transmitter device; determining that an acoustic receiver device is in a full operational mode in response to the wake-up signal; and transmitting acoustic data over the full frequency range. In another embodiment, an apparatus includes: an acoustic transmitter; and a processor coupled to the acoustic transmitter, the processor configured to cause the acoustic transmitter to transmit a wake-up signal having a frequency lower than the full frequency range transmitted during full operation of the acoustic transmitter, to determine that an acoustic receiver is in a full operational mode in response to the wake-up signal, and to cause the acoustic transmitter to transmit acoustic data over the full frequency range. In another embodiment, an apparatus includes: means for transmitting a wake-up signal from an acoustic transmitter device, wherein the wake-up signal has a frequency that is lower than a full frequency range transmitted during full operation of the acoustic transmitter device; means for determining that the acoustic receiver device is in a full operational mode in response to the wake-up signal; and means for transmitting acoustic data over the full frequency range. In still another embodiment, a non-transitory computer readable medium stores program code comprising: program code to transmit a wake-up signal from an acoustic transmitter device, wherein the wake-up signal has a frequency lower than the full frequency range transmitted during full operation of the acoustic transmitter device; program code to determine that the acoustic receiver device is in a full operational mode in response to the
wake-up signal; and program code to transmit acoustic data over the full frequency range.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an acoustic system that includes a transmitter device and a receiver device that are capable of operating in a power efficient idle mode. FIG. 2 is a flow chart illustrating the operation of a receiver device in an acoustic system. FIG. 3 is a flow chart illustrating the operation of a transmitter device in an acoustic system. FIG. 4 illustrates an acoustic system similar to the acoustic system shown in FIG. 1, but with the transmitter and receiver devices in the same location. FIG. 5 is a block diagram of a receiver device that can be used in conjunction with an ultrasound system. FIG. 6 is a block diagram of a transmitter device that can be used in conjunction with an ultrasound system.
DETAILED DESCRIPTION
FIG. 1 illustrates an acoustic system 100 that is capable of operating in a power efficient idle mode. The acoustic system 100 operates within the ultrasonic range and is therefore sometimes referred to as the ultrasound system 100. However, it should be understood that the present invention is applicable to systems that operate outside of the ultrasonic range. The ultrasound system 100 is illustrated as including: a transmitter device 110 having an ultrasound transmitter 112 that emits an acoustic signal 114; and a receiver device 120 having an acoustic (ultrasonic) receiver 122 that includes two microphones 122a, 122b that receive the acoustic signal 114. Acoustic receiver 122 may include additional microphones, such as three or more microphones, if desired. Receiver device 120 includes a CODEC 162 (or multiple CODECs) for sampling and decoding acoustic signals received by acoustic receiver 122.
The transmitter device 110 is illustrated as being in the form of a writing instrument, such as a digital pen, but it should be understood that the transmitter device 110 is not limited to a digital pen and can be any desired type of transmitting ultrasound device, including but not limited to a pointing device, such as a digital stylus or mouse (e.g., in a user interface application), a location beacon (e.g., in a navigation application), or a neighboring device similar to the receiver device 120 (e.g., in a peer-to-peer communications application). The receiver device 120 is illustrated as a mobile platform, such as a cellular or smart phone, comprising: a display 124, which can be a touch screen display; and a speaker 126 and a microphone 128, illustrated as being separate from the acoustic receiver 122 but which can be a portion of the acoustic receiver 122. Although the receiver device 120 is illustrated as a cellular telephone, it should be understood that the receiver device 120 can be any desired electronic device, including a portable computer, such as a laptop, notebook or tablet computer, or other similar device, for example an e-reader or personal communication system (PCS) device, a personal navigation device (PND), a personal information manager (PIM), a personal digital assistant (PDA), or other suitable device. While the power efficient idle mode provided in the ultrasound system 100 advantageously conserves battery life, and is therefore most beneficial in mobile devices with limited battery life, the ultrasound system 100 can also be used in less portable or stationary devices. During the idle mode, the receiver device 120 does not operate at the high sampling rate required for ultrasound data transmission and does not process ultrasound samples, but instead supports only the minimum set of features used to wake the receiver device 120 from the idle mode.
For mobile systems that are subject to extreme low power consumption requirements (e.g., "always on, always connected" mobile platforms), it is advantageous to use the idle mode to reduce the power requirements of ultrasonic communications. While in the idle mode, the receiver device 120 uses a lower sampling rate, but one sufficient to ensure that once the user wants to use the ultrasound system 100, the receiver device 120 wakes up from the idle mode and switches to the full operational mode within an acceptable time. For example, a user may indicate that the ultrasound system 100 is intended to be used by activating the user input element 170, which is illustrated as a button in FIG. 1, but which alternatively may be a motion sensor that detects vibration of the transmitter device 110 over a period of time (e.g., 0.5 s), or any other suitable mechanism. User input element 170 causes transmitter device 110 to transmit a wake-up acoustic signal. In embodiments where the transmitter device 110 is an ultrasound-transmitting navigation beacon or any similar type of device, the transmitter device 110 may periodically transmit the wake-up acoustic signal to any nearby receiver device 120. The wake-up acoustic signal may be in the range of audible frequencies, typically considered to be less than 25 kHz, or may be in the ultrasonic range, typically considered to be above 25 kHz, but is typically lower than the ultrasonic frequencies transmitted during the full mode of operation. While in the idle mode, the receiver device 120 uses a reduced sampling rate, such as 48 kHz, which is sufficient to detect the wake-up acoustic signal but insufficient to support the ultrasound data capture required during the full mode of operation. For example, during the idle mode, the receiver device 120 can have a sampling rate sufficient to detect acoustic signals within the audible frequency range rather than the ultrasonic frequency range.
However, if desired, the receiver device 120 may have a sampling rate sufficient to capture some ultrasonic frequencies (e.g., greater than 25 kHz) when in the idle mode, but less than the full ultrasound spectrum used during the full mode of operation. In other words, when in the idle mode, the receiver device 120 can have a sampling rate that permits detection of signals fully or partially within the ultrasonic frequency range detected in the full mode of operation, or of signals within an acoustic frequency range entirely outside that range. For example, if the receiver device 120 in its full operating mode has a sampling rate sufficient to detect 25 to 80 kHz, the receiver device 120 when in the idle mode can detect only acoustic frequencies less than 25 kHz, or from 25 to 30 kHz, or from 25 to 40 kHz. Although 25 to 30 kHz or 25 to 40 kHz are ultrasonic frequencies, these ranges are less than the full ultrasound range (25 kHz to 80 kHz) and require less sampling and less processing. A typical sampling rate of the CODEC 162 in the receiver device 120 for acoustic frequencies within the human hearing range is, for example, 48 kHz. Thus, while in the idle mode, the CODEC 162 in the receiver device 120 can have an idle sampling rate of 48 kHz. As discussed above, in the idle mode the receiver device 120 can alternatively operate at low ultrasonic frequencies (i.e., frequencies above human hearing but below the frequency range used in the full operating mode, or a low subset of that frequency range). Thus, while in the idle mode, the CODEC 162 in the receiver device 120 can operate at a sampling rate greater than 48 kHz but still less than the full sampling rate.
For example, the idle sampling rate can be 96 kHz, which is sufficient to support some ultrasonic frequencies but is less than a full sampling rate of, for example, 192 kHz, and thus cannot support the full ultrasonic frequency range supported in the full operating mode. A sampling rate of 96 kHz is lower than 192 kHz and therefore requires less power, while 48 kHz requires less power still. The wake-up acoustic sequence transmitted by the transmitter device 110 is within a range of frequencies detectable by the receiver device 120 when in the idle mode, such as below the ultrasonic frequencies or at the low ultrasonic frequencies. Thus, the receiver device 120 will detect a request from the transmitter device 110 to switch from the idle mode to the full mode of operation. When in the full mode of operation, the receiver device 120 will increase the sampling rate to a full sampling rate that supports ultrasound data capture, such as 192 kHz. Thus, during the idle mode, the receiver device 120 operates at low power by reducing the sampling rate and avoiding unnecessary processing. FIG. 2 is a flow chart 200 illustrating the operation of receiver device 120 in acoustic system 100. As illustrated, the receiver device 120 operates in an idle mode using an idle sampling rate that is less than the full sampling rate, where the full sampling rate is an ultrasonic sampling rate (202). The idle sampling rate used by receiver device 120 may be sufficient to sample acoustic signals with frequencies lower than, for example, the ultrasonic frequencies. Alternatively, the idle sampling rate can be high enough to sample some ultrasonic acoustic signals, but still below the full sampling rate. Receiver device 120 searches the audio samples for a predefined wake-up signal embedded at an expected frequency (e.g., 22 kHz to 24 kHz). The CODEC 162 can be placed in a restricted active state if desired, so that the CODEC 162 does not need to remain active at all times.
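The relationship between these sampling rates and the frequency bands they can capture follows from the Nyquist limit (half the sampling rate). A minimal sketch using the example rates above; the function and constant names are illustrative, not from the specification:

```python
def max_detectable_freq_hz(sampling_rate_hz: float) -> float:
    """Highest frequency recoverable without aliasing (the Nyquist limit)."""
    return sampling_rate_hz / 2.0

# Example sampling rates discussed in the text (hypothetical labels).
IDLE_RATE_HZ = 48_000    # idle mode: audible band only
MID_RATE_HZ = 96_000     # idle mode variant: a low ultrasonic subset
FULL_RATE_HZ = 192_000   # full operating mode

print(max_detectable_freq_hz(IDLE_RATE_HZ))   # 24000.0 -> below the 25 kHz ultrasonic range
print(max_detectable_freq_hz(MID_RATE_HZ))    # 48000.0 -> covers a 25-40 kHz subset
print(max_detectable_freq_hz(FULL_RATE_HZ))   # 96000.0 -> covers the full 25-80 kHz range
```

Each halving of the sampling rate halves the detectable bandwidth, which is why the 48 kHz and 96 kHz idle rates trade ultrasonic coverage for power.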
For example, when in the restricted active state, the CODEC 162 may periodically become active for a limited time, such as every 1 second, to determine whether an acoustic signal with the expected frequency of the wake-up signal (e.g., 22 kHz to 24 kHz) is found, and if not, return to an inactive state. When in the restricted active state, the CODEC 162 may be limited to sampling the particular frequencies expected in the wake-up signal. As illustrated in FIG. 2, the receiver device 120 receives a wake-up signal from an ultrasound transmitter device, wherein the wake-up signal has a frequency that can be detected by the receiver device in the idle mode (204). Receiver device 120 then switches to operate at the full sampling rate in response to the wake-up signal (206). The full sampling rate can be, for example, 192 kHz. After moving to the full mode of operation, a user of the ultrasound system 100 can receive audible or visual feedback that the ultrasound system 100 is ready for use (e.g., via the speaker 126 or display 124 in FIG. 1). If desired, the receiver device 120 can provide a signal (e.g., an infrared (IR) or radio frequency (RF) signal) to the transmitter device 110 indicating that the receiver device 120 is now in the full sampling mode and ready for use. The receiver device 120 may return to the idle mode upon a predefined condition, such as when ultrasound data is not received within a specified time period, or in response to user input. FIG. 3 is a flow diagram 300 illustrating the operation of the transmitter device 110 in the acoustic system 100. The transmitter device transmits a wake-up signal having a frequency that is lower than the full frequency range used during full operation of the transmitter device (302). For example, the wake-up signal can have a frequency of less than 25 kHz, or between 25 and 30 kHz, or between 30 and 40 kHz, while the frequencies used during full operation can be 25 to 80 kHz.
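The receiver's search for a wake-up tone in its expected band (e.g., 22 kHz to 24 kHz, blocks 202-206 above) could be sketched with a single-bin Goertzel detector. This is an illustrative assumption, not the specification's stated implementation; the function names and the detection threshold are hypothetical:

```python
import math

def goertzel_power(samples, sample_rate_hz, target_hz):
    """Power of `samples` at `target_hz` via the Goertzel algorithm."""
    k = round(len(samples) * target_hz / sample_rate_hz)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / len(samples))
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def wake_up_present(samples, sample_rate_hz, tone_hz=23_000.0, threshold=1e3):
    """True if energy at the expected wake-up tone exceeds a threshold."""
    return goertzel_power(samples, sample_rate_hz, tone_hz) > threshold

# Synthetic check: a 23 kHz tone sampled at the 48 kHz idle rate.
n, fs = 480, 48_000
tone = [math.sin(2 * math.pi * 23_000 * i / fs) for i in range(n)]
silence = [0.0] * n
print(wake_up_present(tone, fs))     # True
print(wake_up_present(silence, fs))  # False
```

A single-bin detector like this needs far less computation than a full FFT, which is consistent with the restricted active state's goal of minimal idle-mode processing.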
The transmission of the wake-up signal may be responsive to user input, such as pressing a button on the transmitter device, or movement of the transmitter device, such as shaking or tapping in a specified sequence over a period of time. In the case of an ultrasound-transmitting beacon, the wake-up signal may be periodically transmitted to any nearby mobile device without user input. The wake-up signal is a predefined signal that can be unique to each ultrasound-based device. The wake-up signal can be at the highest possible frequency below the Nyquist rate so that it does not interfere, or produces minimal interference, with the user, i.e., the signal cannot be heard. For example, the wake-up signal can be a predefined low-power signal in the range of 22 kHz to 24 kHz. However, any low-power signal can be used, provided its frequency is low enough to be detected at the idle sampling rate (e.g., 48 kHz). Transmitter device 110 determines that receiver device 120 is in the full mode of operation (304). For example, the transmitter device 110 can receive a signal, such as an IR or RF signal, from the receiver device 120 indicating that the receiver device 120 is in the full mode of operation in response to the wake-up signal. Alternatively, the transmitter device 110 may not receive signals from the receiver device 120, and after a predefined delay from transmitting the wake-up signal, may unilaterally determine that the receiver device 120 is in the full mode of operation. The ultrasound transmitter then transmits ultrasound data over the full frequency range (306). If desired, the wake-up signal need not be transmitted within a predefined signal range; rather, the receiver device 120 may use a downsampling method to detect the wake-up signal. For example, the receiver device 120 can be in an idle mode in which the idle sampling rate is lower than the full sampling rate.
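The transmitter flow of blocks 302-306 above — send the wake-up signal, wait for an ack or fall back to a predefined delay, then send data — can be sketched as follows; the callback names and timeout value are hypothetical, not from the specification:

```python
import time

def transmitter_wake_sequence(send_wake, receiver_ready, send_data,
                              ack_timeout_s=0.5):
    """Transmit a wake-up signal, wait for the receiver to reach full mode
    (via an ack or a fixed delay), then send data over the full band."""
    send_wake()                      # block 302: low-frequency wake-up tone
    deadline = time.monotonic() + ack_timeout_s
    while time.monotonic() < deadline:
        if receiver_ready():         # block 304: e.g., IR/RF ack received
            break
        time.sleep(0.01)
    # If no ack arrives, the transmitter unilaterally assumes full mode
    # after the predefined delay.
    send_data()                      # block 306: full-range ultrasound payload

log = []
transmitter_wake_sequence(
    send_wake=lambda: log.append("wake"),
    receiver_ready=lambda: True,
    send_data=lambda: log.append("data"),
)
print(log)  # ['wake', 'data']
```

The fallback-after-delay path matches the alternative in the text where no IR/RF signal is received from the receiver device.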
If the transmitter device 110 produces an acoustic signal that exceeds the Nyquist rate, e.g., a transmitted acoustic signal above 24 kHz when the idle sampling rate is 48 kHz, then an aliasing effect is produced within the receiver device 120. Thus, the presence of aliasing artifacts in the audio data sampled by the receiver device 120 can be used as an indication that the receiver device 120 should exit the idle mode. If desired, the transmitted wake-up signal can be specifically configured so that its aliasing artifacts can be decoded by the receiver device 120 to provide a reliable indication for exiting the idle mode. While an analog filter is typically present before the sampler to reject aliasing in the sampled signal, the analog filter can be configured to permit aliasing effects when the receiver device 120 is in the idle mode. Additionally, while FIG. 1 illustrates an ultrasound system 100 having separate transmitter device 110 and receiver device 120, the principles described herein can be extended, if desired, to an ultrasound system 400 with a transmitter device 410 and a receiver device 420 located in the same location, which can be used, for example, for gesture detection, as illustrated in FIG. 4. The ultrasound system 400 is similar to the ultrasound system 100 described in FIG. 1, except that the transmitter device 410 and the receiver device 420 are co-located; like elements are designated with like reference numerals. As illustrated, the transmitter device 410 produces an ultrasound signal 412 that is reflected from an object 402 (illustrated as a hand in FIG. 4) in front of the ultrasound system 400. The reflected signal 414 is returned to the receiver device 420. As the object 402 moves, the position of the object can be determined based on the reflected signal, and thus a gesture can be detected.
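The aliasing-based wake-up indication described above works because a real tone above the idle-mode Nyquist rate folds to a predictable apparent frequency after sampling. A sketch of that folding arithmetic (the function name is illustrative):

```python
def aliased_freq_hz(true_freq_hz: float, sampling_rate_hz: float) -> float:
    """Apparent frequency of a real tone after sampling, including
    frequencies above the Nyquist rate (sampling_rate / 2)."""
    n = round(true_freq_hz / sampling_rate_hz)  # nearest multiple of fs
    return abs(true_freq_hz - n * sampling_rate_hz)

# A 28 kHz ultrasonic wake-up tone sampled at the 48 kHz idle rate folds
# down to a predictable 20 kHz artifact the receiver can search for.
print(aliased_freq_hz(28_000, 48_000))  # 20000
# A tone below the Nyquist rate is unchanged.
print(aliased_freq_hz(23_000, 48_000))  # 23000
```

Because the folded frequency is deterministic, the transmitter can choose a wake-up tone whose alias lands in a band the idle-mode CODEC already monitors, which is why the analog anti-aliasing filter must be bypassed or relaxed in idle mode.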
The use of the full ultrasound spectrum is beneficial to the performance of the ultrasound system 400 in detecting a gesture, but it is not necessary for detecting the presence of an object, i.e., proximity detection. Thus, while in the idle mode, the ultrasound system 400 can emit an acoustic signal with high or low bandwidth, while the receiver device 420 uses an idle sampling rate that is lower than the full sampling rate but sufficient to detect when an object is present in front of the device. When an object is detected, the ultrasound system 400 switches from the idle mode to the full sampling mode so that gestures can be detected. FIG. 5 is a block diagram of a receiver device 120 that can be used in conjunction with the ultrasound system 100 shown in FIG. 1. Receiver device 120 includes an acoustic receiver 122 that can include two or more microphones 122a, 122b capable of receiving acoustic (e.g., ultrasound) signals from the transmitter device 110 (shown in FIG. 1). The receiver device 120 can also include a transmitter 130 for transmitting, for example, an IR or RF signal to the transmitter device 110 when the receiver device 120 has switched from the idle mode to the full mode of operation. Additionally, an ultrasound transmitter 132 may be present, i.e., in the same location as the acoustic receiver 122, which may be useful for peer-to-peer communications or for gesture detection, as discussed with respect to FIG. 4. Receiver device 120 can also include a user interface 140 that includes a display 124 that can display text or images. User interface 140 may also include a keypad 144 or other input device through which a user may enter information into the receiver device 120. If desired, the keypad 144 can be obviated by integrating a virtual keypad into the display 124 with a touch sensor. User interface 140 may also include a microphone 128 and a speaker 126, for example if the mobile platform is a cellular telephone.
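The proximity detection that triggers system 400's switch from idle to full sampling mode, described above, can be illustrated with a simple echo time-of-flight estimate; the constants, threshold, and names below are hypothetical, not from the specification:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to a reflecting object from the echo round-trip time
    (the signal travels out and back, hence the division by 2)."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

def object_present(round_trip_s: float, max_range_m: float = 0.5) -> bool:
    """Proximity test that could trigger the switch from the idle mode
    to the full sampling mode for gesture detection."""
    return echo_distance_m(round_trip_s) <= max_range_m

# An echo returning after about 1.75 ms corresponds to a hand ~30 cm away.
print(round(echo_distance_m(0.00175), 3))  # 0.3
print(object_present(0.00175))             # True
```

Only a coarse presence decision like this is needed in idle mode; the fine position tracking for gestures is deferred until the full sampling rate is active.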
The microphone 128 can be part of the acoustic receiver 122 if desired. Of course, receiver device 120 can include other components that are not relevant to the present invention. The receiver device 120 also includes a control unit 150 that is coupled to and in communication with the acoustic receiver 122, the transmitter 130, and the ultrasound transmitter 132 (if present), as well as the user interface 140 and any other desired features. Control unit 150 may be provided by a processor 152 and associated memory/storage 154, which may include software 156, as well as hardware 158 and firmware 160. Control unit 150 includes a CODEC 162 that is used to decode the acoustic signals received by the acoustic receiver 122. The CODEC 162 can be controlled to operate at the full sampling rate or at the lower idle sampling rate. More than one CODEC can be used if desired, for example with different CODECs operating at different sampling rates. Control unit 150 is also illustrated as having a filter 164 to reject aliasing in the sampled acoustic signal, but the filter can be controlled to permit aliasing effects when the receiver device 120 is in the idle mode. For clarity, the CODEC 162 is illustrated separately from the processor 152, but it may be implemented in the processor 152 based on instructions in the software 156 running in the processor 152. Filter 164 can be an analog filter, such as a high pass filter, but can also be implemented in the processor 152. Control unit 150 can be configured to implement one or more of the functions described or discussed above. It will be understood that, as used herein, the processor 152 and the CODEC 162 may, but need not necessarily, include one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), and the like. The term processor is intended to describe the functions implemented by the system rather than to limit these elements to specific hardware.
Also, as used herein, the terms "memory" and "storage" refer to any type of computer storage medium, including long-term, short-term or other memory associated with a mobile platform, and are not limited to any particular type of memory, number of memories, or type of media upon which memory is stored. Depending on the application, the methods described herein can be implemented by a variety of means. For example, the methods can be implemented in hardware 158, firmware 160, software 156, or any combination thereof. For a hardware implementation, the CODEC 162 may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof. Receiver device 120 includes means for operating an acoustic receiver device in an idle mode using an idle sampling rate that is less than the full sampling rate, which may include, for example, the CODEC 162 and the processor 152. The receiver device 120 includes means for receiving a wake-up signal from an acoustic transmitter device, the wake-up signal having a frequency detectable by the acoustic receiver device when in the idle mode, which may include the acoustic receiver 122 and the processor 152. Receiver device 120 further includes means for operating the acoustic receiver device at the full sampling rate in response to the wake-up signal, which may include, for example, the CODEC 162 and the processor 152. Receiver device 120 can further include means for decoding aliased artifacts when the frequency of the wake-up signal exceeds the Nyquist rate of the acoustic receiver, which may include, for example, the CODEC 162 and the processor 152. For a firmware and/or software implementation, the methods can be implemented with modules (e.g., procedures, functions, and so on)
that perform the functions described herein. Any machine readable medium tangibly embodying instructions can be used in implementing the methods described herein. For example, the software code can be stored in memory 154 and executed by the processor 152. The memory can be implemented within or external to the processor 152. If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer readable medium. Examples include non-transitory computer readable media encoded with a data structure and computer readable media encoded with a computer program. Computer readable media includes physical computer storage media. A storage medium can be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer readable media can comprise RAM, ROM, flash memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media. FIG. 6 is a block diagram of a transmitter device 110 that can be used in conjunction with the ultrasound system 100 shown in FIG. 1. The transmitter device 110 includes an ultrasound transmitter 112 that transmits ultrasound signals (and, if desired, signals below the ultrasonic range) to the receiver device 120. The transmitter device 110 may also include a user input element 170, which may be, for example, a motion sensor, such as one or more accelerometers, or a mechanical component such as a button or switch.
The transmitter device 110 can further include a receiver 172, such as an IR or RF receiver, for receiving a signal from the receiver device 120 indicating that the receiver device 120 has switched from the idle mode to the full mode of operation. Of course, the transmitter device 110 can include other components that are not relevant to the present invention, depending on the type of device. The transmitter device 110 also includes a control unit 180 that is coupled to and in communication with the ultrasound transmitter 112, the user input element 170, and the receiver 172. Control unit 180 may be provided by a processor 182 and associated memory/storage 184, which may include software 186, as well as hardware 188 and firmware 190. Control unit 180 includes a wake-up controller 192 to determine when the user has indicated a desire to use the ultrasound system 100, such as via the user input element 170. The wake-up controller 192 controls the ultrasound transmitter 112 to transmit a wake-up signal, as discussed above. For clarity, the wake-up controller 192 is illustrated separately from the processor 182, but it may be implemented in the processor 182 based on instructions in the software 186 running in the processor 182. Control unit 180 can be configured to implement one or more of the functions described or discussed above. It will be understood that, as used herein, the processor 182 and the wake-up controller 192 may, but need not necessarily, include one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), and the like. The term processor is intended to describe the functions implemented by the system rather than to limit these elements to specific hardware. Also, as used herein, the terms "memory" and "storage" refer to any type of computer storage medium, including long-term, short-term or other memory associated with a mobile platform, and are not limited to any particular type of memory.
Nor are they limited to any particular number of memories or type of media upon which memory is stored. Depending on the application, the methods described herein can be implemented by a variety of means. For example, the methods can be implemented in hardware 188, firmware 190, software 186, or any combination thereof. For a hardware implementation, the wake-up controller 192 can be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof. Transmitter device 110 includes means for transmitting a wake-up signal from an acoustic transmitter device, wherein the wake-up signal has a frequency that is lower than the full frequency range transmitted during full operation of the acoustic transmitter device, which may be, for example, the ultrasound transmitter 112. The transmitter device 110 can further include means for determining that the acoustic receiver device is in a full operational mode in response to the wake-up signal, which may include the processor 182 and the receiver 172. The transmitter device 110 may further comprise means for transmitting acoustic data over the full frequency range, which may be, for example, the ultrasound transmitter 112. For a firmware and/or software implementation, the methods can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine readable medium tangibly embodying instructions can be used in implementing the methods described herein. For example, the software code can be stored in memory 184 and executed by processor 182.
The memory can be implemented within or external to the processor 182. If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer readable medium. Examples include non-transitory computer readable media encoded with a data structure and computer readable media encoded with a computer program. Computer readable media includes physical computer storage media. A storage medium can be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer readable media can comprise RAM, ROM, flash memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media. Although the invention has been described in connection with specific embodiments, the invention is not limited thereto. Various adaptations and modifications can be made without departing from the scope of the invention. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
In one example, a method includes receiving, from a user application and with a wireless docking service of a wireless docking communications stack executing on a computing device, a request to discover one or more peripheral functions within wireless communication range of the computing device. The method also includes, responsive to receiving the request, discovering, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center. The method further includes consolidating the peripheral functions into a docking session for the user application. The method also includes, responsive to receiving the request, sending a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. |
CLAIMS What is claimed is: 1. A method comprising: receiving, from a user application and with a wireless docking service of a wireless docking communications stack executing on a computing device, a request to discover one or more peripheral functions within wireless communication range of the computing device; responsive to receiving the request, discovering, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center; consolidating the peripheral functions into a docking session for the user application; and responsive to receiving the request, sending a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. 2. The method of claim 1, further comprising: receiving, with the wireless docking service, a request to configure at least one of the one or more peripheral functions; and responsive to receiving the request to configure at least one of the one or more peripheral functions and by the wireless docking service, configuring the at least one of the one or more peripheral functions. 3. The method of claim 1, further comprising: receiving, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; and responsive to receiving the request to use at least one of the one or more peripheral functions, establishing respective corresponding wireless connections with the at least one of the one or more peripheral functions with the wireless docking service. 4. The method of claim 1, wherein the wireless docking communications stack comprises one or more of the following layers that have direct communication interfaces with the wireless docking service: an application service platform layer, a Wi-Fi Direct layer, a Miracast layer, a Wi-Fi Serial Bus layer, a Bluetooth layer, a Print service layer, and a Display service layer. 5.
The method of claim 1, further comprising: receiving, with the wireless docking service and from the user application, a request to create a wireless docking environment comprising at least one of the one or more peripheral functions; responsive to receiving the request to create the wireless docking environment, creating, with a wireless docking service and without communicating with a wireless docking center, the wireless docking environment that includes the at least one of the one or more peripheral functions; and sending a handle for the wireless docking environment to the user application. 6. The method of claim 5, further comprising: receiving, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establishing respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and sending the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 7. The method of claim 1, further comprising: receiving, with the wireless docking service and from the user application, a request to discover any wireless docking environments within a wireless communication range; responsive to receiving the request, discovering, with the wireless docking service and without communicating with a wireless docking center, one or more wireless docking environments that each includes one or more peripheral functions; and sending a reference to a wireless docking environment of the wireless docking environments to the user application. 8. 
The method of claim 7, further comprising: receiving, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establishing respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and sending the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 9. The method of claim 1, further comprising: receiving, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; responsive to receiving the request to use at least one of the one or more peripheral functions, establishing, for a first one of the peripheral functions and a second one of the peripheral functions, a common application service platform (ASP) session; and establishing respective corresponding payload connections for the first one of the peripheral functions and the second one of the peripheral functions, wherein each of the corresponding payload connections use the ASP session. 10. The method of claim 9, further comprising: sending, from the wireless docking service and to an application service platform layer of the wireless docking communications stack, a configuration credential for configuring the first one of the peripheral functions and a second one of the peripheral functions. 11. 
A device comprising one or more processors, wherein the one or more processors are configured to: receive, from a user application and with a wireless docking service of a wireless docking communications stack executing on the device, a request to discover one or more peripheral functions within wireless communication range of the device; responsive to receiving the request, discover, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center; consolidate the peripheral functions into a docking session for the user application; responsive to receiving the request, send a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. 12. The device of claim 11, wherein the one or more processors are further configured to: receive, with the wireless docking service, a request to configure at least one of the one or more peripheral functions; and responsive to receiving the request to configure at least one of the one or more peripheral functions and by the wireless docking service, configure the at least one of the one or more peripheral functions. 13. The device of claim 11, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; and responsive to receiving the request to use at least one of the one or more peripheral functions, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions with the wireless docking service. 14. 
The device of claim 11, wherein the wireless docking communications stack comprises one or more of the following layers that have direct communication interfaces with the wireless docking service: an application service platform layer, a Wi-Fi direct layer, a Miracast layer, a Wi-Fi Serial Bus layer, a Bluetooth layer, a Print service layer, and a Display service layer. 15. The device of claim 11, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to create a wireless docking environment comprising at least one of the one or more peripheral functions; responsive to receiving the request to create the wireless docking environment, create, with a wireless docking service and without communicating with a wireless docking center, the wireless docking environment that includes the at least one of the one or more peripheral functions; and send a handle for the wireless docking environment to the user application. 16. The device of claim 15, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and send the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 17. 
The device of claim 11, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to discover any wireless docking environments within a wireless communication range; responsive to receiving the request, discover, with the wireless docking service and without communicating with a wireless docking center, one or more wireless docking environments that each includes one or more peripheral functions; and send a reference to a wireless docking environment of the wireless docking environments to the user application. 18. The device of claim 17, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and send the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 19. 
The device of claim 11, wherein the one or more processors are further configured to: receive, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; responsive to receiving the request to use at least one of the one or more peripheral functions, establish, for a first one of the peripheral functions and a second one of the peripheral functions, a common application service platform (ASP) session; and establish respective corresponding payload connections for the first one of the peripheral functions and the second one of the peripheral functions, wherein each of the corresponding payload connections use the ASP session. 20. The device of claim 19, wherein the one or more processors are further configured to: send, from the wireless docking service and to an application service platform layer of the wireless docking communications stack, a configuration credential for configuring the first one of the peripheral functions and a second one of the peripheral functions. 21. An apparatus comprising: means for receiving, from a user application and with a wireless docking service of a wireless docking communications stack executing on the apparatus, a request to discover one or more peripheral functions within wireless communication range of the apparatus; means for, responsive to receiving the request, discovering, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center; means for consolidating the peripheral functions into a docking session for the user application; means for, responsive to receiving the request, sending a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. 22. 
The apparatus of claim 21, further comprising: means for receiving, with the wireless docking service, a request to configure at least one of the one or more peripheral functions; and means for, responsive to receiving the request to configure at least one of the one or more peripheral functions and by the wireless docking service, configuring the at least one of the one or more peripheral functions. 23. The apparatus of claim 21, further comprising: means for receiving, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; and means for, responsive to receiving the request to use at least one of the one or more peripheral functions, establishing respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions with the wireless docking service. 24. The apparatus of claim 21, wherein the wireless docking communications stack comprises one or more of the following layers that have direct communication interfaces with the wireless docking service: an application service platform layer, a Wi-Fi direct layer, a Miracast layer, a Wi-Fi Serial Bus layer, a Bluetooth layer, a Print service layer, and a Display service layer. 25. The apparatus of claim 21, further comprising: means for receiving, with the wireless docking service and from the user application, a request to create a wireless docking environment comprising at least one of the one or more peripheral functions; means for, responsive to receiving the request to create the wireless docking environment, creating, with a wireless docking service and without communicating with a wireless docking center, the wireless docking environment that includes the at least one of the one or more peripheral functions; and means for sending a handle for the wireless docking environment to the user application. 26. 
The apparatus of claim 25, further comprising: means for receiving, with the wireless docking service and from the user application, a request to use the wireless docking environment; means for, responsive to receiving the request to use the wireless docking environment, establishing respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and means for sending the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 27. The apparatus of claim 21, further comprising: means for receiving, with the wireless docking service and from the user application, a request to discover any wireless docking environments within a wireless communication range; means for, responsive to receiving the request, discovering, with the wireless docking service and without communicating with a wireless docking center, one or more wireless docking environments that each includes one or more peripheral functions; and means for sending a reference to a wireless docking environment of the wireless docking environments to the user application. 28. 
The apparatus of claim 27, further comprising: means for receiving, with the wireless docking service and from the user application, a request to use the wireless docking environment; means for, responsive to receiving the request to use the wireless docking environment, establishing respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and means for sending the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 29. The apparatus of claim 21, further comprising: means for receiving, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; means for, responsive to receiving the request to use at least one of the one or more peripheral functions, establishing, for a first one of the peripheral functions and a second one of the peripheral functions, a common application service platform (ASP) session; and means for establishing respective corresponding payload connections for the first one of the peripheral functions and the second one of the peripheral functions, wherein each of the corresponding payload connections use the ASP session. 30. The apparatus of claim 29, further comprising: means for sending, from the wireless docking service and to an application service platform layer of the wireless docking communications stack, a configuration credential for configuring the first one of the peripheral functions and a second one of the peripheral functions. 31. 
A computer-readable storage medium comprising instructions stored thereon that, when executed, configure one or more processors to: receive, from a user application and with a wireless docking service of a wireless docking communications stack executing on a computing device, a request to discover one or more peripheral functions within wireless communication range of the computing device; responsive to receiving the request, discover, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center; consolidate the peripheral functions into a docking session for the user application; responsive to receiving the request, send a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. 32. The computer-readable storage medium of claim 31, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service, a request to configure at least one of the one or more peripheral functions; and responsive to receiving the request to configure at least one of the one or more peripheral functions and by the wireless docking service, configure the at least one of the one or more peripheral functions. 33. The computer-readable storage medium of claim 31, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; and responsive to receiving the request to use at least one of the one or more peripheral functions, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions with the wireless docking service. 34. 
The computer-readable storage medium of claim 31, wherein the wireless docking communications stack comprises one or more of the following layers that have direct communication interfaces with the wireless docking service: an application service platform layer, a Wi-Fi direct layer, a Miracast layer, a Wi-Fi Serial Bus layer, a Bluetooth layer, a Print service layer, and a Display service layer. 35. The computer-readable storage medium of claim 31, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to create a wireless docking environment comprising at least one of the one or more peripheral functions; responsive to receiving the request to create the wireless docking environment, create, with a wireless docking service and without communicating with a wireless docking center, the wireless docking environment that includes the at least one of the one or more peripheral functions; and send a handle for the wireless docking environment to the user application. 36. The computer-readable storage medium of claim 35, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and send the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 37. 
The computer-readable storage medium of claim 31, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to discover any wireless docking environments within a wireless communication range; responsive to receiving the request, discover, with the wireless docking service and without communicating with a wireless docking center, one or more wireless docking environments that each includes one or more peripheral functions; and send a reference to a wireless docking environment of the wireless docking environments to the user application. 38. The computer-readable storage medium of claim 37, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to use the wireless docking environment; responsive to receiving the request to use the wireless docking environment, establish respective corresponding wireless connections with at least one peripheral device that offers at least one of the one or more peripheral functions of the wireless docking environment; and send the docking session identifier and the one or more respective references corresponding to the one or more peripheral functions to the user application in response to the request to use the wireless docking environment. 39. 
The computer-readable storage medium of claim 31, wherein the instructions further configure the one or more processors to: receive, with the wireless docking service and from the user application, a request to use at least one of the one or more peripheral functions; responsive to receiving the request to use at least one of the one or more peripheral functions, establish, for a first one of the peripheral functions and a second one of the peripheral functions, a common application service platform (ASP) session; and establish respective corresponding payload connections for the first one of the peripheral functions and the second one of the peripheral functions, wherein each of the corresponding payload connections use the ASP session. 40. The computer-readable storage medium of claim 39, wherein the instructions further configure the one or more processors to: send, from the wireless docking service and to an application service platform layer of the wireless docking communications stack, a configuration credential for configuring the first one of the peripheral functions and a second one of the peripheral functions. |
WIRELESS DOCKING SERVICE WITH DIRECT CONNECTION TO PERIPHERALS [0001] This application claims the benefit of U.S. Provisional Application No. 61/752,792, filed January 15, 2013, the entire content of which is incorporated herein by reference. TECHNICAL FIELD [0002] This disclosure relates to techniques for wireless docking between electronic devices. BACKGROUND [0003] Docking stations, which may also be referred to as "docks," are sometimes used to couple electronic devices such as laptop computers to peripherals such as monitors, keyboards, mice, printers, or other types of input or output devices. These docking stations typically require a physical connection between the electronic device and the docking station. Additionally, the electronic device and the docking station typically establish docking communications before docking functions may be used. SUMMARY [0004] In general, this disclosure describes techniques for a wireless docking system in which a wireless dockee, such as a mobile computing device, may wirelessly and directly dock with one or more peripheral devices using a wireless docking service that provides a uniform interface for controlling and/or exchanging data with peripheral devices. More specifically, the wireless docking service may provide applications executing on the wireless dockee with an interface for discovering and obtaining references to peripheral devices, for configuring one or more of the peripheral devices discovered, and for using one or more of the peripheral devices discovered according to peripheral-specific functionality. [0005] In some examples, this disclosure describes a wireless dockee that includes a processor, and a memory coupled to the processor. The memory stores instructions for causing the processor to execute a software stack that includes a wireless docking service (WDS). The WDS provides an Application Programming Interface (API) for an application executed by the processor. 
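As one purely illustrative, non-normative sketch of the dockee-side API described above, the discover-and-consolidate interaction could take a shape such as the following. Every class, method, and identifier here is hypothetical and is not drawn from any specification or reference implementation; the fake `radio_scan` callable stands in for the actual wireless discovery layer.

```python
from dataclasses import dataclass
from itertools import count

# Hypothetical sketch of a dockee-side wireless docking service (WDS)
# API. All names are illustrative only, not from any standard.

@dataclass
class PeripheralFunction:
    name: str        # e.g. "display", "printer"
    protocol: str    # e.g. "Miracast", "Wi-Fi Serial Bus"
    configured: bool = False

class WirelessDockingService:
    """Discovers peripheral functions directly (without a wireless
    docking center) and consolidates them into a docking session."""

    def __init__(self, radio_scan):
        # radio_scan: callable returning functions in wireless range.
        self._radio_scan = radio_scan
        self._session_ids = count(1)
        self._sessions = {}

    def discover(self):
        # Discover peripheral functions, consolidate them into one
        # docking session, and return the session identifier together
        # with per-function references for the user application.
        functions = list(self._radio_scan())
        session_id = next(self._session_ids)
        self._sessions[session_id] = functions
        return session_id, functions

    def configure(self, ref):
        # Configure a single peripheral function via its reference.
        ref.configured = True

# Usage: a fake scan standing in for the actual radio layer.
scan = lambda: [PeripheralFunction("display", "Miracast"),
                PeripheralFunction("printer", "Wi-Fi Serial Bus")]
wds = WirelessDockingService(scan)
session_id, refs = wds.discover()
wds.configure(refs[0])
```

The single `discover` call returning both a session identifier and per-function references mirrors the consolidation step of the method summarized in this disclosure: the user application receives one docking session covering all peripheral functions found in range.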
The API of the WDS consolidates Application Service Platform (ASP) communications and communications for Peripheral Function Protocols (PFPs), such as WiFi Serial Bus (WSB), Bluetooth, or Miracast, to provide an interface to applications executed by the wireless dockee. The WDS may enable the application to discover, configure, and select peripheral devices with which to directly dock using wireless docking sessions that are unmediated by a wireless docking center (WDC). The wireless dockee may directly connect through the WDS to selected peripheral devices and utilize the API to engage PFPs in order to control and exchange data with the corresponding selected peripherals. In other words, once the wireless dockee and the selected peripherals directly connect through the WDS, the wireless dockee may operate a wireless docking session to make use of the wireless docking ASP and PFPs for the peripherals. In this way, the wireless dockee may directly control and exchange data with the peripherals without relying on consolidated wireless docking session connections provided by a WDC. [0006] In some examples, the WDS additionally provides an API to allow the application to create a wireless docking environment (WDN) that includes a set of one or more peripherals. The API may further enable the application to discover one or more WDNs previously created, and to select one of the discovered WDNs to operate a docking session to make use of the ASP and PFPs for the peripherals. The WDS may also manage the topology for one or more direct wireless docking session connections and/or the topology of the WDNs. [0007] The techniques of this disclosure may provide one or more advantages. For example, a wireless docking system that operates without communication mediation by a WDC does not require implementation of a WDC that is interoperable with both the wireless dockee and the peripheral devices. 
This may speed development of wireless docking system protocols, reduce outlays for the additional WDC device, and/or eliminate a requirement for a standardized docking protocol to be implemented for the wireless dockee. In addition, many legacy wireless peripherals may as a consequence connect directly to the wireless dockee, unmediated by a WDC and attendant protocols. [0008] In some examples, a method includes receiving, from a user application and with a wireless docking service of a wireless docking communications stack executing on a computing device, a request to discover one or more peripheral functions within wireless communication range of the computing device. The method also includes, responsive to receiving the request, discovering, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center. The method further includes consolidating the peripheral functions into a docking session for the user application. The method also includes, responsive to receiving the request, sending a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. [0009] In another example, a device includes one or more processors. The one or more processors are configured to receive, from a user application and with a wireless docking service of a wireless docking communications stack executing on the device, a request to discover one or more peripheral functions within wireless communication range of the device. The one or more processors are also configured to, responsive to receiving the request, discover, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center. The one or more processors are further configured to consolidate the peripheral functions into a docking session for the user application. 
The one or more processors are also configured to, responsive to receiving the request, send a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. [0010] In another example, an apparatus includes means for receiving, from a user application and with a wireless docking service of a wireless docking communications stack executing on the apparatus, a request to discover one or more peripheral functions within wireless communication range of the apparatus. The apparatus also includes means for, responsive to receiving the request, discovering, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center. The apparatus further includes means for consolidating the peripheral functions into a docking session for the user application. The apparatus also includes means for, responsive to receiving the request, sending a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. [0011] In another example, a computer-readable storage medium includes instructions stored thereon that, when executed, configure one or more processors to receive, from a user application and with a wireless docking service of a wireless docking communications stack executing on a computing device, a request to discover one or more peripheral functions within wireless communication range of the computing device. The instructions further configure the one or more processors to, responsive to receiving the request, discover, with the wireless docking service, the one or more peripheral functions without communicating with a wireless docking center. The instructions further configure the one or more processors to consolidate the peripheral functions into a docking session for the user application. 
The instructions further configure the one or more processors to, responsive to receiving the request, send a docking session identifier and one or more respective references corresponding to the one or more peripheral functions to the user application. [0012] The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS [0013] FIG. 1 is a conceptual diagram of an example wireless docking system in which a wireless dockee uses a wireless docking service to communicate with multiple peripherals over one or more wireless communication channels. [0014] FIG. 2 is a conceptual diagram illustrating an example wireless docking system in which a wireless dockee uses a wireless docking service to create and use or discover and use a wireless docking environment that includes one or more peripheral devices. [0015] FIG. 3 is a conceptual diagram illustrating an example wireless docking communications stack that includes a wireless docking service by which a wireless dockee may directly communicate with one or more peripheral devices. [0016] FIGS. 4A-4D illustrate example software stacks for various peripheral devices used by a computing device that implements a wireless docking communications stack for establishing unmediated docking sessions with the peripheral devices to consolidate one or more peripheral functions into a common docking session in accordance with techniques of this disclosure. [0017] FIGS. 
5A-5C depict a flow diagram for example call flows by which a user application executing on a computing device uses a wireless docking service to exchange communications with peripherals, unmediated by a wireless docking center, to discover, configure, and select peripherals to establish and operate a consolidated docking session, in accordance with one or more examples of this disclosure. [0018] FIGS. 6A-6C depict a call flow diagram for example call flows by which a user application executing on a computing device uses a wireless docking service to exchange communications with peripherals, unmediated by a wireless docking center, to discover, configure, and select peripherals to establish and operate a consolidated docking session, in accordance with one or more examples of this disclosure. [0019] FIG. 7 depicts a call flow diagram for example call flows for creating a persistent wireless docking environment, in accordance with techniques described in this disclosure. [0020] FIG. 8 depicts a call flow diagram for example call flows for discovering available peripherals and using the discovered peripherals to create a persistent wireless docking environment, in accordance with techniques described in this disclosure. [0021] FIG. 9 depicts a call flow diagram for example call flows for discovering available peripherals and using the discovered peripherals to create a persistent wireless docking environment, in accordance with techniques described in this disclosure. [0022] FIG. 10 depicts a call flow diagram for example call flows for using a previously persisted wireless docking environment that includes one or more peripheral functions, in accordance with techniques described in this disclosure. [0023] FIG. 11 depicts a call flow diagram for example call flows for using a previously persisted wireless docking environment that includes one or more peripheral functions, in accordance with techniques described in this disclosure. [0024] FIG. 
12 is a block diagram illustrating an example instance of a computing device operating according to techniques described in this disclosure. [0025] Like reference characters denote like elements throughout the figures and text. DETAILED DESCRIPTION [0026] This disclosure describes an architecture for wireless docking without a wireless docking center. The Wireless Docking Special Interest Group (SIG) has been developing an interoperable wireless docking solution in which a Wireless Docking Center (WDC) manages its peripherals and provides consolidated connections for a mobile to easily connect to and use the peripherals. Described herein is a wireless docking solution in which a mobile directly connects to wireless peripherals without a WDC. A WDC-less docking system does not require any vendor to implement an interoperable WDC. In addition, a WDC-less docking system may not require any standardized docking protocol to be implemented on a mobile or other computing device. [0027] As described in greater detail below, this disclosure describes wireless communication techniques, protocols, methods, and devices applicable to a wireless docking system in which a wireless dockee, such as a mobile computing device, may wirelessly and directly dock with one or more peripheral devices using a wireless docking service that provides an interface for controlling and/or exchanging data with peripheral devices. The wireless docking service (WDS) may consolidate Application Service Platform (ASP) communications and Peripheral Function Protocol (PFP) communications, such as WiFi Serial Bus (WSB) and Miracast communications, to provide an interface to applications executed by the wireless dockee. The WDS may execute as part of a software protocol stack that includes an interface for Wi-Fi communications, and the WDS executing on top of the ASP and PFPs may be implemented as a Wi-Fi docking service or a wireless docking service using a subset of Wi-Fi docking.
For example, the wireless docking service may use a subset of Wi-Fi docking standards directed to peer-to-peer (P2P) topology, generally in accordance with the set of standards promoted as "Wi-Fi Direct" by the Wi-Fi Alliance. [0028] An ASP is generally a wireless communications stack that may enable devices to easily advertise, seek and provide services over a wireless network, such as a Wi-Fi Direct certified network. The wireless stack forming the ASP may be implemented to comply with Wi-Fi Direct certification. The remainder of this disclosure makes reference to the example of a wireless docking service (WDS) implemented for operating through a Wi-Fi Direct ASP, that is, a wireless ASP implemented to comply with Wi-Fi Direct certification, as one illustrative example of a wireless docking service of this disclosure. This is done with the understanding that a WDS through Wi-Fi Direct ASP is merely one example, and a WDS may also be implemented in accordance with a variety of wireless standards, protocols, and technologies. For example, a WDS may also be implemented in accordance with WiGig and/or one or more of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 set of standards (e.g., 802.11, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, etc.), or extensions of Wi-Fi, WiGig, and/or one or more 802.11 standards. [0029] A wireless docking service operating on top of the ASP and the PFPs may enable peripherals to advertise their specific docking services directly to wireless dockees. A WDS operating on top of the ASP and the PFPs may also enable wireless dockees to discover peripherals that provide docking services. A WDS operating on top of the ASP and the PFPs may also enable peripherals and wireless dockees to connect to each other and establish wireless docking sessions with each other. The wireless docking session may enable services provided by peripheral devices that are coupled to the wireless dockee by the WDS.
For example, the peripherals may include displays, projectors, speakers, keyboards, mice, joysticks, data storage devices, network interface devices, other docking hosts, remote controls, cameras, microphones, printers, or other devices. Such peripheral devices may include stand-alone devices or components of devices such as other computers, in different examples. A wireless dockee device, such as a mobile handset, may wirelessly dock with a wireless docking center using the WDS operating through the ASP, thereby enabling the wireless dockee device to access services provided by any of the peripherals, in some examples. [0030] FIG. 1 is a conceptual diagram of an example wireless docking system in which a wireless dockee uses a wireless docking service to communicate with multiple peripherals over one or more wireless communication channels. In the illustrated example, the wireless docking system 100 includes the wireless dockee (WD) 110 that represents a computing device configured for wireless docking and referred to as a wireless dockee in the context of a wireless docking system 100. The wireless dockee 110 may be a mobile device such as a smartphone or other mobile handset, a tablet computer, a laptop computer, or other computing devices. The wireless dockee 110 may be a stationary device such as a desktop computer. The wireless dockee 110 may also be a component of a larger device or system. For example, the wireless dockee 110 may be a processor, a processing core, a chipset, or other one or more integrated circuits. [0031] The peripheral devices 140, 142, 144 of the wireless docking system 100 may include displays, projectors, speakers, keyboards, mice, joysticks, data storage devices, network interface devices, other docking hosts, remote controls, cameras, microphones, printers, or any of various other devices capable of wireless communication with WD 110. WD 110 may engage services provided by peripherals 140, 142, 144. 
WD 110 may couple to peripherals 140, 142, 144 via wireless communication channels to operate and/or exchange data with peripherals 140, 142, 144 in accordance with the services accessible to WD 110. [0032] Wireless communication channels 130, 132, 134 may be any channels capable of propagating communicative signals between the WD 110 and the respective peripherals 140, 142, 144. In some examples, the wireless communication channels 130, 132, 134 may be implemented in radio frequency communications in frequency bands such as the 2.4 gigahertz (GHz) band, the 5 GHz band, the 60 GHz band, or other frequency bands. In some examples, the wireless communication channels 130, 132, 134 may comply with one or more sets of standards, protocols, or technologies among Wi-Fi (as promoted by the Wi-Fi Alliance), WiGig (as promoted by the Wireless Gigabit Alliance), and/or the Institute of Electrical and Electronics Engineers (IEEE) 802.11 set of standards (e.g., 802.11, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, etc.), or other standards, protocols, or technologies. The frequency bands used for the wireless communication channels 130, 132, 134, such as the 2.4 GHz, 5 GHz, and 60 GHz bands, may be defined for purposes of this disclosure as they are understood in light of the standards of Wi-Fi, WiGig, any one or more IEEE 802.11 protocols, and/or other applicable standards or protocols. In some examples, the wireless communications channels 130, 132, 134 may represent a single wireless communication channel multiplexed among peripherals 140, 142, 144. [0033] The wireless dockee 110 may establish communications with any subset of the peripherals 140, 142, 144 automatically once the WD 110 and the subset come within operative communication range of each other, or manually in response to a user input, in different examples. Example call flows between the WD 110 and the peripherals 140, 142, 144 establishing docking communications with each other are depicted in FIGS.
5A-5C, 6A-6C. The wireless dockee 110 and the peripherals 140, 142, 144 may use Application Service Platform (ASP) and/or Peripheral Function Protocols (PFPs), such as WiFi Serial Bus (WSB) and Miracast, to manage communications with each other for a variety of services, including a wireless docking service (WDS), as illustrated in FIGS. 5A-5C, 6A-6C. [0034] FIG. 2 is a conceptual diagram illustrating an example wireless docking system in which a wireless dockee uses a wireless docking service to create and use or discover and use a wireless docking environment that includes one or more peripheral devices. In the illustrated example, the wireless docking system 150 includes the WD 110 and the peripherals 140, 142, 144 which may correspond to the WD 110 and the peripherals 140, 142, 144 of FIG. 1. [0035] In some examples, the WD 110 uses a wireless docking service to create a wireless docking environment (WDN) 152 that includes the peripherals 140, 142, 144. In some examples, the WD 110 uses the wireless docking service to discover the wireless docking environment 152 already created by the WD 110 or another wireless dockee. The WD 110 may use the wireless docking service to select the WDN 152 to establish a wireless docking session by which the WD 110 may engage services provided by the peripherals 140, 142, 144. The wireless dockee 110 may couple to the WDN 152 via the wireless communication channel 182 to operate and/or exchange data with the peripherals 140, 142, 144 in accordance with services accessible to the WD 110. The wireless communication channel 182 may be similar to any of the wireless communication channels 130, 132, 134 of FIG. 1. [0036] In the examples of FIGS. 1-2, the WD 110 may use a wireless docking service to consolidate connections with one or more peripherals into a common context, or "docking session," that may be used by one or more application(s) executing on the WD 110 to easily connect to and use the peripherals. [0037] FIG.
3 is a conceptual diagram illustrating an example wireless docking communications stack that includes a wireless docking service by which a wireless dockee may directly communicate with one or more peripheral devices. In the illustrated example, the computing device 200 includes a user application 216 executing over a wireless docking communications stack 201. The computing device 200 may represent wireless dockee 110 of FIGS. 1-2. The wireless docking communications stack 201 includes a wireless docking service 214 that provides an application programming interface (API) 226 to the user application 216. The API 226 includes methods, data fields, and/or events by which the user application 216 may discover, configure, and select peripherals for use by the WD 110. In some examples, the API 226 includes an interface by which the user application 216 may create and/or discover WDNs that include peripherals for selection and use by the WD 110. Reference herein to "methods," "messages," and "signals," for instance, with respect to communication between different layers of a communication stack, should be considered interchangeable in that each of these different constructs may be used by the layers of a communication stack to provide/receive data, request an action or data or respond accordingly, or to send/receive a command. The methods, messages, and signals described may represent any of a number of different forms of communication, including messaging services, shared memory, pipes, network communication, and so forth. [0038] In some example implementations, the API 226 includes the example methods of Table 1.
Table 1: Example Application Programming Interface [0039] Accordingly, the user application 216 may use the API 226 to direct the wireless docking service 214 to discover ("DiscoverPeripherals"), configure ("ConfigurePeripherals"), and directly select ("UsePeripherals") peripherals to establish a docking session, as well as to undock a docking session ("Undock"). Alternatively, or in addition, the user application 216 may use the API 226 to direct the wireless docking service 214 to create a wireless docking environment ("Create WDN") from a set of peripherals, discover an existing WDN ("DiscoverWDN"), and select a WDN that includes peripherals for use ("UseWDN"). [0040] Wireless docking is implemented as a wireless docking service (WDS) 214 operating over a wireless Application Service Platform layer 204 ("ASP 204") operating over Wi-Fi Direct wireless communications layer 202 ("Wi-Fi Direct 202"), in this example. The Wi-Fi Direct communications 202 are an example implementation of wireless communications over which the ASP 204 may operate. [0041] Various wireless services may be enabled as interface layers over the ASP 204, including a Print service 206, a Display service 208, and other services in some examples. The wireless docking service 214 operates over each of the Print service 206 and the Display service 208 to provide an interface with the user application 216. The Print service 206 and the Display service 208 may be provided by one or more peripheral devices directly accessible to the computing device 200 and managed via the ASP communication layer 204 in some examples. [0042] The WDS 214 may be provided as a Wi-Fi Direct service, and referred to as a Wi-Fi Direct Docking Service. The Wi-Fi Direct Docking Service can be a subset of Wi-Fi docking, in particular, a subset of Wi-Fi docking operating over a P2P Wi-Fi Direct topology, in the example of a Wi-Fi Direct implementation. 
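The API surface summarized by Table 1 and paragraph [0039] can be sketched as a minimal interface. Everything below — the class name, method signatures, and canned in-memory behavior — is an illustrative assumption, not the actual WDS 214 implementation; the WDN-related methods (CreateWDN, DiscoverWDN, UseWDN) are omitted for brevity.

```python
# Hypothetical sketch of the API 226 methods named in paragraph [0039].
# A real WDS would drive ASP/PFP discovery and session setup; this sketch
# returns canned results purely to illustrate the call sequence.
class WirelessDockingService:
    def __init__(self):
        self._available = []   # peripheral functions found by discovery
        self._selected = []    # subset chosen by the application
        self._session = None   # current consolidated docking session

    def discover_peripherals(self):
        """DiscoverPeripherals: return peripheral functions advertised nearby."""
        self._available = [{"pf_id": 1, "pf_type": "Printer"}]  # canned result
        return list(self._available)

    def configure_peripherals(self, selected_ids):
        """ConfigurePeripherals: record the selected subset for use."""
        self._selected = [p for p in self._available if p["pf_id"] in selected_ids]

    def use_peripherals(self):
        """UsePeripherals: establish one docking session for all selections."""
        self._session = {"peripherals": self._selected, "active": True}
        return self._session

    def undock(self):
        """Undock: tear down the consolidated docking session."""
        if self._session:
            self._session["active"] = False
```

An application would call these in the discover → configure → use → undock order shown in the call flows of FIGS. 5A-5C.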
The WDS 214 may, for example, be implemented as a software module that may be loaded onto or stored in a device such as the wireless dockee 110. Aspects of the WDS 214 may also be integrated with, pre-packaged with, or implemented in hardware in some examples. For example, the WDS 214 may be stored on, integrated with, or implemented by an integrated circuit or a chipset containing one or more integrated circuits and one or more memory components. [0043] A packet-based transport layer protocol stack (not illustrated) may run on top of ASP 204, Miracast 210, and/or WiFi Serial Bus (WSB) 212. The packet-based transport layer may include an Internet Protocol (IP) communications layer and one or more of various Transport Layer communication layers. The IP communications layer may run on top of ASP 204, or directly on Wi-Fi Direct 202. The Transport Layer communications layer may include one or more of Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), or other Transport Layer communication protocols. [0044] The wireless communications stack 201 includes several additional communication interfaces between different components of the wireless communications stack 201. A WDS Interface 224 between ASP 204 and the WDS 214 serves as a wireless docking interface for ASP methods and events. The WDS Interface 224 may implement WDS 214 running directly on ASP 204 to manage wireless docking communications directly with one or more peripherals. [0045] Various other communication interfaces are also included in wireless docking communications stack 201. Communication interface 220 between Miracast 210 and WDS 214 serves as an interface for controlling and using Miracast operations. Communication interface 222 between WiFi Serial Bus (WSB) 212 and WDS 214 serves as an interface for controlling and using WiFi Serial Bus operations.
Communication interface 217 between the Print service 206 and the WDS 214 serves as an interface for controlling and using the Print service 206 operations. Communication interface 218 between the Display service 208 and the WDS 214 serves as an interface for controlling and using Display service 208 operations. [0046] FIGS. 4A-4D illustrate example software stacks for various peripheral devices used by a computing device that implements a wireless docking communications stack for establishing unmediated docking sessions with the peripheral devices to consolidate one or more peripheral functions into a common docking session in accordance with techniques of this disclosure. The peripheral devices 300, 310, 320, and 330 may each represent any of the peripheral devices 140, 142, 144 of FIG. 1. [0047] FIG. 4A depicts a peripheral device 300 having a communication stack 301 that includes a Wi-Fi Serial Bus (WSB) Hub / Peripheral layer 304 operating over Wi-Fi direct 302. Wi-Fi direct 302 may communicate with Wi-Fi direct 202 of FIG. 3, while WSB Hub / Peripheral layer 304 may communicate with the computing device 200 by WSB 212. [0048] FIG. 4B depicts a peripheral device 310 having a communication stack 311 that includes a Miracast sink 314 operating over Wi-Fi direct 312. Wi-Fi direct 312 may communicate with Wi-Fi direct 202 of FIG. 3, while Miracast sink 314 may communicate with the computing device 200 by Miracast 210 (a Miracast source). [0049] FIG. 4C depicts a peripheral device 320 having a communication stack 321 that includes an optional wireless docking service (WDS) layer 328 operating over Print service 326. WDS layer 328 may advertise the Print service 326 to the WDS 214 of FIG. 3. The Print service 326 may communicate with the Print service 206 of FIG. 3 over Wi-Fi direct 322 and Wi-Fi direct 202 of FIG. 3. [0050] FIG.
4D depicts a peripheral device 330 having a communication stack 331 that includes an optional wireless docking service (WDS) layer 338 operating over Display service 336. WDS layer 338 may advertise the Display service 336 to the WDS 214 of FIG. 3. The Display service 336 may communicate with the Display service 208 of FIG. 3 over Wi-Fi direct 332 and Wi-Fi direct 202 of FIG. 3. [0051] FIGS. 5A-5C depict a flow diagram for example call flows by which a user application executing on a computing device uses a wireless docking service to exchange communications with peripherals, unmediated by a wireless docking center, to discover, configure, and select peripherals to establish and operate a consolidated docking session, in accordance with one or more examples of this disclosure. The computing device 200 includes components of the example wireless docking communications stack 201 illustrated in FIG. 3, specifically, the wireless docking service 214, the Print service 206, the Application Service Platform (ASP) 204, and the Wi-Fi Direct layer 202. The application 216 executing on computing device 200 may invoke the wireless docking service 214 to establish a consolidated docking session including one or more peripheral devices, e.g., peripheral device 320. [0052] In the illustrated example, the user application 216 of computing device 200 queries, by invoking DiscoverPeripherals() method 400, the WDS 214 of wireless docking communications stack 201 of computing device 200 to discover peripheral devices. The DiscoverPeripherals() method 400 may represent the DiscoverPeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0053] The WDS 214 may issue communications for different sub-layers corresponding to different peripheral services. In this example, the WDS 214 performs device discovery.
As part of device discovery, the WDS 214 may send one or more communications to sub-layers that include directives to discover available peripherals. In this example, the WDS 214 uses Discover message 402 by communication interface 217 to request the Print service 206 to request service discovery for one or more printer peripherals (corresponding to a Printer peripheral function type). This is merely one example of device discovery for a peripheral. Other examples for other types of peripheral functions, such as Display (cf. the Display service 208 of the computing device 200), Mouse, Keyboard, and so forth, are also contemplated. [0054] The Print service 206 requests, from the ASP 204, service discovery for peripheral functions that conform to the Printer peripheral function type by issuing SeekService() message 404 to the ASP 204. The service discovery performed by the ASP 204 in response to SeekService() message 404, i.e., Wi-Fi Direct Service (WFDS) Printer Discovery 406, may represent pre-association service discovery in that it occurs prior to initiation of a wireless docking session between the computing device 200 that is a wireless dockee and the peripheral computing device 320. [0055] As part of the WFDS Printer Discovery 406, the ASP 204 may query the ASP 324 of peripheral computing device 320 for information on peripheral functions, including peripheral functions conforming, e.g., to a Printer peripheral function type, and available via the ASP 324. WFDS Printer Discovery 406 may include multiple communications for a communication flow. For example, the ASP 204 and ASP 324 may initially exchange device discovery communications. The ASP 204 may further send a service discovery query to query the ASP 324 for peripheral function information, or information on peripheral functions available to the ASP 324.
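The chain of messages 400-406 can be sketched as nested calls. All class and method names below are illustrative assumptions mirroring the roles of the WDS 214, the Print service 206, and the ASP 204, with canned results standing in for the over-the-air WFDS discovery exchange.

```python
# Sketch of the discovery chain: WDS -> Print service -> ASP -> (air).
class Asp:
    def seek_service(self, pf_type):
        # Stands in for pre-association WFDS discovery (messages 404/406);
        # the advertised list is canned for illustration.
        advertised = [{"pf_id": 7, "pf_type": "Printer"},
                      {"pf_id": 3, "pf_type": "Display"}]
        return [p for p in advertised if p["pf_type"] == pf_type]

class PrintService:
    def __init__(self, asp):
        self.asp = asp
    def discover(self):
        # Discover message 402 handled by issuing SeekService() (message 404).
        return self.asp.seek_service("Printer")

def wds_discover(sub_layers):
    # The WDS consolidates results from every sub-layer (cf. paragraph [0071]).
    results = []
    for layer in sub_layers:
        results.extend(layer.discover())
    return results
```

Adding a Display, Mouse, or Keyboard sub-layer to the `sub_layers` list would follow the same pattern.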
Applicable peripheral functions may be available to the ASP 324 from an application also executing on peripheral computing device 320, e.g., the Print service 326. [0056] The ASP 324, serving as a wireless docking host, may send a service discovery response that provides its peripheral function information. The ASP 324 may thereby advertise its peripheral functions in pre-association service discovery communications with the ASP 204 of computing device 200. These communications are pre-association in that they take place prior to initiation of a wireless docking session between the ASP 204 and the ASP 324. The ASP 204 may thus discover the peripheral functions associated with the ASP 324 from the service discovery response as part of the pre-association service discovery communications of WFDS printer discovery 406. Additional details of these pre-association service discovery communications are provided below. [0057] The service discovery communications may be implemented in Data Link Layer, or layer 2 (L2), communications. The L2 communications may be conveyed over any of various types of physical layer (PHY) communication channels, including any of the Wi-Fi or WiGig standards and/or IEEE 802.11 protocols as discussed above. A service discovery query sent by the ASP 204 and the service discovery response sent by the ASP 324 may use service discovery action frames. An example action frame may include a Media Access Control (MAC) header, a frame category, action details, and a frame check sequence (FCS). The action details in the service discovery query sent by the ASP 204 may include object identifier (OI) fields and query data fields. The ASP 204 may set an OI field in the service discovery action frame to 0x506F9A, i.e., the Organizationally Unique Identifier (OUI) of the Wi-Fi Alliance (WFA). The ASP 204 may also set additional fields in the service discovery action frame, such as an OUI subtype field and a service protocol type field.
The ASP 204 may set the query data field of the service discovery query action frame to include a list of docking sub-element identifiers (ID's) to query for information on available docking sub-elements. In some examples, the ASP 204 may communicate with the ASP 324 using plaintext payloads that include SOAP requests and responses (e.g., in accordance with the SOAP specification defined at www.w3.org/TR/soap12-part1) and GENA (General Event Notification Architecture) notifications running on a packet-based transport layer protocol stack, while in other examples, the ASP 204 may communicate with the ASP 324 using a binary protocol running on a packet-based transport layer protocol stack, as further described below. The ASP 204 may also set a service transaction identifier (ID) in the query data field. Examples of the query data fields and the docking sub-element ID's for examples using SOAP and GENA payloads are shown as follows in Tables 2 and 3. Table 2: Query Data Fields Table 3: Docking Sub-element IDs [0058] In some examples that may use a binary protocol instead of SOAP and GENA payloads, the ASP 204 may communicate with the ASP 324 without requiring the use of docking sub-element ID's 8 and 9 as listed in Table 2. [0059] The ASP 324 may respond to receiving the service discovery query from the ASP 204 by sending a service discovery response. The ASP 324 may include in the service discovery response a service discovery action frame with a service response data field that includes a list of requested docking sub-elements. The ASP 324 may include a service transaction ID in the service response type-length-value (TLV) element that matches the service transaction ID in the query data field of the service discovery query from the ASP 204, to ensure that the ASP 204 can associate the service discovery response with the previous service discovery query.
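The query fields described above can be illustrated with a small encoder. Only the WFA OI value 0x506F9A is taken from the text; the byte layout (ordering and widths of the transaction ID and sub-element ID list) is an assumption made for illustration, not the actual frame format.

```python
# Illustrative encoding of a service discovery query's data fields.
WFA_OUI = 0x506F9A  # Organizationally Unique Identifier of the Wi-Fi Alliance

def build_query_data(transaction_id: int, sub_element_ids: list) -> bytes:
    oi = WFA_OUI.to_bytes(3, "big")      # OI field set to the WFA OUI
    ids = bytes(sub_element_ids)         # docking sub-element IDs to query
    # Assumed framing: OI, then transaction ID, then an ID count, then IDs.
    return oi + bytes([transaction_id, len(ids)]) + ids

query = build_query_data(0x2A, [1, 2, 3])
```

The responder would echo the same transaction ID in its service response TLV so the querier can match the response to this query.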
The ASP 324 may set a docking information element (IE) in Docking Service Discovery action frames included in the service discovery response. In some examples, the ASP 324 may set the docking IE to include sub-elements as shown as follows in Table 4. [0060] These docking information sub-elements provided by the ASP 324 in the service discovery response, i.e., the Peripheral Function Information Sub-element, the Docking Host (ASP) SOAP Uniform Resource Locator (URL) Sub-element, and the Docking Host (ASP) General Event Notification Architecture (GENA) URL Sub-element, are further described as follows. In examples that use a binary protocol, the ASP 324 may omit the Docking Host (ASP) SOAP URL and Docking Host (ASP) GENA URL sub-elements from the docking information element in the docking service discovery response. In some examples that use the SOAP and GENA payloads, the computing device 200 and the peripheral device 320 may both send SOAP requests and responses to each other, and the computing device 200 may send GENA notifications to the peripheral device 320, where both the SOAP and GENA payloads may be sent over a packet-based transport layer protocol stack, in accordance with specifications such as Transmission Control Protocol / Internet Protocol (TCP/IP) or User Datagram Protocol / IP (UDP/IP), for example, to specified URLs, and potentially also to specified port numbers, such as TCP port number 80 (commonly associated with HTTP). [0061] The Peripheral Function Information sub-element may provide the peripheral function (PF) information of peripherals hosted by peripheral device 320, specifically in this example, Print service 326. The Peripheral Function Information sub-element may have a data structure as shown in Table 5, with additional information on the listed fields thereafter.
Table 5: Peripheral Function Information Sub-element

  Field                                  Length (Octets)   Type
  n_PFs                                  1                 uimsbf
  for (i = 0; i < n_PFs; i++) {
    PF ID                                2                 uimsbf
    PF type                              2                 uimsbf
    PF name                              Variable          UTF-8_String()
    PF capability                        Variable          UTF-8_String()
    PF state                             1                 uimsbf
    n_PFPs                               1                 uimsbf
    for (j = 0; j < n_PFPs; j++) {
      PFP ID                             1                 uimsbf
    }
  }

[0062] The field "n_PFs" may contain the number of peripheral functions (PF's) hosted by the peripheral device 320 that generates this PF Status information data structure. Any one or more peripheral devices coupled to wireless dockee 110 (e.g., peripheral devices 140, 142, 144 of FIG. 1) may provide one or more peripheral functions. [0063] The field "PF ID" may contain the ID of a particular peripheral function (PF). As indicated by the line "for (i = 0; i < n_PFs; i++)," the peripheral function information sub-element may include a peripheral function ID and associated information for each peripheral function ID for each of the "n_PFs" peripheral functions. The peripheral function ID may be unique for all peripheral functions that ASP 324 of peripheral device 320 currently hosts or centers or has ever hosted or centered. The ASP 324 may specify when a peripheral function is new and when the peripheral function is not new. [0064] The field "PF type" may indicate the peripheral function type of the peripheral function. An illustrative set of peripheral function types is listed below in Table 6. [0065] The field "PF name" may contain a user-friendly name of the peripheral function. This peripheral function name may be unique for all PFs available to the ASP 324. The format of the peripheral function name may be a UTF-8_String() structure, in some examples. [0066] The field "PF capability" may contain the capability of the peripheral function as reported by the ASP 324. The format of the peripheral function capability may also be a UTF-8_String() structure, in some examples.
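The Table 5 layout can be exercised with a small parser. Table 5 leaves the delimiting of the variable-length UTF-8_String() fields unspecified, so the one-octet length prefix used here is an assumption for illustration only; the field order and widths follow the table.

```python
import struct

# Parser sketch for the Peripheral Function Information sub-element (Table 5).
def parse_pf_info(buf: bytes):
    pos = 0
    def u8():
        nonlocal pos; v = buf[pos]; pos += 1; return v
    def u16():
        nonlocal pos; v = struct.unpack_from(">H", buf, pos)[0]; pos += 2; return v
    def utf8():
        # Assumed framing: one-octet length prefix before each UTF-8_String().
        nonlocal pos; n = u8(); s = buf[pos:pos + n].decode("utf-8"); pos += n; return s

    pfs = []
    for _ in range(u8()):                               # n_PFs
        pf = {"pf_id": u16(), "pf_type": u16(),         # PF ID, PF type
              "pf_name": utf8(), "pf_capability": utf8(),
              "pf_state": u8()}                         # PF state
        pf["pfp_ids"] = [u8() for _ in range(u8())]     # n_PFPs, then PFP IDs
        pfs.append(pf)
    return pfs
```

For example, a sub-element advertising one Printer (PF type 7) reachable over WSB (PFP ID 1) parses into a single-entry list.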
[0067] The field "n_PFPs" may contain the number of Peripheral Function Protocols that can be used to support the use of the particular peripheral referred to by a given PF ID. The field "PFP ID" may contain the identifier (ID) of the Peripheral Function Protocol that can be used to support the use of the particular peripheral. An illustrative set of peripheral function protocols is listed below in Table 7. The field "PF state" may contain the state of the peripheral function, such as with the example states defined below in Table 8.

Table 6: Peripheral Function Type

  PF Type   Description
  0         Mouse
  1         Keyboard
  2         Remote Control
  3         Display
  4         Speaker
  5         Microphone
  6         Storage
  7         Printer
  8-65535   Reserved

Table 7: Peripheral Function Protocol Identifier

  PFP ID   Description
  0        Miracast
  1        WiFi Serial Bus (WSB)
  2        Bluetooth
  3        WiGig Display Extension (WDE)
  4        WiGig Serial Extension (WSE)
  5-255    Reserved

Table 8: PF state values

[0068] The Docking Service SOAP URL sub-element provides the URL of the SOAP command service for the docking protocol provided by the ASP 324. The Docking Service SOAP URL sub-element may have the data structure shown as follows in Table 9.

Table 9: Docking Service SOAP URL Sub-element

  Field      Length (Octets)   Type             Description
  port num   2                 uimsbf           Port number
  URL_path   Variable          UTF-8_String()   Substring of URL path, percent-encoded as per Internet Engineering Task Force (IETF) Request for Comment (RFC) 3986

[0069] The Docking Service GENA URL sub-element provides the URL of the GENA notification service for the docking protocol provided by the ASP 324. The Docking Service GENA URL sub-element may have the data structure shown as follows in Table 10.
Table 10: Docking Host GENA URL Sub-element [0070] On receiving the peripheral function information of peripherals hosted by the peripheral device 320 and the ASP 324, the ASP 204 may return, in SearchResults() message 408, to the Print service 206 the subset of peripherals included in the peripheral function information that have a peripheral function type of Printer (e.g., type 7 in the illustrative set of peripheral function types listed in Table 6). The Print service 206 may thus discover the peripheral functions associated with the peripheral computing device 320 from the service discovery response as part of the pre-association service discovery communications. The Print service 206 may provide the subset of peripherals received in Results message 410 to the WDS 214. [0071] The WDS 214 may consolidate peripheral functions discovered by operation of the ASP 204, as well as other peripheral functions that may communicate using various other Peripheral Function Protocols (PFPs) such as WSB, Bluetooth, and Miracast (described in further detail below with respect to FIGS. 6A-6C), and return a representation of the peripheral functions in message 412 to the application 216, which may then select a subset of the peripheral functions for configuration and use, as described in further detail below with respect to FIG. 5B. In this way, the WDS 214 may provide, to the application 216, a unified interface by which the application 216 may discover one or more peripheral functions offered by peripheral devices, e.g., peripheral device 320, with which computing device 200 as a wireless dockee may directly dock using a wireless docking session that is unmediated by a wireless docking center. [0072] Turning now to FIG. 5B, the application 216 may select a subset of the discovered peripheral functions for use.
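The identifier values in Tables 6 and 7 above transcribe directly into enumerations; the numeric values come from the tables, while the Python names are illustrative. Filtering discovered peripherals by Printer type, as the ASP 204 does for SearchResults() message 408, then becomes a comparison against `PFType.PRINTER`.

```python
from enum import IntEnum

# Table 6: Peripheral Function Type values.
class PFType(IntEnum):
    MOUSE = 0
    KEYBOARD = 1
    REMOTE_CONTROL = 2
    DISPLAY = 3
    SPEAKER = 4
    MICROPHONE = 5
    STORAGE = 6
    PRINTER = 7
    # 8-65535 reserved

# Table 7: Peripheral Function Protocol Identifier values.
class PFPId(IntEnum):
    MIRACAST = 0
    WSB = 1        # WiFi Serial Bus
    BLUETOOTH = 2
    WDE = 3        # WiGig Display Extension
    WSE = 4        # WiGig Serial Extension
    # 5-255 reserved

def printers_only(pf_list):
    # Mirror of the Printer-type filtering performed for message 408.
    return [pf for pf in pf_list if pf["pf_type"] == PFType.PRINTER]
```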
The application 216 may then direct, using ConfigurePeripherals() method 420, the WDS 214 to configure the subset of the discovered peripheral functions for use by the application 216. The ConfigurePeripherals() method 420 may include parameter(s) for listing the subset of the discovered peripheral functions for use by the application 216. The ConfigurePeripherals() method 420 may represent the ConfigurePeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0073] The wireless docking service 214 may be configured with connectivity configuration information for peripheral devices establishing data (or "payload") connections by the computing device 200. In some instances, a payload connection may include a Wi-Fi peer-to-peer (P2P) connection, and the connectivity configuration information may include a P2P Group Credential. For establishing a Wi-Fi P2P connection in instances in which a persistent P2P group is not available, the connectivity configuration information may include a Group Owner Intent, Operating Channel, Intended P2P Interface Address, Channel List, P2P Group ID, and the aforementioned P2P Group Credential. For establishing a Wi-Fi P2P connection in instances in which a persistent P2P group is available, the connectivity configuration information may include an Operating Channel, P2P Group BSSID, Channel List, and P2P Group ID. [0074] The WDS 214 may use ConfigurationCredential() method 422 to provide the connectivity configuration information for establishing payload connections by the computing device 200 to ASP 204, which consolidates session setup for, potentially, multiple peripheral functions and selected peripheral function protocols for the corresponding peripheral functions. In particular, the WDS 214 may provide the P2P Group Credential to ASP 204 using ConfigurationCredential() method 422.
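The two connectivity configuration cases above (with and without a persistent P2P group) can be modeled as a single record with optional fields. The following is a hypothetical container for illustration only; the field names do not correspond to any real Wi-Fi Direct API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class P2PConnectivityConfig:
    """Illustrative model of the connectivity configuration information."""
    operating_channel: int
    channel_list: List[int]
    p2p_group_id: str
    # Fields used when no persistent P2P group is available:
    group_credential: Optional[str] = None
    group_owner_intent: Optional[int] = None
    intended_p2p_interface_address: Optional[str] = None
    # Field present when a persistent P2P group is available:
    p2p_group_bssid: Optional[str] = None

    def uses_persistent_group(self) -> bool:
        # A persistent group is indicated here by the presence of its BSSID.
        return self.p2p_group_bssid is not None


# Forming a new P2P group: credential, GO intent, and interface address set.
fresh = P2PConnectivityConfig(36, [36, 40, 44], "DIRECT-ab",
                              group_credential="example-psk",
                              group_owner_intent=7,
                              intended_p2p_interface_address="02:00:00:00:00:aa")

# Reusing a persistent P2P group: only the group BSSID is additionally needed.
persistent = P2PConnectivityConfig(36, [36, 40, 44], "DIRECT-ab",
                                   p2p_group_bssid="02:00:00:00:01:00")
```

Keeping both cases in one record mirrors how the WDS hands a single configuration credential to the ASP regardless of whether a persistent group exists.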
[0075] The WDS 214 may further provide, using ConfigurationCredential() method 422, additional information relating to payload connection negotiation, such as a payload connection protocol, a selected peripheral function protocol for each of the selected subset of peripheral functions, and identifiers for the selected peripheral functions that use the payload connection protocol and the payload function protocol. [0076] The application 216 subsequently invokes the UsePeripherals() method 424 of WDS 214 to request the use of the peripherals and a unified docking session that is a common context for the multiple selected peripheral functions for the application 216. The UsePeripherals() method 424 may include parameter(s) for listing the subset of the selected peripheral functions sought for use by the application 216. The selected peripheral functions may be identified using the example Peripheral Function Type identifiers of Table 6 in some instances. The UsePeripherals() method 424 may represent the UsePeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0077] In the illustrated example, the WDS 214, in turn, identifies the Print service 206 as one of the selected peripheral functions. The WDS 214 may identify the Print service 206 using the example identifier for the Printer Peripheral Function Type listed in Table 6. The WDS 214 accordingly directs, using Start message 426, Print service 206 to initiate a Printer peripheral function session with the peripheral device 320 that provides the Printer peripheral function. Print service 206 requests to connect to an Application Service Platform session by invoking a ConnectSession() method 428 of ASP 204.
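The discover/configure/use sequence described above can be summarized with a toy WDS facade. The method names follow Table 1 of this disclosure, but the bodies are illustrative stubs only, not an implementation of the actual wireless docking service.

```python
class WirelessDockingService:
    """Toy model of the WDS API surface; all behavior here is canned."""

    def __init__(self):
        self._configured = []
        self._session_count = 0

    def discover_peripherals(self):
        # The real stack fans out to ASP, Miracast, WSB, and Bluetooth
        # discovery; here we return one canned Printer entry (Table 6 type 7).
        return [{"pf_id": 1, "pf_type": 7}]

    def configure_peripherals(self, selected):
        # Record the subset the application selected for use.
        self._configured = list(selected)

    def use_peripherals(self, selected):
        # Consolidate the selected functions into one docking session
        # context and hand back its identifier.
        self._session_count += 1
        return f"docking-session-{self._session_count}"


wds = WirelessDockingService()
peripherals = wds.discover_peripherals()
wds.configure_peripherals(peripherals)
session = wds.use_peripherals(peripherals)
```

The returned session identifier stands in for the "[docking session]" handle that the WDS 214 later passes back to the application 216.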
[0078] The ASP 204 establishes, using ASP session setup procedure 430, an Application Service Platform session with ASP 324 of peripheral device 320 according to connectivity configuration information provided by WDS 214. The Application Service Platform session may serve as a common ASP session for multiple peripheral functions and corresponding peripheral function protocols. [0079] Upon successful establishment of an ASP session with the ASP 324 of peripheral device 320, the ASP 204 may notify the Print service 206 that the ASP session is established by invoking a SessionConnected() method 432. The Print service 206, in turn, notifies the WDS 214 of the successful establishment of the ASP session by returning a Success message 434 to the WDS 214 in response to the Start message 426 that initiated the Printer peripheral function session with the peripheral device 320. [0080] Subsequently, the WDS 214 may request, by invoking the GetConnectionConfiguration() method 436, connectivity configuration information from the ASP 204 that the ASP 204 may have negotiated with the ASP 324 during the ASP session setup procedure 430. The ASP 204 may respond to the invocation of the GetConnectionConfiguration() method 436 by returning connectivity configuration information. The connectivity configuration information may include, e.g., the payload connection protocol, connectivity configuration information of the payload connection agreed to by both the ASP 204 of computing device 200 and the ASP 324 of peripheral device 320, the peripheral function protocol, and an identifier for the peripheral function type ("Printer" in this example, cf. Table 6) of the ASP session established for the Print service 206 data payload connection. For a Wi-Fi P2P connection that does not use a persistent P2P group, the connectivity configuration information of the payload connection may include an operating channel, channel list, and P2P Group ID, for example. 
For a Wi-Fi P2P connection that uses a persistent P2P group, the connectivity configuration information of the payload connection may include an operating channel, P2P Group BSSID, and channel list, for example. [0081] The WDS 214 consolidates one or more peripheral functions and corresponding payload connections into a common context that is identifiable by a docking session identifier. In some cases, as in the example of FIGS. 5A-5C, the ASP 204 may organize one or more peripheral functions protocols for communicating with peripheral devices that offer peripheral functions to computing device 200. The WDS 214 may then address the one or more consolidated peripheral functions using the common context identified by the docking session identifier. Because application 216 instigates the selection and configuration of the one or more peripheral functions consolidated by the WDS 214, the WDS 214 provides the docking session identifier ("[docking session]") to application 216 in message 440. [0082] Subsequently, application 216 may use the docking session identifier to address the payload connection established by the ASP 204 with the ASP 324 in order to exchange data and, in some cases control information, with the peripheral Print service 326 by data message exchanges 442 using the negotiated peripheral function protocol. In this way, the WDS 214 may provide, to the application 216, a unified interface by which the application 216 may discover, configure, and select a subset of the one or more peripheral functions offered by peripheral devices, e.g., peripheral device 320, with which computing device 200 as a wireless dockee may directly dock using a wireless docking session that is unmediated by a wireless docking center. [0083] FIG. 5C illustrates additional example operations of the peripheral device 320 to establish a payload connection for a Printer peripheral function. 
The communication stack 321 of peripheral device 320 includes an optional wireless docking service 328 layer to advertise docking content using Advertise Docking Content message 450. The docking content may include the set of peripheral functions and associated status information. The Print service 326 of the peripheral device 320 may notify the ASP 324 of the availability of a Printer peripheral function type on the peripheral device 320 using AdvertisePeripheral() method 452. As a result, the ASP 324 may respond favorably to the WFDS Printer Discovery procedure 406, in that the ASP 324 may respond to service discovery messages issued by the ASP 204 of the computing device 200 with peripheral function information for the Printer peripheral function. [0084] As described above with respect to FIG. 5B, the ASP 204 and the ASP 324 perform the ASP session setup procedure 430. Upon successful connection of an ASP session between the ASP 204 and the ASP 324, the ASP 324 may notify the Print service 326 that the ASP session is established by invoking a SessionConnected() method 454. [0085] FIGS. 6A-6C depict a call flow diagram for example call flows by which a user application executing on a computing device uses a wireless docking service to exchange communications with peripherals, unmediated by a wireless docking center, to discover, configure, and select peripherals to establish and operate a consolidated docking session, in accordance with one or more examples of this disclosure. In this example, the computing device 200 operating as a wireless dockee engages a peripheral function that communicates using the Miracast peripheral function protocol (cf. Table 7). [0086] The computing device 200 includes components of the example wireless docking communications stack 201 illustrated in FIG. 3, specifically, the wireless docking service 214, the Miracast host 210, the Application Service Platform (ASP) 204, and the Wi-Fi Direct layer 202.
The application 216 executing on computing device 200 may invoke the wireless docking service 214 to establish a consolidated docking session including one or more peripheral devices, e.g., peripheral device 310 that includes a Miracast sink 314. [0087] In the illustrated example, the user application 216 of computing device 200 queries, by invoking DiscoverPeripherals() method 500, the WDS 214 of wireless docking communications stack 201 of computing device 200 to discover peripheral devices. The DiscoverPeripherals() method 500 may represent the DiscoverPeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. In addition, the DiscoverPeripherals() method 400 and the DiscoverPeripherals() method 500 may represent the same operation in that application 216 is seeking to discover a plurality of peripheral functions at once. [0088] The WDS 214 may issue communications for different sub-layers corresponding to different peripheral services. In this example, the WDS 214 performs device discovery. As part of device discovery, the WDS 214 may send one or more communications to sub-layers that include directives to discover available peripherals. Here, the WDS 214 uses Discover message 502 by communication interface 220 to direct the Miracast Host 210 to request service discovery for a Miracast service, that is, a Miracast-capable peripheral function type (e.g., Display) that uses the Miracast peripheral function protocol. Again, this is merely one example of device discovery for a peripheral. [0089] The Miracast host 210 executes a Miracast discovery procedure 504 to discover available Miracast services, which in this example are provided by peripheral device 310 having a Miracast sink 314. The Miracast sink 314 may return peripheral function information to the Miracast host 210 during the Miracast discovery procedure 504.
The Miracast Host 210 may return, in Results message 506 and to the WDS 214, the subset of peripherals included in the peripheral function information that have a peripheral function protocol of type Miracast. Although only one peripheral device 310 having a Miracast sink 314 is illustrated in FIG. 6A, in some cases multiple such peripheral devices may be available to sink data using the Miracast peripheral function protocol. [0090] The WDS 214 may consolidate peripheral functions that use the Miracast peripheral function protocol discovered by operation of the Miracast host 210, as well as other peripheral functions that may communicate using various other Peripheral Function Protocols (PFPs) such as WSB and Bluetooth, and return a representation of the peripheral functions ("[peripherals]") in message 508 to the application 216, which may then select a subset of the peripheral functions for configuration and use. In some cases, message 508 of FIG. 6A and message 412 of FIG. 5A may represent the same message in that the WDS may discover a plurality of peripheral functions at once and return the representation of discovered peripheral functions in a single message. The WDS 214 may in this way provide, to the application 216, a unified interface by which the application 216 may discover one or more peripheral functions offered by peripheral devices, e.g., peripheral device 310, with which computing device 200 as a wireless dockee may directly dock using a wireless docking session that is unmediated by a wireless docking center. [0091] Turning now to FIG. 6B, the application 216 may select a subset of the discovered peripheral functions for use. The application 216 may then direct, using ConfigurePeripherals() method 510, the WDS 214 to configure the subset of the discovered peripheral functions for use by the application 216.
The ConfigurePeripherals() method 510 may include parameter(s) for listing the subset of the discovered peripheral functions for use by the application 216 including, in this case, a peripheral function that uses the Miracast peripheral function protocol. The ConfigurePeripherals() method 510 may represent the ConfigurePeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. In addition, the ConfigurePeripherals() method 510 may represent the ConfigurePeripherals() method 420 of FIG. 5B yet including any parameter(s) for listing the peripheral function making use of the Miracast peripheral function protocol. [0092] The wireless docking service 214 may be configured with connectivity configuration information for peripheral devices establishing data (or "payload") connections by the computing device 200. In some instances, a payload connection may include a Wi-Fi peer-to-peer (P2P) connection, and the connectivity configuration information may include a P2P Group Credential. For establishing a Wi-Fi P2P connection in instances in which a persistent P2P group is not available, the connectivity configuration information may include a Group Owner Intent, Operating Channel, Intended P2P Interface Address, Channel List, P2P Group ID, and the aforementioned P2P Group Credential. For establishing a Wi-Fi P2P connection in instances in which a persistent P2P group is available, the connectivity configuration information may include an Operating Channel, P2P Group BSSID, Channel List, and P2P Group ID. [0093] The WDS 214 may use Configuration Credential message 512 to provide the connectivity configuration information for establishing payload connections by the computing device 200 to Wi-Fi Direct 202, which may configure a dedicated WFD channel for a Miracast connection (described below).
The application 216 subsequently invokes the UsePeripherals() method 514 of WDS 214 to request a unified docking session that is a common context for the multiple selected peripheral functions for the application 216. The UsePeripherals() method 514 may include parameter(s) for listing the subset of the selected peripheral functions sought for use by the application 216. The selected peripheral functions may be identified using the example Peripheral Function Type identifiers of Table 6 in some instances. The UsePeripherals() method 514 may represent the UsePeripherals() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. In addition, the UsePeripherals() method 514 may represent the UsePeripherals() method 424 of FIG. 5B. [0094] In the illustrated example, the WDS 214, in turn, identifies one of the selected peripheral functions as using the Miracast peripheral function protocol. The WDS 214 may identify the one of the selected peripheral functions using the example identifier for the Peripheral Function Protocol Identifier ("PFP ID") listed in Table 7 for Miracast. The WDS 214 accordingly directs, using SetConnectionConfiguration() method 516, the Wi-Fi Direct 202 to establish a WFD channel for establishing a Miracast connection. The WDS 214 further issues Start message 517 to the Miracast host 210 to direct the Miracast host 210 to perform a Miracast connection setup procedure 518 with Miracast sink 314 to establish a payload connection that uses the Miracast peripheral function protocol. [0095] Upon successful establishment of the Miracast connection with the Miracast sink 314 of peripheral device 310, the Miracast host 210 may notify the WDS 214 of the successful establishment of the Miracast connection by returning a Success message 520 to the WDS 214 in response to the Start message 517 that initiated the Miracast connection with the peripheral device 310.
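When the peripherals are put to use, the WDS must route each selected peripheral function to the stack layer that speaks its protocol (the Print service via the ASP in FIG. 5B, the Miracast host via Wi-Fi Direct here). A sketch of that dispatch by PFP ID from Table 7 follows; the handler names are hypothetical.

```python
# PFP ID codes from Table 7.
PFP_MIRACAST, PFP_WSB, PFP_BLUETOOTH = 0, 1, 2

# Hypothetical mapping from protocol to the stack layer that handles it.
HANDLERS = {
    PFP_MIRACAST: "miracast_host",
    PFP_WSB: "wsb_host",
    PFP_BLUETOOTH: "bluetooth_host",
}


def route_peripheral_function(pfp_id: int) -> str:
    """Pick the communication-stack layer for a selected peripheral
    function, as the WDS does before issuing its Start directive."""
    try:
        return HANDLERS[pfp_id]
    except KeyError:
        raise ValueError(f"unsupported PFP ID {pfp_id}") from None


layer = route_peripheral_function(PFP_MIRACAST)
```

A table-driven dispatch like this keeps the WDS itself protocol-agnostic: adding support for a new PFP only extends the mapping.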
[0096] The WDS 214 consolidates one or more peripheral functions and corresponding payload connections into a common context that is identifiable by a docking session identifier. The WDS 214 may then address the one or more consolidated peripheral functions using the common context identified by the docking session identifier. Because application 216 instigates the selection and configuration of the one or more peripheral functions consolidated by the WDS 214, the WDS 214 provides the docking session identifier ("[docking session]") to the application 216 in the message 522. The message 522 may represent the message 440 of FIG. 5B and include the same identifier in instances where the WDS 214 establishes multiple payload connections for a Printer peripheral function and a Miracast-capable peripheral function in parallel. [0097] Subsequently, application 216 may use the docking session identifier to address the payload connection established by the Miracast host 210 with Miracast sink 314 in order to exchange data and, in some cases control information, with the Miracast sink by data message exchanges 524 using the payload connection. In this way, the WDS 214 may provide, to the application 216, a unified interface by which the application 216 may discover, configure, and select a subset of the one or more peripheral functions offered by peripheral devices, e.g., peripheral device 310, with which computing device 200 as a wireless dockee may directly dock using a wireless docking session that is unmediated by a wireless docking center. [0098] FIG. 6C illustrates example operations of the Miracast sink 314 of the peripheral device 310 to establish a Miracast payload connection with the Miracast host 210 of computing device 200. Miracast sink 314 responds to indicate an available Miracast service during Miracast discovery 504. 
In addition, Miracast sink 314 participates in establishing a Miracast payload connection with Miracast host 210 during Miracast connection setup procedure 518 with computing device 200. [0099] The computing device 200 may create the persistent WDN for future use by the application 216, which may include simplifying and accelerating the process of establishing and operating future wireless docking sessions between computing device 200 and peripheral devices (e.g., peripheral devices 140, 142, and 144 of FIGS. 1 and 2), for example. The WDN configuration data may include the peripheral functions (PF) used in a particular wireless docking session that does not involve a wireless docking center, and the peripheral function protocol (PFP) and payload connection protocol (PCP) information for each peripheral function. A persistent P2P group may be associated with a persistent WDN, although a persistent WDN is not necessarily associated with a persistent P2P group, in some examples. [0100] In some examples, the WDS 214 of computing device 200 may store a persistent wireless docking environment (WDN) for the future use of the application 216. During the pre-association service discovery procedure, the peripheral devices may include a docking information element (IE) in a service discovery response that may include a wireless docking (WDCK) capability sub-element. The WDS 214 may set a corresponding WDCK capability sub-element in part to indicate that it has the capability to store a persistent WDN for the future use of the application 216. If the WDS 214 has the capability to store a persistent WDN, then application 216 may initiate a transaction to store the persistent WDN with the WDS 214. Example call flow diagrams for the setup and use of a persistent WDN are shown in FIGS. 7-11. [0101] FIG. 7 depicts a call flow diagram for example call flows for creating a persistent wireless docking environment, in accordance with techniques described in this disclosure.
The procedure 600 for discovering and configuring peripherals may correspond in many respects to the procedures for discovering and configuring peripheral functions using direct connections to peripheral devices, i.e., without a wireless docking center. That is, the procedure 600 may incorporate features described with respect to FIGS. 5A-5C and 6A-6C for discovering and configuring peripheral functions for a Printer peripheral function type and a Miracast-capable peripheral function, respectively. [0102] After directing the WDS 214 to configure selected peripherals discovered by the WDS 214, the application 216 may request the WDS 214 to create a persistent WDN (alternatively referred to as a "wireless docking environment") by invoking the CreateWirelessDockingEnvironment() method 602. The CreateWirelessDockingEnvironment() method 602 may represent the CreateWDN() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0103] The WDS 214 may respond to the invocation of its exposed CreateWirelessDockingEnvironment() method 602 by creating and storing the persistent WDN as requested based at least on the discovered, selected, and configured peripherals for a docking session. To create and store the persistent WDN, the WDS 214 may store peripheral function configuration information for the selected peripheral functions, which may include the peripheral function type, corresponding peripheral function protocol, and payload connection type (e.g., one of IEEE 802.11n, 802.11ac, 802.11ad) for each of the selected peripheral functions. The WDS 214 may in some instances further store an identifier for application 216 and/or a docking session identifier for the persistent WDN.
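Creating a persistent WDN thus amounts to recording, for each selected peripheral function, its type, protocol, and payload connection type, and returning a handle. A minimal sketch follows; JSON is an assumed storage format (the disclosure does not specify one), and every identifier here is illustrative.

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple


@dataclass
class PeripheralFunctionConfig:
    pf_type: int             # e.g. 7 = Printer (Table 6)
    pfp_id: int              # e.g. 4 = WSE (Table 7)
    payload_connection: str  # e.g. "802.11n", "802.11ac", or "802.11ad"


def create_wdn(app_id: str,
               configs: List[PeripheralFunctionConfig]) -> Tuple[str, str]:
    """Serialize a persistent WDN record and return (handle, stored blob),
    loosely mirroring CreateWDN(); the record layout is purely illustrative."""
    handle = f"wdn-{app_id}"
    record = {
        "handle": handle,
        "application": app_id,
        "peripheral_functions": [asdict(c) for c in configs],
    }
    return handle, json.dumps(record)


handle, blob = create_wdn("app-216",
                          [PeripheralFunctionConfig(7, 4, "802.11n")])
```

On a later UseWDN()-style call, the stored blob could be deserialized to re-establish the same payload connections without repeating discovery and configuration.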
While the persistent WDN is described as being stored by the WDS 214, it will be understood that in other examples, the persistent WDN and/or the persistent WDN configuration data may equivalently be stored by another device accessible to computing device 200, which may include proximate or remote storage resources in various examples. [0104] Upon creating and storing a persistent wireless docking environment in response to the invocation of CreateWirelessDockingEnvironment() method 602, the WDS 214 returns a handle for the persistent WDN ("[Wireless Docking Environment]") to the application 216. As described below with respect to FIGS. 10 and 11, the application 216 may use the handle to avoid discovering, selecting, and configuring peripheral functions encompassed by the persistent WDN. [0105] FIG. 8 depicts a call flow diagram for example call flows for discovering available peripherals and using the discovered peripherals to create a persistent wireless docking environment, in accordance with techniques described in this disclosure. In this example, the application 216 invokes the DiscoverWirelessDockingEnvironment() method 612 of the WDS 214 to direct the WDS 214 to discover available peripherals and return the peripherals to the application 216 as a wireless docking environment. The DiscoverWirelessDockingEnvironment() method 612 may represent the DiscoverWDN() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0106] The WDS 214, in response, performs the service discovery procedures in conjunction with the Print service 206 and the ASP 204 to discover the Printer peripheral function provided by peripheral device 320.
The service discovery procedures include Discover message 614 from the WDS 214 to the Print service 206, the SeekService() method 616 of the ASP 204 invoked by the Print service 206, the WFDS Printer Discovery procedure 618 between the ASP 204 and the ASP 324, the SearchResults() method 620 to return the results of the SeekService() method 616 to the Print service 206, and the Results message 622 to return the results of the Discover message 614 to the WDS 214. The service discovery procedures may be substantially similar to the Discover message 402, SeekService() message 404, WFDS Printer Discovery procedure 406, SearchResults() message 408, and Results message 410 procedures as illustrated and described with respect to FIG. 5A. [0107] Upon receiving the Results message 622, the WDS 214 may store the received peripheral function information by creating and storing a persistent WDN that includes the peripheral function information for the discovered peripheral functions (in this example, the Printer peripheral function). The WDS 214 may then return a handle for the persistent WDN ("[Wireless Docking Environment]") to the application 216 using message 624. As described below with respect to FIGS. 10 and 11, the application 216 may use the handle to avoid again discovering, selecting, and configuring peripheral functions encompassed by the persistent WDN. [0108] FIG. 9 depicts a call flow diagram for example call flows for discovering available peripherals and using the discovered peripherals to create a persistent wireless docking environment, in accordance with techniques described in this disclosure. In this example, the application 216 invokes the DiscoverWirelessDockingEnvironment() method 650 of the WDS 214 to direct the WDS 214 to discover available peripherals and return the peripherals to the application 216 as a wireless docking environment.
The DiscoverWirelessDockingEnvironment() method 650 may represent the DiscoverWDN() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. In addition, the DiscoverWirelessDockingEnvironment() method 650 may represent the DiscoverWirelessDockingEnvironment() method 612 of FIG. 8 in that the application may invoke the DiscoverWirelessDockingEnvironment() method 612 of the WDS 214 to discover multiple peripheral functions in parallel. [0109] The WDS 214, in response, performs the service discovery procedures in conjunction with the Miracast host 210 to discover the Miracast-capable peripheral function provided by peripheral device 310. The service discovery procedures include Discover message 652 from the WDS 214 to the Miracast host 210, Miracast Discovery procedure 654, and the Results message 656 to return the results of the Discover message 652 to the WDS 214. The service discovery procedures may be substantially similar to the Discover message 502, Miracast Discovery procedure 504, and Results message 506 procedures as illustrated and described with respect to FIG. 6A. [0110] Upon receiving the Results message 656, the WDS 214 may store the received peripheral function information by creating and storing a persistent WDN that includes the peripheral function information for the discovered peripheral functions (in this example, the Miracast-capable peripheral function provided by peripheral device 310). The WDS 214 may then return a handle for the persistent WDN ("[Wireless Docking Environment]") to the application 216 using message 658. Message 658 may represent the message 624 of FIG. 8 in some instances. [0111] FIG. 10 depicts a call flow diagram for example call flows for using a previously persisted wireless docking environment that includes one or more peripheral functions, in accordance with techniques described in this disclosure.
The application 216 may read from persistent storage, from a memory of computing device 200, or otherwise obtain a handle for a WDN. To request use of the WDN, the application 216 may then provide the handle for the WDN to the WDS 214 by invoking the UseWirelessDockingEnvironment() method 630 of the WDS 214. The UseWirelessDockingEnvironment() method 630 may represent the UseWDN() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. [0112] In response, the WDS 214 establishes payload connections for the peripheral functions associated with the WDN identified by the WDN handle. As described and illustrated with respect to FIG. 5B, the WDS 214 establishes an ASP session with the ASP 324 using ASP 204 and establishes a payload connection for the application 216 to Print service 326. WDS 214 returns, using docking session message 646, a docking session ("[docking session]") by which the user application may engage the peripheral function(s) provided by the peripheral device, which may include exchanging data for the peripheral function(s) between user application 216 and the peripheral device 320. By enabling the use of a persistent WDN in this way, the WDS 214 may repeatedly establish payload connections with the peripheral functions associated with the persistent WDN without having to perform pre-association service discovery procedures and peripheral function configuration procedures, for the information otherwise exchanged from these procedures is already stored in the persistent WDN. The WDS 214 may also avoid repeatedly providing, to the appropriate communication layer of the wireless docking communications stack 201, a configuration credential for configuring the peripheral function. [0113] FIG.
11 depicts a call flow diagram for example call flows for using a previously persisted wireless docking environment that includes one or more peripheral functions, in accordance with techniques described in this disclosure. The application 216 may read from persistent storage, from a memory of computing device 200, or otherwise obtain a handle for a WDN. The application 216 may then provide the handle for the WDN to the WDS 214 by invoking the UseWirelessDockingEnvironment() method 660 of the WDS 214. The UseWirelessDockingEnvironment() method 660 may represent the UseWDN() method listed in Table 1, above, for the API 226 of the wireless docking communications stack 201 of computing device 200. In addition, UseWirelessDockingEnvironment() method 660 may represent the UseWirelessDockingEnvironment() method 630 of FIG. 10, for the persisted WDN may include multiple different peripheral functions usable by a single invocation of the UseWirelessDockingEnvironment() method of the WDS 214. [0114] In response, the WDS 214 establishes payload connections for the peripheral functions associated with the WDN identified by the WDN handle. As described and illustrated with respect to FIG. 6B, the WDS 214 establishes a Miracast connection for the application 216 to Miracast sink 314. WDS 214 returns, using docking session message 670, a docking session ("[docking session]") by which the user application may engage the peripheral function(s) provided by the peripheral device, which may include exchanging data for the peripheral function(s) between user application 216 and the peripheral device 310.
By enabling the use of a persistent WDN in this way, the WDS 214 may repeatedly establish payload connections with the peripheral functions associated with the persistent WDN without having to perform pre-association service discovery procedures and peripheral function configuration procedures, for the information otherwise exchanged from these procedures is already stored in the persistent WDN. The WDS 214 may also avoid repeatedly providing, to the appropriate communication layer of the wireless docking communications stack 201, a configuration credential for configuring the peripheral function. [0115] FIG. 12 is a block diagram illustrating an example instance of a computing device 200 operating according to techniques described in this disclosure. FIG. 12 illustrates only one particular example of computing device 200, and other examples of computing device 200 may be used in other instances. Although shown in FIG. 12 as a stand-alone computing device 200 for purposes of example, a computing device may be any component or system that includes one or more processors or other suitable computing environment for executing software instructions and, for example, need not necessarily include one or more elements shown in FIG. 12 (e.g., input devices 704, user interface devices 710, output devices 712). [0116] As shown in the specific example of FIG. 12, computing device 700 includes one or more processors 702, one or more input devices 704, one or more communication units 706, one or more output devices 712, one or more storage devices 708, and user interface (UI) device 710, and wireless communication module 726. Computing device 700, in one example, further includes wireless docking communications stack 718, authorization module 720, one or more applications 722, and operating system 716 that are executable by computing device 700. 
Each of components 702, 704, 706, 708, 710, 712, and 726 is coupled (physically, communicatively, and/or operatively) for inter-component communications. In some examples, communication channels 714 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. As one example in FIG. 12, components 702, 704, 706, 708, 710, 712, and 726 may be coupled by one or more communication channels 714. Wireless docking communications stack 718, authorization module 720, and one or more applications 722 may also communicate information with one another as well as with other components in computing device 700. While illustrated as separate modules, any one or more of modules 718 or 720 may be implemented as part of any of applications 722. [0117] Processors 702, in one example, are configured to implement functionality and/or process instructions for execution within computing device 700. For example, processors 702 may be capable of processing instructions stored in storage device 708. Examples of processors 702 may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry. [0118] One or more storage devices 708 may be configured to store information within computing device 700 during operation. Storage device 708, in some examples, is described as a computer-readable storage medium. In some examples, storage device 708 is a temporary memory, meaning that a primary purpose of storage device 708 is not long-term storage. Storage device 708, in some examples, is described as a volatile memory, meaning that storage device 708 does not maintain stored contents when the computer is turned off.
Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, storage device 708 is used to store program instructions for execution by processors 702. Storage device 708, in one example, is used by software or applications running on computing device 700 to temporarily store information during program execution. [0119] Storage devices 708, in some examples, also include one or more computer-readable storage media. Storage devices 708 may be configured to store larger amounts of information than volatile memory. Storage devices 708 may further be configured for long-term storage of information. In some examples, storage devices 708 include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. [0120] Computing device 700, in some examples, also includes one or more communication units 706. Computing device 700, in one example, utilizes communication unit 706 to communicate with external devices via one or more networks, such as one or more wireless networks. Communication unit 706 may be a network interface card, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. Other examples of such network interfaces may include Bluetooth, 3G, and Wi-Fi radios in computing devices, as well as Universal Serial Bus (USB). In some examples, computing device 700 utilizes communication unit 706 to wirelessly communicate with an external device such as a server. [0121] In addition, the computing device 700 may include wireless communication module 726.
As described herein, wireless communication module 726 may be active hardware that is configured to communicate with other wireless communication devices. These wireless communication devices may operate according to Bluetooth, Ultra-Wideband radio, Wi-Fi, or other similar protocols. In some examples, wireless communication module 726 may be an external hardware module that is coupled with computing device 700 via a bus (such as via a Universal Serial Bus (USB) port). Wireless communication module 726, in some examples, may also include software which may, in some examples, be independent from operating system 716, and which may, in some other examples, be a sub-routine of operating system 716. [0122] Computing device 700, in one example, also includes one or more input devices 704. Input device 704, in some examples, is configured to receive input from a user through tactile, audio, or video feedback. Examples of input device 704 include a presence-sensitive display, a mouse, a keyboard, a voice responsive system, a video camera, a microphone, or any other type of device for detecting a command from a user. [0123] One or more output devices 712 may also be included in computing device 700. Output device 712, in some examples, is configured to provide output to a user using tactile, audio, or video stimuli. Output device 712, in one example, includes a presence-sensitive display, a sound card, a video graphics adapter card, or any other type of device for converting a signal into an appropriate form understandable to humans or machines. Additional examples of output device 712 include a speaker, a cathode ray tube (CRT) monitor, a liquid crystal display (LCD), or any other type of device that can generate intelligible output to a user. In some examples, user interface (UI) device 710 may include functionality of input device 704 and/or output device 712. [0124] Computing device 700 may include operating system 716.
Operating system 716, in some examples, controls the operation of components of computing device 700. For example, operating system 716, in one example, facilitates the communication of wireless docking communications stack 718 and application 722 with processors 702, communication unit 706, storage device 708, input device 704, user interface device 710, wireless communication module 726, and output device 712. Wireless docking communications stack 718 and application 722 may also include program instructions and/or data that are executable by computing device 700. As one example, modules 718, 720, and 722 may include instructions that cause computing device 700 to perform one or more of the operations and actions described in the present disclosure. Wireless docking communications stack 718 and application 722 may represent wireless docking communications stack 201 and application 216 of FIG. 3, for example. [0125] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
A computer program product may include a computer-readable medium. [0126] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0127] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. [0128] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. [0129] Various examples have been described. These and other examples are within the scope of the following claims. |
Apparatuses, systems and methods associated with electrical fast transient tolerant input/output (I/O) communication (e.g., universal serial bus (USB)) design are disclosed herein. In embodiments, an apparatus to mount an integrated circuit (IC) package, may include a printed circuit board (PCB), a plurality of pogo pins, and a mounting mechanism. The plurality of pogo pins may be mounted to electrical contacts of the PCB, the plurality of pogo pins may be coupled to the electrical contacts at first ends of the plurality of pogo pins and may be to couple to the IC package at second ends of the plurality of pogo pins. The mounting mechanism may position the IC package on the second ends of the plurality of pogo pins. Other embodiments may be described and/or claimed. |
Claims

What is claimed is:

1. An apparatus to mount an integrated circuit (IC) package, comprising:
a printed circuit board (PCB);
a plurality of pogo pins mounted to electrical contacts of the PCB, the plurality of pogo pins coupled to the electrical contacts at first ends of the plurality of pogo pins and to couple to the IC package at second ends of the plurality of pogo pins; and
a mounting mechanism to position the IC package on the second ends of the plurality of pogo pins.

2. The apparatus of claim 1, wherein the mounting mechanism maintains a compression force between the IC package and the PCB, the compression force sufficient to compress the plurality of pogo pins.

3. The apparatus of any of the claims 1 and 2, wherein the mounting mechanism includes:
one or more surface-mount (SMT) fixtures mounted to the PCB;
a mounting plate to position the IC package on the second ends of the plurality of pogo pins; and
one or more mounting extensions to affix the mounting plate to the one or more SMT fixtures.

4. The apparatus of claim 3, wherein the one or more mounting extensions are one or more screws, and wherein the one or more SMT fixtures each have a threaded aperture to receive the one or more screws.

5. The apparatus of claim 3, wherein the mounting plate has a recess formed in a side of the mounting plate, and wherein the recess is to receive the IC package and maintain a position of the IC package relative to the mounting plate.

6. The apparatus of any of the claims 1 and 2, wherein the mounting mechanism includes:
a mounting plate to position the IC package on the second ends of the plurality of pogo pins;
a back plate positioned on an opposite side of the PCB from the mounting plate; and
one or more mounting extensions to affix the mounting plate to the back plate and maintain a distance between the mounting plate and the back plate.

7.
The apparatus of claim 6, wherein the mounting plate has a recess formed in a side of the mounting plate, and wherein the recess is to receive the IC package and maintain a position of the IC package relative to the mounting plate.

8. The apparatus of any of the claims 1 and 2, wherein the IC package includes a ball grid array (BGA), and wherein solder balls of the BGA are positioned on the second ends of the plurality of pogo pins by the mounting mechanism.

9. The apparatus of any of the claims 1 and 2, wherein the second ends of the plurality of pogo pins are coated with a non-corrosive material.

10. The apparatus of claim 9, wherein the non-corrosive material is a non-gold, electrically conductive material.

11. The apparatus of any of the claims 1 and 2, wherein:
the IC package includes a ball grid array (BGA); and
the apparatus further comprises a land grid array (LGA) interposer to electrically couple the IC package to the plurality of pogo pins, the LGA interposer positioned between the IC package and the plurality of pogo pins, wherein electrical contacts of the IC package are coupled to a first set of electrical contacts on a first side of the LGA interposer via the BGA, and wherein a second set of electrical contacts on a second side of the LGA interposer, opposite the first side, is coupled to the second ends of the plurality of pogo pins.

12. A computer system, comprising:
a printed circuit board (PCB);
a plurality of pogo pins mounted to the PCB, first ends of the plurality of pogo pins coupled to electrical contacts of the PCB;
an integrated circuit (IC) package positioned on the plurality of pogo pins, electrical contacts of the IC package coupled to second ends of the plurality of pogo pins; and
a mounting mechanism to mount the IC package to the PCB with the plurality of pogo pins located between the IC package and the PCB, the mounting mechanism to compress the plurality of pogo pins between the IC package and the PCB.

13.
The computer system of claim 12, wherein the mounting mechanism includes:
one or more surface mount (SMT) fixtures affixed to the PCB;
a mounting plate to position the IC package on the plurality of pogo pins; and
one or more mounting extensions that extend between the one or more SMT fixtures and the mounting plate, the one or more mounting extensions to maintain a position of the mounting plate.

14. The computer system of claim 13, wherein the one or more mounting extensions include one or more screws, and wherein the one or more SMT fixtures each include a threaded aperture to receive the one or more screws.

15. The computer system of claim 13, wherein the mounting plate has a recess formed in a side of the mounting plate located toward the PCB, wherein the IC package resides, at least partially, within the recess.

16. The computer system of any of claims 12-15, wherein the mounting mechanism includes:
a back plate positioned on a side of the PCB opposite from the IC package;
a mounting plate positioned on a side of the IC package opposite from the PCB; and
one or more mounting extensions to affix the mounting plate to the back plate and maintain a distance between the back plate and the mounting plate.

17. The computer system of claim 16, wherein the mounting plate has a recess formed in a side of the mounting plate located toward the PCB, wherein the IC package resides, at least partially, within the recess.

18. The computer system of claim 16, wherein the one or more mounting extensions extend through one or more apertures formed in the PCB.

19.
The computer system of any of claims 12-15, wherein the IC package includes:
a semiconductor package with a ball grid array (BGA) on a side of the semiconductor package; and
a land grid array (LGA) interposer located on the side of the semiconductor package, the LGA interposer coupled, on a first side of the LGA interposer, to the semiconductor package by the BGA and coupled, on a second side of the LGA interposer opposite the first side, to the plurality of pogo pins, wherein the LGA interposer is to convey signals between the semiconductor package and the plurality of pogo pins.

20. A method of coupling an integrated circuit (IC) package to a printed circuit board (PCB), comprising:
coupling first ends of a plurality of pogo pins to electrical contacts of the PCB;
positioning the IC package with electrical contacts of the IC package aligned with second ends of the plurality of pogo pins; and
applying a compression force to one or both of the PCB and the IC package, the compression force compressing the plurality of pogo pins between the electrical contacts of the PCB and the electrical contacts of the IC package.

21. The method of claim 20, wherein coupling the first ends of the plurality of pogo pins to the electrical contacts of the PCB includes soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB.

22. The method of any of claims 20 and 21, wherein coupling the first ends of the plurality of pogo pins to the electrical contacts of the PCB includes:
positioning, using a carrier body, the first ends of the plurality of pogo pins on the electrical contacts of the PCB;
soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB while the plurality of pogo pins are located within the carrier body; and
removing the carrier body from the pogo pins after soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB.

23.
The method of any of claims 20 and 21, wherein positioning the IC package includes positioning the IC package in a cavity of a mounting plate of a mounting mechanism, wherein the mounting mechanism aligns the electrical contacts of the IC package with the second ends of the pogo pins, and wherein the mounting mechanism applies the compression force.

24. The method of claim 23, further comprising:
affixing one or more surface-mount (SMT) fixtures to a surface of the PCB; and
attaching the mounting plate to the one or more SMT fixtures via one or more mounting extensions of the mounting mechanism.

25. The method of claim 23, further comprising:
positioning a back plate on a first side of the PCB;
aligning the mounting plate with the back plate on a second side of the PCB, the second side opposite the first side; and
affixing the mounting plate to the back plate via one or more mounting extensions, the one or more mounting extensions passing through apertures formed in the PCB.
POGO PIN INTEGRATED CIRCUIT PACKAGE MOUNT

Related Application

This application claims priority to U.S. Patent Application 15/231,018, entitled "POGO PIN INTEGRATED CIRCUIT PACKAGE MOUNT," filed August 8, 2016.

Technical Field

The present disclosure relates to the field of electronic circuits. More particularly, the present disclosure relates to integrated circuit package mount design for printed circuit boards.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

As computer technology continues to advance, legacy integrated circuit (IC) package mounting sockets will likely be unable to support the speeds of future generations of IC packages. The speeds of legacy IC packages are currently approaching the maximum supportable by the legacy IC package mounting sockets and will soon surpass this threshold.

In order to address this issue, many computer products have transitioned to IC packages with a ball grid array (BGA). However, in many industries (including mobile devices), sockets for IC packages with a BGA that meet the application specifications are not available. Accordingly, rather than mounting the IC packages with a BGA via sockets, the IC packages are soldered down to a printed circuit board (PCB). Repetitive soldering of the IC packages to the PCB may cause damage to the IC packages. Further, the soldered IC packages are difficult to debug and, if they fail, are difficult to remove and replace.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements.
Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 illustrates an example pogo pin integrated circuit package mount via surface-mount technology, according to various embodiments.
Figure 2 illustrates an example process of generating the pogo pin integrated circuit package mount of Figure 1, according to various embodiments.
Figure 3 illustrates example pogo pin mounting, according to various embodiments.
Figure 4 illustrates an example carrier body that may be utilized for pogo pin mounting, according to various embodiments.
Figure 5 illustrates example surface mount fixture mounting, according to various embodiments.
Figure 6 illustrates example integrated circuit package placement within the mounting plate, according to various embodiments.
Figure 7 illustrates example integrated circuit package mounting to a printed circuit board, according to various embodiments.
Figure 8 illustrates an example pogo pin integrated circuit package mount utilizing a through aperture design, according to various embodiments.
Figure 9 illustrates an example process of generating the pogo pin integrated circuit package mount of Figure 8, according to various embodiments.
Figure 10 illustrates an example pogo pin integrated circuit package mount with land grid array interposer, according to various embodiments.
Figure 11 illustrates example pogo pins that may be implemented in the pogo pin integrated circuit package mounts described herein, according to various embodiments.
Figure 12 illustrates an example computing device that may employ the apparatuses and/or methods described herein.

Detailed Description

Apparatuses, systems and methods associated with electrical fast transient tolerant input/output (I/O) communication (e.g., universal serial bus (USB)) design are disclosed herein.
In embodiments, an apparatus to mount an integrated circuit (IC) package may include a printed circuit board (PCB), a plurality of pogo pins, and a mounting mechanism. The plurality of pogo pins may be mounted to electrical contacts of the PCB; the plurality of pogo pins may be coupled to the electrical contacts at first ends of the plurality of pogo pins and may couple to the IC package at second ends of the plurality of pogo pins. The mounting mechanism may position the IC package on the second ends of the plurality of pogo pins.

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that like elements disclosed below are indicated by like reference numbers in the drawings.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment.
Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Figure 1 illustrates an example pogo pin integrated circuit (IC) package mount 100 via surface-mount technology, according to various embodiments. The pogo pin IC package mount 100 may mount an IC package 102 to a printed circuit board (PCB) 104. The IC package 102 may include a through-hole package (such as a single in-line package and a dual in-line package), a surface mount package (such as a column grid array package and a land grid array package), a chip carrier package (such as a bump chip carrier), a pin grid array package, a flat package (such as a dual flat package, a quad flat package, and a no-lead flat package), a small outline package, a chip scale package, and/or a ball grid array package.
Further, the IC package 102 may include a computer processor unit, a system on chip, a memory device (such as a dynamic random-access memory, a flash memory, and a read-only memory), a controller, a communication chip, digital logic gates, multiplexers, flip flops, amplifiers (such as operational amplifiers and audio amplifiers), converters, comparators, timers, transistors, and/or switches.The IC package mount 100 may include a plurality of pogo pins 106. The plurality of pogo pins 106 may be mounted to a side of the PCB 104. Each of the plurality of pogo pins 106 may be soldered on a first end to a corresponding electrical contact of the PCB 104 and may be electrically coupled to the corresponding electrical contact. Each of the plurality of pogo pins 106 may be separated from the other pogo pins of the plurality of pogo pins 106 by air. The air, having a low dielectric constant, may provide insulating qualities. The separation by air may prevent and/or limit cross talk between the pogo pins within the plurality of pogo pins 106. In some embodiments, the pogo pins may be separated by other materials with low dielectric constants.A second end of each of the plurality of pogo pins 106 may be electrically coupled to a corresponding electrical contact of the IC package 102. The second end of each of the plurality of pogo pins 106 may physically contact the corresponding electrical contact and contact between the second end and the corresponding electrical contact may be maintained by a compression force applied to one or both of the PCB 104 and the IC package 102 that urges the PCB 104 and the IC package 102 together. The plurality of pogo pins 106 may electrically couple the electrical contacts 110 of the IC package 102 with the electrical contacts 112 of the PCB 104. 
Accordingly, the IC package 102 may be electrically coupled to the PCB 104 via the plurality of pogo pins 106 without soldering of the IC package 102, which allows for easy interchange of the IC package 102 without soldering and de-soldering. In some embodiments, the second ends of the plurality of pogo pins 106 may be coated with a non-corrosive material. The non-corrosive material may include a non-gold, electrically conductive material, such as C3 coating, tin, nickel, silver, palladium, tin alloy, nickel alloy, silver alloy, palladium alloy, or some combination thereof. Coating the second ends of the plurality of pogo pins 106 with the non-corrosive material may prevent intermetallic formations, which could otherwise form between the second ends of the plurality of pogo pins 106 and the IC package 102 and affix the IC package 102 to the second ends of the plurality of pogo pins 106. In some embodiments, the IC package 102 may include a ball grid array (BGA) 108 affixed to the electrical contacts 110 of the IC package 102. In these embodiments, the solder balls of the BGA 108 may physically contact the second ends of the plurality of pogo pins 106 when the IC package 102 is positioned on the plurality of pogo pins 106. The BGA 108 may remain in solidified form without being soldered to the plurality of pogo pins 106 while providing electrical coupling between the electrical contacts 110 of the IC package 102 and the plurality of pogo pins 106. The IC package mount 100 may further include a mounting mechanism 114. The mounting mechanism 114 may position the IC package 102 on the plurality of pogo pins 106 and may maintain the position of the IC package 102 relative to the pogo pins 106 and the PCB 104. The mounting mechanism 114 may apply the compression force to one or both of the PCB 104 and the IC package 102 that may maintain the electrical contact between the electrical contacts 110 of the IC package 102 and the second ends of the plurality of pogo pins 106.
The compression force applied by the mounting mechanism 114 may further cause the plurality of pogo pins 106 to be compressed, which may electrically couple the first ends of the plurality of pogo pins 106 to the second ends of the plurality of pogo pins 106. The mounting mechanism 114 may include a mounting plate 116, one or more surface mount (SMT) fixtures 118, and one or more mounting extensions 120. The mounting plate 116 may be positioned on a side of the IC package 102 opposite from the plurality of pogo pins 106 and the PCB 104. The mounting plate 116 may include a cavity 122, and the IC package 102 may be positioned, at least partially, within the cavity 122. The mounting plate 116, via the IC package 102 being positioned, at least partially, within the cavity 122, may maintain a position of the IC package 102 relative to a length of the mounting plate 116. In other embodiments, the mounting plate 116 may not include the cavity 122, but may include a different means of maintaining a position of the IC package 102 relative to the length of the mounting plate 116, such as an adhesive to temporarily affix the IC package 102 to the side of the mounting plate 116, extrusions to position the IC package 102, or some combination thereof. The one or more SMT fixtures 118 may be mounted to the PCB 104. The SMT fixtures 118 may be mounted to the PCB 104 by an epoxy, an adhesive, soldering the SMT fixtures 118 to a metallic feature on a surface of the PCB 104, or some combination thereof. Each of the SMT fixtures 118 may include a threaded aperture, formed within each of the SMT fixtures 118, wherein the threaded aperture extends perpendicularly to the PCB 104. The mounting extensions 120 may affix the mounting plate 116 to the SMT fixtures 118 and may maintain a relative position between the mounting plate 116 and the PCB 104.
The relative position may be based on a length of the plurality of pogo pins 106, the thickness of the IC package 102, diameters of the solder balls in the BGA 108, or some combination thereof. The mounting extensions 120 may extend through apertures 124 formed in the mounting plate 116 and into the SMT fixtures 118.

As the mounting extensions 120 are inserted into the SMT fixtures 118, the mounting plate 116 may align the IC package 102 with the plurality of pogo pins 106. The mounting plate 116 may align the electrical contacts 110 of the IC package 102 with the second ends of the plurality of pogo pins 106. As the mounting extensions 120 are inserted into the SMT fixtures 118, the IC package 102 may move toward the PCB 104 in a direction perpendicular to the surface of the PCB 104 to which the plurality of pogo pins 106 are mounted and may, therefore, minimize any force applied to the plurality of pogo pins 106 that is not in the direction of compression of the plurality of pogo pins 106, which could damage the plurality of pogo pins 106.

Further, as the mounting extensions 120 are inserted into the SMT fixtures 118, the mounting extensions 120 may generate a compression force between the mounting plate 116 and the PCB 104, urging the mounting plate 116 and the PCB 104 toward each other. The compression force may be equal to or greater than a determined amount of compression force to cause the plurality of pogo pins 106 to be compressed. An amount of the compression force may be determined based on a number of pogo pins in the plurality of pogo pins 106, a type of the plurality of pogo pins 106, or some combination thereof. The amount of the compression force may be equal to the number of pogo pins in the plurality of pogo pins 106 multiplied by a compression force of a single pogo pin in the range of 0.4 newtons to 2 newtons.
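The sizing rule above (total force equal to the pin count multiplied by a per-pin compression force of roughly 0.4 N to 2 N) can be sketched as a short calculation. This is a minimal illustration only; the function name, the range check, and the example pin count are assumptions, not values from the source.

```python
def required_compression_force_newtons(num_pins: int, per_pin_force_newtons: float) -> float:
    """Minimum mounting force to hold every pogo pin in its compressed state.

    The per-pin force is expected in the 0.4 N to 2 N range cited above;
    values outside that range are rejected as likely input errors.
    """
    if not 0.4 <= per_pin_force_newtons <= 2.0:
        raise ValueError("per-pin compression force expected between 0.4 N and 2 N")
    return num_pins * per_pin_force_newtons

# e.g. a hypothetical 400-pin array at 1 N per pin needs at least 400 N
print(required_compression_force_newtons(400, 1.0))  # → 400.0
```

A mounting mechanism sized this way supplies at least the summed per-pin force, so every pin in the array reaches its compressed, conducting state simultaneously.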
The compression force may be translated from the mounting plate 116 to the IC package 102 and may provide the compression force between the IC package 102 and the PCB 104. The compression force between the IC package 102 and the PCB 104 may cause the plurality of pogo pins 106 to compress, electrically coupling the first ends of the plurality of pogo pins 106 to the second ends of the plurality of pogo pins 106.

The mounting extensions 120 may include one or more screws, wherein the screws may be installed into the threaded apertures of the SMT fixtures 118 and the heads of the screws may contact the mounting plate 116 and maintain the position of the mounting plate 116. The length of the mounting extensions 120 may be based on the length of the plurality of pogo pins 106, the distance of available compression between the first ends and the second ends of the plurality of pogo pins 106, a thickness of the IC package 102, a thickness of the mounting plate 116, or some combination thereof. In some embodiments, the mounting extensions 120 may be a part of the mounting plate 116 and may extend into the SMT fixtures 118. Further, in some embodiments, the mounting extensions 120 may be unthreaded and may be held within the SMT fixtures 118 by another means of affixture, such as frictional force, a clamping mechanism, or some combination thereof.

Figure 2 illustrates an example process of generating the pogo pin integrated circuit package mount 100 of Figure 1, according to various embodiments. In 202, the plurality of pogo pins 106 are mounted to the PCB 104, as is illustrated in Figure 3. The plurality of pogo pins 106 may be positioned within a carrier body 302 during the mounting process. The carrier body 302 may be used for aligning the plurality of pogo pins 106 with the electrical contacts 112 of the PCB 104 and may maintain the positions of the plurality of pogo pins 106 during the mounting process.
Once aligned, first ends of the plurality of pogo pins 106 may be soldered to the electrical contacts 112 of the PCB 104. After the plurality of pogo pins 106 are soldered to the electrical contacts 112, the carrier body 302 may be removed from the plurality of pogo pins 106.

Figure 4 illustrates an example carrier body 302 that may be utilized for pogo pin mounting in 202, according to various embodiments. The carrier body 302 may include a plurality of apertures 402. The plurality of apertures 402 may correspond to a layout of the electrical contacts 112 of the PCB 104 (Fig. 3), with one aperture corresponding to each of the electrical contacts 112, such that when the carrier body 302 is aligned with the PCB 104 (Fig. 3) each of the apertures of the plurality of apertures 402 is aligned with a corresponding electrical contact of the electrical contacts 112.

The plurality of pogo pins 106 may be positioned within the plurality of apertures 402. The plurality of pogo pins 106 may be maintained in the plurality of apertures 402 by frictional force between the plurality of pogo pins 106 and the carrier body 302, an adhesive applied to the walls of the plurality of apertures 402, or some combination thereof. The plurality of pogo pins 106 may be removed from the plurality of apertures 402 via application of a removal force applied to the plurality of pogo pins 106 and/or the carrier body 302 that is large enough to overcome the frictional force and/or the retaining force generated by the adhesive.
The carrier body 302 may be designed such that the frictional force and/or the retaining force generated by the adhesive is less than an affixation force of solder, such that the plurality of pogo pins 106 may be removed from the carrier body 302, once the plurality of pogo pins 106 are soldered to the PCB 104, via application of the removal force without any of the pogo pins 106 being separated from the PCB 104.

In some embodiments, the plurality of pogo pins 106 may be individually soldered to the PCB 104 without the use of the carrier body 302. Each of the individual pogo pins may be aligned with a corresponding electrical contact of the PCB 104 and soldered to the corresponding electrical contact. Further, in some embodiments, the plurality of pogo pins 106 may be packaged on a tape and reel and may be soldered to the PCB 104 by a machine, such as an automatic pick and place machine.

In 204, the one or more SMT fixtures 118 may be mounted to the PCB 104, as illustrated in Figure 5. The SMT fixtures 118 may be mounted to the PCB 104 by aligning each of the SMT fixtures 118 with a corresponding metallic feature on the surface of the PCB 104 and soldering each of the SMT fixtures 118 to the corresponding metallic feature. In some embodiments, an epoxy may be applied to the SMT fixtures 118, the SMT fixtures 118 may be positioned on the surface of the PCB 104, and the epoxy may be cured to affix the SMT fixtures 118 to the PCB 104. The epoxy may be cured by the application of heat, light, chemicals, or some combination thereof, to the epoxy. Further, in some embodiments, the SMT fixtures 118 may be affixed to the PCB 104 by an adhesive, such as double-sided tape.

The SMT fixtures 118 may be positioned on the PCB 104 to correspond with a desired position of the mounting plate 116 for aligning the IC package 102 with the plurality of pogo pins 106.
The SMT fixtures 118 may be mounted to the PCB 104 such that the SMT fixtures 118 align with the apertures 124 formed in the mounting plate 116 when the IC package 102 is aligned with the plurality of pogo pins 106.

In 206, the IC package 102 may be positioned on the mounting plate 116, as illustrated in Figure 6. The IC package 102 may be positioned, at least partially, within the cavity 122 of the mounting plate 116. A side of the IC package 102 opposite the electrical contacts 110 may be orientated toward the mounting plate 116 and may abut the mounting plate 116 when positioned, at least partially, within the cavity 122.

In some embodiments, the mounting plate 116 may not include the cavity 122. In these embodiments, the IC package 102 may be aligned with markings on the mounting plate 116 and may be positioned on the mounting plate 116 using an adhesive. Further, in some embodiments, the mounting plate 116 may include extrusions. In these embodiments, the IC package 102 may be aligned between the extrusions of the mounting plate 116.

In 208, the mounting plate 116 may be attached to the PCB 104 via the one or more mounting extensions 120, as illustrated in Figure 7. The mounting plate 116 may be aligned with the PCB 104, wherein the mounting plate 116 may position the IC package 102 on the second ends of the plurality of pogo pins 106 when aligned with the PCB 104. The mounting extensions 120 may extend through the apertures 124 of the mounting plate 116 and into the SMT fixtures 118 mounted to the PCB 104. As the mounting extensions 120 are inserted into the SMT fixtures 118, a compression force may be generated, by the mounting plate 116, between the IC package 102 and the PCB 104, urging the IC package 102 and the PCB 104 toward each other. The mounting extensions 120 may become affixed to the SMT fixtures 118 and may maintain a position of the mounting plate 116 relative to the PCB 104.
The compression force, when the mounting extensions 120 are affixed to the SMT fixtures 118, may be equal to or greater than a compression force for maintaining the plurality of pogo pins 106 in a compressed state.

Figure 8 illustrates an example pogo pin IC package mount 800 utilizing a through-aperture design, according to various embodiments. The pogo pin IC package mount 800 may include one or more of the features of the pogo pin IC package mount 100 (Fig. 1), including the features of the plurality of pogo pins 106, the mounting plate 116, the mounting extensions 120, or some combination thereof. The IC package mount 800 may mount an IC package 802 to a PCB 804. The IC package 802 may include one or more of the features of the IC package 102 (Fig. 1) and the PCB 804 may include one or more of the features of the PCB 104 (Fig. 1).

The IC package mount 800 may include a plurality of pogo pins 806 mounted at first ends of the plurality of pogo pins 806 to electrical contacts 812 of the PCB 804. The plurality of pogo pins 806 may include one or more of the features of the plurality of pogo pins 106. Further, the plurality of pogo pins 806 may be mounted to the PCB 804 by one or more of the processes and/or means for mounting of the plurality of pogo pins 106 to the electrical contacts 112 (Fig. 1), including soldering the first ends of the plurality of pogo pins 806 to the electrical contacts 812. Accordingly, the first ends of the plurality of pogo pins 806 may be electrically coupled to the electrical contacts 812 of the PCB 804.

The pogo pin IC package mount 800 may include a mounting mechanism 814 for mounting the IC package 802 on second ends of the plurality of pogo pins 806. The mounting mechanism 814 may include one or more of the features of the mounting mechanism 114 (Fig. 1).

The mounting mechanism 814 may include a mounting plate 816. The mounting plate 816 may include one or more of the features of the mounting plate 116 (Fig. 1).
The mounting plate 816 may include a cavity 822 formed in one of the sides of the mounting plate 816. The IC package 802 may be positioned, at least partially, within the cavity 822. The cavity 822 may be utilized for maintaining the IC package 802 in a position relative to the mounting plate 816 and may be utilized for alignment of the IC package 802 with the plurality of pogo pins 806. Further, the mounting plate 816 may align electrical contacts 810 of the IC package 802 with second ends of the plurality of pogo pins 806.

In some embodiments, the mounting plate 816 may not include the cavity 822. In these embodiments, an adhesive and/or epoxy may be applied to the side of the mounting plate 816 to maintain the IC package 802 in a position relative to the mounting plate 816 when the IC package 802 is positioned on the side of the mounting plate 816. In these embodiments, markings formed on, and/or applied to, the mounting plate 816 may be utilized for positioning the IC package 802 on the side of the mounting plate 816. Further, in some embodiments, the mounting plate 816 may include extrusions for aligning the IC package 802 on the mounting plate 816. The IC package 802 may be positioned between the extrusions.

The mounting mechanism 814 may include a back plate 826. The back plate 826 may be positioned on an opposite side of the PCB 804 from the mounting plate 816 and the IC package 802. The back plate 826 may contact a side of the PCB 804 opposite from the plurality of pogo pins 806 when the IC package 802 is positioned on the plurality of pogo pins 806.

The mounting mechanism 814 may include one or more mounting extensions 820. The mounting extensions 820 may include one or more of the features of the mounting extensions 120 (Fig. 1). The mounting extensions 820 may affix the mounting plate 816 to the back plate 826.
The mounting extensions 820 may extend through apertures 824 formed in the mounting plate 816 and apertures 828 formed in the PCB 804 into apertures 830 formed in the back plate 826. The apertures 830 formed in the back plate 826 may couple to the mounting extensions 820, affixing the mounting plate 816 to the back plate 826 via the mounting extensions 820. A length of the mounting extensions 820 may be based on a length of the plurality of pogo pins 806, the distance of available compression between the first ends and the second ends of the plurality of pogo pins 806, a thickness of the IC package 802, a thickness of the PCB 804, a thickness of the mounting plate 816, a thickness of the back plate 826, or some combination thereof.

The mounting extensions 820 may include one or more screws. Heads of the screws may engage with the mounting plate 816, preventing movement of the mounting plate 816 in a direction opposite from the back plate 826. The apertures 830 of the back plate 826 may be threaded and may receive threads of the one or more screws, affixing the screws within the apertures 830. Further, in some embodiments, the mounting extensions 820 may be unthreaded and may be held within the apertures 830 by another means of affixture, such as frictional force, a clamping mechanism, or some combination thereof.

When affixed by the mounting extensions 820, the mounting plate 816 may position the IC package 802 on the second ends of the plurality of pogo pins 806 with the electrical contacts 810 of the IC package 802 aligned with the second ends of the plurality of pogo pins 806. The electrical contacts 810 of the IC package 802 may contact the second ends of the plurality of pogo pins 806, electrically coupling the electrical contacts 810 with the second ends of the plurality of pogo pins 806.
Accordingly, the electrical contacts 810 of the IC package 802 may be electrically coupled to the second ends of the plurality of pogo pins 806 without the electrical contacts 810 being soldered, or otherwise permanently or semi-permanently affixed, to the plurality of pogo pins 806. In some embodiments, the IC package 802 may include a BGA 808 formed on the electrical contacts 810 of the IC package 802, and the solder balls of the BGA 808 may contact the second ends of the plurality of pogo pins 806, electrically coupling the electrical contacts 810 with the second ends of the plurality of pogo pins 806.

Further, when affixed by the mounting extensions 820, a compression force may be generated urging the mounting plate 816 and the back plate 826 toward each other. The compression force may be equal to or greater than a compression force to compress the plurality of pogo pins 806. An amount of the compression force may be determined based on a number of pogo pins in the plurality of pogo pins 806, a type of the plurality of pogo pins 806, or some combination thereof. This compression force may be transferred to the IC package 802 by the mounting plate 816 and to the PCB 804 by the back plate 826, and the compression force may urge the IC package 802 and the PCB 804 toward each other.

The compression force may cause the plurality of pogo pins 806, located between the IC package 802 and the PCB 804, to be compressed, which may cause the first ends of the plurality of pogo pins 806 to be electrically coupled to the second ends of the plurality of pogo pins 806.
Accordingly, the electrical contacts 810 of the IC package 802 may be electrically coupled to the electrical contacts 812 of the PCB 804 via the plurality of pogo pins 806, allowing transmission of electrical signals between the electrical contacts 810 of the IC package 802 and the electrical contacts 812 of the PCB 804.

Figure 9 illustrates an example process 900 of generating the pogo pin integrated circuit package mount 800 of Figure 8, according to various embodiments. In 902, the plurality of pogo pins 806 may be mounted to the PCB 804. The process of mounting the plurality of pogo pins 806 to the PCB 804 may include one or more of the features of 202 (Fig. 2), including soldering the plurality of pogo pins 806 to the electrical contacts 812 of the PCB 804, positioning the plurality of pogo pins 806 on the electrical contacts 812 of the PCB 804 using a carrier body (such as carrier body 302 (Fig. 3)), or some combination thereof.

In 904, the IC package 802 may be positioned on the mounting plate 816. The positioning of the IC package 802 on the mounting plate 816 may include one or more of the features of 206 (Fig. 2) related to the positioning of the IC package 102 on the mounting plate 116. The IC package 802 may be positioned, at least partially, within the cavity 822 of the mounting plate 816, with the electrical contacts 810 of the IC package 802 orientated in a direction opposite to the mounting plate 816.

In 906, the back plate 826 is positioned on the PCB 804. The back plate 826 is positioned on a side of the PCB 804 opposite to the plurality of pogo pins 806. The back plate 826 may be positioned with the apertures 830 of the back plate 826 aligned with the apertures 828 of the PCB 804.

In 908, the mounting plate 816 may be attached to the back plate 826 via the one or more mounting extensions 820.
The mounting extensions 820 may be routed through the apertures 824 formed in the mounting plate 816, through the apertures 828 formed in the PCB 804, and into the apertures 830 formed in the back plate 826. The apertures 830 of the back plate 826 may couple to the mounting extensions 820 and attach the mounting plate 816 to the back plate 826 via the mounting extensions 820. The IC package 802 may be positioned on the second ends of the plurality of pogo pins 806, and the plurality of pogo pins 806 may be in a compressed state, when the mounting plate 816 is attached to the back plate 826.

Figure 10 illustrates an example pogo pin IC package mount 1000 with a land grid array (LGA) interposer 1032, according to various embodiments. The pogo pin IC package mount 1000 may mount an IC package 1002 to a PCB 1004. The PCB 1004 may include one or more of the features of the PCB 104 (Fig. 1) and/or the PCB 804 (Fig. 8), including the electrical contacts 112 of the PCB 104, the electrical contacts 812 of the PCB 804, the apertures 828 of the PCB 804, or some combination thereof.

The IC package mount 1000 may include a plurality of pogo pins 1006. The plurality of pogo pins 1006 may include one or more of the features of the plurality of pogo pins 106 (Fig. 1) and/or the plurality of pogo pins 806 (Fig. 8). Further, the plurality of pogo pins 1006 may be mounted to the PCB 1004 by a same process as 202 (Fig. 2) and/or 902 (Fig. 9) for mounting the plurality of pogo pins 106 and/or the plurality of pogo pins 806, respectively.

The IC package mount 1000 may include a mounting mechanism 1014. The mounting mechanism 1014 may include one or more of the features of the mounting mechanism 114 (Fig. 1), including the mounting plate 116, the SMT fixtures 118, the mounting extensions 120, or some combination thereof.
The mounting mechanism 1014 may operate similarly to the mounting mechanism 114, including mounting the IC package 1002 to the plurality of pogo pins 1006, generating the compression force urging the IC package 1002 and the PCB 1004 toward each other, or some combination thereof.

The IC package 1002 may include a semiconductor package 1034 and an LGA interposer 1032. The semiconductor package 1034 may include a BGA 1008 formed on electrical contacts 1010 of the semiconductor package 1034. Solder balls of the BGA 1008 may be soldered to electrical contacts 1036 on a first side of the LGA interposer 1032, electrically coupling the electrical contacts 1010 of the semiconductor package 1034 to the electrical contacts 1036 on the first side of the LGA interposer 1032.

The LGA interposer 1032 may include an LGA 1038 formed on a second side of the LGA interposer 1032. The second side of the LGA interposer 1032 may oppose the first side of the LGA interposer 1032. The LGA interposer 1032 may electrically couple the electrical contacts 1036 on the first side of the LGA interposer 1032 to the LGA 1038 on the second side of the LGA interposer 1032. Accordingly, the electrical contacts 1010 of the semiconductor package 1034 are electrically coupled to the LGA 1038 of the LGA interposer 1032, allowing electrical signals to be transmitted between the electrical contacts 1010 and the LGA 1038.

The LGA 1038 of the LGA interposer 1032 may be aligned with the plurality of pogo pins 1006 and may contact the plurality of pogo pins 1006 when the IC package 1002 is mounted to the plurality of pogo pins 1006 by the mounting mechanism 1014.
The plurality of pogo pins 1006 may be compressed by a compression force urging the IC package 1002 and the PCB 1004 toward each other, wherein the compression force may be generated by the mounting mechanism 1014.

The compressed plurality of pogo pins 1006 may electrically couple first ends of the plurality of pogo pins 1006, electrically coupled to electrical contacts 1012 of the PCB 1004, to second ends of the plurality of pogo pins 1006, electrically coupled to the LGA 1038 of the LGA interposer 1032. Accordingly, the IC package 1002 is electrically coupled to the PCB 1004, allowing transmission of electrical signals between the IC package 1002 and the PCB 1004.

Figure 11 illustrates example pogo pins that may be implemented in the pogo pin IC package mounts described herein, according to various embodiments. A first embodiment of a pogo pin 1100 may include a plunger 1102 and a body 1104. In accordance with the embodiments of the pogo pin IC package mounts described herein, a base 1106 of the body 1104 may be referred to as a first end of the pogo pin 1100 and a tip 1108 of the plunger 1102 may be referred to as a second end of the pogo pin 1100. In some embodiments, the tip 1108 may be coated with a non-corrosive material, such as a non-gold, electrically conductive material including, but not limited to, a C3 coating.

A spring 1110 may be located within the pogo pin 1100. The spring 1110 may be positioned between the body 1104 and the plunger 1102, and may urge the plunger 1102 away from the base 1106 of the body 1104. When the spring 1110 is in an extended state, an end 1112 of the plunger 1102 may be separated from the base 1106 of the body 1104. This state may also be referred to as a non-compressed state of the pogo pin 1100.
In the non-compressed state, the plunger 1102 may be electrically isolated from the body 1104 and, accordingly, electrical signals may be prevented from transmission between the tip 1108 of the plunger 1102 and the base 1106 of the body 1104.

As a compression force is applied to the tip 1108 of the plunger 1102 and/or the base 1106 of the body 1104, urging the tip 1108 toward the base 1106, the spring 1110 may be compressed and may allow the end 1112 of the plunger 1102 to contact the base 1106. The compression force to compress the spring may be equal to or greater than a resistance force of the spring 1110 to compression. When the spring 1110 is compressed, the pogo pin 1100 may be referred to as being in a compressed state. In the compressed state, the tip 1108 of the plunger 1102 may be electrically coupled to the base 1106 of the body 1104 and may allow transmission of electrical signals between the base 1106 (i.e. the first end of the pogo pin 1100) and the tip 1108 (i.e. the second end of the pogo pin 1100).

A length of the pogo pin 1100, measured from the tip 1108 to the base 1106, may be relatively short. Having the short length may facilitate the high speeds that may be achieved by IC packages. In some embodiments, the pogo pin 1100 may have a length of less than one millimeter.

In some embodiments, the compressed state of the pogo pin 1100 may occur when the spring 1110 is compressed beyond a threshold amount of compression, the threshold amount being between the fully extended spring and the fully compressed spring (where the end 1112 contacts the base 1106). In these embodiments, the tip 1108 of the plunger 1102 may be electrically coupled to the base 1106 whenever the spring 1110 is compressed beyond the threshold amount of compression.

A second embodiment of a pogo pin 1150 may include a plunger 1152 and a body 1154. The pogo pin 1150 may include one or more of the features of the pogo pin 1100.
In accordance with the embodiments of the pogo pin IC package mounts described herein, a base 1156 of the body 1154 may be referred to as a first end of the pogo pin 1150 and a tip 1158 of the plunger 1152 may be referred to as a second end of the pogo pin 1150. In some embodiments, the tip 1158 may be coated with a non-corrosive material, such as a non-gold, electrically conductive material including, but not limited to, a C3 coating.

A spring 1160 may be located around a circumference of the pogo pin 1150. The spring 1160 may be positioned between the base 1156 of the body 1154 and the tip 1158 of the plunger 1152, and may urge the plunger 1152 away from the base 1156 of the body 1154. When the spring 1160 is in an extended state, an end 1162 of the plunger 1152 is separated from the base 1156 of the body 1154. This state may also be referred to as a non-compressed state of the pogo pin 1150. In the non-compressed state, the plunger 1152 may be electrically isolated from the body 1154 and, accordingly, electrical signals may be prevented from transmission between the tip 1158 of the plunger 1152 and the base 1156 of the body 1154.

As a compression force is applied to the tip 1158 of the plunger 1152 and/or the base 1156 of the body 1154, urging the tip 1158 toward the base 1156, the spring 1160 may be compressed and may allow the end 1162 of the plunger 1152 to contact the base 1156. The compression force to compress the spring may be equal to or greater than a resistance force of the spring 1160 to compression. When the spring 1160 is compressed, the pogo pin 1150 may be referred to as being in a compressed state. In the compressed state, the tip 1158 of the plunger 1152 may be electrically coupled to the base 1156 of the body 1154 and may allow transmission of electrical signals between the base 1156 (i.e. the first end of the pogo pin 1150) and the tip 1158 (i.e.
the second end of the pogo pin 1150).

In some embodiments, the compressed state of the pogo pin 1150 may occur when the spring 1160 compresses beyond a threshold amount of compression, the threshold amount being between the fully extended spring and the fully compressed spring (where the end 1162 contacts the base 1156). In these embodiments, the tip 1158 of the plunger 1152 may be electrically coupled to the base 1156 whenever the spring 1160 is compressed beyond the threshold amount of compression.

A length of the pogo pin 1150, measured from the tip 1158 to the base 1156, may be relatively short. Having the short length may facilitate the high speeds that may be achieved by IC packages. In some embodiments, the pogo pin 1150 may have a length of less than one millimeter.

While various example embodiments of pogo pins are illustrated and described herein, it is to be understood that the pogo pins that may be implemented in the pogo pin IC package mounts described herein are not limited to these embodiments and may include any embodiment of a pogo pin as understood by one having ordinary skill in the art. The pogo pins may be implemented in the plurality of pogo pins 106 (Fig. 1), the plurality of pogo pins 806 (Fig. 8), and/or the plurality of pogo pins 1006 (Fig. 10).

Figure 12 illustrates an example computer device 1200 that may employ the apparatuses and/or methods described herein (e.g., the pogo pin IC package mount 100 (Fig. 1), the pogo pin IC package mount 800 (Fig. 8), and/or the pogo pin IC package mount 1000 (Fig. 10)), in accordance with various embodiments. As shown, computer device 1200 may include a number of components, such as one or more processor(s) 1204 (one shown) and at least one communication chip 1206. In various embodiments, the one or more processor(s) 1204 each may include one or more processor cores. In various embodiments, the at least one communication chip 1206 may be physically and electrically coupled to the one or more processor(s) 1204.
In further implementations, the communication chip 1206 may be part of the one or more processor(s) 1204. In various embodiments, computing device 1200 may include a printed circuit board (PCB) 1202. For these embodiments, the one or more processor(s) 1204 and communication chip 1206 may be disposed thereon. In alternate embodiments, the various components may be coupled without the employment of PCB 1202.

Depending on its applications, computer device 1200 may include other components that may or may not be physically and electrically coupled to the PCB 1202. These other components include, but are not limited to, memory controller 1226, volatile memory (e.g., dynamic random access memory (DRAM) 1220), non-volatile memory such as read only memory (ROM) 1224, flash memory 1222, storage device 1254 (e.g., a hard-disk drive (HDD)), an I/O controller 1241, a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 1230, one or more antennas 1228, a display (not shown), a touch screen display 1232, a touch screen controller 1246, a battery 1236, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 1240, a compass 1242, an accelerometer (not shown), a gyroscope (not shown), a speaker 1250, a camera 1252, a mass storage device (such as a hard disk drive, a solid state drive, compact disk (CD), or digital versatile disk (DVD)) (not shown), and so forth.

In some embodiments, the one or more processor(s) 1204, flash memory 1222, and/or storage device 1254 may include associated firmware (not shown) storing programming instructions configured to enable computer device 1200, in response to execution of the programming instructions by one or more processor(s) 1204, to practice all or selected aspects of the methods described herein.
In various embodiments, these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor(s) 1204, flash memory 1222, or storage device 1254.

In various embodiments, one or more of the pogo pin IC package mount 100 (Fig. 1), the pogo pin IC package mount 800 (Fig. 8), and/or the pogo pin IC package mount 1000 (Fig. 10) may be utilized for mounting components of the computer device 1200 to the printed circuit board 1202. For example, one or more of the pogo pin IC package mount 100 (Fig. 1), the pogo pin IC package mount 800 (Fig. 8), and/or the pogo pin IC package mount 1000 (Fig. 10) may be utilized for mounting the processor(s) 1204, the DRAM 1220, the flash memory 1222, the ROM 1224, the GPS device 1240, the compass 1242, the communication chip 1206, the memory controller 1226, the I/O controller 1241, the graphics processor 1230, the storage device 1254, the touch screen controller 1246, or some combination thereof, to the printed circuit board 1202.

The communication chips 1206 may enable wired and/or wireless communications for the transfer of data to and from the computer device 1200. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
The communication chip 1206 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computer device 1200 may include a plurality of communication chips 1206. For instance, a first communication chip 1206 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1206 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

In various implementations, the computer device 1200 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, an appliance, a portable music player, or a digital video recorder.
In further implementations, the computer device 1200 may be any other electronic device that processes data.

Example 1 may include an apparatus to mount an integrated circuit (IC) package, comprising a printed circuit board (PCB), a plurality of pogo pins mounted to electrical contacts of the PCB, the plurality of pogo pins coupled to the electrical contacts at first ends of the plurality of pogo pins and to couple to the IC package at second ends of the plurality of pogo pins, and a mounting mechanism to position the IC package on the second ends of the plurality of pogo pins.

Example 2 may include the apparatus of example 1, wherein the mounting mechanism maintains a compression force between the IC package and the PCB, the compression force sufficient to compress the plurality of pogo pins.

Example 3 may include the apparatus of any of the examples 1 and 2, wherein the mounting mechanism includes one or more surface-mount (SMT) fixtures mounted to the PCB, a mounting plate to position the IC package on the second ends of the plurality of pogo pins, and one or more mounting extensions to affix the mounting plate to the one or more SMT fixtures.

Example 4 may include the apparatus of example 3, wherein the one or more mounting extensions are one or more screws, and wherein the one or more SMT fixtures each have a threaded aperture to receive the one or more screws.

Example 5 may include the apparatus of example 3, wherein the mounting plate has a recess formed in a side of the mounting plate, and wherein the recess is to receive the IC package and maintain a position of the IC package relative to the mounting plate.

Example 6 may include the apparatus of any of the examples 1 and 2, wherein the mounting mechanism includes a mounting plate to position the IC package on the second ends of the plurality of pogo pins, a back plate positioned on an opposite side of the PCB from the mounting plate, and one or more mounting extensions to affix the mounting plate to the back plate
and maintain a distance between the mounting plate and the back plate.

Example 7 may include the apparatus of example 6, wherein the mounting plate has a recess formed in a side of the mounting plate, and wherein the recess is to receive the IC package and maintain a position of the IC package relative to the mounting plate.

Example 8 may include the apparatus of any of the examples 1 and 2, wherein the IC package includes a ball grid array (BGA), and wherein solder balls of the BGA are positioned on the second ends of the plurality of pogo pins by the mounting mechanism.

Example 9 may include the apparatus of any of the examples 1 and 2, wherein the second ends of the plurality of pogo pins are coated with a non-corrosive material.

Example 10 may include the apparatus of example 9, wherein the non-corrosive material is a non-gold, electrically conductive material.

Example 11 may include the apparatus of any of the examples 1 and 2, wherein the IC package includes a ball grid array (BGA), and the apparatus further comprises a land grid array (LGA) interposer to electrically couple the IC package to the plurality of pogo pins, the LGA interposer positioned between the IC package and the plurality of pogo pins, wherein electrical contacts of the IC package are coupled to a first set of electrical contacts on a first side of the LGA interposer via the BGA, and wherein a second set of electrical contacts on a second side of the LGA interposer, opposite the first side, is coupled to the second ends of the plurality of pogo pins.

Example 12 may include a computer system, comprising a printed circuit board (PCB), a plurality of pogo pins mounted to the PCB, first ends of the plurality of pogo pins coupled to electrical contacts of the PCB, an integrated circuit (IC) package positioned on the plurality of pogo pins, electrical contacts of the IC package coupled to second ends of the plurality of pogo pins, and a mounting mechanism to mount the IC package to the PCB with the plurality of
pogo pins located between the IC package and the PCB, the mounting mechanism to compress the plurality of pogo pins between the IC package and the PCB.

Example 13 may include the computer system of example 12, wherein the mounting mechanism includes one or more surface mount (SMT) fixtures affixed to the PCB, a mounting plate to position the IC package on the plurality of pogo pins, and one or more mounting extensions that extend between the one or more SMT fixtures and the mounting plate, the one or more mounting extensions to maintain a position of the mounting plate.

Example 14 may include the computer system of example 13, wherein the one or more mounting extensions include one or more screws, and wherein the one or more SMT fixtures each include a threaded aperture to receive the one or more screws.

Example 15 may include the computer system of example 13, wherein the mounting plate has a recess formed in a side of the mounting plate located toward the PCB, wherein the IC package resides, at least partially, within the recess.

Example 16 may include the computer system of any of the examples 12-15, wherein the mounting mechanism includes a back plate positioned on a side of the PCB opposite from the IC package, a mounting plate positioned on a side of the IC package opposite from the PCB, and one or more mounting extensions to affix the mounting plate to the back plate and maintain a distance between the back plate and the mounting plate.

Example 17 may include the computer system of example 16, wherein the mounting plate has a recess formed in a side of the mounting plate located toward the PCB, wherein the IC package resides, at least partially, within the recess.

Example 18 may include the computer system of example 16, wherein the one or more mounting extensions extend through one or more apertures formed in the PCB.

Example 19 may include the computer system of any of the examples 12 and 13, wherein the IC package includes a ball grid array (BGA), wherein
the electrical contacts of the IC package are coupled to the second ends of the plurality of pogo pins via the BGA.

Example 20 may include the computer system of any of the examples 12-15, wherein the IC package includes a semiconductor package with a ball grid array (BGA) on a side of the semiconductor package, and a land grid array (LGA) interposer located on the side of the semiconductor package, the LGA interposer coupled, on a first side of the LGA interposer, to the semiconductor package by the BGA and coupled, on a second side of the LGA interposer opposite the first side, to the plurality of pogo pins, wherein the LGA interposer is to convey signals between the semiconductor package and the plurality of pogo pins.

Example 21 may include the computer system of any of the examples 12-15, wherein the second ends of the plurality of pogo pins are coated with a non-corrosive material.

Example 22 may include the computer system of example 21, wherein the non-corrosive material is a non-gold, electrically conductive material.

Example 23 may include a method of coupling an integrated circuit (IC) package to a printed circuit board (PCB), comprising coupling first ends of a plurality of pogo pins to electrical contacts of the PCB, positioning the IC package with electrical contacts of the IC package aligned with second ends of the plurality of pogo pins, and applying a compression force to one or both of the PCB and the IC package, the compression force compressing the plurality of pogo pins between the electrical contacts of the PCB and the electrical contacts of the IC package.

Example 24 may include the method of example 23, wherein coupling the first ends of the plurality of pogo pins to the electrical contacts of the PCB includes soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB.

Example 25 may include the method of any of the examples 23 and 24, wherein coupling the first ends of the plurality of pogo pins to the electrical
contacts of the PCB includes positioning, using a carrier body, the first ends of the plurality of pogo pins on the electrical contacts of the PCB, soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB while the plurality of pogo pins are located within the carrier body, and removing the carrier body from the pogo pins after soldering the first ends of the plurality of pogo pins to the electrical contacts of the PCB.

Example 26 may include the method of any of the examples 23 and 24, wherein positioning the IC package includes positioning the IC package in a cavity of a mounting plate of a mounting mechanism, wherein the mounting mechanism aligns the electrical contacts of the IC package with the second ends of the pogo pins, and wherein the mounting mechanism applies the compression force.

Example 27 may include the method of example 26, further comprising affixing one or more surface-mount (SMT) fixtures to a surface of the PCB, and attaching the mounting plate to the one or more SMT fixtures via one or more mounting extensions of the mounting mechanism.

Example 28 may include the method of example 27, wherein the one or more mounting extensions include one or more screws, wherein the one or more SMT fixtures each include a threaded aperture, and wherein attaching the mounting plate to the one or more SMT fixtures includes screwing the one or more screws at least partially into the threaded aperture of a corresponding SMT fixture of the one or more SMT fixtures.

Example 29 may include the method of example 26, further comprising positioning a back plate on a first side of the PCB, aligning the mounting plate with the back plate on a second side of the PCB, the second side opposite the first side, and affixing the mounting plate to the back plate via one or more mounting extensions, the one or more mounting extensions passing through apertures formed in the PCB.

Example 30 may include the method of any of the examples 23 and 24,
further comprising determining an amount of force to compress the plurality of pogo pins based on a number of the plurality of pogo pins, wherein applying the compression force includes applying an amount of the compression force equal to or greater than the amount of force to compress the plurality of pogo pins.

Example 31 may include the method of any of the examples 23 and 24, wherein the IC package includes a ball grid array (BGA), and wherein positioning the IC package includes aligning solder balls of the BGA with the second ends of the plurality of pogo pins.

Example 32 may include the method of any of the examples 23 and 24, wherein the IC package includes a semiconductor package with a ball grid array (BGA) and a land grid array (LGA) interposer coupled to the semiconductor package, on a first side of the LGA interposer via the BGA, wherein the electrical contacts of the IC package are located on a second side of the LGA interposer and are electrically coupled to the BGA.

Example 33 may include the method of any of the examples 23 and 24, further comprising applying a non-corrosive coating to the second ends of the plurality of pogo pins.

Example 34 may include the method of example 33, wherein the non-corrosive coating is a non-gold, conductive material coating.

It will be apparent to those skilled in the art that various modifications and variations can be made in the disclosed embodiments of the disclosed device and associated methods without departing from the spirit or scope of the disclosure. Thus, it is intended that the present disclosure covers the modifications and variations of the embodiments disclosed above provided that the modifications and variations come within the scope of any claims and their equivalents.
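The force arithmetic described in Example 30 can be sketched numerically. This is a minimal illustration, not part of the disclosure: the function names and the per-pin spring force value are hypothetical assumptions.

```python
# Hypothetical sketch of Example 30: the applied compression force must be
# equal to or greater than the force needed to compress all pogo pins,
# estimated here as (number of pins) x (per-pin compression force).

def required_compression_force(pin_count: int, force_per_pin_n: float) -> float:
    """Minimum total force, in newtons, to compress `pin_count` pogo pins."""
    return pin_count * force_per_pin_n

def is_force_sufficient(applied_n: float, pin_count: int,
                        force_per_pin_n: float) -> bool:
    """True if the applied force meets or exceeds the computed minimum."""
    return applied_n >= required_compression_force(pin_count, force_per_pin_n)
```

For instance, a 400-pin array at an assumed 0.25 N per pin would require the mounting mechanism to maintain at least 100 N of compression.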
Integrated circuit structures including increased transistor source/drain (S/D) contact area using a sacrificial S/D layer (340) are provided herein. The sacrificial layer, which includes different material from the S/D material, is deposited into the S/D trenches prior to the epitaxial growth of that S/D material (360), such that the sacrificial layer acts as a space-holder below the S/D material. During S/D contact processing, the sacrificial layer can be selectively etched relative to the S/D material to at least partially remove it, leaving space below the S/D material for the contact metal (380) to fill. In some cases, the contact metal is also between portions of the S/D material. In some cases, the contact metal wraps around the epi S/D, such as when dielectric wall structures (320) on either side of the S/D region are employed. By increasing the S/D contact area, the contact resistance is reduced, thereby improving the performance of the transistor device. |
An integrated circuit including at least one transistor, the integrated circuit comprising: a body including semiconductor material; a gate electrode at least above the body, the gate electrode including one or more metals; a gate dielectric between the gate electrode and the body, the gate dielectric including one or more dielectrics; a source region and a drain region, the body between the source and drain regions, the source and drain regions including semiconductor material; a first contact structure at least above and below the source region, the first contact structure including one or more metals; and a second contact structure at least above and below the drain region, the second contact structure including one or more metals.

The integrated circuit of claim 1, wherein the first contact structure is further on at least one side of the source region and the second contact structure is further on at least one side of the drain region.

The integrated circuit of claims 1 or 2, wherein the first contact structure wraps around at least a portion of the source region and the second contact structure wraps around at least a portion of the drain region.

The integrated circuit of any of claims 1-3, further comprising a substrate, wherein a portion of the first contact structure is between the substrate and the source region, and a portion of the second contact structure is between the substrate and the drain region.

The integrated circuit of any of claims 1-4, further comprising a layer between the first contact structure and the substrate, the layer also between the second contact structure and the substrate, the layer including compositionally different material relative to the source and drain regions.

The integrated circuit of claim 5, wherein the layer includes one or more dielectrics.

The integrated circuit of any of claims 1-6, wherein the first contact structure is between two portions of the source region, and the second contact structure is between two portions of the
drain region.

The integrated circuit of any of claims 1-6, wherein the source region is between two structures, the two structures including one or more dielectrics, and wherein the drain region is also between the two structures.

The integrated circuit of any of claims 1-8, wherein the one or more metals included in the first and second contact structures include one or more transition metals.

The integrated circuit of claim 9, wherein the one or more transition metals include one or more of tungsten, titanium, tantalum, copper, cobalt, gold, nickel, or ruthenium.

The integrated circuit of any of claims 1-10, wherein the body includes germanium or group III-V semiconductor material.

The integrated circuit of any of claims 1-11, wherein the body is a fin, the fin between two portions of the gate electrode.

The integrated circuit of claim 12, wherein the fin has a height of at least 20 nanometers between the two portions of the gate electrode.

The integrated circuit of any of claims 1-13, wherein the gate electrode wraps around the body.

The integrated circuit of claim 14, wherein the body is a nanowire or a nanoribbon.
BACKGROUND

Semiconductor devices are electronic components that exploit the electronic properties of semiconductor materials, such as silicon (Si), germanium (Ge), and gallium arsenide (GaAs). A field-effect transistor (FET) is a semiconductor device that includes three terminals: a gate, a source, and a drain. A FET uses an electric field applied by the gate to control the electrical conductivity of a channel through which charge carriers (e.g., electrons or holes) flow between the source and drain. In instances where the charge carriers are electrons, the FET is referred to as an n-channel or n-type device, and in instances where the charge carriers are holes, the FET is referred to as a p-channel or p-type device. Some FETs have a fourth terminal called the body or substrate, which can be used to bias the transistor. In addition, metal-oxide-semiconductor FETs (MOSFETs) include a gate dielectric between the gate and the channel. MOSFETs may also be known as metal-insulator-semiconductor FETs (MISFETs) or insulated-gate FETs (IGFETs). Complementary MOS (CMOS) structures use a combination of p-channel MOSFET (PMOS) and n-channel MOSFET (NMOS) devices to implement logic gates and other digital circuits.

A FinFET is a MOSFET transistor built around a thin strip of semiconductor material (generally referred to as a fin). The conductive channel of the FinFET device resides on the outer portions of the fin adjacent to the gate dielectric. Specifically, current runs along/within both sidewalls of the fin (sides perpendicular to the substrate surface) as well as along the top of the fin (side parallel to the substrate surface). Because the conductive channel of such configurations essentially resides along the three different outer regions of the fin (e.g., top and two sides), such a FinFET design is sometimes referred to as a tri-gate transistor.
Other types of FinFET configurations are also available, such as so-called double-gate FinFETs, in which the conductive channel principally resides only along the two sidewalls of the fin (and not along the top of the fin). A gate-all-around (GAA) transistor, where the channel region includes, for example, one or more nanowires or nanoribbons, is configured similarly to a fin-based transistor, but instead of a finned channel region where the gate is on three portions (and thus, there are three effective gates), the gate material generally surrounds each nanowire or nanoribbon.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a cross-sectional view of an example integrated circuit (IC) structure showing source/drain contacts that are only above the source/drain regions.

Figure 2 illustrates example method 200 of forming an integrated circuit (IC) including at least one transistor having increased source/drain contact area by employing a sacrificial source/drain layer, in accordance with some embodiments.

Figures 3A-3H illustrate cross-sectional views of example IC structures formed when carrying out the method of Figure 2 using a gate-first process flow, in accordance with some embodiments. Figures 3B', 3F', and 3H' illustrate variations to corresponding example structures of Figures 3B, 3F, and 3H, respectively, that occur when carrying out the method of Figure 2 using a gate-last process flow, in accordance with some embodiments.
The cross-sectional views in Figures 3A-3H (as well as Figures 5 and 6) are along the body of channel material and perpendicular to the gate line to help illustrate the structures formed.

Figures 4A-4D illustrate example cross-sectional views of a plane taken through a source/drain region of the structures of Figures 3D, 3E, 3G, and 3H, respectively, to help show the processing described herein, in accordance with some embodiments.

Figure 5 illustrates the example integrated circuit structure of Figure 3H, illustrating a portion of the sacrificial source/drain layer remaining in the final structure, in accordance with some embodiments.

Figure 6 illustrates a cross-sectional view of an example integrated circuit structure including increased source/drain contact area and employing a gate-all-around (GAA) configuration, in accordance with some embodiments.

Figures 7A-7D illustrate example cross-sectional integrated circuit views through a source/drain region of the structure of Figure 6 to illustrate forming the source/drain contact structure around that source/drain region when employing dielectric wall structures, in accordance with some embodiments.

Figures 8A-8D illustrate example cross-sectional integrated circuit views through the channel region and gate structure of transistor devices described herein, in accordance with some embodiments. For instance, Figure 8A is an example planar view taken along dashed line 8A-8A in the example structures of Figures 3H, 3H', and 5. In addition, Figure 8B is an example planar view taken along dashed line 8B-8B in the example structure of Figure 6.
Figures 8C and 8D illustrate other example channel region configurations.

Figure 9 illustrates a computing system implemented with integrated circuit structures including at least one transistor having increased source/drain contact area as disclosed herein, in accordance with some embodiments.

These and other features of the present embodiments will be understood better by reading the following detailed description, taken together with the figures herein described. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Furthermore, as will be appreciated, the figures are not necessarily drawn to scale or intended to limit the described embodiments to the specific configurations shown. For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of the disclosed techniques may have less than perfect straight lines and right angles, and some features may have surface topography or otherwise be non-smooth, given real-world limitations of fabrication processes. Further still, some of the features in the drawings may include a patterned and/or shaded fill, which is merely provided to assist in visually identifying distinct features. In short, the figures are provided merely to show example structures.

DETAILED DESCRIPTION

In transistor devices, such as MOSFET devices, there are numerous sources of undesired resistance. One such source of undesired resistance is from the contact resistance that is present between the source/drain (S/D) semiconductor material and corresponding contact metal structures (which are referred to as S/D contacts). S/D contact resistance, which is parasitic, is an important limiting factor in drive currents, performance, and circuit delay for modern transistor technologies (such as CMOS technologies).
There are two main aspects of S/D contact resistance: electrical resistance at the interface between S/D metal and S/D semiconductor, and contact area, which is the total surface area of the contact interface. Electrical resistance across the S/D metal/semiconductor interface is related to material properties and is not discussed further in this disclosure. For a given electrical resistance across the interface, however, the total contact resistance can be lowered by increasing the total contact area. Typical device designs allow for the metal to only contact the S/D from the top, in so-called top-interface contacts. For instance, Figure 1 illustrates a cross-sectional view of an example integrated circuit (IC) structure showing S/D contacts that are only above the S/D regions (top-interface contacts). In more detail, the IC structure of Figure 1 includes substrate 100 (such as a silicon substrate), channel region 110, gate dielectric 132, gate electrode 134, gate sidewall spacers 136, S/D regions 160, S/D contacts 180, and contact interfaces 195, which lie between regions 160 and 180. As shown, corresponding S/D contacts 180 are only above S/D regions 160, which only provides a small contact area: the area at the interface 195 between the top surface of a S/D region 160 and its corresponding contact 180. Such a small contact area results in undesirably high S/D contact resistance. Moreover, the contact resistance goes up in scaled transistors due to reduced contact area between the metal contacts and the semiconductor material included in the S/D regions.

Thus, and in accordance with various embodiments of the present disclosure, transistors with increased S/D contact area achieved using a sacrificial S/D layer are provided herein. This disclosure describes an integrated process, enabled by a sacrificial S/D layer, that results in increased contact area relative to traditional top-interface contacts.
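The inverse relationship between contact area and contact resistance noted above can be illustrated with a short numeric sketch, assuming a uniform interface where R = rho_c / A. The specific contact resistivity and the area values below are hypothetical, chosen only to show the scaling; they are not from the disclosure.

```python
# Illustrative only: contact resistance R = rho_c / A, where rho_c is the
# specific contact resistivity (ohm*um^2) and A is the total interface area
# (um^2). All numeric values are assumptions for demonstration.

def contact_resistance(rho_c_ohm_um2: float, area_um2: float) -> float:
    """Contact resistance in ohms for a uniform metal/semiconductor interface."""
    return rho_c_ohm_um2 / area_um2

rho_c = 1e-3                                   # ohm*um^2 (assumed)
r_top_only = contact_resistance(rho_c, 0.01)   # small top-interface area
r_wrapped = contact_resistance(rho_c, 0.04)    # 4x area from added contact faces

# Quadrupling the contact area cuts the contact resistance by 4x.
```

The design choice follows directly from this scaling: since rho_c is fixed by the materials, adding contact faces below and beside the S/D region is the remaining lever on total resistance.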
In some embodiments, the sacrificial layer is deposited prior to the epitaxial growth of the S/D material, such that the sacrificial layer is below that epitaxial S/D material (also referred to herein as "epi"). During the S/D contact processing, the sacrificial layer can then be etched away to expose the underside of the S/D material, such that the S/D contact metal can be deposited under (and in some cases, in-between) the epitaxial S/D material. In addition, the sacrificial S/D layer provides multiple integration advantages. For instance, the sacrificial layer as variously described herein allows reliable etch-biasing and removal at contact, in accordance with some embodiments. Further, the sacrificial layer as variously described herein can act as an etch stop when etching down into the epitaxial S/D material, in accordance with some embodiments. Further still, the sacrificial layer as variously described herein can be employed for non-planar transistors, such as finned transistors (e.g., FinFETs) and gate-all-around or GAA transistors (e.g., that employ one or more nanowires or nanoribbons), in accordance with some embodiments. Thus, the contact area between the metal and epi (semiconductor material) in the S/D regions is increased, thereby reducing the contact resistance at those locations and improving overall device performance.

Note that the use of "source/drain" or "S/D" herein is simply intended to refer to a source region or a drain region or both a source region and a drain region.
To this end, the forward slash ("/") as used herein means "and/or" unless otherwise specified, and is not intended to implicate any particular structural limitation or arrangement with respect to source and drain regions, or any other materials or features that are listed herein in conjunction with a forward slash.

In some embodiments, the sacrificial S/D layer includes dielectric material or semiconductor material that is compositionally different from the S/D semiconductor material. Materials that are "compositionally different" or "compositionally distinct" as used herein are two materials that have different chemical compositions. This compositional difference may be, for instance, by virtue of an element that is in one material but not the other (e.g., silicon germanium is compositionally different from silicon, and silicon dioxide is compositionally different from silicon), or by way of one material having all the same elements as a second material but with at least one of those elements intentionally provided at a different concentration in one material relative to the other material (e.g., SiGe having 70 atomic percent germanium is compositionally different from SiGe having 25 atomic percent germanium). In addition to such chemical composition diversity, the materials may also have distinct dopants (e.g., boron versus arsenic/phosphorous) or the same dopants but at differing concentrations. In still other embodiments, compositionally different materials may further refer to two materials that have different crystallographic orientations. For instance, (110) Si is compositionally distinct or different from (100) Si.

In some embodiments, the sacrificial layer is deposited after forming the S/D trenches but prior to forming the final S/D material, such that the sacrificial layer is formed at least on the bottom portion of the S/D trenches.
In some such embodiments, the S/D trenches are formed via etch processing to remove the channel material layer in the S/D locations, and such etch processing may be referred to as epi-undercut (EUC) processing. After EUC processing and before epi, a sacrificial layer is deposited in the S/D trenches. In some embodiments, the processing proceeds with the epi growth, which is interrupted before epis in neighboring cells merge. In some such embodiments, another sacrificial layer is deposited which encapsulates the epi. In some embodiments, the epi growth processing allows the epis of neighboring cells to merge. In some such embodiments, a deep etch is used to punch through the epi and stop at the sacrificial layer (e.g., during the contact processing) to provide access to that sacrificial layer. Then, at the contact processing, the sacrificial layers are selectively etched with respect to the S/D epi. The S/D contact metal is then deposited (e.g., via ALD and/or CVD), which deposits metal all around the epi, including the underside of the epi and between epis in neighboring cells.

Some embodiments employ dielectric wall structures (which may be referred to as self-aligned gate endcap walls or other tall dielectric isolation structures) at transistor boundaries to provide a tall wall between adjacent fins/nanowires/nanoribbons, for example. After epi-undercut (EUC) processing and before epi S/D material is formed, the sacrificial layer as variously described herein is deposited in the S/D trenches and on the sidewalls of the dielectric wall structures that are in the S/D trenches. The sacrificial layer encapsulates the epi S/D material as it grows, thereby providing isolation between the sidewall of the epi and the dielectric wall structures, and between the underside of the epi and the substrate. At the S/D contact processing, the sacrificial layer is selectively etched with respect to the epi.
The S/D contact metal is then deposited (e.g., via ALD and/or CVD), which deposits metal all around the epi, including the underside of the epi and along the sidewalls of the epi between the dielectric wall structures.

The techniques and structures disclosed herein provide many benefits. For instance, the techniques increase contact area between epi (in the S/D regions) and metal (in the S/D contacts) by allowing contact of the epi on the underside and, in some cases, in-between adjacent epi S/D portions. This increased contact area reduces contact resistance. In addition, by forming the S/D contact structures in such a manner, a better conduction path is achieved for the transistor, as the path from source contact to source to channel to drain to drain contact is a straighter path (and may even be an exact straight line). Comparing this to the S/D contacts only being above the S/D regions (such as is shown in Figure 1), which includes the carriers going around a corner as they move from the metal contact to the source-channel-drain path, it can be understood based on this disclosure that having the S/D contacts in-line with the transport direction provides additional benefits. Numerous other benefits will be apparent in light of this disclosure.

Note that, as used herein, the expression "X includes at least one of A or B" refers to an X that includes, for example, just A only, just B only, or both A and B. To this end, an X that includes at least one of A or B is not to be understood as an X that requires each of A and B, unless expressly so stated. For instance, the expression "X includes A and B" refers to an X that expressly includes both A and B. Moreover, this is true for any number of items greater than two, where "at least one of" those items is included in X.
For example, as used herein, the expression "X includes at least one of A, B, or C" refers to an X that includes just A only, just B only, just C only, only A and B (and not C), only A and C (and not B), only B and C (and not A), or each of A, B, and C. This is true even if any of A, B, or C happens to include multiple types or variations. To this end, an X that includes at least one of A, B, or C is not to be understood as an X that requires each of A, B, and C, unless expressly so stated. For instance, the expression "X includes A, B, and C" refers to an X that expressly includes each of A, B, and C. Likewise, the expression "X included in at least one of A or B" refers to an X that is included, for example, in just A only, in just B only, or in both A and B. The above discussion with respect to "X includes at least one of A or B" equally applies here, as will be appreciated. Moreover, this is true for any number of items.

Use of the techniques and structures provided herein can be detected using tools such as: electron microscopy including scanning/transmission electron microscopy (SEM/TEM), scanning transmission electron microscopy (STEM), nano-beam electron diffraction (NBD or NBED), and reflection electron microscopy (REM); composition mapping; x-ray crystallography or diffraction (XRD); energy-dispersive x-ray spectroscopy (EDX); secondary ion mass spectrometry (SIMS); time-of-flight SIMS (ToF-SIMS); atom probe imaging or tomography; local electrode atom probe (LEAP) techniques; 3D tomography; or high resolution physical or chemical analysis, to name a few suitable example analytical tools. In particular, in some embodiments, such tools can indicate an integrated circuit including at least one transistor having increased S/D contact area as variously described herein. For instance, the S/D contact structures are above and below the S/D regions in accordance with some embodiments, as opposed to being just above the S/D regions (such as is shown in Figure 1).
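As an aside, the inclusive-or reading of "at least one of" discussed above can be checked mechanically. The sketch below is illustrative only (the set names A, B, and C are simply the placeholders used in the text), enumerating every membership combination:

```python
# Illustrative sketch of the inclusive-or reading of
# "X includes at least one of A, B, or C": every non-empty
# combination of A, B, and C satisfies the expression.
from itertools import product

def includes_at_least_one(x, items):
    # True if x contains any one (or more) of the listed items.
    return any(item in x for item in items)

satisfying = 0
for bits in product([False, True], repeat=3):
    x = {name for name, present in zip("ABC", bits) if present}
    if includes_at_least_one(x, "ABC"):
        satisfying += 1

print(satisfying)  # 7 of the 8 possible subsets (all but the empty set)
```

Only the empty set fails the expression, matching the text's point that any one of the items alone is sufficient and all of them together are not required.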
In other words, the presence of contact all-around epi processing achieved through sacrificial S/D layers can be identified by the presence of metal on the underside of the epi (and, in some cases, between epi S/Ds on adjacent structures) through, for example, high resolution TEM imaging. In some embodiments, the techniques and structures described herein can be detected based on remnants from the sacrificial layer as variously described herein, where such a sacrificial layer would not otherwise be present. The S/D contact structures might chemically consist of metals identified via SIMS, TEM, EDX mapping, and/or atom probe tomography, for example. In some embodiments, the techniques described herein can be detected based on the structures formed therefrom. In addition, in some embodiments, the techniques and structures described herein can be detected based on the benefits derived therefrom. Numerous configurations and variations will be apparent in light of this disclosure.

Architecture and Methodology

Figure 2 illustrates example method 200 of forming an integrated circuit (IC) including at least one transistor having increased S/D contact area by employing a sacrificial S/D layer, in accordance with some embodiments. Figures 3A-3H illustrate cross-sectional views of example IC structures formed when carrying out method 200 of Figure 2 using a gate-first process flow, such that the final gate structure is formed at 206 and optional process 214 is not performed, in accordance with some embodiments. Figures 3B', 3F', and 3H' illustrate variations to corresponding example structures of Figures 3B, 3F, and 3H, respectively, that occur when carrying out method 200 of Figure 2 using a gate-last process flow, such that a dummy gate structure 334' is formed at 206 and optional process 214 is performed, in accordance with some embodiments.
The cross-sectional views in Figures 3A-3H (as well as Figures 5 and 6) are along the body of channel material and perpendicular to the gate lines to assist with illustrating the processing, including formation and removal of the sacrificial S/D layer that helps increase the transistor S/D contact area.

A multitude of different transistor devices can benefit from the techniques described herein, including, but not limited to, various field-effect transistors (FETs), such as metal-oxide-semiconductor FETs (MOSFETs), tunnel FETs (TFETs), and Fermi filter FETs (FFFETs) (also known as tunnel source MOSFETs), to name a few examples. For example, the techniques can be used to benefit an n-channel MOSFET (NMOS) device, which may include a source-channel-drain scheme of n-p-n or n-i-n, where 'n' indicates n-type doped semiconductor material, 'p' indicates p-type doped semiconductor material, and 'i' indicates intrinsic/undoped semiconductor material (which may also include nominally undoped semiconductor material, including dopant concentrations of less than 1E16 atoms per cubic centimeter, for example), in accordance with some embodiments. In another example, the techniques can be used to benefit a p-channel MOSFET (PMOS) device, which may include a source-channel-drain scheme of p-n-p or p-i-p, in accordance with some embodiments. In yet another example, the techniques can be used to benefit a TFET device, which may include a source-channel-drain scheme of p-i-n or n-i-p, in accordance with some embodiments. In other words, a TFET device may appear the same as a MOSFET device, except that the source and drain regions include opposite type dopant. In still another example, the techniques can be used to benefit a FFFET device, which may include a source-channel-drain scheme of np-i-p (or np-n-p) or pn-i-n (or pn-p-n), in accordance with some embodiments.
In other words, such FFFET devices include a bilayer source region configuration where one of the sub-layers of the bilayer includes n-type dopant and the other includes p-type dopant. In general, the techniques disclosed herein to increase contact area using S/D sacrificial layers can benefit any device incorporating S/D contacts.

In addition, in some embodiments, the techniques can be used to benefit transistors including a multitude of configurations, such as planar and/or non-planar configurations, where the non-planar configurations may include finned or FinFET configurations (e.g., dual-gate or tri-gate), gate-all-around (GAA) configurations (e.g., employing one or more nanowires or nanoribbons), or some combination thereof (e.g., a beaded-fin configuration), to provide a few examples. Further, the techniques are used in some embodiments to benefit complementary transistor circuits, such as complementary MOS (CMOS) circuits, where the techniques may be used to benefit one or more of the included n-channel and/or p-channel transistors making up the CMOS circuit. Other example transistor devices that can benefit from the techniques described herein include few to single electron quantum transistor devices, in accordance with some embodiments. Further still, any such devices may employ semiconductor materials that are three-dimensional crystals as well as two-dimensional crystals or nanotubes, for example.
In some embodiments, the techniques may be used to benefit devices of varying scales, such as IC devices having critical dimensions in the micrometer (micron) range and/or in the nanometer (nm) range (e.g., formed at the 22, 14, 10, 7, 5, or 3 nm process nodes, or beyond).

Note that deposition or epitaxial growth techniques (or more generally, additive processing) where described herein can use any suitable techniques, such as chemical vapor deposition (CVD), physical vapor deposition (PVD), atomic layer deposition (ALD), and/or molecular beam epitaxy (MBE), to provide some examples. Also note that etching techniques (or more generally, subtractive processing) where described herein can use any suitable techniques, such as wet and/or dry etch processing, which may be isotropic (e.g., uniform etch rate in all directions) or anisotropic (e.g., etch rates that are orientation or directionally dependent), and which may be non-selective (e.g., etches all exposed materials at the same or similar rates) or selective (e.g., etches different materials that are exposed at different rates). Further note that other processing may be used to form the integrated circuit structures described herein, as will be apparent in light of this disclosure, such as hardmasking, patterning or lithography (via suitable lithography techniques, such as, e.g., photolithography, extreme ultraviolet lithography, x-ray lithography, or electron beam lithography), planarizing or polishing (e.g., via chemical-mechanical planarization (CMP) processing), doping (e.g., via ion implantation, diffusion, or including dopant in the base material during formation), and annealing, to name some examples.

In embodiments where semiconductor material described herein includes dopant, the dopant is any suitable n-type and/or p-type dopant that is known to be used for the specific semiconductor material.
For instance, in the case of group IV semiconductor materials (e.g., Si, SiGe, Ge), p-type dopant includes group III atoms (e.g., boron, gallium, aluminum), and n-type dopant includes group V atoms (e.g., phosphorus, arsenic, antimony). In the case of group III-V semiconductor materials (e.g., GaAs, InGaAs, InP, GaP), p-type dopant includes group II atoms (e.g., beryllium, zinc, cadmium), and n-type dopant includes group VI atoms (e.g., selenium, tellurium). However, for group III-V semiconductor materials, group IV atoms (e.g., silicon, germanium) can be employed for either p-type or n-type dopant, depending on the conditions (e.g., formation temperatures). In embodiments where dopant is included in semiconductor material, the dopant can be included at quantities in the range of 1E16 to 1E22 atoms per cubic cm, or higher, for example. In some embodiments, dopant is included in semiconductor material in a quantity of at least 1E16, 1E17, 1E18, 5E18, 1E19, 5E19, 1E20, 5E20, or 1E21 atoms per cubic cm and/or of at most 1E22, 5E21, 1E21, 5E20, 1E20, 5E19, 1E19, 5E18, or 1E18 atoms per cubic cm, for example. In some embodiments, semiconductor material described herein is undoped/intrinsic, or includes relatively minimal dopant, such as a dopant concentration of less than 1E16 atoms per cubic cm, for example.

Note that the use of "group IV semiconductor material" (or "group IV material" or generally, "IV") herein includes at least one group IV element (e.g., silicon, germanium, carbon, tin), such as silicon (Si), germanium (Ge), silicon germanium (SiGe), and so forth.
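The concentration cutoffs quoted above can be summarized in a small helper. This is a hypothetical illustration, not part of the disclosure; the thresholds are simply the 1E16 and 1E22 atoms-per-cubic-centimeter figures stated in the paragraph above:

```python
# Illustrative classifier using the concentration cutoffs quoted above
# (atoms per cubic centimeter): below 1E16 is treated as
# intrinsic/nominally undoped; 1E16 up to about 1E22 is the typical
# doped range described in the text.
def classify_doping(concentration_per_cm3):
    if concentration_per_cm3 < 1e16:
        return "intrinsic/nominally undoped"
    if concentration_per_cm3 <= 1e22:
        return "doped"
    return "above typical range"

print(classify_doping(5e15))  # intrinsic/nominally undoped
print(classify_doping(1e19))  # doped
```
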
The use of "group III-V semiconductor material" (or "group III-V material" or generally, "III-V") herein includes at least one group III element (e.g., aluminum, gallium, indium) and at least one group V element (e.g., nitrogen, phosphorus, arsenic, antimony, bismuth), such as gallium arsenide (GaAs), indium gallium arsenide (InGaAs), indium aluminum arsenide (InAlAs), gallium phosphide (GaP), gallium antimonide (GaSb), indium phosphide (InP), and so forth. Also note that group III may also be known as the boron group or IUPAC group 13, group IV may also be known as the carbon group or IUPAC group 14, and group V may also be known as the nitrogen group or IUPAC group 15, for example. Further note that semiconductor material described herein has a monocrystalline or single-crystal structure (also referred to as a crystalline structure) unless otherwise explicitly stated (e.g., unless referred to as having a polycrystalline or amorphous structure).

Method 200 of Figure 2 includes providing 202 a body of channel material, such as providing the example body of channel material 310 shown in Figure 3A, in accordance with some embodiments. Note that body of channel material 310 may be referred to as simply body 310 herein for ease of description. In some cases, body of channel material 310 may be referred to as a layer or a layer of channel material herein, or a channel layer. In some embodiments, body 310 is native to and a part of a substrate used for the integrated circuit, such as substrate 300. Thus, although substrate 300 and body 310 are shown in Figure 3A as having a distinct interface, that need not be the case in embodiments where body 310 is native to substrate 300. In other embodiments, body 310 includes compositionally different material formed above and/or directly on the integrated circuit substrate 300.
Thus, in some such embodiments, a distinct interface, such as is shown in Figure 3A, can be detected.

Substrate 300, in some embodiments, is: a bulk substrate including group IV semiconductor material, such as silicon (Si), germanium (Ge), silicon germanium (SiGe), or silicon carbide (SiC), group III-V semiconductor material, and/or any other suitable material as can be understood based on this disclosure; an X on insulator (XOI) structure where X is one of the aforementioned semiconductor materials and the insulator material is an oxide material or dielectric material, such that the XOI structure includes the electrically insulating material layer between two semiconductor layers; or some other suitable multilayer structure where the top layer includes semiconductor material to be used for body 310. In some embodiments, the substrate can be an insulator or dielectric substrate, such as a glass substrate. In some such embodiments, the semiconductor material for body 310 can be transferred to that insulator or dielectric substrate to achieve a desired quality (e.g., monocrystalline quality). In some embodiments, substrate 300 is a bulk silicon substrate (that either does or does not include dopant), which may be utilized based on the relatively low cost and availability of such bulk silicon substrates.

In some embodiments, substrate 300 includes a surface crystalline orientation described by a Miller index of (100), (110), or (111), or its equivalents. Although substrate 300 is shown in the figures as having a thickness (dimension in the Y-axis direction) similar to other layers for ease of illustration, in some instances, substrate 300 may be relatively much thicker than the other layers, such as having a thickness in the range of 1 to 950 microns (or in the sub-range of 20 to 800 microns), for example, or any other suitable thickness value or range as can be understood based on this disclosure.
In some embodiments, substrate 300 includes a multilayer structure including two or more distinct layers (that may or may not be compositionally different). In some embodiments, substrate 300 includes grading (e.g., increasing and/or decreasing) of one or more material concentrations throughout at least a portion of the substrate 300. In some embodiments, substrate 300 is used for one or more other IC devices, such as various diodes (e.g., light-emitting diodes (LEDs) or laser diodes), various transistors (e.g., MOSFETs, TFETs), various capacitors (e.g., MOSCAPs), various microelectromechanical systems (MEMS), various nanoelectromechanical systems (NEMS), various radio frequency (RF) devices, various sensors, and/or any other suitable semiconductor or IC devices, depending on the end use or target application. Accordingly, in some embodiments, the structures described herein are included in system-on-chip (SoC) applications.

As previously described, in some embodiments, the body 310 is merely a top portion of substrate 300 that may or may not be formed into a desired shape (e.g., a fin) using patterning and/or lithography techniques, for example. However, in other embodiments, the body 310 includes material that is different from and not native to the material of underlying substrate 300. For instance, in some embodiments, body 310 can be formed by blanket depositing (on at least a portion of substrate 300) a layer of the channel material and then patterning that layer of channel material into body 310, for example. In another embodiment, body 310 can be formed in dielectric (or insulator) material trenches, which can be achieved by forming the top of the substrate into fins, forming the dielectric material around the fins, and then recessing or removing the fins via etching to form the trenches, for example.
In some such embodiments, the dielectric material can then be recessed to expose more of the body of replacement material (e.g., which may be shaped like a fin for non-planar configurations), while in other embodiments, the dielectric material is not recessed (e.g., for planar configurations). In some embodiments, a multilayer stack is formed either by blanket deposition or by forming the stack in the dielectric trenches to enable the subsequent formation of gate-all-around configurations, for example, where some of the layers in the stack are sacrificial and intended to be removed via selective etching (e.g., during replacement gate processing) to release the one or more bodies of channel material, as will be described in more detail herein.

In some embodiments, the body of channel material 310 includes semiconductor material. In some embodiments, body 310 includes group IV and/or group III-V semiconductor material. Thus, in some embodiments, body 310 includes one or more of germanium, silicon, tin, indium, gallium, aluminum, arsenic, phosphorus, antimony, bismuth, or nitrogen. In some embodiments, semiconductor material included in body 310 also includes dopant (with corresponding n-type and/or p-type dopant), while in other embodiments, semiconductor material included in body 310 is undoped/intrinsic. In some embodiments, body 310 is silicon (that either does or does not include dopant). In some embodiments, body 310 includes germanium-based group IV semiconductor materials, such as germanium (Ge) or silicon germanium (SiGe). In some such embodiments, the Ge concentration in body 310 is in the range of 10-100 atomic percent (or in a sub-range of 10-30, 10-50, 10-70, 20-50, 20-80, 30-50, 30-70, 30-100, 50-75, 50-100, or 70-100 atomic percent), for example.
In some embodiments, body 310 includes group III-V semiconductor material, such as gallium arsenide (GaAs), indium gallium arsenide (InGaAs), indium phosphide (InP), indium arsenide (InAs), indium antimonide (InSb), gallium nitride (GaN), and/or indium gallium nitride (InGaN), to provide some examples.

In some embodiments, the body of channel material 310 includes a multilayer structure of two or more sub-layers including compositionally different material. For instance, in gate-all-around (GAA) embodiments, layer of channel material 310 is a multilayer stack including one or more sacrificial layers and one or more final layers, where the sacrificial layers are to be later removed (e.g., during replacement gate processing) to release the final layers in the channel region, thereby allowing the gate structure to be formed around those one or more final layers or body structures (which may be referred to as nanowires or nanoribbons). In some embodiments, the body/layer of channel material 310 includes grading (e.g., increasing and/or decreasing) of one or more material concentrations throughout at least a portion of body 310. In some embodiments, body 310 includes strain, either in the form of tensile strain or compressive strain, where the strain may be formed by subsequent processing (e.g., as a result of the S/D material formation). In some such embodiments, the strain is throughout the entirety of body 310, while in other embodiments, the strain is only in one or more portions of body 310 (such as the outer portions nearest the S/D regions).

In some embodiments, body of channel material 310 has a thickness (dimension in the Y-axis direction) in the range of 5-200 nm (or in a subrange of 5-25, 5-50, 5-100, 10-25, 10-50, 10-80, 10-100, 10-200, 20-80, 20-100, 20-200, 40-80, 40-120, 40-200, 50-100, 50-200, or 100-200 nm) or greater, or within any other suitable range or having any other suitable value as can be understood based on this disclosure.
In some embodiments, body 310 has a thickness of at least 5, 10, 15, 20, 25, 50, 80, 100, 120, or 150 nm, and/or at most 200, 150, 120, 100, 80, 50, or 25 nm, for example. In some embodiments, body 310 is used for a planar configuration, where the channel only resides in/near the top surface of the body 310, such as where the final gate structure described herein is formed only above the body 310. In other embodiments, body 310 is used for non-planar configurations, where the channel resides in/near multiple sides of the body 310. For instance, in some non-planar embodiments, channel layer or body 310 is a fin or includes a fin-like shape, where the finned body is between portions of the final gate structure. Such configurations may be referred to as having a FinFET, tri-gate structure, or dual-gate structure. In some non-planar embodiments, a gate-all-around configuration is employed where the final gate structure is around the body 310, such that the body 310 is a nanowire or nanoribbon (where multiple nanowires or nanoribbons, and thus, multiple bodies, may be present), for example. Non-planar configurations are described in more detail herein. Note that the figures and accompanying description provided herein generally apply to both planar and non-planar configurations, unless explicitly stated otherwise.

Method 200 of Figure 2 continues with optionally forming 204 dielectric wall structures to provide isolation between adjacent transistors, in accordance with some embodiments. Examples of the dielectric wall structures are shown in Figures 7A-7D as structures 320. However, they are not included in other embodiments, thereby making the structures optional. The dielectric wall structures 320 may also be referred to as self-aligned gate endcap wall structures, or other tall dielectric isolation structures.
The dielectric wall structures 320, where present, can assist with depositing the sacrificial S/D layer 340, as that sacrificial layer can be formed along the sidewalls of the dielectric wall structures 320 to, for example, provide isolation between the epitaxial S/D material and those dielectric wall structures. Again, such dielectric wall structures 320 will be described in more detail with reference to Figures 7A-7D.

Method 200 of Figure 2 continues with forming 206 the final (or dummy) gate structure(s), such as to form the example resulting structure of Figure 3B, in accordance with some embodiments. Note that there is one complete gate structure shown in the middle, while partial gate structures are shown on the left and right sides. However, the relevant description of the gate structure provided herein is equally applicable to all three structures, and so their features are identified with the same numbers. The gate structure or gate stack in the example structure of Figure 3B is shown as a final gate structure that will be in the final integrated circuit structure, and it includes gate dielectric 332 and gate electrode 334. In such embodiments, the processing includes a gate-first flow (also called up-front high-k gate processing), where the final gate structure is formed prior to performing the S/D region processing. Alternatively, in some embodiments, dummy gate structures are initially formed at 206 in a gate-last flow (also called a replacement gate or replacement metal gate (RMG) process). For instance, Figure 3B' is a blown-out portion of Figure 3B illustrating the alternative gate-last processing, which includes forming dummy gate structures at 206 instead of final gate structures, in accordance with some embodiments. As shown in Figure 3B', dummy gate structure 334' was formed instead of the final gate structure, in accordance with some embodiments.
The dummy gate structure 334', where employed, may include a dummy gate dielectric (e.g., dummy oxide material) and a dummy gate electrode (e.g., dummy poly-silicon material) to be used for the replacement gate process, where those dummy materials are intended to be sacrificial such that they can be later removed and replaced by the final gate structure.

Regardless of whether the final gate structure is formed using a gate-first or a gate-last process flow, it includes gate dielectric 332 and gate electrode 334. In some embodiments, the gate structure, whether final or dummy, may be formed by blanket depositing the final or dummy gate materials and then patterning the materials to the desired gate structure. However, any suitable techniques can be used to form the final and/or dummy gate structures, in accordance with some embodiments. In some embodiments, gate dielectric 332 includes an oxide (e.g., silicon dioxide), nitride (e.g., silicon nitride), high-k dielectric, low-k dielectric, and/or any other suitable material as can be understood based on this disclosure. Examples of high-k dielectrics include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate, to provide some examples. Examples of low-k dielectrics include, for instance, fluorine-doped silicon dioxide, carbon-doped silicon dioxide, porous silicon dioxide, porous carbon-doped silicon dioxide, spin-on organic polymeric dielectrics (e.g., polytetrafluoroethylene, benzocyclobutene, polynorbornenes, polyimide), and spin-on silicon-based polymeric dielectrics (e.g., hydrogen silsesquioxane, methylsilsesquioxane), to provide some examples.
In some embodiments, an annealing process is carried out on the gate dielectric 332 to improve its quality when, for example, high-k dielectric material is employed.

In some embodiments, the gate dielectric 332 includes oxygen. In some such embodiments where the gate dielectric 332 includes oxygen, the gate dielectric 332 also includes one or more other materials, such as one or more of hafnium, silicon, lanthanum, aluminum, zirconium, tantalum, titanium, barium, strontium, yttrium, lead, scandium, zinc, lithium, or niobium. For instance, the gate dielectric 332 may include hafnium and oxygen (e.g., in the form of hafnium oxide or hafnium silicon oxide), or the gate dielectric 332 may include silicon and oxygen (e.g., in the form of silicon dioxide, hafnium silicon oxide, or zirconium silicon oxide), in accordance with some embodiments. In some embodiments, the gate dielectric 332 includes nitrogen. In some such embodiments where the gate dielectric 332 includes nitrogen, the gate dielectric 332 may also include one or more other materials, such as silicon (e.g., silicon nitride) for instance. In some embodiments, the gate dielectric 332 includes silicon and oxygen, such as in the form of one or more silicates (e.g., titanium silicate, tungsten silicate, niobium silicate, and silicates of other transition metals). In some embodiments, the gate dielectric 332 includes oxygen and nitrogen (e.g., silicon oxynitride or aluminum oxynitride).

In some embodiments, the gate dielectric 332 includes a multilayer structure, including two or more compositionally distinct layers. For example, a multilayer gate dielectric can be employed to obtain desired electrical isolation and/or to help transition from the body 310 to gate electrode 334, in accordance with some embodiments.
In an example embodiment, a multilayer gate dielectric has a first layer nearest the body 310 that includes oxygen and one or more materials included in the body 310 (such as silicon and/or germanium), which may be in the form of an oxide (e.g., silicon dioxide or germanium oxide), and the multilayer gate dielectric also has a second layer farthest from the body 310 (and nearest the gate electrode 334) that includes at least one high-k dielectric (e.g., hafnium and oxygen, which may be in the form of hafnium oxide or hafnium silicon oxide). In some embodiments where a multilayer gate dielectric is employed, the structure includes a first sub-layer that is only between the gate electrode 334 and the body 310, and a second sub-layer that is both between the gate electrode 334 and the body 310 as well as along sidewalls of the gate electrode 334 (e.g., between the gate electrode and spacers 336). This may be achieved via replacement gate processing, where the final gate dielectric 332 is formed along sidewalls of dielectric material after the dummy gate structure (e.g., 334') is removed. In some embodiments, gate dielectric 332 includes grading (e.g., increasing and/or decreasing) the content/concentration of one or more materials through at least a portion of the gate dielectric, such as the oxygen content/concentration within the gate dielectric 332.

In some embodiments, gate dielectric 332 has a thickness in the range of 1-30 nm (or in a sub-range of 1-5, 1-10, 1-15, 1-20, 1-25, 2-5, 2-10, 2-15, 2-20, 2-25, 2-30, 3-8, 3-12, 5-10, 5-15, 5-20, 5-25, 5-30, 10-20, 10-30, or 20-30 nm) or greater, for example, or within any other suitable range or having any other suitable value as can be understood based on this disclosure. In some embodiments, the thickness of gate dielectric 332 is at least 1, 2, 3, 5, 10, 15, 20, or 25 nm, and/or at most 30, 25, 20, 15, 10, 8, or 5 nm, for example.
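As a first-order illustration of why the dielectric thickness matters electrically, the dielectric can be sketched as a parallel-plate capacitor, C = k·ε0·A/d. This is a textbook approximation, not from the disclosure, and the dimensions below are hypothetical; the point is simply that increasing the thickness d (or lowering k) reduces capacitive coupling across the dielectric:

```python
# Illustrative parallel-plate approximation: C = k * eps0 * area / d.
# Hypothetical dimensions; shows that a thicker dielectric (larger d)
# and/or a lower dielectric constant k gives less capacitance.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(k, area_m2, thickness_m):
    return k * EPS0 * area_m2 / thickness_m

area = 20e-9 * 30e-9  # a hypothetical 20 nm x 30 nm overlap area

c_2nm = plate_capacitance(3.9, area, 2e-9)    # k of SiO2, 2 nm thick
c_10nm = plate_capacitance(3.9, area, 10e-9)  # same k, 10 nm thick

print(c_10nm < c_2nm)  # True: 5x thicker dielectric, 5x less capacitance
```
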
Note that the thicknesses described herein for gate dielectric 332 relate at least to the dimension between the channel layer/body 310 and gate electrode 334 (e.g., at least the dimension in the Y-axis). In embodiments where gate dielectric 332 is also on a sidewall of each of gate spacers 336 (such as is shown in Figure 3H'), the thickness is also the dimension between the gate electrode 334 and each of the spacers 336, as can be understood based on this disclosure. In some embodiments, the thickness of gate dielectric 332 is selected, at least in part, based on the desired amount of isolation between channel layer 310 and gate electrode 334.

In some embodiments, gate dielectric 332 provides means for electrically insulating channel layer/body 310 from gate electrode 334. In some embodiments, the characteristics of gate dielectric 332 are selected based on desired electrical properties. For instance, some embodiments employ a relatively thicker gate dielectric (e.g., at least 5 or 10 nm in thickness) and/or relatively lower-k dielectric material for the gate dielectric, such as silicon dioxide or low-k dielectric material (where the dielectric constant, k, is less than that of silicon dioxide, so less than 3.9), to help reduce parasitic capacitance issues caused between adjacent gate electrodes or between gate electrodes and adjacent S/D contacts, for example. However, in other embodiments, high-k dielectric material is desired, as such material can provide desired electrical properties for some gate configurations.

In some embodiments, gate electrode 334 includes one or more metals, such as one or more of aluminum, tungsten, titanium, tantalum, copper, nickel, gold, platinum, ruthenium, or cobalt, for example. In some embodiments, gate electrode 334 includes carbon and/or nitrogen, such as in combination with one or more of the metals in the preceding sentence, for example.
For instance, in some embodiments gate electrode 334 includes titanium and nitrogen (e.g., titanium nitride), or tantalum and nitrogen (e.g., tantalum nitride), such as in a liner layer that is in direct contact with the gate dielectric, for example. Thus, in some embodiments, gate electrode 334 includes one or more metals that may or may not include one or more other materials (such as carbon and/or nitrogen). In some embodiments, gate electrode 334 includes a multilayer structure, including two or more compositionally distinct layers. For instance, in some such embodiments, one or more work function layers are employed, such as one or more metal-including layers that are formed with desired electrical characteristics. Further, in some such embodiments, the one or more metal-including layers include tantalum and/or titanium, which may also include nitrogen (e.g., in the form of tantalum nitride or titanium nitride). In some embodiments, a bulk metal structure is formed on and between a conformal layer (such as a liner layer), where the bulk metal structure includes compositionally distinct material from the conformal/liner layer. In some such embodiments, the conformal/liner layer would be "U" shaped, for example.

In some embodiments, gate electrode 334 includes a resistance-reducing metal layer between a bulk metal structure and the gate dielectric, for instance. Example resistance-reducing metals include, for instance, one or more of nickel, titanium, titanium with nitrogen (e.g., titanium nitride), tantalum, tantalum with nitrogen (e.g., tantalum nitride), cobalt, gold, gold with germanium (e.g., gold-germanium), platinum, nickel with platinum (e.g., nickel-platinum), aluminum, and/or nickel with aluminum (e.g., nickel aluminum). Example bulk metal structures include one or more of aluminum, tungsten, ruthenium, copper, or cobalt, for instance.
In some embodiments, gate electrode 334 includes additional layers, such as one or more layers including titanium and nitrogen (e.g., titanium nitride) and/or tantalum and nitrogen (e.g., tantalum nitride), which can be used for adhesion and/or liner/barrier purposes, for example. In some embodiments, the thickness, material, and/or deposition process of sub-layers within a multilayer gate electrode are selected based on a target application, such as whether the gate electrode is to be used with an n-channel device or a p-channel device. In some embodiments, the gate electrode 334 provides means for changing the electrical attributes of the adjacent channel layer/body 310 when a voltage is applied to the gate electrode 334.

In some embodiments, gate electrode 334 has a thickness (dimension in the Y-axis direction in the view of Figure 3B) in the range of 10-100 nm (or in a sub-range of 10-25, 10-50, 10-75, 20-30, 20-50, 20-75, 20-100, 30-50, 30-75, 30-100, 50-75, or 50-100 nm) or greater, for example, or within any other suitable range or having any other suitable value as can be understood based on this disclosure. In an embodiment, gate electrode 334 has a thickness that falls within the sub-range of 20-40 nm. In some embodiments, gate electrode 334 has a thickness of at least 10, 15, 20, 25, 30, 40, or 50 nm and/or at most 100, 50, 40, 30, 25, or 20 nm, for example. In some embodiments, gate electrode 334 includes grading (e.g., increasing and/or decreasing) of the content/concentration of one or more materials through at least a portion of the structure.

Figure 3B also shows that sidewall spacers 336, referred to generally as gate spacers (or simply, spacers), are on either side of the gate stack, in the example structure. Such spacers 336 can be formed using any suitable techniques, such as depositing the material of spacers 336 and performing spacer pattern and etch processing, for example.
In some embodiments, the spacers 336 can be used to help determine the gate length and/or channel length (dimensions in the X-axis direction), and/or to help with replacement gate processing, for example. In some embodiments, spacers 336 include any suitable oxide (e.g., silicon dioxide), nitride (e.g., silicon nitride), high-k dielectric, low-k dielectric, and/or any other suitable electrically insulating material as can be understood based on this disclosure. In some embodiments, spacers 336 include silicon, oxygen, nitrogen, and/or carbon. For instance, in some embodiments, spacers 336 include silicon dioxide, silicon monoxide, silicon nitride, silicon oxynitride, or carbon-doped silicon dioxide (or other carbon-doped oxides). In some embodiments, it is desired to select material for spacers 336 that has a low dielectric constant and a high breakdown voltage. In some embodiments, spacers 336 include a multilayer structure (e.g., a bilayer structure where the sub-layers are laterally adjacent to each other in the X-axis direction), even though they are illustrated as a single layer in the example structure of Figure 3B. In some embodiments, spacers 336 and gate dielectric 332 do not include a distinct interface as shown in Figure 3B, particularly where spacers 336 and gate dielectric 332 include the same material, for example.

Method 200 of Figure 2 continues with forming 208 S/D trenches, such as to form the example resulting structure of Figure 3C including S/D trenches 350, in accordance with some embodiments. S/D trenches 350 can be formed using any suitable techniques, such as using wet and/or dry etch techniques to remove the material of channel layer 310 from the S/D locations. Note that although the S/D trenches 350 extend down (in the Y-axis direction) to exactly the top surface of substrate 300 in this example embodiment, in other embodiments, the trenches 350 may have a bottom surface that is higher or lower.
Further, although the S/D trenches 350 have a flat or planar bottom surface as shown in Figure 3C (which may be formed based on etch selectivity between channel material layer 310 and substrate 300), in other embodiments, the trenches 350 may have a curved or faceted bottom surface.

Method 200 of Figure 2 continues with forming 210 a sacrificial layer in the S/D trenches, such as to form the example resulting structure of Figure 3D including sacrificial layer 340, in accordance with some embodiments. Sacrificial layer 340, in some embodiments, includes material that can be selectively etched relative to the final S/D material (used for S/D regions 360). Thus, in some such embodiments, sacrificial layer 340 includes compositionally different material relative to the final S/D material. Further, in some embodiments, the material of sacrificial layer 340 is selected such that it can be selectively etched relative to dielectric wall structures 320, where such dielectric wall structures 320 are employed. Further still, in some embodiments, the material of sacrificial layer 340 is selected such that it can be selectively etched relative to the material of other exposed features during the S/D contact processing (where the sacrificial layer 340 is at least partially removed via selective etch), where such other exposed features may include, for instance, the material of one or more interlayer dielectric (ILD) layers, the material of channel layer 310, the material of substrate 300, and/or the material of hardmasks covering the gate electrode, to provide some examples.
As can be understood based on this disclosure, sacrificial layer 340 acts as a space holder under the final S/D material such that when the sacrificial layer 340 is subsequently accessed and (at least partially) removed via selective etching, the space that it previously occupied can be filled with the S/D contact material to enable forming the S/D contacts 380 below the S/D regions 360.

In some embodiments, sacrificial layer 340 includes one or more dielectric materials. In some such embodiments, sacrificial layer 340 includes (or is) any suitable oxide (e.g., silicon dioxide, silicon monoxide), nitride (e.g., silicon nitride), carbide (e.g., silicon carbide), high-k dielectric, low-k dielectric, and/or any other suitable electrically insulating material as can be understood based on this disclosure. In some embodiments, sacrificial layer 340 includes silicon, oxygen, nitrogen, and/or carbon. For instance, in some embodiments, sacrificial layer 340 includes silicon dioxide, silicon monoxide, silicon nitride, silicon oxynitride, or carbon-doped silicon dioxide (or other carbon-doped oxides). In some embodiments, sacrificial layer 340 includes one or more silicates (e.g., titanium silicate, tungsten silicate, niobium silicate, and silicates of other transition metals).

In some embodiments, sacrificial layer 340 includes one or more semiconductor materials. In some such embodiments, sacrificial layer 340 includes group IV and/or group III-V semiconductor material. Thus, in some embodiments, sacrificial layer 340 includes one or more of germanium, silicon, tin, indium, gallium, aluminum, arsenic, phosphorus, antimony, bismuth, or nitrogen. In some embodiments, semiconductor material included in sacrificial layer 340 also includes dopant (with corresponding n-type and/or p-type dopant), while in other embodiments, semiconductor material included in sacrificial layer 340 is undoped/intrinsic.
Recall that in some embodiments, sacrificial layer 340 includes compositionally different material from S/D regions 360. Recall that materials that are "compositionally different" or "compositionally distinct" as used herein refer to two materials that have different chemical compositions. This compositional difference may be, for instance, by virtue of an element that is in one material but not the other (e.g., silicon germanium is compositionally different from silicon, and silicon dioxide is compositionally different from silicon), or by way of one material having all the same elements as a second material but at least one of those elements intentionally provided at a different concentration in one material relative to the other material (e.g., SiGe having 70 atomic percent germanium is compositionally different from SiGe having 25 atomic percent germanium). In addition to such chemical composition diversity, the materials may also have distinct dopants (e.g., boron versus arsenic/phosphorus) or the same dopants but at differing concentrations. In still other embodiments, compositionally different materials may further refer to two materials that have different crystallographic orientations. For instance, (110) Si is compositionally distinct or different from (100) Si.

In some embodiments, sacrificial layer 340 has a thickness (dimension in the Y-axis direction of Figure 3D) in the range of 2-50 nm (or in a sub-range of 2-5, 2-10, 2-25, 3-8, 3-12, 3-20, 5-10, 5-25, 5-50, 10-25, 10-50, or 25-50 nm) or greater, or any other thickness value or range as can be understood based on this disclosure. In some embodiments, sacrificial layer 340 has a thickness of at least 2, 3, 5, 8, 10, 12, 15, 20, or 25 nm and/or at most 50, 35, 25, 20, 15, 12, 10, 8, or 5 nm, for example.
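The "compositionally different" test described above can be sketched as a small predicate over element compositions. This sketch covers only the element-set and atomic-percent criteria; the dopant and crystallographic-orientation distinctions noted in the text are omitted for brevity, and the materials used below are the disclosure's own examples.

```python
# Minimal sketch of the "compositionally different" definition:
# two materials differ if one contains an element the other lacks,
# or if a shared element is present at a different atomic percent.

def compositionally_different(a, b, tol=0.0):
    """a, b: dicts mapping element symbol -> atomic percent."""
    if set(a) != set(b):
        return True  # an element is in one material but not the other
    return any(abs(a[e] - b[e]) > tol for e in a)

sige_70 = {"Si": 30.0, "Ge": 70.0}   # SiGe, 70 atomic percent Ge
sige_25 = {"Si": 75.0, "Ge": 25.0}   # SiGe, 25 atomic percent Ge
si      = {"Si": 100.0}

assert compositionally_different(sige_70, si)       # different element sets
assert compositionally_different(sige_70, sige_25)  # same elements, different %
assert not compositionally_different(si, {"Si": 100.0})
```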
In some embodiments, a thickness of at least 2 nm may be employed to ensure that the substrate is adequately covered and that the material can be subsequently removed to enable forming the S/D contacts 380 below the S/D regions 360, as is described in more detail herein.

Figures 4A-4D illustrate example cross-sectional views of a plane taken through a S/D region of the structures of Figures 3D, 3E, 3G, and 3H, respectively, to help show the processing described herein, in accordance with some embodiments. For instance, the cross-sectional view in Figure 4A is indicated by the 4A-4A dashed line in Figure 3D. Note that the structures of Figures 4A-4D show isolation regions 370. In some embodiments, isolation regions 370, which may be referred to as shallow trench isolation (STI) regions 370, include one or more dielectrics. In some such embodiments, the dielectric material included in isolation regions 370 includes any suitable oxide (e.g., silicon dioxide), nitride (e.g., silicon nitride), high-k dielectric, low-k dielectric, and/or any other suitable electrically insulating material as can be understood based on this disclosure. In some embodiments, isolation regions 370 include silicon, oxygen, nitrogen, and/or carbon. For instance, in some embodiments, isolation regions 370 include silicon dioxide, silicon monoxide, silicon nitride, silicon oxynitride, or carbon-doped silicon dioxide (or other carbon-doped oxides). The other features of the structures are apparent in light of this disclosure.

Method 200 of Figure 2 continues with forming 212 S/D regions in the S/D trenches, such as to form the example resulting structure of Figure 3E, which includes S/D regions 360 formed in trenches 350, in accordance with some embodiments. Note that the source region and the drain region are referred to herein as simply S/D regions 360 for ease of description, as either of the regions 360 may be the source region, thereby making the other region 360 the drain region.
In other words, how the transistor device is electrically connected and/or how it operates can dictate which region 360 is the source region and which is the drain region. For instance, in some embodiments, the left S/D region 360 in the structure of Figure 3E is the source region and the right S/D region 360 is the drain region, and vice versa in other embodiments (left region 360 is the drain and right region 360 is the source). Also note that the cross-sectional view in Figure 4B is indicated by the 4B-4B dashed line in Figure 3E.

In some embodiments, the S/D regions 360 can be formed using any suitable techniques. For instance, in embodiments where sacrificial layer 340 includes dielectric material, the material of S/D regions 360 may epitaxially grow only from the exposed semiconductor material of channel layer 310. However, in embodiments where sacrificial layer 340 includes semiconductor material, the material of S/D regions 360 may grow from both the exposed semiconductor material of channel layer 310 and from the top surface of sacrificial layer 340.

In some embodiments, the epitaxial growth or deposition of the semiconductor material of S/D regions 360 is performed such that the growth from both sides of trench 350 merges to form S/D regions 360 such as those shown in Figure 3E. In some such embodiments, processing may then be performed to achieve the example structure of Figure 3F, where a trench or opening 352 is formed in the S/D regions 360 to gain access to the underlying sacrificial layer 340, for example. Such processing includes a deep etch that punches through the S/D regions 360 and stops at sacrificial layer 340, for example. For instance, the etch may include masking off the sides of the S/D regions 360 that remain and only having an opening where the eventual trench 352 is formed, and then performing a highly-directional etch down through the exposed S/D region such as to form the structure of Figure 3F.
This deep etch processing may be performed prior to replacement gate processing or after (such as during source/drain contact processing). In embodiments where the deep etch is performed before replacement gate processing (where such replacement gate processing occurs), additional sacrificial layer material 341 may be deposited in trench 352 to form the example resulting structure of Figure 3F' (which also shows the replacement gate structure formed). Note that sacrificial material 341 may or may not be compositionally distinct from sacrificial material 340.

In other embodiments, the epitaxial growth of the semiconductor material of S/D regions 360 is controlled such that it is interrupted prior to the merging of the adjacent portions of S/D material. In some such embodiments, the structure of Figure 3F is formed in the first instance, without the intervening structure of Figure 3E having been formed. The epi growth to prevent merging of adjacent portions of S/D material (e.g., as shown in Figure 3F) can be controlled based on the time of the deposition process, for example. Again, in embodiments employing replacement gate processing, additional sacrificial layer material 341 may be deposited in trench 352 to form the example resulting structure of Figure 3F' (which also shows the replacement gate structure formed). Note that there may be no observable interface as shown between the initial sacrificial layer 340 and the additional sacrificial material 341. Note that although trench 352 is shown in Figure 3F as being in the middle of original S/D trench 350, such a depiction is for ease of illustration and the present disclosure should not be so limited.
Also note that in some embodiments, trench 352 has a width (dimension in the X-axis direction) between the portions of S/D material 360 of at least 2, 3, 4, or 5 nm, where such a threshold width may be utilized to ensure that the underlying sacrificial layers 341 and 340 can be accessed for (at least partial) removal via the selective etch processing described herein.

In still other embodiments, such a trench or opening 352 need not be formed in the S/D regions 360 when employing dielectric wall structures 320, as will be described in more detail below with reference to Figures 6 and 7A-7D. In such embodiments, the S/D regions 360 need not be separated, as the processing to remove sacrificial layer 340 can be performed on the sides of the S/D regions 360 between those regions and the adjacent dielectric wall structures 320. In other words, in some such embodiments, the sacrificial layer 340 under the S/D regions 360 can be accessed by going around the S/D regions 360 (such as is shown in Figures 7A-7D) instead of going through them (such as is shown in Figures 3F and 3G), thereby resulting in a structure having its S/D region intact between adjacent bodies of channel material 310 (such as is shown in Figure 6).

S/D regions 360, in some embodiments, include semiconductor material. In some such embodiments, S/D regions 360 include group IV and/or group III-V semiconductor material. In some embodiments, S/D regions 360 include the same group-type of semiconductor material that channel layer 310 includes. For instance, in some such embodiments where channel layer 310 includes group IV semiconductor material (e.g., Si, SiGe, Ge), S/D regions 360 also include group IV semiconductor material. Further, in some such embodiments where channel layer 310 includes group III-V semiconductor material (e.g., GaAs, InGaAs, InP), S/D regions 360 also include group III-V semiconductor material.
In some embodiments, S/D regions 360 include one or more of silicon, germanium, tin, carbon, indium, gallium, aluminum, arsenic, nitrogen, phosphorus, or antimony. For instance, in an example embodiment, S/D regions 360 include semiconductor material that includes germanium (e.g., in a concentration in the range of 1-100 atomic percent), which may or may not also include silicon (e.g., in the form of Ge or SiGe). In another example embodiment, S/D regions 360 include gallium and arsenic, which may or may not also include indium (e.g., in the form of GaAs or InGaAs).

In some embodiments, the S/D regions 360 include the same semiconductor material as one another (e.g., where they are processed simultaneously), while in other embodiments, the S/D regions 360 include compositionally distinct semiconductor material from one another (e.g., where they are processed separately using masking techniques). Further, in some embodiments, the semiconductor material included in S/D regions 360 includes dopant, such as n-type and/or p-type dopant. For instance, in some embodiments, both S/D regions 360 include n-type dopant (e.g., in an NMOS device), while in other embodiments, both S/D regions 360 include p-type dopant (e.g., in a PMOS device). In still other embodiments, one of the S/D regions 360 includes n-type dopant, while the other of the S/D regions 360 includes p-type dopant, such as in a configuration that employs quantum tunneling (e.g., in a TFET device).

In some embodiments, one or both of S/D regions 360 include a multilayer structure that includes at least two compositionally distinct material layers or portions. For instance, in some such embodiments employing a multilayer S/D region, there may be a first portion nearest channel layer/body 310 and a second portion nearest S/D contact structure 380, where the first and second portions include compositionally different materials.
For example, the second portion may include a relatively higher amount of dopant than the first portion, which may help prevent diffusion of undesired dopant into the adjacent channel layer/body 310 and/or help reduce contact resistance. In another example, the first portion includes a first semiconductor material and the second portion includes a second semiconductor material different from the first semiconductor material. For instance, the first portion may include Si or SiGe with a relatively low Ge concentration (e.g., 0-30 atomic percent), while the second portion may include SiGe or Ge with a relatively high Ge concentration (e.g., 30-100 atomic percent). In some embodiments, one or both of S/D regions 360 include grading (e.g., increasing and/or decreasing) of the concentration of one or more materials within the feature. For example, the atomic percent concentration of a semiconductor compound can be graded or changed throughout at least a portion of a S/D region 360, such as the concentration of Ge or In in the region. In another example, the concentration of dopant is graded in a S/D region 360, such as having the concentration be relatively lower near channel layer/body 310 and relatively higher near the corresponding S/D contact structure 380. This can be achieved by tuning the amount of dopant in the reactant flow (e.g., during an in-situ doping scheme), for example. Further, such a graded configuration can help prevent diffusion of undesired dopant into the channel layer/body 310 and/or help reduce contact resistance, for example.

Method 200 of Figure 2 continues with optionally forming 214 the final gate structures if dummy gate structures were employed in a gate-last process flow, in accordance with some embodiments.
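The graded dopant profile described above (lower concentration near the channel, higher near the contact) can be sketched as a simple interpolation. The linear shape, region thickness, and endpoint concentrations below are illustrative assumptions only; an actual grading profile depends on the in-situ doping scheme used.

```python
# Hypothetical linearly graded dopant profile across a S/D region:
# low concentration at the channel side, high at the contact side.

def graded_dopant(y, thickness, c_channel=1e19, c_contact=1e21):
    """Dopant concentration (atoms/cm^3) at depth y measured from the
    channel side, linearly interpolated over the region thickness."""
    frac = min(max(y / thickness, 0.0), 1.0)  # clamp to [0, 1]
    return c_channel + frac * (c_contact - c_channel)

t = 20e-9  # assumed 20 nm thick S/D region

assert graded_dopant(0.0, t) == 1e19   # channel side: relatively low
assert graded_dopant(t, t) == 1e21     # contact side: relatively high
# concentration rises monotonically toward the contact
assert graded_dopant(0.0, t) < graded_dopant(t / 2, t) < graded_dopant(t, t)
```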
Recall that if such a gate-last process flow is employed via replacement gate processing, then additional sacrificial material 341 may be formed in trench 352 of Figure 3F, such as to form the example resulting structure of Figure 3F'. This helps protect the S/D regions 360 during such replacement gate processing. The example structures of Figures 3F' and 3H' illustrate that the dummy gate structure (such as dummy gate structure 334' shown in Figure 3B') was removed and replaced with the final gate structure, in accordance with some embodiments. The final gate structure or stack still includes gate dielectric 332 and gate electrode 334, which is the same as the gate-first process flow resulting in the example structure of Figure 3H. However, as the gate-last process flow structures of Figures 3F' and 3H' form the final gate structures in trenches between gate spacers 336 after the removal of the dummy gate structures, the final gate dielectric in those structures is not only formed on the bottom of that trench, but also on the trench sidewalls, as shown. As can be understood, the gate dielectric is a conformal layer within that trench. Thus, in some embodiments, gate dielectric 332 has a "U" shape such as is shown in Figures 3F' and 3H'.

Method 200 of Figure 2 continues with performing 216 S/D contact processing, such as to form the example resulting structures of Figures 3H and 3H' that include S/D contact structures 380, in accordance with some embodiments. Note that the source contact structure and the drain contact structure may simply be referred to herein as S/D contact structures 380 for ease of description, as either of the contact structures 380 may be to the source region, thereby making the other contact structure 380 to the drain region.
In other words, in some embodiments, the left S/D region 360 is the source region, and thus the corresponding contact structure 380 would be the source contact structure, making the right S/D region 360 the drain region and its corresponding contact structure 380 the drain contact structure, while in other embodiments, the opposite configuration applies, with the source on the right and the drain on the left. Also note that the interface 395 between S/D contact 380 and S/D region 360 of Figures 3H and 3H' is increased relative to the interface 195 of Figure 1. Thus, the structures described herein and enabled through sacrificial S/D layer processing have significantly larger contact area than typical state-of-the-art top-interface contact devices, as can be understood based on this disclosure.

The S/D contact processing 216 includes at least partially removing sacrificial layer 340 (and additional sacrificial layer 341, where employed) to enable forming the S/D contacts 380 below the S/D regions 360, such as to form the example resulting structure of Figure 3G, in accordance with some embodiments. Such processing can use wet and/or dry etch techniques that selectively remove the material of sacrificial layer 340 (and additional sacrificial layer 341, where employed) relative to the material of S/D regions 360. For instance, as described herein, the materials included in sacrificial layers 341, 340 and S/D regions 360 can be selected to ensure a desired amount of etch selectivity between the materials, such that sacrificial layers 341 and 340 can be removed using one or more etchants at a rate that is relatively faster than the rate at which the one or more etchants remove S/D regions 360.
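The practical meaning of the etch selectivity described above can be shown with a back-of-the-envelope calculation: with selectivity S (sacrificial material etched S times faster than S/D material), the S/D material lost while clearing the sacrificial layer is roughly the sacrificial thickness divided by S. The numbers below are illustrative assumptions, not process values from the disclosure.

```python
# Rough first-order model of S/D material loss during the selective
# removal of the sacrificial layer. All numbers are hypothetical.

def sd_loss_nm(t_sacrificial_nm, selectivity):
    """Approximate S/D thickness consumed (nm) while etching through a
    sacrificial layer of thickness t_sacrificial_nm at the given
    sacrificial:S/D etch-rate ratio."""
    return t_sacrificial_nm / selectivity

# Clearing an assumed 10 nm sacrificial layer:
assert sd_loss_nm(10, 20) == 0.5    # 20:1 selectivity -> ~0.5 nm S/D loss
assert sd_loss_nm(10, 100) == 0.1   # 100:1 selectivity -> ~0.1 nm S/D loss
```

This is why the disclosure emphasizes selectivity ratios of at least 2 up to 100 or more: the higher the ratio, the less the S/D regions are attacked while the space holder is cleared.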
In some embodiments, for a given etchant, material included in sacrificial layers 341 and 340 can be selectively removed relative to the material included in S/D regions 360, such that the given etchant removes the material in sacrificial layers 341 and 340 at least 2, 3, 4, 5, 10, 15, 20, 25, 50, or 100 times faster than the given etchant removes the material in the S/D regions. In some embodiments, all of sacrificial layers 341 and 340 are removed, such as is shown in Figure 3G. However, in other embodiments, a remainder of sacrificial layer 340 may remain, such as at the bottom of the trenches 354 as shown in the blown-out portion of Figure 3G'. In either such case, the techniques described herein that employ sacrificial layer 340 can be detected based on such a remnant or artifact of the sacrificial layer 340. Note that trenches 350, 352, and 354 are all in the S/D regions, but they relate to the trenches at various stages of the processing. Also note that the cross-sectional view in Figure 4C is indicated by the 4C-4C dashed line in Figure 3G. Numerous different material combinations and sacrificial removal techniques can be understood based on this disclosure.

After sacrificial layer 340 (and additional sacrificial layer 341, where employed) has been at least partially removed, the S/D contact processing includes forming the S/D contacts 380 in trenches 354, such as to form the example structures of Figures 3H and 3H', in accordance with some embodiments. Note that the cross-sectional view in Figure 4D is indicated by the 4D-4D dashed line in Figure 3H. In some embodiments, the S/D contacts 380 are deposited using ALD and/or CVD processes, for instance, which enables deposition of the metal all around the S/D regions 360, for example, including the underside of the S/D regions (and between portions of the S/D regions 360 in neighboring cells and/or along the sidewalls of the S/D regions 360 between the dielectric wall structures 320, where applicable).
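The contact-area benefit of depositing metal all around the S/D regions, as described above, can be illustrated with crude geometry. Modeling a S/D region as a rectangular block (a deliberate simplification; real epitaxial regions are faceted or rounded, and all dimensions below are hypothetical):

```python
# Rough geometric comparison of a top-only contact interface (such as
# interface 195 of Figure 1) versus a wrap-around interface (such as
# interface 395). The S/D region is modeled as a rectangular block.

def top_only_area(w, l):
    """Metal lands only on the top face."""
    return w * l

def wraparound_area(w, l, h):
    """Top + bottom + both sidewalls along the length; the two end
    faces abut the channel and are excluded."""
    return 2 * w * l + 2 * h * l

w, l, h = 20e-9, 30e-9, 40e-9  # assumed width, length, height (m)

assert wraparound_area(w, l, h) > top_only_area(w, l)
# With these assumed dimensions the wrap-around interface is several
# times larger than the top-only interface.
assert wraparound_area(w, l, h) / top_only_area(w, l) > 3
```

The larger interface area directly supports the disclosure's point that wrap-around contacts reduce contact resistance relative to top-interface-only devices.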
In some embodiments, the S/D contact processing 216 includes silicidation, germanidation, and/or III-V-idation to form a mixture of one or more metals with the exposed semiconductor material surface of the S/D regions 360. In some cases, the mixture of the metal and semiconductor material is referred to as an intermetallic region.

In some embodiments, one or both of the S/D contact structures 380 include a resistance reducing metal and a contact plug metal, or just a contact plug, for instance. Example contact resistance reducing metals include, for instance, nickel, titanium, titanium with nitrogen (e.g., in the form of titanium nitride), tantalum, tantalum with nitrogen (e.g., in the form of tantalum nitride), cobalt, gold, gold-germanium, nickel-platinum, nickel aluminum, and/or other such resistance reducing metals or alloys. Example contact plug metals include, for instance, aluminum, tungsten, ruthenium, or cobalt, although any suitable conductive material could be employed. In some embodiments, additional layers are present in the S/D contact trenches, where such additional layers would be a part of the S/D contact structures 380. Examples of additional layers include adhesion layers and/or liner/barrier layers that include, for example, titanium, titanium with nitrogen (e.g., in the form of titanium nitride), tantalum, and/or tantalum with nitrogen (e.g., in the form of tantalum nitride). Another example of an additional layer is a contact resistance reducing layer between a given S/D region 360 and its corresponding S/D contact structure 380, where the contact resistance reducing layer includes semiconductor material and relatively high dopant (e.g., with dopant concentrations greater than 1E19, 1E20, 1E21, 5E21, or 1E22 atoms per cubic cm), for example.

In some embodiments, a dielectric layer (not shown) may be between the top portion of S/D contacts 380 and gate sidewall spacers 336.
In some such embodiments, the dielectric layer includes any suitable oxide (e.g., silicon dioxide), nitride (e.g., silicon nitride), high-k dielectric, low-k dielectric, and/or any other suitable electrically insulating material as can be understood based on this disclosure. In some embodiments, the dielectric layer includes silicon, oxygen, nitrogen, and/or carbon. For instance, in some embodiments, the dielectric layer includes silicon dioxide, silicon monoxide, silicon nitride, silicon oxynitride, or carbon-doped silicon dioxide (or other carbon-doped oxides). In some embodiments, it is desired to select material for the dielectric layer that has a low dielectric constant and a high breakdown voltage. In some embodiments, to decrease the dielectric constant, the dielectric layer is formed to be intentionally porous, such as including at least one porous carbon-doped oxide (e.g., porous carbon-doped silicon dioxide). In embodiments where the dielectric layer is porous, it includes a plurality of pores throughout at least a portion of the layer. In some embodiments, the dielectric layer includes a multilayer structure. Note that such a dielectric layer may be referred to as an interlayer dielectric (ILD) structure, in some cases.

Method 200 of Figure 2 continues with completing 218 integrated circuit processing, as desired, in accordance with some embodiments. Such additional processing to complete the integrated circuit can include back-end or back-end-of-line (BEOL) processing to form one or more metallization layers and/or to interconnect the devices formed during the front-end or front-end-of-line (FEOL) processing, such as the transistor devices described herein. Note that the processes 202-218 of method 200 are shown in a particular order for ease of description, in accordance with some embodiments. However, in some embodiments, one or more of the processes 202-218 are performed in a different order or need not be performed at all.
For example, box 204 is an optional process that need not be performed, in some embodiments. Further, box 214 is an optional process that need not be performed in embodiments employing a gate-first process flow, for example. Numerous variations on method 200 and the techniques described herein will be apparent in light of this disclosure.

Figure 5 illustrates the example integrated circuit structure of Figure 3H, illustrating a portion of sacrificial layer 340 remaining in the final structure, in accordance with some embodiments. Recall that, as shown in the blown-out portion of Figure 3G', the sacrificial layer 340 may only be partially removed via the selective etch processing, such that a portion of sacrificial layer 340 remains in the final structure, such as is shown in Figure 5. In some such embodiments, such a remaining sacrificial layer 340 portion is intentionally kept to, for example, help isolate the S/D contacts 380 from the underlying substrate 300. In embodiments where a portion of sacrificial layer 340 remains at the bottom of the S/D trenches, the remaining thickness of that sacrificial layer portion (dimension in the Y-axis direction in the example structure of Figure 5) may be at least 1, 2, 3, 4, or 5 nm and/or at most 10, 8, 6, or 5 nm, for example, or any other thickness value or range as can be understood based on this disclosure. Note that at least a portion of sacrificial layer 340 can remain in the end structure regardless of whether a gate-first process flow is employed (e.g., resulting in the structure of Figure 3H, such as is shown in Figure 5) or a gate-last process flow is employed (e.g., resulting in the structure of Figure 3H').
Note that observing the remaining portions of sacrificial layer 340 may be useful in detecting the techniques and structures described in this disclosure.

Note that the structures described herein are primarily described and shown in the context of non-planar transistor configurations; however, in some embodiments, the techniques can be used for planar transistor configurations. Planar transistor configurations relate to where the gate structure (e.g., gate dielectric 332 and gate electrode 334) is above or otherwise adjacent to only one side of channel layer or body 310. Non-planar transistor configurations relate to where the gate structure (e.g., gate dielectric 332 and gate electrode 334) is adjacent to multiple sides of channel layer or body 310. For instance, the example integrated circuit structures of Figures 3H, 3H' and 5 include finned transistor configurations, such as for FinFET devices, where the active height of the fin is indicated by 390 in the figures. The fin is better illustrated in Figure 8A, which is along the dashed line 8A-8A shown in Figures 3H, 3H' and 5. In Figure 8A, body 310 is a fin or fin-shaped, and in addition to being below the gate structure (including gate dielectric 332 and gate electrode 334), body 310 is also between two portions of the gate structure as shown. As is also shown, active height 390 relates to the height of the portion of the fin that extends above the top plane of the isolation or STI regions 370.

In embodiments employing a finned transistor configuration (e.g., where the body 310 is a fin, such as is shown in Figures 8A and 8C), the fins can be formed using any suitable techniques, such as blanket depositing the body of channel material and patterning the blanket-deposited layer into fins as desired.
Another technique includes forming fins in the top of substrate 300, forming isolation regions including dielectric material in the trenches between fins, recessing or removing the substrate-based fins to make trenches between the isolation regions, depositing the material of body 310 to form fins in those trenches, and then recessing the isolation regions to expose the fins and allow them to protrude or extend above a top surface of the isolation regions. For instance, isolation regions 370 in Figure 8A may be those recessed isolation regions in such cases. Figure 8C illustrates the same view as Figure 8A, but with a different fin-shaped body, where the body 310 includes a rounded or curved top surface (as opposed to a flat or planar top surface, as shown in the structure of Figure 8A). Further, the structure of Figure 8C includes dielectric wall structures 320, as described herein. Further still, the structure of Figure 8C shows that a portion of body 310 extends down to the sub-fin region that is below the active height 390 (as opposed to all of body 310 being a part of the active height 390 in the structure of Figure 8A).

In some embodiments employing a finned configuration, the fin-shaped body (e.g., 310 in Figures 8A and 8C) has a width (dimension in the Z-axis direction) in the range of 2-100 nm (or in a subrange of 2-10, 2-25, 2-40, 2-50, 2-75, 4-10, 4-25, 4-40, 4-50, 4-75, 4-100, 10-25, 10-40, 10-50, 10-75, 10-100, 25-40, 25-50, 25-75, 25-100, or 50-100 nm) or greater, or any other suitable value or range as can be understood based on this disclosure. In some embodiments, the fin-shaped body has a width of at least 2, 5, 8, 10, 15, 20, 25, or 50 nm, and/or a width of at most 100, 75, 50, 40, 30, 25, 20, 15, 12, or 10 nm, for example.
In some embodiments employing a finned configuration, the active height 390 of the fin-shaped body is a height (dimension in the Y-axis direction) in the range of 5-200 nm (or in a subrange of 5-25, 5-50, 5-100, 10-25, 10-50, 10-80, 10-100, 10-200, 20-80, 20-100, 20-200, 40-80, 40-120, 40-200, 50-100, 50-200, or 100-200 nm) or greater, or any other suitable value or range as can be understood based on this disclosure. In some embodiments, the fin-shaped body has an active height 390 of at least 5, 10, 15, 20, 25, 50, 80, 100, 120, or 150 nm, and/or at most 200, 150, 120, 100, 80, 50, or 25 nm, for example. In some embodiments employing a finned configuration, the active height 390 to width ratio of the fins is greater than 1, such as greater than 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, or 10, or greater than any other suitable threshold ratio. Numerous different shapes and configurations for the body of channel material (or channel region) of the transistor will be apparent in light of this disclosure.

Figure 6 illustrates a cross-sectional view of an example integrated circuit structure including increased S/D contact area and employing a gate-all-around (GAA) configuration, in accordance with some embodiments. The structure of Figure 6 is similar to that of Figure 3H', as both structures were formed with a gate-last process flow, except that the structure of Figure 3H' (as well as the structures of Figures 3H and 5) has a finned configuration where the active height of the fin is indicated as 390, as opposed to the gate-all-around configuration of Figure 6. In addition, the S/D regions 360 in the structure of Figure 6 are not separated in the middle (e.g., trench 352 and contacts 380 are not present between portions of the S/D regions 360) as they are in the structure of Figure 3H' (as well as in the structures of Figures 3H and 5).
This is because dielectric wall structures 320 were employed for the structure of Figure 6, resulting in the sacrificial layer 340 being able to be accessed to the sides of the S/D regions 360, as described in more detail below with respect to Figures 7A-7D.

Again, the structure of Figure 6 is similar to the structure of Figure 3H', and thus all relevant description of that structure applies equally to the structure of Figure 6. However, as shown in Figure 6, the gate structure (including gate dielectric 332 and gate electrode 334) wraps around body 310 in a gate-all-around (GAA) configuration. Thus, in this example structure, body 310 may be considered a nanowire or nanoribbon, for example. Such a structure is also shown in Figure 8B, for example, which is the view along dashed line 8B-8B in Figure 6. Such a structure can be formed using an initial multilayer stack including one or more sacrificial layers and one or more non-sacrificial layers (such as the layer that becomes body 310). The sacrificial layer(s) of the multilayer stack can then be removed via selective etch processing to release the non-sacrificial layer(s) to be used as the body(ies) of channel material. Thus, the material of the sacrificial layer(s) can be selectively etched relative to the material of body 310 using a given etchant. Such selective etch processing can occur, for example, during process 214 where the replacement gate processing occurs. Examples of suitable materials for the selective etch processing are provided herein, such as layers of the channel material including SiGe or Ge, while the sacrificial layers include Si or SiGe (with relatively lower Ge concentration, such as at least 20 atomic percent lower Ge). In some embodiments, the stack of nanowires or nanoribbons (even just including the final layers after the sacrificial layers are removed) may be considered fin-shaped.
In some embodiments, a nanoribbon may have a height to width ratio as described for the fins herein, but inverted, such that a nanoribbon is similar to a sideways-lying fin (e.g., with a width to height ratio of at least 1.5, 2, 2.5, 3, 4, or 5).

In some embodiments employing a gate-all-around or GAA configuration, the nanowire/nanoribbon-shaped body (e.g., 310 in Figures 6 and 8B) has a height (in the Y-axis direction) in the range of 2-100 nm (or in a subrange of 2-10, 2-25, 2-40, 2-50, 2-75, 4-10, 4-25, 4-40, 4-50, 4-75, 4-100, 10-25, 10-40, 10-50, 10-75, 10-100, 25-40, 25-50, 25-75, 25-100, or 50-100 nm) or greater, or any other suitable range as can be understood based on this disclosure. In some embodiments, the nanowire/nanoribbon-shaped body has a height of at least 2, 5, 8, 10, 15, 20, 25, or 50 nm, and/or a height of at most 100, 75, 50, 40, 30, 25, 20, 15, 12, or 10 nm, for example. Although only one body (or nanowire or nanoribbon) is shown in the example structures of Figures 6 and 8B, any number of bodies (or nanowires or nanoribbons) can be employed in a gate-all-around configuration, such as 2-10 or more, in accordance with some embodiments. For instance, Figure 8D also shows a cross-sectional view through the channel region and gate structure, and includes two bodies of channel material 310 (which may be considered nanowires or nanoribbons). Also note that the bodies of channel material 310 in the structure of Figure 8D are square-shaped, as opposed to circular as shown in Figure 8B. Thus, nanowires or nanoribbons could employ various different shapes, such as a circle, oval, ellipse, square, rectangle, sheet, fin, or any other shape as can be understood based on this disclosure.
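The example fin and nanoribbon dimension ranges quoted above can be captured in a small sanity check. This is an illustrative sketch only: the function name and the choice of bounds (the widest quoted ranges, fin width 2-100 nm, active height 5-200 nm, height-to-width ratio greater than 1) are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical design-rule check over the example fin dimension ranges
# described above. Names and structure are illustrative, not from the
# disclosure.

def fin_within_disclosed_ranges(width_nm: float, active_height_nm: float,
                                min_aspect: float = 1.0) -> bool:
    """True if a candidate fin falls inside the example ranges above."""
    width_ok = 2 <= width_nm <= 100            # width range: 2-100 nm
    height_ok = 5 <= active_height_nm <= 200   # active height range: 5-200 nm
    aspect_ok = (active_height_nm / width_nm) > min_aspect  # ratio > 1
    return width_ok and height_ok and aspect_ok

print(fin_within_disclosed_ranges(10, 50))  # True: 10 nm x 50 nm fin, ratio 5
print(fin_within_disclosed_ranges(10, 8))   # False: ratio 0.8 is below 1
```

Per the nanoribbon description above, a nanoribbon check would simply invert the aspect test (width-to-height ratio greater than 1, e.g., at least 1.5).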
Further note that the structure of Figure 8D does not include dielectric wall structures 320, as shown.

Figures 7A-7D illustrate example cross-sectional integrated circuit views through a S/D region of the structure of Figure 6 to illustrate forming the S/D contact structure around that S/D region when employing dielectric wall structures, in accordance with some embodiments. In more detail, the structures are views along dashed line 7D-7D in Figure 6, with the structure of Figure 7D corresponding to the actual structure of Figure 6, as can be understood. Recall that the processing optionally includes forming 204 dielectric wall structures, as previously described. Such dielectric wall structures 320 are shown in Figures 7A-7D, and they include one or more dielectrics, in accordance with some embodiments. In some such embodiments, the dielectric material included in dielectric wall structures 320 includes any suitable oxide (e.g., silicon dioxide), nitride (e.g., silicon nitride), high-k dielectric, low-k dielectric, and/or any other suitable electrically insulating material as can be understood based on this disclosure. In some embodiments, dielectric wall structures 320 include silicon, oxygen, nitrogen, and/or carbon. For instance, in some embodiments, dielectric wall structures 320 include silicon dioxide, silicon monoxide, silicon nitride, silicon oxynitride, or carbon-doped silicon dioxide (or other carbon-doped oxides). In some embodiments, dielectric wall structures 320 have a top portion (farthest from substrate 300) that includes high-k dielectric material (e.g., to help provide relatively robust etch selectivity when removing sacrificial layer 340) and a bottom portion (nearest to substrate 300) that includes low-k dielectric material (e.g., to help reduce capacitance).
Note that in some embodiments, the dielectric wall structures 320 extend from adjacent to the source region (e.g., one of the S/D regions 360) to adjacent to the drain region (e.g., the other of the S/D regions), while in other embodiments, the dielectric wall structures 320 may only be formed adjacent to the source and drain regions (such that they do not extend under the gate line, for example). The other features of the structures are apparent in light of this disclosure.

As shown in the structures of Figures 7A-7D, dielectric wall structures 320 allow the sacrificial layer to be removed from under the S/D regions 360 without going through a given S/D region 360 (as opposed to the previous description of the techniques), in accordance with some embodiments. For instance, Figure 7A illustrates the sacrificial layer 340 formed at the bottom of the S/D trench 350, similar to the processing described herein to form the structure of Figure 3D. Thus, the structure of Figure 3D also applies to the structure of Figure 7A, where Figure 7A would be the view indicated by the dashed line 4A-4A, for example. Note that compared to the structure of Figure 4A, the sacrificial layer 340 also forms on the sidewalls of the dielectric wall structures 320 in the structure of Figure 7A, which acts as a space-holder to allow later removal via selective etch and access to the bottom portion of the sacrificial layer 340. Figure 7B illustrates the S/D region 360 after it is formed, similar to the processing described herein to form the structure of Figure 3E. Thus, the structure of Figure 3E also applies to the structure of Figure 7B, where Figure 7B would be the view indicated by the dashed line 4B-4B, for example.
Note that the sacrificial layer 340 encapsulates the epitaxial semiconductor material of the S/D region 360 as it grows and provides isolation between the sidewall of the S/D material 360 and the dielectric wall structures 320, as well as between the underside of the S/D material 360 and the substrate 300, as shown.

Figure 7C illustrates the sacrificial layer 340 having been selectively etched and removed during S/D contact processing (such as during process 216 described herein). Recall that although the sacrificial layer 340 is shown as having been completely removed in Figure 7C, in some cases a portion of the sacrificial layer 340 is not removed and remains in the final structure. Note that the minimum space (indicated as 392 in Figure 7C) between a side of the S/D region 360 and the adjacent dielectric wall structure 320 may be at least 2, 3, 4, or 5 nm, in accordance with some embodiments. Such minimal clearance (e.g., at least 2 nm or at least 5 nm) may be required to ensure access to the sacrificial layer 340 under the S/D region 360 during the selective etch processing used to remove that sacrificial layer 340, for example. However, too much clearance reduces the size of the S/D region 360, which may be undesired. After the sacrificial layer 340 is at least partially removed from below the S/D region 360, Figure 7D illustrates the S/D contact structures 380 having been deposited to form the metal features all around the S/D material 360, including the underside of the S/D region 360 and along the sidewalls of the S/D region 360 between the dielectric wall structures 320 (such as during process 216 described herein). Note that the interface 395 between S/D contact 380 and S/D region 360 of Figure 7D is increased relative to the interface 195 of Figure 1.
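The clearance trade-off described above (the gap 392 must be wide enough for etchant access under the S/D region, but not so wide that it shrinks the S/D region) can be sketched as a simple bounds check. The 2 nm floor comes from the text; the 10 nm ceiling and all names are assumed example values, not from the disclosure.

```python
# Illustrative check of the clearance trade-off for gap 392. The 2 nm
# minimum is from the text above; the 10 nm maximum is a hypothetical
# example of "too much clearance", not a disclosed value.

def clearance_acceptable(gap_nm: float,
                         etch_access_min_nm: float = 2.0,
                         sd_volume_max_nm: float = 10.0) -> bool:
    """True if gap 392 permits etch access without overly shrinking the S/D."""
    return etch_access_min_nm <= gap_nm <= sd_volume_max_nm

print(clearance_acceptable(3.0))   # True: wide enough for etch access
print(clearance_acceptable(1.0))   # False: etchant cannot reach under the S/D
```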
Recall that the structures described herein and enabled through sacrificial S/D layer processing have significantly larger contact area than typical state-of-the-art top-interface contact devices, as can be understood based on this disclosure. Such relatively increased S/D contact area reduces contact resistance and improves device performance. Numerous variations and configurations will be apparent in light of this disclosure.

Example System

Figure 9 illustrates a computing system 1000 implemented with integrated circuit structures including at least one transistor having increased S/D contact area as disclosed herein, in accordance with some embodiments. For example, the integrated circuit structures disclosed herein including at least one transistor having increased S/D contact area can be included in one or more portions of computing system 1000. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 can include a number of components, including, but not limited to, a processor 1004 and at least one communication chip 1006, each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board, a daughterboard mounted on a main board, or the only board of system 1000, etc.

Depending on its applications, computing system 1000 can include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002.
These other components can include, but are not limited to, volatile memory (e.g., DRAM or other types of RAM), non-volatile memory (e.g., ROM, ReRAM/RRAM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 can include one or more integrated circuit structures or devices formed using the disclosed techniques in accordance with an example embodiment. In some embodiments, multiple functions can be integrated into one or more chips (e.g., the communication chip 1006 can be part of or otherwise integrated into the processor 1004).

The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 can implement any of a number of wireless standards or protocols, including, but not limited to, Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 can include a plurality of communication chips 1006.
For instance, a first communication chip 1006 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1006 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments, the integrated circuit die of the processor includes onboard circuitry that is implemented with one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 1006 also can include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more integrated circuit structures or devices formed using the disclosed techniques as variously described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability can be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 can be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used.
Likewise, any one chip or chip set can have multiple functions integrated therein.

In various implementations, the computing system 1000 can be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, a digital video recorder, or any other electronic device or system that processes data or employs one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein. Note that reference to a computing system is intended to include computing devices, apparatuses, and other structures configured for computing or processing information.

Further Example Embodiments

The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.

Example 1 is an integrated circuit including at least one transistor. The integrated circuit includes a body (or channel region), a gate electrode and a gate dielectric (or, collectively, a gate structure), a source (or first) region and a drain (or second) region, a first (or source) contact structure, and a second (or drain) contact structure. The body includes semiconductor material. The gate electrode is at least above the body, the gate electrode including one or more metals. The gate dielectric is between the gate electrode and the body, the gate dielectric including one or more dielectrics. The body is between the source and drain regions, the source and drain regions including semiconductor material. The first contact structure includes one or more metals. The second contact structure includes one or more metals.
Note that the semiconductor material of the body may be the same as the semiconductor material of the source and drain regions (not counting doping) according to some embodiments, while in other embodiments the semiconductor material of the body is compositionally different from the semiconductor material of the source and drain regions (not counting doping).

Example 2 includes the subject matter of Example 1, wherein the first contact structure is at least above and below the source region (such that the source region is between two portions of the first contact structure).

Example 3 includes the subject matter of Example 1 or 2, wherein the second contact structure is at least above and below the drain region (such that the drain region is between two portions of the second contact structure).

Example 4 includes the subject matter of any of Examples 1-3, wherein the first contact structure wraps around (or surrounds) the source region.

Example 5 includes the subject matter of any of Examples 1-4, wherein the second contact structure wraps around (or surrounds) the drain region.

Example 6 includes the subject matter of any of Examples 1-5, wherein the first contact structure is between two portions of the source region.

Example 7 includes the subject matter of any of Examples 1-6, wherein the second contact structure is between two portions of the drain region.

Example 8 includes the subject matter of any of Examples 1-7, wherein the first contact structure is adjacent to at least three or four sides of the source region.

Example 9 includes the subject matter of any of Examples 1-8, wherein the second contact structure is adjacent to at least three or four sides of the drain region.

Example 10 includes the subject matter of any of Examples 1-9, further comprising a substrate.

Example 11 includes the subject matter of Example 10, wherein a portion of the first contact structure is between the substrate and the source region.

Example 12 includes the subject matter of Example 10 or 11, wherein a portion of the second contact structure is between the substrate and the drain region.

Example 13 includes the subject matter of any of Examples 10-12, further comprising a layer between the first contact structure and the substrate, the layer including compositionally different material relative to the source region.

Example 14 includes the subject matter of any of Examples 10-13, further comprising a layer between the second contact structure and the substrate, the layer including compositionally different material relative to the drain region. Note that the layer in Examples 13 and 14 may be the same layer.

Example 15 includes the subject matter of Example 13 or 14, wherein the layer of Example 13 and/or 14 includes one or more dielectrics.

Example 16 includes the subject matter of Example 13 or 14, wherein the layer of Example 13 and/or 14 includes semiconductor material that is compositionally different from the semiconductor material included in the source and/or drain regions, respectively.

Example 17 includes the subject matter of any of Examples 1-16, further comprising a first wall structure and a second wall structure, the source region between the first and second wall structures, the first and second wall structures including one or more dielectrics.

Example 18 includes the subject matter of any of Examples 1-17, further comprising a first wall structure and a second wall structure, the drain region between the first and second wall structures, the first and second wall structures including one or more dielectrics.
Note that the first and second wall structures in Examples 17 and 18 may be the same first and second wall structures that extend from the source region to the drain region.

Example 19 includes the subject matter of any of Examples 1-18, wherein the one or more metals included in the first and second contact structures include one or more transition metals.

Example 20 includes the subject matter of Example 19, wherein the one or more transition metals include one or more of tungsten, titanium, tantalum, copper, cobalt, gold, nickel, or ruthenium.

Example 21 includes the subject matter of any of Examples 1-20, wherein the body includes germanium.

Example 22 includes the subject matter of any of Examples 1-21, wherein the body includes group III-V semiconductor material.

Example 23 includes the subject matter of any of Examples 1-22, wherein the body is a fin, the fin between two portions of the gate electrode.

Example 24 includes the subject matter of Example 23, wherein the fin has a height of at least 20, 50, or 100 nanometers between the two portions of the gate electrode.

Example 25 includes the subject matter of any of Examples 1-22, wherein the gate electrode wraps around the body.

Example 26 includes the subject matter of Example 25, wherein the body is a nanowire or a nanoribbon.

Example 27 is a logic device including the subject matter of any of Examples 1-26.

Example 28 is a complementary metal-oxide-semiconductor (CMOS) circuit including the subject matter of any of Examples 1-27.

Example 29 is a computing system including the subject matter of any of Examples 1-28.

Example 30 is a method of forming the subject matter of any of Examples 1-29.
The method includes at least providing the body (or channel region), forming the gate electrode, forming the gate dielectric, forming the source (or first) region and the drain (or second) region, forming the first (or source) contact structure, and forming the second (or drain) contact structure.

Example 31 includes the subject matter of Example 30, further including forming a sacrificial layer in the source and drain regions, and removing the sacrificial layer prior to forming the first and second contact structures, such that a cavity is formed below each of the first and second contact structures to allow the first and second contact structures to be respectively formed below the source and drain regions.

Example 32 includes the subject matter of Example 30 or 31, further comprising etching an opening in the source and drain regions prior to forming the first and second contact structures.

Example 33 includes the subject matter of any of Examples 30-32, further comprising forming a first wall structure and a second wall structure, the first and second wall structures including one or more dielectrics, the source and drain regions between the first and second wall structures.

Example 34 includes the subject matter of any of Examples 30-33, wherein forming the source and drain regions includes epitaxially growing semiconductor material included in the regions from the body.

Example 35 includes the subject matter of any of Examples 30-34, wherein the gate dielectric and gate electrode are formed after forming the source and drain regions.

Example 36 is an integrated circuit including at least one transistor, the integrated circuit comprising: a substrate; a body above the substrate, the body including semiconductor material; a gate electrode at least above the body, the gate electrode including one or more metals; a gate dielectric between the gate electrode and the body, the gate dielectric including one or more dielectrics; a source region and a drain region, the body between the source and drain regions, the source and drain regions including semiconductor material; a first contact structure that wraps around the source region, a portion of the first contact structure between the substrate and the source region, the first contact structure including one or more metals; and a second contact structure that wraps around the drain region, a portion of the second contact structure between the substrate and the drain region, the second contact structure including one or more metals.

Example 37 includes the subject matter of Example 36, wherein the body is a fin, a nanowire, or a nanoribbon.

Example 38 is a method of forming an integrated circuit including at least one transistor, the method comprising: providing a body including semiconductor material; forming a gate electrode at least above the body, the gate electrode including one or more metals; forming a gate dielectric between the gate electrode and the body, the gate dielectric including one or more dielectrics; forming a source region and a drain region, the body between the source and drain regions, the source and drain regions including semiconductor material; forming a first contact structure at least above and below the source region, the first contact structure including one or more metals; and forming a second contact structure at least above and below the drain region, the second contact structure including one or more metals.

Example 39 includes the subject matter of Example 38, the method further comprising: forming a sacrificial layer in the source and drain regions; and removing the sacrificial layer prior to forming the first and second contact structures, such that a cavity is formed below each of the first and second contact structures to allow the first and second contact structures to be respectively formed below the source and drain regions.

The foregoing description of example embodiments has been presented for the purposes of illustration and description.
It is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner, and may generally include any set of one or more limitations as variously disclosed or otherwise demonstrated herein. |
Methods and apparatus for performing error correction code (ECC) coding techniques for high-speed implementations. The ECC code word is structured to facilitate a very fast single-error-detect (SED) that allows state machines to be stopped within a single cycle when an error is detected and enables a corresponding single-error-correct (SEC) operation to be performed over multiple cycles while the state machines are in a suspended mode. |
CLAIMS What is claimed is: 1. A method, comprising: performing a parity check for first and second subsets of bits in an instruction word, the first and second subsets collectively covering all of the bits in the instruction word; and detecting if a single error is present in the instruction word based on the parity checks for the first and second subsets of bits. 2. The method of claim 1 , further comprising: in response to determining a single error is present in the instruction word, performing a single error correction (SEC) process to correct the single error in the instruction word. 3. The method of claim 2, wherein the SEC process is performed using an error correction coding (ECC) scheme implemented using a plurality of subsets of bits for the instruction word. 4. The method of claim 3, wherein the plurality of subsets of bits include at least one of the first and second subsets of bits. 5. The method of claim 3, further comprising performing a double error detection (DED) operation using the ECC scheme. 6. The method of claim 1 , further comprising: in response to determining a single error is present in the instruction word, stopping state machines for a processor on which the instruction word is to be executed; and saving machine state data for the state machines. 7. The method of claim 6, further comprising: performing a single error correction (SEC) process to correct the single error in the instruction word; restoring the machine states for the state machines; and executing the corrected instruction word. 8. The method of claim 6, further comprising: performing the operations of detecting a single error is present in the instruction word and stopping the state machines within a single cycle. 9. 
The method of claim 1 , wherein the instruction word contains a plurality of instruction bits concatenated with a plurality of check bits, and the parity check for the first subset of bits is calculated over multiple selected instruction bits from among the plurality of instruction bits and multiple selected check bits from among the plurality of check bits including a check bit for the first subset of bits. 10. The method of claim 1 , further comprising: employing a respective first and second parity check bit for each of the first and second subset of bits, the instruction word including the first and second parity bits; performing the parity checks for each of the first and second subsets of bits using a value of the respective first and second parity check bits in the instruction word. 11. The method of claim 1 , further comprising: performing a memory scrubbing operation by iteratively performing operations including, reading an instruction word from memory; performing a single error detection (SED) operation on the instruction word to determine if it contains an error; in response to detection of an error, performing a single error correction (SEC) operation to produce a corrected instruction word; writing the corrected instruction word back to the memory; beginning evaluation of a next instruction word; otherwise, if an error is not detected beginning evaluation of a next instruction word. 12. An integrated circuit, comprising: logic to implement an error correction code (ECC) mechanism employing a plurality of subsets of bits, each subset of bits comprising a unique set of bit positions for an instruction word; logic to perform a parity check for first and second subsets of bits in the plurality of subsets of bits, the first and second subsets collectively covering all of the bits in the instruction word; and logic to detect if a single error is present in the instruction word based on the parity checks for the first and second subsets of bits. 13. 
The integrated circuit of claim 12, wherein the logic to perform the parity check for the first and second subsets of bits comprises first and second XOR trees having multiple logic levels, the bits in the first subset comprising inputs to a first logic level in the first XOR tree, the bits in the second subset comprising inputs to a first logic level in the second XOR tree, each of the first and second subsets of bits including a corresponding parity bit for the instruction word.

14. The integrated circuit of claim 13, wherein the parity check comprises an even parity check, and the logic further includes a logical OR block having first and second inputs respectively coupled to first and second outputs of the first and second XOR trees.

15. The integrated circuit of claim 12, further comprising: at least one state machine; logic to enunciate a single error detection (SED) signal in response to detecting a single error in the instruction word; and logic to stop the at least one state machine in response to the SED signal and save state information corresponding to the at least one state machine.

16. The integrated circuit of claim 15, further comprising: an execution datapath; logic to perform a single error correction (SEC) operation on the instruction word using the ECC mechanism to generate a corrected instruction word; and logic to forward the corrected instruction word to be executed via the execution datapath.

17.
The integrated circuit of claim 12, wherein the integrated circuit comprises a network processor unit having a plurality of compute engines, including at least one compute engine comprising: a control store to store instruction words; an instruction register coupled to the control store; logic to implement an error correction code (ECC) mechanism employing a plurality of subsets of bits, each subset of bits comprising a unique set of bit positions for the instruction register; logic to perform a parity check for first and second subsets of bits in the plurality of subsets of bits, the first and second subsets collectively covering all of the bits in the instruction word; and logic to detect if a single error is present in the instruction word based on the parity checks for the first and second subsets of bits.

18. A network line card, comprising: a printed circuit board (PCB); a backplane connector, coupled to the PCB; an interconnect comprising a plurality of address and data bus lines formed in the PCB; Rambus Dynamic Random Access Memory (RDRAM), coupled to the interconnect; and a network processor unit, coupled to the interconnect, including a plurality of compute engines, at least one compute engine including: a control store to store instruction words; an instruction register coupled to the control store; an execution datapath; logic to implement an error correction code (ECC) mechanism employing a plurality of subsets of bits, each subset of bits comprising a unique set of bit positions for the instruction register; logic to perform a parity check for first and second subsets of bits in the plurality of subsets of bits, the first and second subsets collectively covering all of the bits in the instruction word; and logic to detect if a single error is present in an instruction word based on the parity checks for the first and second subsets of bits.

19.
The network line card of claim 18, wherein the logic to perform the parity check for the first and second subsets of bits comprises first and second XOR trees having multiple logic levels, the bits in the first subset comprising inputs to a first logic level in the first XOR tree, the bits in the second subset comprising inputs to a first logic level in the second XOR tree, each of the first and second subsets of bits including a corresponding parity bit for the instruction word, further wherein the parity check comprises an even parity check, and the logic further includes a logical OR block having first and second inputs respectively coupled to first and second outputs of the first and second XOR trees.

20. The network line card of claim 18, wherein the at least one compute engine further includes: at least one state machine; logic to enunciate a single error detection (SED) signal in response to detecting a single error in the instruction word; logic to stop the at least one state machine in response to the SED signal and save state information corresponding to the at least one state machine; logic to perform a single error correction (SEC) operation on the instruction word using the ECC mechanism to generate a corrected instruction word; and logic to forward the corrected instruction word to be executed via the execution datapath.
ECC CODING FOR HIGH SPEED IMPLEMENTATION

FIELD OF THE INVENTION

[0001] The field of invention relates generally to computer memories and, more specifically but not exclusively, relates to an error correction code (ECC) coding technique for high-speed implementation.

BACKGROUND INFORMATION

[0002] The relentless progression to smaller feature sizes with each semiconductor process generation has had a negative impact on the soft-error rate (SER) of memory cells, such as SRAM (Static Random Access Memory) cells. Although process scaling has shrunk the charge collection diffusion area, it has also resulted in lower operating voltages, reduced internal node capacitances, and increased device impedances. These factors have reduced the critical charge necessary to upset the state of an SRAM cell faster than the corresponding reduction in the diffusion charge collection area. In addition, process scaling has increased the amount of SRAM that can be integrated into a system-on-a-chip (SOC), and hence increased the aggregate soft-error rate.

[0003] The soft error rate is typically measured in terms of FITs. One FIT is one failure in 1 billion (10^9) hours of operation. To achieve a mean time between failures (MTBF) of one year requires a FIT rate of approximately 110,000. For computing servers or critical network equipment, a typical system goal is 1 failure in 1000 years, or a goal of under 100 FIT. For these high-availability systems and for data centers with large numbers of computers, SRAM SER has become a major concern.

[0004] There have been multiple solutions proposed to alleviate the SRAM soft-error rate problem. Multiple vendors (e.g., ST-Microelectronics) have proposed semiconductor process changes to increase the capacitance of SRAM cell internal nodes and hence increase the critical charge necessary to cause a SER. Reducing SER through chip architecture changes has been proposed as well. Christopher Weaver et al.
("Techniques to Reduce the Soft Error Rate of a High-Performance Microprocessor," Proceedings of the 31st Annual International Symposium on Computer Architecture (ISCA), p. 264, Munich, Germany, 2004) propose reducing the number of susceptible states to reduce the likelihood of a soft error. Other approaches that combine architectural and circuit changes have been proposed. One such example proposes designing the SRAM cell to reduce the SER susceptibility of a certain transition at the cost of increasing the susceptibility of the inverse transition. For example, one could reduce the "1" to "0" SER failure rate at the cost of increasing the "0" to "1" SER failure rate. This would be combined with an asymmetric ECC code, which requires fewer bits than a full symmetric ECC code.

[0005] The most common solution to the SRAM soft-error rate problem is to layer a SEC-DED (Single Error Correction-Double Error Detection) ECC over the SRAM subsystem. It is common to see a 72-bit ECC code word that contains 64 bits of data and 8 check bits. Other common implementations use a per-byte ECC system that uses 5 bits per byte, or a total of 20 bits for a 4-byte word (common in ARM cores), or a 4-byte word coupled directly with 7 ECC check bits.
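For illustration only, the correct/detect behavior of such a layered SEC-DED code can be sketched in Python using a toy extended-Hamming(8,4) code with 4 data bits and 4 check bits. This is not the code used by any product named herein; the larger codes above follow the same principle with wider subsets.

```python
from functools import reduce
from operator import xor

def encode84(nibble):
    """Toy extended-Hamming(8,4) SEC-DED encoder: positions 1..7 hold a
    standard Hamming(7,4) word; position 0 holds the overall parity bit."""
    d1, d2, d3, d4 = ((nibble >> i) & 1 for i in range(4))
    w = [0, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]
    w[0] = reduce(xor, w[1:])
    return w

def decode84(w):
    """Return 'ok', 'single' (word corrected in place), or 'double'."""
    s = reduce(xor, (pos for pos in range(1, 8) if w[pos]), 0)  # syndrome
    overall = reduce(xor, w)
    if s == 0 and overall == 0:
        return "ok"
    if overall:        # odd overall parity => exactly one flipped bit
        w[s] ^= 1      # s == 0 means the overall-parity bit itself flipped
        return "single"
    return "double"    # even overall parity with a non-zero syndrome
```

Flipping any one of the eight bits is located and corrected by the syndrome; flipping two bits leaves overall parity even and is flagged as an uncorrectable double error.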
BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:

[0007] Figure 1 is a diagram illustrating a known error correction coding (ECC) scheme for a 44-bit instruction word;

[0008] Figure 2a is a diagram illustrating an ECC coding scheme that supports quick single error detect (SED), including detection of errors in the check bits, according to one embodiment of the invention;

[0009] Figure 2b is a diagram illustrating an ECC coding scheme that supports quick single error detect (SED) without detecting errors in a check bit, according to one embodiment of the invention;

[0010] Figure 3 is a flowchart illustrating operations and logic performed to detect and correct errors using the ECC coding scheme of Figure 2;

[0011] Figure 4a is a schematic diagram illustrating an XOR tree implementing logic to perform a parity check for subset C of Figure 2a;

[0012] Figure 4b is a schematic diagram illustrating an XOR tree implementing logic to perform a parity check for subset C of Figure 2b;

[0013] Figure 5 is a schematic diagram illustrating an XOR tree implementing logic to perform a parity check for subset 0 of Figures 2a and 2b;

[0014] Figure 6 is a schematic diagram illustrating an XOR tree implementing logic to perform a parity check over the bits for subset 3 of the ECC coding schemes of Figures 2a and 2b;

[0015] Figure 7 is a schematic diagram illustrating exemplary fanouts of instruction word bits to multiple XOR trees to generate ECC check bits;

[0016] Figure 8 is a schematic diagram illustrating an SED scheme that employs two subsets of instruction word bits corresponding to the first two subsets of the ECC coding
scheme of Figures 2a and 2b;

[0017] Figure 9 is a schematic diagram of an Intel IXP 2xxx microengine;

[0018] Figure 10 is a schematic flow diagram illustrating components and logic implemented on an Intel IXP 2xxx microengine to perform high-speed error detection and correction in accordance with aspects of the embodiments disclosed herein;

[0019] Figure 11 is a flowchart illustrating operations and logic for performing a memory scrubbing operation, in accordance with one embodiment of the invention; and

[0020] Figure 12 is a schematic diagram of an exemplary network line card employing a network processor unit including compute engines that implement aspects of the ECC encoding schemes and logic of the embodiments disclosed herein.

DETAILED DESCRIPTION

[0021] Embodiments of methods and apparatus for performing error correction code (ECC) coding techniques for high-speed implementations are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.

[0022] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0023] In accordance with aspects of the embodiments now presented, novel SEC-DED ECC coding schemes are disclosed that enable soft error correction to be added with a minimum perturbation to the overall SoC design. The ECC code word is structured to facilitate a very fast single-error-detect (SED) that enables SED in a fraction of a cycle. This quick error detection allows the clock to be stopped to applicable state machines when an error is detected, and enables the slower single-error-correct (SEC) to happen over multiple cycles while the state machines are in a suspended mode. Meanwhile, since double errors can be detected but not corrected, the double-error-detect (DED) operation is not timing critical, and this fatal error can be signaled without the need to retain the original state.

[0024] An exemplary implementation of one embodiment of the invention is described herein as applied to instruction words for an Intel IXP 2800 network processor unit (NPU). However, it is noted that the principles and teachings of this embodiment may be employed on other processor architectures and for other types of memory error detection and correction implementations.

[0025] The Intel IXP 2800 NPU employs multiple compute engines referred to as "microengines." As described below with reference to Figure 9, each microengine has its own set of local resources, including a local control store in which instructions (code) are stored. When an instruction is read from a control store, an ECC check is performed to ensure none of the instruction bits is in error.

[0026] For embedded memory subsystems that store words in bit-widths that are not a fixed multiple of bytes, Hamming's theory has shown that "n+1" check bits can be used to form a SEC-DED ECC over at most "2^n - 1" bits. For example, for a 44-bit instruction word, such as employed on an IXP 2800 microengine, seven check bits need to be added to the 44-bit instruction word to support an SEC-DED ECC.
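As an informal sketch (assuming even-parity SEC over the Hamming bound plus one added DED bit), the check-bit count implied by this bound can be computed as:

```python
def secded_check_bits(data_bits):
    """Smallest n such that data_bits + n check bits fit within the
    2**n - 1 total bits Hamming's bound allows for SEC, plus one
    extra overall bit for double-error detection (DED)."""
    n = 1
    while data_bits + n > 2 ** n - 1:
        n += 1
    return n + 1
```

This reproduces the counts cited above: 8 data bits need 5 check bits (the per-byte scheme), 32 bits need 7, a 44-bit instruction word needs 7, and 64 bits need 8 (the 72-bit code word).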
Figure 1 shows the encoding of these check bits under one conventional implementation that employs seven check bits with even parity for a 44-bit instruction word.

[0027] As illustrated in Figure 1, the first forty bits from the instruction word are grouped into bytes, with the last four bits forming a separate group. There are seven subsets of bits, labeled 0-6, with a corresponding check bit for each subset. Each subset includes 22-25 bits. The bits corresponding to a given subset are highlighted in Figure 1 to indicate those bits are used to determine the value of the even parity check bit for the subset. The combinations of bits in the subsets are chosen such that a single error in the instruction word can be identified and corrected using well-known ECC coding techniques. As is also known, the encoding scheme can identify conditions under which two or more errors are present in the instruction word, which is referred to as a double error. However, such double errors cannot be corrected using the scheme.

[0028] In further detail, when an instruction is read out of the microengine control store, the check bits are recalculated (including the check bit itself in the parity calculation) by performing a parity check on the summation of bits in each subset. If the results of the recalculations all return "0" (indicating the parity of the original and recalculated check bits match), then there is no error; otherwise a single or double error has occurred. Hence the calculation of the SED signal requires XORing seven 22 to 25-bit quantities, where each bit of the 44-bit instruction word fans out to three to five of these calculations, and performing a final seven-bit OR operation to produce the SED signal.
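A behavioral sketch of this conventional slow path follows; the actual subset masks of Figure 1 are not reproduced in the text, so any masks passed in are illustrative placeholders.

```python
def subset_parity(codeword, mask):
    # Even-parity recalculation over one subset, selected by a bit mask
    # covering the subset's instruction bits and its own check bit.
    return bin(codeword & mask).count("1") & 1

def conventional_sed(codeword, masks):
    # Seven 22-25 bit parity recalculations followed by a seven-bit OR.
    return 1 if any(subset_parity(codeword, m) for m in masks) else 0
```

With even parity per subset, a clean code word recalculates to all zeros; any single-bit flip makes at least one subset parity return 1, but at the cost of the wide fanout and final OR described above.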
[0029] With approximately three levels of logic for the final seven-bit OR operation and four gate delays of buffering (two due to the logical fanout and two due to the physical layout of the XOR tree, which requires bits from all locations in the instruction word to be included in the XOR trees), there are approximately seven levels of logic on top of the five levels of logic needed by the 26-bit XOR operation. This amount of logic makes it infeasible to stop the microengine state machines in the same cycle that the SED signal is generated; and hence, enough state information needs to be maintained to roll back the microengine to the cycle before the error was detected in order to perform a correction. This is very time-consuming, and increases processing latencies when errors are detected. Furthermore, it requires additional resources for storing the state information.

[0030] A significantly faster encoding scheme in accordance with one embodiment of the invention is shown in Figure 2a. Under this approach, the encoding is partitioned into two operations: 1) a very quick encoding to generate an SED signal indicative of a single error; and 2) ECC encoding to either correct a single error or detect a double error. Since the SED is generated very quickly, the microengine state machines can be stopped during the same cycle the error is detected, while subsequent logic is implemented for either correcting a single error or detecting a double error while the state machines are suspended.

[0031] As illustrated in Figure 2a, the bits for each of subsets 0-6 are rearranged relative to the subsets in Figure 1. In addition, an extra subset and corresponding check bit, labeled "C," is employed, as illustrated by subset C. An examination of subsets C and 0 reveals that all bits in the instruction word are covered by these two subsets.
As a result, a single bit error can be easily identified by simply XOR'ing the bits in each of subsets C and 0 to generate the corresponding check bits (C and 0), and then OR'ing these two check bits. If a single error is present, one of the two check bits will change, which can be detected using the OR'ing operation. Furthermore, since each instruction word bit only contributes to one of these two check bits, all the buffering requirements previously imposed by recalculation of the full set of ECC check bits are eliminated.

[0032] In addition to calculating the parity check over the instruction word bits, the parity check is calculated over check bits C and 1-6 for subset C in the encoding scheme of Figure 2a. Meanwhile, in the encoding scheme of Figure 2b, only check bit C is included in the parity calculation (in addition to the instruction word bits for subset C).

[0033] As shown in Figure 2a, the bits contributing to the check bits can be ordered to correspond cleanly to the physical ordering of the SRAM output of the control store. Because an error in the check bits does not cause problems in the instruction word, the check bits do not really need to be included in the generation of the additional check bit "C" that is added in the embodiment of Figure 2a. However, including these check bits in this check bit generation allows the standard reading of the code store to be used to "scrub" soft errors from the control store data, as described below with reference to Figure 11. This means that if a soft error is detected in the control store, the correct data is written back into the control store. This reduces the likelihood that multiple single errors will accumulate to generate a double error.

[0034] Using the ECC encoding schemes illustrated in Figures 2a and 2b, the SED signal can be calculated with only one additional logic level above that of the standard 26-bit XOR tree discussed above.
This is a saving of approximately six levels of logic over the more standard ECC encoding scheme. This quick generation of the SED signal allows the state machines to be stopped in the same cycle the error is detected, and hence very little state needs to be retained to restart these state machines upon recovery from an error condition.

[0035] With reference to the flowchart of Figure 3, high-speed ECC runtime operations in accordance with one embodiment proceed in the following manner. The process begins in a block 300, wherein data is read from a memory device. For example, in the instant 44-bit instruction word example, an instruction word is read from a microengine control store.

[0036] In a block 302, a parity check is performed on two subsets of bits that, when combined, cover all the bits in the instruction word. For example, the combination of subsets C and 0 for the encoding of Figures 2a and 2b meets this condition. As discussed above, the parity check is performed over the subset bits and the check bit associated with the subset. The results of the parity checks for each of the two subsets are then logically OR'ed in a block 304. As depicted by a decision block 306, the result of block 304 will either be a logical '0' or a '1'.

[0037] In accordance with the foregoing parity check scheme, a change in one of the bits (including the check bit itself) will result in a parity output of '1' (for an even parity implementation). Thus, if a single bit error exists, the output for one of the two parity checks will be '1', while the other will be '0'; when logically OR'ed, the result is a '1'. Now suppose that a double error is present having exactly two erroneous bits. In a first example, both bit errors occur in the same subset. As a result, the parity check for that subset will produce a '0', while the parity check for the other subset will also produce a '0', and the logically OR'ed result will indicate no error is present.
However, if the double error comprises a single error present in both subsets, each parity check will produce a '1', leading to an OR'ed result of '1', indicating the presence of an error.

[0038] As indicated by blocks 308 and 309, and in view of the foregoing discussion, the result of a '0' output from decision block 306 will either represent a no error condition or an actual double error condition. The double error condition is not actually checked for at this point, since it is uncorrectable even if it was identified. Such a condition will generally lead to an errant condition that may or may not be recoverable, depending on the particular circumstances. Accordingly, the no error condition is presumed (as indicated by block 308), and the instruction is forwarded along the execution path to be executed in a normal manner, as depicted by a continuation block 310. If a double error condition does actually exist, it will be subsequently detected (block 309), and the microengine will be restarted in accordance with a continuation block 324 to initiate a recovery process.

[0039] If the output of decision block 306 is a '1', a single error condition is detected. Accordingly, a single error detection (SED) signal is generated in a block 312. In response to receiving an SED signal, the microengine state machines are stopped, and the states are saved in a block 314.

[0040] At this point, ECC operations are performed to attempt to recover from the error condition. First, in a block 316, a parity check is made for each subset, and the results are checked to determine if the error is a single error or double error. As depicted by a decision block 318, if the operation of block 316 reveals the presence of a double error, the logic proceeds to block 324, as before. However, if the operation of block 316 determines that a single error exists, an ECC single error correction (SEC) operation is performed in a block 320.
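The quick SED check of blocks 302-306, together with the error cases discussed above, can be sketched as follows. The exact bit assignments of Figures 2a/2b are not reproduced in the text, so the split used here (instruction bits 0-23 with check bit P0, bits 24-43 with check bit PC) is an illustrative assumption.

```python
P0, PC = 44, 45   # assumed positions of the two SED check bits

def xor_bits(word, positions):
    p = 0
    for i in positions:
        p ^= (word >> i) & 1
    return p

def encode_sed(instruction):
    """Attach even-parity check bits so both SED trees recalculate to 0
    for a clean 44-bit word (illustrative split, not Figure 2a's)."""
    word = instruction & ((1 << 44) - 1)
    word |= xor_bits(word, range(24)) << P0        # subset covering bits 0-23
    word |= xor_bits(word, range(24, 44)) << PC    # subset covering bits 24-43
    return word

def quick_sed(word):
    """OR of only two parity trees; every word bit feeds exactly one tree,
    eliminating the fanout and buffering of the seven-tree recalculation."""
    return (xor_bits(word, list(range(24)) + [P0])
            | xor_bits(word, list(range(24, 44)) + [PC]))
```

Consistent with the cases of paragraph [0037], a single error in either subset (or one error in each) raises the SED output, while two errors confined to one subset are missed by this quick check and left to the full per-subset recalculation of block 316.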
Such ECC SEC operations are well-known, and enable data errors to be recovered using a predefined error correction scheme. As a result, the instruction word is corrected, and may now be employed in the normal manner. However, prior to employing the instruction word in a normal manner in continuation block 310, the state machines are first restored in a block 322.

[0041] As depicted at the left-hand side of the Figure 3 flowchart, the first set of operations, which determine whether an error is present and stop the state machines under such a condition, are performed in the same cycle. Thus, the state machines may be stopped, if applicable, during this same cycle, minimizing the amount of state machine "roll-back" that will be necessary to perform the recovery operation of block 322. Meanwhile, the subsequent operations depicted in Figure 3 may consume one or more additional cycles. Notably, since the state machines are stopped prior to performing these operations, the number of additional cycles that are consumed (and thus the overall timing of the process) is not critical.

[0042] Figures 4a, 4b, 5 and 6 show exemplary schemes for performing parity checks for various subsets of instruction word bits using XOR trees. For example, the XOR tree structure of Figure 4a (XOR Tree C) is employed for subset C in Figure 2a, which corresponds to bits 24:43 in the instruction word, along with parity check bits C and 1:6. As illustrated, the parity result is obtained using five levels of XOR logic blocks. The XOR tree structure of Figure 4b (corresponding to the encoding of subset C of Figure 2b) is similar, except the parity check bits 1:6 are not included in the parity check calculation.

[0043] The XOR tree structure (XOR Tree 0) of Figure 5 is employed for subset 0 in Figures 2a and 2b, which corresponds to the first 24 bits in the instruction word, along with parity check bit 0.
As illustrated, this parity result is also obtained using five levels of XOR logic blocks. (It is noted that in each of the XOR trees illustrated herein, appropriate delay circuitry may be substituted for XOR blocks having one of the inputs depicted as being tied to common (i.e., a '0' input).)

[0044] The XOR tree structure illustrated in Figure 6 is used to calculate the parity for subset 3 in Figures 2a and 2b. In this case, the number of instruction word bits is only 21, plus the parity check bits C and 3. Accordingly, the number of XOR logic blocks needed to perform the calculation is less than that employed for Figures 4a and 5. However, there are still five levels of logic employed for the calculation.

[0045] Figure 7 schematically illustrates the aforementioned instruction word bit fanout. In this figure, the XOR Tree C and 0-6 blocks each represent an XOR tree similar to the XOR trees shown in Figures 4a, 4b, 5 and 6, wherein the particular configuration of each XOR tree will depend on the corresponding parity calculation to be performed for each subset. (For example, the XOR Tree C block corresponds to the XOR tree of Figure 4a, while the XOR Tree 0 block corresponds to the XOR tree of Figure 5, etc.) Three exemplary cases are illustrated in Figure 7: a 3 XOR tree fanout for bit 7 of the instruction word; a 5 XOR tree fanout for bit 15; and a 4 XOR tree fanout for bit 40. In a similar manner, a fanout to 3-5 XOR trees would be implemented for each bit in the instruction word.

[0046] Figure 8 shows a logic scheme to quickly perform an SED calculation in accordance with blocks 302 and 304 of Figure 3. In a manner similar to that shown in Figure 4a, a respective five-level XOR tree is used to perform the parity calculation for each of the instruction bits corresponding to subsets C and 0 in Figure 2a. For convenience, these XOR trees are represented as an XOR Tree C block and an XOR Tree 0 block.
The outputs of these XOR trees are then logically OR'ed, as depicted by an OR block 800. The SED value 802 corresponds to the logic level of the output from OR block 800.

[0047] The scheme of Figure 8 yields the following results. If a single bit error is present in either the instruction word or one of the check bits C or 0, one of the two XOR trees C and 0 will output a logical '1', while the other XOR tree will output a logical '0'. Accordingly, a logical '1' will be output by OR block 800, thus enunciating an SED signal in block 312 of Figure 3. A similar situation would result if a double error was present that included a single bit error in each of subset C and subset 0. In this case, the output of both XOR trees C and 0 would be a logical '1', resulting in a logical '1' output by OR block 800. Now consider the case in which a double error is present in a single subset, while the other subset is error-free. Since two errors are present in the same subset, the parity calculation output will remain at logic level '0', indicating no error is present. As discussed above with reference to Figure 3, the presence of such a double error will be determined through subsequent operations.

[0048] In general, the error detection scheme of the embodiments described herein may be used to detect errors in either instruction words or data. Under the operations and logic of the flowchart of Figure 3, a scheme for detecting errors in instruction words is provided. Such a scheme may be implemented within the instruction load path of various types of processors.

[0049] An implementation on an exemplary processor is shown in Figure 9, which depicts a microengine architecture 900 corresponding to a compute engine of an Intel IXP2xxx network processor unit (NPU).
Architecture 900 depicts several components typical of compute-engine architectures, including local memory 902, general-purpose register banks 904A and 904B, a next neighbor register 906, a DRAM (Dynamic Random Access Memory) read transfer (xfer) register 908, an SRAM read transfer register 910, a control store 912, an execution datapath 914, a DRAM write transfer register 916, and an SRAM write transfer register 918.

[0050] Architecture 900 supports n hardware contexts. For example, in one embodiment n=8, while in other embodiments n=16 and n=4. Each hardware context has its own register set, program counter (PC), condition codes, and context-specific local control and status registers (CSRs) 920. Unlike software-based contexts common to modern multi-threaded operating systems, which employ a single set of registers that are shared among multiple threads using software-based context swapping, providing a copy of context parameters per context (thread) eliminates the need to move context-specific information to or from shared memory and registers to perform a context swap. Fast context swapping allows a thread to do computation while other threads wait for input/output (I/O) resources (typically external memory accesses) to complete or for a signal from another thread or hardware unit.

[0051] Figure 10 shows an implementation of various aspects of the logic employed by the embodiments described herein on a compute engine employing architecture 900. In the illustrated example, each of multiple instruction words 1000 stored in control store 912 is read and loaded into an ECC register 902. In another embodiment, the ECC register 902 is representative of an output port (buffer) for control store 912. In response to loading an instruction word into the register, a very quick SED calculation is made using the results of the parity checks performed by XOR trees C and 0, which have inputs coupled to appropriate bit positions in ECC register 902.
If the output of SED block 802 is '0', no error is presumed, and the instruction is forwarded to the instruction path for the compute engine for execution. However, if the output of SED block 802 is a '1', an SED signal is generated in block 312, the state machines are stopped and states saved in block 314, and SEC/DED ECC operations are performed using the outputs of XOR trees C and 0-6. If the instruction word includes a single error, it is corrected by the SEC operation, and the corrected instruction word 900A is forwarded to the instruction execution path. If a double error is detected, a corresponding restart operation is initiated.

[0052] As mentioned above, the ECC scheme of Figure 2a can also be employed for a memory "scrubbing" operation. Under this technique, instruction words are read from a control store, checked for single errors, and corrected if such errors are present, using an on-going background operation so as to not interfere with normal instruction usage. For example, in one embodiment a dedicated thread for a multi-threaded compute engine is used to perform the memory scrubbing operations as a background task.

[0053] Figure 11 shows a flowchart illustrating operations and logic performed during one embodiment to facilitate memory scrubbing. As depicted by start and end loop blocks 1100 and 1114, the operations and logic inside these loop blocks are performed for each instruction word in the control store, with the instruction address being incremented by one instruction word for each iteration. It is noted that in one embodiment a preemption scheme is implemented such that the background thread is preempted from accessing instruction words that are currently being accessed by other compute engine threads.

[0054] In a block 1102, the currently-evaluated instruction word is read from the control store and loaded into a register to which the XOR tree logic is tied in the manner discussed above and illustrated in Figure 10.
A parity check on two subsets of bits that cover all of the instruction word bits is performed, with the results being logically OR'ed, as depicted in a block 1104. For example, the parity check is performed on subsets C and 0 for the ECC encoding scheme of Figure 2a. As depicted by a decision block 1106, if the result of the operation of block 1104 produces a '0', no error is presumed (as depicted by block 1108), and the logic proceeds to end loop block 1114, wherein the instruction address pointer is incremented to point to the next instruction word to be evaluated.[0055] In contrast, if the result of the operation of block 1104 is a '1', a single error is detected. In response, an ECC SEC operation is performed in a block 1110 to correct the single error in the instruction word, followed by writing the corrected instruction word back to its memory location in the control store, which is identified by the current instruction word pointer (not to be confused with the instruction pointers used by other threads). The logic then proceeds to end loop block 1114 to increment the instruction address pointer, and the logic loops back to start loop block 1100 to begin the next iteration of operations.[0056] By performing the foregoing memory scrubbing technique, single errors can be detected and corrected for instructions that are stored in a control store or similar type of SRAM store prior to loading the instructions for execution. By frequently correcting such single errors (should they be encountered), the likelihood of the presence of a double error in an instruction word is significantly reduced.[0057] Figure 12 shows an exemplary implementation of a network processor 1200 that includes one or more compute engines (e.g., microengines) that implement the error detection and correction schemes discussed herein. In this implementation, network processor 1200 is employed in a line card 1202.
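The scrubbing loop of Figure 11 (paragraphs [0053]-[0055] above) can be sketched as a simple in-place pass. This is a hedged illustration: a Python list stands in for the control store, and `sed()`/`sec_correct()` are hypothetical stand-ins for the XOR-tree SED check and the SEC operation, not the patent's identifiers.

```python
# Hedged sketch of the Figure 11 memory-scrubbing pass. A list models
# the control store; sed() and sec_correct() are placeholder callables
# for the hardware SED check and SEC correction described above.

def scrub_pass(control_store, sed, sec_correct):
    """One background pass; single-bit errors are corrected in place."""
    for addr in range(len(control_store)):           # blocks 1100/1114: iterate words
        word = control_store[addr]                   # block 1102: read and load
        if sed(word):                                # blocks 1104/1106: fast SED check
            control_store[addr] = sec_correct(word)  # block 1110: correct, write back
```

In the patent's scheme this pass runs as a dedicated background thread, with a preemption mechanism keeping it away from addresses that other compute-engine threads are actively fetching.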
In general, line card 1202 is illustrative of various types of network element line cards employing standardized or proprietary architectures. For example, a typical line card of this type may comprise an Advanced Telecommunications and Computer Architecture (ATCA) modular (printed circuit) board (PCB) that is coupled to a common backplane in an ATCA chassis that may further include other ATCA modular boards. Accordingly, the line card includes a set of connectors mounted to a PCB that are configured to mate with mating connectors on the backplane, as represented by a backplane interface 1204. In general, backplane interface 1204 supports various input/output (I/O) communication channels, as well as provides power to line card 1202. For simplicity, only selected I/O interfaces are shown in Figure 12, although it will be understood that other I/O and power input interfaces also exist. [0058] Network processor 1200 includes n microengines 1201. In one embodiment, n=8, while in other embodiments n=16, 24, or 32. Other numbers of microengines 1201 may also be used. In the illustrated embodiment, 16 microengines 1201 are shown grouped into two clusters of 8 microengines, including an ME cluster 0 and an ME cluster 1. In the illustrated embodiment, each microengine 1201 executes instructions (microcode) that are stored in a local control store 1208.[0059] Each of microengines 1201 is connected to other network processor components via sets of bus and control lines referred to as the processor "chassis". For clarity, these bus sets and control lines are depicted as an internal interconnect 1212. Also connected to the internal interconnect are an SRAM controller 1214, a DRAM controller 1216, a general-purpose processor 1218, a media and switch fabric interface 1220, a PCI (peripheral component interconnect) controller 1221, scratch memory 1222, and a hash unit 1223.
Other components not shown that may be provided by network processor 1200 include, but are not limited to, encryption units, a CAP (Control Status Register Access Proxy) unit, and a performance monitor. [0060] The SRAM controller 1214 is used to access an external SRAM store 1224 via an SRAM interface 1226. Similarly, DRAM controller 1216 is used to access an external DRAM store 1228 via a DRAM interface 1230. In one embodiment, DRAM store 1228 employs DDR (double data rate) DRAM. In other embodiments, DRAM store 1228 may employ Rambus DRAM (RDRAM) or reduced-latency DRAM (RLDRAM).[0061] General-purpose processor 1218 may be employed for various network processor operations. In one embodiment, control plane operations are facilitated by software executing on general-purpose processor 1218, while data plane operations are primarily facilitated by instruction threads executing on microengines 1201.[0062] Media and switch fabric interface 1220 is used to interface with the media switch fabric for the network element in which the line card is installed. In one embodiment, media and switch fabric interface 1220 employs a System Packet Level Interface 4 Phase 2 (SPI4-2) interface 1232. In general, the actual switch fabric may be hosted by one or more separate line cards, or may be built into the chassis backplane. Both of these configurations are illustrated by switch fabric 1234.[0063] PCI controller 1221 enables the network processor to interface with one or more PCI devices that are coupled to backplane interface 1204 via a PCI interface 1236. In one embodiment, PCI interface 1236 comprises a PCI Express interface.[0064] During initialization, coded instructions comprising instruction threads 1210 to facilitate various packet-processing operations are loaded into control stores 1208. An instruction thread for performing memory scrubbing may also be loaded at this time.
In one embodiment, the instructions are loaded from a non-volatile store 1238 hosted by line card 1202, such as a flash memory device. Other examples of non-volatile stores include read-only memories (ROMs), programmable ROMs (PROMs), and electronically erasable PROMs (EEPROMs). In one embodiment, nonvolatile store 1238 is accessed by general-purpose processor 1218 via an interface 1240. In another embodiment, non-volatile store 1238 may be accessed via an interface (not shown) coupled to internal interconnect 1212. [0065] In addition to loading the instructions from a local (to line card 1202) store, instructions may be loaded from an external source. For example, in one embodiment, the instructions are stored on a disk drive 1242 hosted by another line card (not shown) or otherwise provided by the network element in which line card 1202 is installed. In yet another embodiment, the instructions are downloaded from a remote server or the like via a network 1244 as a carrier wave.[0066] The schemes described herein can be used to correct errors in an instruction word or any other type of ECC-protected data. Typically, data stored in off-chip RAM or in large on-chip RAM may also employ ECC protection. In general, the principles and techniques described herein may be useful in any applications that are sensitive to the average read latency from the on-chip or off-chip RAM. For applications that are not sensitive to the read latency, one standard approach would be to add an appropriate number of cycles to the data path to allow SEC-DED to be done on every memory access. For latency-sensitive RAM reads, such as for an instruction store, a scheme may be implemented such that it only adds extra latency for SEC operations corresponding to actual detected errors, while the SED aspect adds no additional latency. Because ECC errors are typically not common, the average read delay of the memory systems remains close to that of a system that does not employ ECC. 
Moreover, the schemes described herein can be used to simplify implementing an ECC sub-system for any latency-sensitive RAM read.[0067] In the embodiments described herein, an extra check bit is added to speed up the calculation of the SED operation. However, the particular number of check bits that are added may be application and data word size specific. Under the conventional example described above with reference to Figure 1, the data (instruction) word is logically partitioned into bytes and an SEC-DED ECC encoding scheme is used for each byte. Hence, this results in a non-minimum ECC code for covering the whole data word. The inventive schemes differ from this conventional approach in that they use a non-minimum ECC coding only to support the fast SED aspect of the scheme. A non-minimum encoding is one in which more than the minimum number of ECC bits required by Hamming's theory are used and/or the ECC bit encodings are selected in a manner that does not result in the minimum amount of logic (logic gates or logic fanout) required for the SEC-DED operation. The inventive schemes couple a quick SED operation with an efficient encoding of the SEC-DED operations. For example, Hamming's work indicates that a seven-bit ECC can be employed to implement SEC-DED on a 32 to 63 bit data word. After the SED logic detects an error, the more optimal SEC-DED coding can be used to correct the error. [0068] The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. [0069] These modifications can be made to the invention in light of the above detailed description.
The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the drawings. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation. |
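As a concrete footnote to the Hamming figures quoted in paragraph [0067]: Hamming's bound says r check bits suffice for single-error correction of an m-bit data word when 2^r >= m + r + 1, and one additional overall-parity bit upgrades SEC to SEC-DED. The sketch below computes that bound; it is a generic illustration of minimum code sizes, not the inventive (deliberately non-minimum) encoding.

```python
# Generic illustration of Hamming's bound for single-error-correcting
# codes (not the patent's encoding): r check bits cover an m-bit data
# word when 2**r >= m + r + 1; one extra overall-parity bit then gives
# single-error-correct / double-error-detect (SEC-DED).

def sec_check_bits(m: int) -> int:
    """Minimum check bits for single-error correction of m data bits."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

def sec_ded_bits(m: int) -> int:
    """SEC-DED needs one more bit (an overall parity) than SEC."""
    return sec_check_bits(m) + 1
```

For a 32-bit word this gives 6 SEC bits, hence 7 bits total for SEC-DED, consistent with the seven-bit figure quoted in paragraph [0067].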
Disclosed herein are systems and methods for thermal management of a flexible integrated circuit (IC) package. In some embodiments, a flexible IC package may include a flexible substrate material; a component disposed in the flexible substrate material; a channel disposed in the flexible substrate material forming a closed circuit and having a portion proximate to the component; electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the electrodes are coupled to an electrode controller to selectively cause one or more of the electrodes to generate an electric field; and an electrolytic fluid disposed in the channel. In some embodiments, a flexible IC package may be coupled to a wearable support structure. Other embodiments may be disclosed and/or claimed. |
Claims: 1. A flexible integrated circuit (IC) package, comprising: flexible substrate material; a component disposed in the flexible substrate material; a channel disposed in the flexible substrate material forming a closed circuit, wherein a portion of the channel is proximate to the component; a plurality of electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the plurality of electrodes are coupled to an electrode controller to selectively cause two or more of the plurality of electrodes to generate an electric field; and an electrolytic fluid disposed in the channel. 2. The flexible IC package of claim 1, wherein the flexible substrate material includes polyethylene terephthalate or polydimethylsiloxane. 3. The flexible IC package of claim 1, wherein: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component disposed in the flexible substrate material; and a second portion of the channel is proximate to the second component. 4. The flexible IC package of claim 3, wherein the first component is disposed in a first layer of the flexible substrate material, the second component is disposed in a second layer of the flexible substrate material, the first layer is different from the second layer, and adjacent layers of the flexible substrate material are separated by printed circuitry. 5. The flexible IC package of claim 4, wherein the first layer and the second layer are spaced apart by one or more layers of the flexible substrate material. 6. The flexible IC package of claim 1, wherein the component is disposed in a first layer of the flexible substrate material, the portion of the channel is disposed in a second layer of the flexible substrate material, and the first and second layers are adjacent layers of the flexible substrate material. 7.
The flexible IC package of claim 6, wherein: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component disposed in the flexible substrate material; a second portion of the channel is proximate to the second component; and the second component is disposed in a third layer of the flexible substrate material, the second portion of the channel is disposed in a fourth layer of the flexible substrate material, and the third and fourth layers are adjacent layers of the flexible substrate material. 8. The flexible IC package of claim 7, wherein the second layer and the fourth layer are a same layer of the flexible substrate material. 9. The flexible IC package of claim 1, wherein the plurality of electrodes are disposed between layers of the flexible substrate material. 10. The flexible IC package of claim 1, wherein the channel includes a via extending between different layers of the flexible substrate material. 11. The flexible IC package of claim 10, wherein the via extends between a first layer of the flexible substrate material and a second layer of the flexible substrate material, and the first layer and second layer are spaced apart by one or more layers of the flexible substrate material. 12.
The flexible IC package of claim 1, wherein: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component and a third component disposed in the flexible substrate material; a second portion of the channel is proximate to the second component and a third portion of the channel is proximate to the third component; and the first component is disposed in a first layer of the flexible substrate material, the second component is disposed in a second layer of the flexible substrate material, the third component is disposed in a third layer of the flexible substrate material, and the third layer is disposed between the first layer and the second layer. 13. The flexible IC package of claim 12, wherein the first portion of the channel is disposed between the first layer and the third layer, and the second portion of the channel is disposed between the third layer and the second layer. 14. The flexible IC package of any of claims 1-13, wherein the portion of the channel has a serpentine structure. 15. The flexible IC package of any of claims 1-13, wherein the electrolytic fluid includes electrolyte droplets in oil. 16. The flexible IC package of any of claims 1-13, wherein the electrode controller is to selectively cause two or more of the plurality of electrodes to generate the electric field to circulate the electrolytic fluid in the channel. 17.
A wearable integrated circuit (IC) device, comprising: a flexible integrated circuit (IC) package, including: flexible substrate material, a component disposed in the flexible substrate material, wherein the component comprises a processing device or a memory device, a channel disposed in the flexible substrate material forming a closed circuit, wherein a portion of the channel is proximate to the component, a plurality of electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the plurality of electrodes are coupled to an electrode controller to selectively cause two or more of the plurality of electrodes to generate an electric field, and an electrolytic fluid disposed in the channel; and a wearable support structure coupled to the flexible IC package. 18. The wearable IC device of claim 17, wherein the wearable support structure comprises an adhesive backing. 19. The wearable IC device of claim 17, wherein the wearable support structure comprises a fabric. 20. A method of forming a flexible integrated circuit (IC) package, comprising: providing a flexible IC assembly including a flexible substrate material having disposed therein a component, a plurality of electrodes, and a channel, wherein the channel forms a closed circuit having a portion proximate to the component, and wherein the plurality of electrodes are positioned at locations proximate to the channel; providing an electrolytic fluid to the channel via an inlet of the flexible IC assembly; and after providing the electrolytic fluid, sealing the inlet. 21. The method of claim 20, wherein providing the flexible IC assembly includes printing one or more electrodes of the plurality of electrodes on one or more layers of the flexible substrate material. 22.
A method of thermally managing a flexible integrated circuit (IC) package, comprising: causing, by an electrode controller, a first pair of electrodes to generate an electric field, wherein the first pair of electrodes is disposed in a flexible substrate material of the flexible IC package and positioned at locations proximate to a channel in the flexible substrate material, and wherein an electrolytic fluid is disposed in the channel; and causing, by the electrode controller, a second pair of electrodes to generate the electric field to cause the movement of at least some of the electrolytic fluid within the channel, wherein the second pair of electrodes is disposed in the flexible substrate material of the flexible IC package and positioned at locations proximate to the channel in the flexible substrate material; wherein the channel forms a closed circuit, a component is disposed in the flexible substrate material, and the channel includes a portion proximate to the component. 23. The method of claim 22, further comprising: before causing the first pair of electrodes to generate the electric field or causing the second pair of electrodes to generate the electric field, determining, by the electrode controller, that a temperature of the component exceeds a threshold; wherein causing the first pair of electrodes to generate the electric field and causing the second pair of electrodes to generate the electric field are performed in response to the determination. 24. The method of claim 23, wherein: the component is a first component; the portion of the channel is a first portion of the channel; a second component is disposed in the flexible substrate material; the channel includes a second portion proximate to the second component; and determining that a temperature of the first component exceeds a threshold comprises determining that the temperature of the first component exceeds a temperature of the second component. 25.
The method of any of claims 23-24, wherein the first pair of electrodes and the second pair of electrodes share an electrode. |
THERMAL MANAGEMENT FOR FLEXIBLE INTEGRATED CIRCUIT PACKAGES Cross-Reference to Related Application [1] This application claims the benefit of priority to U.S. Nonprovisional (Utility) Patent Application No. 14/864,433 filed 24 September 2015 entitled "THERMAL MANAGEMENT FOR FLEXIBLE INTEGRATED CIRCUIT PACKAGES", which is incorporated herein by reference in its entirety. Technical Field [2] This disclosure relates generally to the field of integrated circuits, and more specifically, to thermal management for flexible integrated circuit packages. Background [3] Integrated circuit (IC) devices generate heat during operation. If this heat causes the temperature of the device to rise to a critical level, performance may be compromised or the device may fail. Conventional techniques for managing the heat generated by conventional IC devices include the use of heat sinks and fans. Brief Description of the Drawings [4] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.[5] FIG. 1 is a representation of a flexible integrated circuit (IC) package, in accordance with various embodiments.[6] FIG. 2 is a portion of a side view of a first example of a flexible IC package.[7] FIG. 3 is a portion of a top view of the first flexible IC package example of FIG. 2, in accordance with various embodiments.[8] FIG. 4 is a portion of a side view of a second example of a flexible IC package.[9] FIG. 5 is a portion of a top view of the second flexible IC package example of FIG. 4, in accordance with various embodiments.[10] FIG. 6 is a portion of a side view of a third example of a flexible IC package.[11] FIG. 7 is a portion of a top view of the third flexible IC package example of FIG.
6, in accordance with various embodiments.[12] FIG. 8 is a portion of a side view of a fourth example of a flexible IC package.[13] FIG. 9 is a portion of a top view of the fourth flexible IC package example of FIG. 8, in accordance with various embodiments.[14] FIG. 10 is a portion of a side view of a fifth example of a flexible IC package.[15] FIG. 11 is a portion of a top view of the fifth flexible IC package example of FIG. 10, in accordance with various embodiments.[16] FIG. 12 is a portion of a side view of a sixth example of a flexible IC package. [17] FIG. 13 is a portion of a top view of the sixth flexible IC package example of FIG. 12, in accordance with various embodiments.[18] FIG. 14 is a portion of a side view of a seventh example of a flexible IC package.[19] FIG. 15 is a portion of a top view of the seventh flexible IC package example of FIG. 14, in accordance with various embodiments.[20] FIGS. 16 and 17 are portions of side views of additional examples of flexible IC packages, in accordance with various embodiments.[21] FIGS. 18-20 illustrate various assemblies formed during a process of manufacturing a flexible IC package, in accordance with various embodiments.[22] FIG. 21 is a portion of a side view of a flexible IC package coupled to a support structure, in accordance with various embodiments.[23] FIG. 22 is a perspective view of a wearable IC device having an armband support structure coupled to a flexible IC package, in accordance with various embodiments.[24] FIG. 23 is a side cross-sectional view of a wearable IC device having a shoe support structure coupled to a flexible IC package, in accordance with various embodiments.[25] FIG. 24 is a block diagram of an electrode controller arrangement.[26] FIGS. 25-28 illustrate various example structures that may be used for a portion of a channel proximate to a component in the flexible IC package of FIG. 1, in accordance with various embodiments.[27] FIG. 
29 is a flow diagram of an illustrative process for forming a flexible IC package, in accordance with various embodiments.[28] FIG. 30 is a flow diagram of an illustrative process for thermally managing a flexible IC package, in accordance with various embodiments.[29] FIG. 31 is a block diagram of an example computing device that may be implemented in or include a flexible IC package as disclosed herein. Detailed Description [30] Disclosed herein are systems and methods for thermal management of a flexible integrated circuit (IC) package. In some embodiments, a flexible IC package may include a flexible substrate material; a component disposed in the flexible substrate material; a channel disposed in the flexible substrate material forming a closed circuit and having a portion proximate to the component; electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the electrodes are coupled to an electrode controller to selectively cause one or more of the electrodes to generate an electric field; and an electrolytic fluid disposed in the channel. The electric fields generated by the electrodes may cause the electrolytic fluid to move within the channel (e.g., to circulate within the channel) via electrowetting. When the component disposed proximate to the channel generates heat, some of that heat may be absorbed by the electrolytic fluid and then moved away from the component by movement of the electrolytic fluid, thus cooling the component or mitigating any buildup of heat. In some embodiments, the flexible IC packages disclosed herein may be coupled to a wearable support structure to form a flexible, wearable, thermally managed IC device.[31] Development of flexible electronic devices has been limited by conventional thermal management techniques.
For example, conventional IC packages may include a metallic heat spreader thermally coupled to a heat-generating component (e.g., a die) with a thermal interface material. However, heat spreaders may be of limited utility when the heat-generating component is embedded inside one or more layers of flexible substrate material and/or mold material (and thus not readily coupled to the heat spreader). External cooling devices, such as fans and heat sinks, are similarly infeasible for flexible and/or wearable applications, at least due to their large size, moving parts, power requirements, and inability to cool heat-generating devices embedded in insulating material. Additionally, conventionally rigid structures such as heat spreaders and heat sinks may be inappropriate for use in flexible electronic devices, at least because such rigid structures may compromise package bendability and stretchability.[32] Some conventional thermal management techniques attempt to limit or reduce thermal design power (TDP) of an electronic device. The TDP of a device represents the maximum amount of heat that a cooling system may be required to dissipate from the device during typical operation; the lower the TDP, the less thermal management need be performed. One conventional TDP-limiting technique involves "throttling" a device within an IC package (e.g., by reducing the device's operating frequency and thereby slowing the device) so as to limit the amount of heat that the device generates. This approach, however, has the substantial drawback of constraining the device to perform below its true capability, and possibly causing the device to fail to meet performance benchmarks or requirements. Similarly, the heat generated by an IC package may be limited by including fewer and/or less powerful components in the IC package, but this approach also inherently limits the performance achievable by the IC package.
Performance limitations due to thermal phenomena (e.g., limitations on battery life, user comfort during normal use, throttled processing) may result in a degraded user experience.[33] In addition to the inapplicability of conventional thermal management techniques to flexible and/or wearable IC devices, many such devices may have more stringent thermal requirements for user comfort than conventional IC devices. For example, for comfortable use, an IC device that will be in regular contact with human skin should not exceed a maximum temperature that is lower than the maximum temperature tolerated for laptop computing devices, tablets, or other conventional handheld computing devices. This maximum temperature may be between approximately 37°C and 45°C and may be a function of a particular location of the IC device on a wearer's body (e.g., with the maximum temperature allowable at the ear and forehead less than the maximum temperature allowable at the fingers). Consequently, many wearables must be maintained at lower operating temperatures than "smartphones" and other mobile computing devices. [34] The challenge of achieving sufficiently low operating temperatures for flexible devices is compounded by low thermal conductivities of many materials that may otherwise be suitable as flexible substrate materials and/or mold materials. For example, polyethylene terephthalate (PET) and polydimethylsiloxane (PDMS) may have thermal conductivities of approximately 0.15 watts per meter-Kelvin, which is approximately 1/9 the thermal conductivity of mold materials used in existing system-on-chip (SoC) products (which often have significant thermal risk themselves).
Thus, flexible IC devices may be formed from materials that are less able to conduct heat away from components embedded therein than conventional IC devices.[35] Various ones of the embodiments disclosed herein may enable high-performance computing devices in flexible packages that achieve improved thermal performance relative to conventional devices and techniques. In particular, various ones of the embodiments disclosed herein may extend the TDP of flexible IC devices while maintaining or improving performance and without compromising device stretchability and bendability. The embodiments disclosed herein may be usefully applied in multilayer IC package designs, in which multiple components (e.g., dies or sensors) are embedded between different layers of flexible material, without compromising bendability or stretchability. Flexible IC packages may be readily integrated into wearable supports to form wearable devices, such as jewelry, smart fabrics, or stickers/tattoos for wearing on the skin. Additionally, various ones of the embodiments disclosed herein may be readily manufactured using soft lithography techniques.[36] Additionally, incorporating the thermal management techniques disclosed herein in rigid IC packages may improve thermal performance and reduce the yield loss during the manufacturing process due to unsatisfactory thermal performance. In particular, the use of various ones of the thermal management techniques disclosed herein may reduce the maximum or average operating temperature of an IC device relative to conventional techniques, and thus may reduce the number of IC devices whose maximum or average operating temperatures exceed a reliability temperature limit.[37] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced.
It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.[38] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments. [39] For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).[40] The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.[41] FIG. 1 is a representation of a flexible integrated circuit (IC) package 100, in accordance with various embodiments. The flexible IC package 100 may include a first component 102 disposed in a flexible substrate material (FSM) 106. A channel 108 may be disposed in the FSM 106, and an electrolytic fluid 110 may be disposed in the channel 108. A first portion 112 of the channel 108 may be proximate to the first component 102.
As used herein, a portion of a channel may be "proximate" to a component when the channel is sufficiently close to the component so that heat generated by the component may be absorbed by thermally conductive fluid within the channel so as to thermally manage the component as the fluid moves through the channel, away from the component.[42] In some embodiments, as illustrated in FIG. 1, a second component 104 may also be disposed in the FSM 106, and a second portion 114 of the channel 108 may be proximate to the second component 104. Although two components are illustrated in FIG. 1, any number of components may be disposed in the FSM 106. For example, in some embodiments, only a single component (e.g., the first component 102) may be disposed in the FSM 106 and may be proximate to the channel 108. In some embodiments, examples of which are discussed in detail below, three or more components may be disposed in the FSM 106 and may be proximate to the channel 108.[43] The component(s) disposed in the FSM 106 may perform any suitable desired computational function or functions. For example, in some embodiments, the first component 102 and/or the second component 104 may include a processing device, a memory device, a sensor, and/or a communication device (e.g., a modem). In some embodiments, the first component 102 and/or the second component 104 may include a die. In some embodiments, the component(s) disposed in the FSM 106 may be formed as fairly thin, semiconductor-based circuits (e.g., silicon-based dies), and may be embedded between layers of the FSM 106. A number of examples of arrangements of components in the FSM 106 are discussed in detail below. Any of the "components" referred to herein may be "component sections"; that is, circuitry configured to implement at least a portion of the functionality of a portion of a singular SoC. An example of a component section may be a die segment.
The combination of multiple component sections (or "components," as used herein) may implement the functionality of the SoC. A component section may include silicon or other semiconductor, metal, or other circuit material (as may any "component" referred to herein). [44] The representation and arrangement of elements in FIG. 1 is abstract, and is to be interpreted in accordance with the description below and the remainder of the teachings herein. In particular, FIG. 1 is not intended to require an arrangement of the IC package 100 in which all of the elements of FIG. 1 are co-planar (e.g., arranged in a single layer in a multi-layer IC package 100). Indeed, in embodiments in which the first component 102 and/or the second component 104 have a thin form factor, it may be difficult to achieve adequate heat transfer when the channel 108 is constrained to be solely co-planar with the first component 102 and/or the second component 104 (e.g., when only a narrow side of the first component 102 and/or the second component 104 faces the channel 108), and instead, portions of the channel 108 may be arranged to be non-co-planar with the first component 102 and/or the second component 104 (e.g., in a different layer than the first component 102 and/or the second component 104, so that the larger face of the first component 102 and/or the second component 104 faces the channel 108) so there is a greater area over which heat may be transferred.[45] The channel 108 may form a closed circuit such that the electrolytic fluid 110 is constrained to remain within the channel 108. In some embodiments, the channel 108 may be formed such that the electrolytic fluid 110 is constrained to remain within the channel 108, but the channel 108 may not form a closed circuit (e.g., the channel 108 may be shaped as a tube with one or more bends). 
In some embodiments, the interior surface of the channel 108 may be coated in a dielectric (e.g., Teflon, barium strontium titanate (BST), or any other suitable dielectric), and electrowetting on dielectric (EWOD) techniques may be used to move the electrolytic fluid 110 in the channel 108. In some embodiments, the interior surface of the channel 108 may be coated in a metal, and metal-based electrowetting techniques may be used.[46] As used herein, "electrolytic fluid" may include any fluid that has an electrolytic component that can undergo electrowetting (as discussed further below). The electrolytic fluid 110 may include any suitable fluids and may not have a uniform composition. For example, in some embodiments, the electrolytic fluid 110 may include electrolyte droplets in oil. One example of the electrolytic fluid 110 may be potassium chloride (KCl) droplets in silicone oil, but any suitable fluid may be used.[47] In some embodiments, the electrolytic fluid 110 may include an organic solvent. When the FSM 106 includes a polymer material that may absorb organic solvents, any of a number of known techniques may be used to improve the hermeticity of the flexible IC package 100. Examples of suitable techniques for improving the hermeticity of the flexible IC package 100 include coating the polymer material with a hybrid organic/inorganic polymer to prevent contact between the polymer material and the organic solvent (e.g., as described in Kim et al., Solvent-resistant PDMS microfluidic devices with hybrid inorganic/organic polymer coatings, Advanced Functional Materials, v. 19, pp. 3796-3803 (2009)), thermal aging during curing, and changing the ratio of pre-polymer and curing agent of a polymer (both of which are described in, e.g., Huang et al., The improved resistance of PDMS to pressure-induced deformation and chemical solvent swelling for microfluidic devices, Microelectronic Engineering, v. 124, pp. 66-75 (2014)).
In some embodiments, the electrolytic fluid 110 may include an inorganic solvent (e.g., water).[48] The flexible IC package 100 may be capable of bending and/or stretching without damaging the components therein. This ability may make some embodiments of the flexible IC package 100 particularly suitable for wearable computing applications, in which the flexible IC package 100 is disposed on or close to a user's body and should be capable of deforming with the user's movement. The FSM 106 may include any suitable flexible substrate material or materials. For example, in some embodiments, the FSM 106 may include PET. In some embodiments, the FSM 106 may include PDMS. In some embodiments, the FSM 106 may include polyimide or another thermoplastic elastomer.[49] Two or more electrodes 116 may be disposed in the FSM 106 and may be positioned at locations proximate to the channel 108. As used herein, an electrode may be positioned at a location "proximate" to a channel when an electric field generated by the electrode is sufficiently strong to move electrolytic fluid in the channel by electrowetting. To achieve a sufficiently strong electric field, it may be advantageous in some embodiments to position the electrodes in a layer of the FSM 106 adjacent to the channel 108 (e.g., "under" the channel 108), but any suitable arrangement may be used in accordance with the teachings herein. Electrowetting generally refers to the application of an electric field to a fluid to change the ability of that fluid to maintain contact with a solid surface and, more specifically herein, refers to the application of an electric field on one side of an electrolyte droplet and a channel to asymmetrically change the interfacial surface tension of that droplet to asymmetrically deform a liquid meniscus and thereby drive bulk fluid motion in the channel. 
A number of techniques exist for the transportation of fluid droplets through micro-channels via electrowetting, such as those described by Cho et al., Creating, transporting, cutting, and merging liquid droplets by electrowetting-based actuation for digital microfluidic circuits, Journal of Microelectromechanical Systems, v. 12.1, pp. 70-80 (2003). Although a particular number of electrodes 116 are illustrated in FIG. 1, any suitable number of electrodes may be included in the flexible IC package 100. The electrodes 116 may be formed from any suitable conductive material, such as copper.[50] The electrodes 116 may be coupled to an electrode controller 192, which may be configured to selectively cause one or more of the electrodes 116 to generate an electric field. More specifically, electrodes 116 may be arranged along the channel 108 such that electric fields may be generated between two or more of the electrodes 116. In some embodiments, the electrode controller 192 may be included in the first component 102 or the second component 104 (which may be, e.g., dies), while in other embodiments, the electrode controller 192 may be separate from any components cooled using the thermal management techniques disclosed herein.[51] During use, the electrode controller 192 may cause sequential sets of the electrodes 116, beneath the leading meniscus of an electrolytic droplet in the electrolytic fluid 110, to generate electric fields so as to move the droplet of the electrolytic fluid 110, via electrowetting, within the channel 108. In some embodiments, the electrode controller 192 may cause the electrodes 116 to generate electric fields so as to circulate the electrolytic fluid 110 through the channel 108. The electrode controller 192 may cause two or more of the electrodes 116 to generate an electric field by providing a voltage to the two or more electrodes 116.
The level and distribution of voltage applied may depend on the particular configuration of the flexible IC package 100 and the desired rate of movement of the electrolytic fluid 110, and in some embodiments may be between approximately 15 and approximately 50 volts.[52] In some embodiments, one or more of the electrodes 116 may be coupled to a reference voltage by the electrode controller 192 (e.g., to ground), and the voltage on the electrodes 116 may not change during operation; instead, the voltages on other ones of the electrodes 116 may change to cause the changing electric fields that drive movement of the electrolytic fluid 110. An example of such a technique is discussed in Pollack et al., Electrowetting-based actuation of droplets for integrated microfluidics, Lab Chip, v. 2, pp. 96-101 (2002) and in Pollack et al., Electrowetting-based actuation of liquid droplets for microfluidic applications, Appl. Phys. Lett., v. 77, n. 11, pp. 1725-1726 (2000). The use of electrodes to drive electrolytic fluid through a channel may be performed in accordance with the teachings disclosed herein and the techniques known in the art (including those referred to herein), and thus is not discussed in detail herein.[53] The electrolytic fluid 110 may absorb heat from the first component 102 and/or the second component 104 and may transport that heat along the channel 108 as the electrolytic fluid 110 moves in the channel 108. This heat may be dissipated in regions of the flexible IC package 100 that are cooler than the first component 102 and/or the second component 104, thereby cooling the first component 102 and/or the second component 104.[54] In some embodiments, circulation of the electrolytic fluid 110 within the channel 108 may occur continuously to distribute heat in regions of the flexible IC package 100 proximate to the channel 108.
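The electrode-sequencing scheme described above, in which the controller energizes the electrode just ahead of a droplet's leading meniscus so that electrowetting draws the droplet one electrode pitch at a time around a closed channel, can be sketched as follows. This is a minimal illustrative model only; the class, method names, and voltage value are hypothetical and are not part of the disclosure.

```python
# Simplified model of sequential electrode actuation for electrowetting-based
# droplet transport in a closed-loop channel. Electrodes are indexed in order
# along the channel; energizing the electrode ahead of the droplet advances
# the droplet by one electrode pitch per step. All names are illustrative.

class ElectrodeController:
    def __init__(self, num_electrodes, actuation_voltage=30.0):
        self.num_electrodes = num_electrodes
        self.actuation_voltage = actuation_voltage  # volts; within ~15-50 V
        self.voltages = [0.0] * num_electrodes      # all electrodes grounded

    def step(self, droplet_position):
        """Energize the electrode ahead of the droplet; ground the rest.

        Returns the droplet's new position (one electrode pitch forward in
        this simplified model; the channel is treated as a closed loop).
        """
        target = (droplet_position + 1) % self.num_electrodes
        self.voltages = [0.0] * self.num_electrodes
        self.voltages[target] = self.actuation_voltage
        return target

    def circulate(self, droplet_position, steps):
        """Drive the droplet through a sequence of actuation steps."""
        for _ in range(steps):
            droplet_position = self.step(droplet_position)
        return droplet_position

controller = ElectrodeController(num_electrodes=8)
# One full lap around an 8-electrode loop returns the droplet to its start.
final = controller.circulate(droplet_position=0, steps=8)  # final == 0
```

In a physical package, the step cadence and voltage would be tuned to the droplet size, channel geometry, and dielectric coating; the model above captures only the control sequence, not the fluid dynamics.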
In some embodiments, circulation of the electrolytic fluid 110 within the channel 108 may occur at predetermined intervals (e.g., periodically after a predetermined number of minutes, periodically after the flexible IC package 100 has been in active use for a predetermined period, etc.).[55] In some embodiments, the electrode controller 192 may be configured to selectively cause one or more of the electrodes 116 to generate an electric field based on one or more indicators of a temperature of the first component 102 or the second component 104. For example, circulation of the electrolytic fluid 110 within the channel 108 may occur when one or more components proximate to the channel 108 exceeds a temperature threshold. The temperature threshold may be different for different ones of the one or more components, and the temperature of the component may be measured by a temperature sensor included in the component itself or a temperature sensor (e.g., a thermocouple) disposed in the flexible IC package 100 proximate to the component. For example, the first component 102 may be associated with a first temperature threshold, and the second component 104 may be associated with a second temperature threshold. When the temperature of the first component 102 exceeds the first threshold, or the temperature of the second component 104 exceeds the second threshold, the electrode controller 192 may cause one or more of the electrodes 116 to generate electric fields to circulate the electrolytic fluid 110. In some embodiments, the temperature threshold associated with a particular component may depend on the temperature of another component or region in the flexible IC package 100 (e.g., another component in the flexible IC package 100). For example, the electrode controller 192 may be configured to cause circulation of the electrolytic fluid 110 in the channel 108 when the temperature of the first component 102 exceeds the temperature of the second component 104.
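A threshold-based circulation policy of the kind just described, including the case where one component's threshold is another component's current temperature, might be sketched as below. The function and the component names are hypothetical assumptions for illustration and are not taken from the disclosure.

```python
# Illustrative sketch of threshold-based circulation control: circulate the
# electrolytic fluid when any monitored component exceeds its associated
# temperature threshold. A threshold may be a fixed value, or the name of
# another component (meaning: circulate when this component runs hotter
# than that other component). All names here are hypothetical.

def should_circulate(temperatures, thresholds):
    """Return True if any component's temperature exceeds its threshold.

    temperatures: dict mapping component name -> measured temperature (C)
    thresholds:   dict mapping component name -> fixed limit (float) or the
                  name of another component (str) whose current temperature
                  serves as the limit
    """
    for component, temp in temperatures.items():
        limit = thresholds.get(component)
        if limit is None:
            continue  # component is not monitored against any threshold
        if isinstance(limit, str):
            # Resolve a component-name threshold to that component's
            # current temperature.
            limit = temperatures[limit]
        if temp > limit:
            return True
    return False

temps = {"component_102": 68.0, "component_104": 55.0}
# Fixed limit for 104; 102 circulates whenever it runs hotter than 104.
limits = {"component_102": "component_104", "component_104": 70.0}
circulate = should_circulate(temps, limits)  # True: 68.0 > 55.0
```

The electrode controller would evaluate such a predicate against sensor readings (from on-die sensors or thermocouples in the package) and, when it returns True, begin the electrode sequencing that drives the fluid.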
In such an embodiment, the temperature threshold associated with the first component 102 is the temperature of the second component 104 (which will likely change during operation).[56] In some embodiments, the arrangement of the channel 108 within the flexible IC package 100 may be selected so some portions of the channel 108 are proximate to components that are less likely to be "hot" and other portions of the channel 108 are proximate to components that are more likely to be "hot" (and in need of cooling). For example, if the first component 102 is a processing device having a core or other computing element, and the second component 104 is an image processing device (e.g., a graphics component) or a device not having a core (an "uncore" device, such as a communications device), the first component 102 is likely to run hotter than the second component 104. In such an embodiment, the channel 108 may be advantageously routed so that the first portion 112 is proximate to the first component 102 and the second portion 114 is proximate to the second component 104; heat generated by the "hot" first component 102 may be absorbed by the electrolytic fluid 110 in the first portion 112, and the electrolytic fluid 110 may transport that heat through the channel 108 toward the "cool" second component 104 (where the heat may be dissipated). This circulation may occur continuously, periodically, or in response to the first component 102/second component 104 exceeding a temperature threshold. In this manner, temperature gradients within the flexible IC package 100 may be mitigated by dynamically moving heat from higher temperature areas to lower temperature areas.[57] In some embodiments, the arrangement of the channel 108 within the flexible IC package 100 may be selected so different portions of the channel 108 are proximate to components that are not likely to be "hot" (and in need of cooling) at the same time.
For example, if the first component 102 and the second component 104 are unlikely to be generating significant heat at the same time, the channel 108 may be advantageously routed so that the first portion 112 is proximate to the first component 102 and the second portion 114 is proximate to the second component 104; when the first component 102 is active, the electrolytic fluid 110 in the first portion 112 may absorb the heat and transport it through the channel 108 toward the inactive, cooler second component 104 (where the heat may be dissipated), and vice versa. This circulation may occur continuously, periodically, or in response to the first component 102/second component 104 exceeding a temperature threshold. In this manner, temperature gradients within the flexible IC package 100 may be mitigated by dynamically moving heat from higher temperature areas to lower temperature areas.[58] Various ones of the IC packages 100 disclosed herein may reduce the peak temperature of regions within the flexible IC package 100 proximate to the channel 108 during operation and test by selectively and actively transporting heat from these regions to other, cooler regions via the electrowetting-based integrated thermal management system provided by the electrode controller 192, the electrodes 116, the electrolytic fluid 110, and the channel 108. "Hot" regions may be those proximate to components that generate significant heat, and "cool" regions may be those proximate to components in the flexible IC package 100 that generate less or no heat. The degree of reduction of peak component temperature in various embodiments will depend on the particular arrangement of components in the flexible IC package 100, but the inclusion of the thermal management systems disclosed herein may achieve reductions in peak component temperature of 20% or more.
The thermal management systems disclosed herein may also consume a minimal amount of power (on the order of fractions of a milliwatt), and thus may be particularly appropriate for low-power wearable computing applications (which may have a typical power consumption on the order of 1 watt).[59] Some of the embodiments of the flexible IC package 100 disclosed herein may provide an active, integrated, multilayer thermal solution for flexible, bendable packages, wherein the incorporation of the thermal solution into the flexible IC package 100 does not compromise the bendability and stretchability of the flexible IC package 100. In some embodiments, as discussed below, the thermal management systems disclosed herein can selectively cool parts of the flexible IC package 100 in different layers of the FSM 106 based on the arrangement of the channel 108 (or multiple channels, as discussed below). In some embodiments, the thermal management systems disclosed herein may minimize thermal yield loss during manufacturing test by actively reducing the temperature of the flexible IC package 100 to keep the temperature below the reliability temperature limit. Additionally, in some embodiments, the thermal management systems disclosed herein may improve performance of the flexible IC package 100 during use in the field by keeping the temperature of the flexible IC package 100 below the maximum allowable temperature without throttling the performance of the flexible IC package 100.[60] In some embodiments, any of the flexible IC packages 100 disclosed herein may include any of the embodiments of the flexible apparatus disclosed in co-pending U.S. Patent Application No. 14/227,779, titled "ELECTRIC CIRCUIT ON FLEXIBLE SUBSTRATE."
For example, the IC package 100 may include a glass island on a flexible substrate, an interconnect on the flexible substrate and partially overlapping the glass island, a component (e.g., a die) situated on the glass island and electrically coupled to the interconnect, and a layer of glass over the device and at least partially over the interconnect, such that the layer of glass, the glass island, and the interconnect form a hermetic seal for the device. In another example, the IC package 100 may include multiple stacked flexible substrate layers including a first substrate layer on a second substrate layer, first and second component sections situated in the stacked flexible substrate layers, and a first interconnect circuit patterned on a surface of the second substrate layer proximate the first substrate layer, wherein the first and second component sections are electrically coupled through the interconnect circuit. In another example, the IC package 100 may include an apparatus formed by forming an interconnect on a flexible substrate, situating a component (e.g., a die) on the substrate near the interconnect, and selectively depositing a first hermetic material on the device and interconnect so as to hermetically seal the device within the combination of the interconnect and first hermetic material.[61] As noted above, the flexible IC package 100 may include one or more components, such as the first component 102 and the second component 104. Different IC package designs may include different numbers and arrangements of components. For example, in multilayer embedded component packages, different components (e.g., different component segments) may be located between different layers of the FSM 106. FIGS. 2-17 illustrate a number of embodiments of IC packages 100 having different arrangements of components and channels. 
In these embodiments, one or more components disposed in the same or different layers of a multilayer flexible and bendable IC package 100 are proximate to one or more channels containing the electrolytic fluid 110 (e.g., electrolyte droplets in oil) in which the electrolytic fluid 110 circulates via electrowetting to transport heat from hotter regions of the flexible IC package 100 to cooler regions of the flexible IC package 100. In some embodiments, electrodes 116 printed on different layers of the FSM 106 may drive the motion of the electrolytic fluid 110, thus inducing bulk flow in the channel 108 in different layers. Thus, in some embodiments, the channel 108 may act as a self-contained circular mixer in which fluid is driven in bulk by electrowetting induced by the electrodes 116.[62] The embodiments illustrated in FIGS. 2-17 are simply illustrative, and any suitable arrangements in accordance with the teachings herein are within the scope of this disclosure. In particular, the electrodes illustrated in FIGS. 2-17 may not represent particular sizes, shapes, numbers, or arrangements of the electrodes, but instead indicate potential locations for at least some of the electrodes. Arrangements in accordance with the embodiments disclosed herein may include more or fewer electrodes than illustrated, and the electrodes may be positioned as illustrated or in any other suitable location so that electrowetting-based movement of the electrolytic fluid may occur, in accordance with the teachings herein and the techniques known in the art. The number, size, shape, and arrangement of the electrodes proximate to a channel may take any suitable form, such as any of those described in detail herein or discussed in any of the references cited herein.
For example, in embodiments where the electrolytic fluid includes electrolyte droplets in oil, each of the electrodes may be dimensioned such that the area of a face of an electrode facing the channel is similar to the "footprint" of an electrolyte droplet.[63] Additionally, a number of other structures not illustrated in FIGS. 2-17 may be included in the flexible IC packages 100 discussed with reference to FIGS. 2-17. These structures may include conductive vias between different layers of the FSM 106, "horizontal" conductive traces to route electrical signals within the flexible IC package 100, and other components embedded in the flexible IC package 100 (e.g., other electrical components, optical components, etc.). For example, FIGS. 16 and 17 illustrate interlayer conductive material that may be used to route electrical signals between layers of the FSM 106 in some example flexible IC packages 100, and any of the embodiments of the flexible IC package 100 discussed herein may include such conductive material and any other suitable structures.[64] FIGS. 2 and 3 illustrate a first example embodiment of the flexible IC package 100. In particular, FIG. 2 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 3 is a portion of a top view of the flexible IC package 100 of FIG. 2. In the embodiment of FIGS. 2 and 3, the flexible IC package 100 includes a first layer 202 of the FSM 106 and a second layer 204 of the FSM 106. Other layers of the FSM 106 may be included in the flexible IC package 100 of FIGS. 2 and 3 (and the flexible IC package 100 as illustrated in FIGS. 4-17), and some examples are illustrated therein. The first component 102 and the second component 104 may be disposed in the first layer 202. The channel 108 may be disposed in the second layer 204, with the electrolytic fluid 110 disposed therein.
The first portion 112 of the channel 108 may be proximate to the first component 102, and the second portion 114 of the channel 108 may be proximate to the second component 104.[65] The first portion 112 of the channel 108 and the second portion 114 of the channel 108 may each have a serpentine structure, as illustrated in FIG. 3. The serpentine structure may increase the volume of electrolytic fluid 110 that can absorb heat from the corresponding component, and thus increase the amount of heat that can be transferred. Although many of the embodiments discussed with reference to FIGS. 2-17 may illustrate serpentine structures for various portions of the channel 108, any other suitable structure may be used, a number of examples of which are discussed below with reference to FIGS. 25-28. The electrodes 116 may include electrodes disposed between the first layer 202 and the second layer 204 (e.g., printed on the first layer 202 prior to formation of the second layer 204). The electrodes 116 of FIGS. 2 and 3 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[66] FIGS. 4 and 5 illustrate a second example embodiment of the flexible IC package 100. In particular, FIG. 4 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 5 is a portion of a top view of the flexible IC package 100 of FIG. 4. In the embodiment of FIGS. 4 and 5, the flexible IC package 100 includes a first layer 402 of the FSM 106, a second layer 404 of the FSM 106, and a third layer 406 of the FSM 106, with the third layer 406 disposed between the first layer 402 and the second layer 404. The first component 102 may be disposed in the first layer 402, the second component 104 may be disposed in the second layer 404, and the channel 108 may be disposed in the third layer 406 (with the electrolytic fluid 110 disposed therein).
The first portion 112 of the channel 108 may be proximate to the first component 102, and the second portion 114 of the channel 108 may be proximate to the second component 104. The electrodes 116 may include electrodes disposed between the first layer 402 and the third layer 406 (e.g., printed on the first layer 402 prior to formation of the third layer 406). The electrodes 116 of FIGS. 4 and 5 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[67] Although FIG. 4 shows the first layer 402 adjacent to the third layer 406, and the third layer 406 adjacent to the second layer 404, this need not be the case. In some embodiments, the first layer 402 may be spaced away from the third layer 406 by one or more intervening layers of the FSM 106. In some embodiments, the third layer 406 may be spaced away from the second layer 404 by one or more intervening layers of the FSM 106. The separation between the components included in the flexible IC package 100 and the channel 108 may be selected based on the requirements and constraints of a particular application, such as the number of layers in the flexible IC package 100, the amount of heat transfer required, other structural constraints, and the material properties of the FSM 106 and other components of the flexible IC package 100. In accordance with these teachings, any of the embodiments discussed herein with reference to FIGS. 2-17 in which two layers of the FSM 106 are adjacent to each other also teach embodiments in which the two layers are spaced apart by one or more intervening layers, as suitable.[68] FIGS. 6 and 7 illustrate a third example embodiment of the flexible IC package 100. In particular, FIG. 6 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 7 is a portion of a top view of the flexible IC package 100 of FIG. 6.
In the embodiment of FIGS. 6 and 7, the flexible IC package 100 includes a first layer 602 of the FSM 106, a second layer 604 of the FSM 106, and a third layer 606 of the FSM 106, with the third layer 606 disposed between the first layer 602 and the second layer 604. The first component 102 and the second component 104 may be disposed in the first layer 602, and a third component 150 may be disposed in the second layer 604. The channel 108 may be disposed in the third layer 606, with the electrolytic fluid 110 disposed therein. The first portion 112 of the channel 108 may be proximate to the first component 102, the second portion 114 of the channel 108 may be proximate to the second component 104, and a third portion 160 of the channel 108 may be proximate to the third component 150. The electrodes 116 may include electrodes disposed between the first layer 602 and the third layer 606 (e.g., printed on the first layer 602 prior to formation of the third layer 606). The electrodes 116 of FIGS. 6 and 7 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[69] FIGS. 8 and 9 illustrate a fourth example embodiment of the flexible IC package 100. In particular, FIG. 8 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 9 is a portion of a top view of the flexible IC package 100 of FIG. 8. In the embodiment of FIGS. 8 and 9, the flexible IC package 100 includes a first layer 802 of the FSM 106, a second layer 804 of the FSM 106, a third layer 806 of the FSM 106, and a fourth layer 808 of the FSM 106, with the third layer 806 disposed between the first layer 802 and the fourth layer 808, and the fourth layer 808 disposed between the third layer 806 and the second layer 804. 
The first component 102 may be disposed in the first layer 802 and the second component 104 may be disposed in the second layer 804. The channel 108 may be arranged to be disposed in the third layer 806 and in the fourth layer 808, with the electrolytic fluid 110 disposed therein. In particular, the channel 108 may include an opening 170 between the third layer 806 and the fourth layer 808, allowing the electrolytic fluid 110 to flow between the third layer 806 and the fourth layer 808. The opening 170 may be formed as a via using standard soft lithography techniques (discussed in further detail below). Thus, the channel 108 of FIGS. 8 and 9 is an example of a multilayer channel. The first portion 112 of the channel 108 may be proximate to the first component 102, and the second portion 114 of the channel 108 may be proximate to the second component 104. The electrodes 116 may include electrodes 116a disposed between the first layer 802 and the third layer 806 (e.g., printed on the first layer 802 prior to formation of the third layer 806), and may include electrodes 116b disposed between the fourth layer 808 and the second layer 804 (e.g., printed on the fourth layer 808 prior to formation of the second layer 804). The electrodes 116 of FIGS. 8 and 9 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[70] FIGS. 10 and 11 illustrate a fifth example embodiment of the flexible IC package 100. In particular, FIG. 10 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 11 is a portion of a top view of the flexible IC package 100 of FIG. 10. In the embodiment of FIGS. 10 and 11, the flexible IC package 100 includes a first layer 1002 of the FSM 106 and a second layer 1004 of the FSM 106. 
The first component 102 may be disposed in the first layer 1002 and the second component 104 may be disposed in the second layer 1004. The channel 108 may be arranged to be disposed in the first layer 1002 and in the second layer 1004, with the electrolytic fluid 110 disposed therein. In particular, the channel 108 may include the opening 170 between the first layer 1002 and the second layer 1004, allowing the electrolytic fluid 110 to flow between the first layer 1002 and the second layer 1004. Thus, the channel 108 of FIGS. 10 and 11 is another example of a multilayer channel. The first portion 112 of the channel 108 may be proximate to the first component 102, and the second portion 114 of the channel 108 may be proximate to the second component 104. The electrodes 116 may include electrodes 116a disposed "under" the first layer 1002 (e.g., printed on an underlying portion of the FSM 106 prior to formation of the first layer 1002) and may include electrodes 116b disposed between the first layer 1002 and the second layer 1004 (e.g., printed on the first layer 1002 prior to formation of the second layer 1004). The electrodes 116 of FIGS. 10 and 11 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[71] FIGS. 12 and 13 illustrate a sixth example embodiment of the flexible IC package 100. In particular, FIG. 12 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 13 is a portion of a top view of the flexible IC package 100 of FIG. 12. In the embodiment of FIGS. 
12 and 13, the flexible IC package 100 includes a first layer 1202 of the FSM 106, a second layer 1204 of the FSM 106, a third layer 1206 of the FSM 106, and a fourth layer 1208 of the FSM 106, with the third layer 1206 disposed between the first layer 1202 and the fourth layer 1208, and the fourth layer 1208 disposed between the third layer 1206 and the second layer 1204. The first component 102 may be disposed in the first layer 1202, the second component 104 may be disposed in the second layer 1204, and the third component 150 may be disposed in the fourth layer 1208. The channel 108 may be arranged to be disposed in the third layer 1206 and in the fourth layer 1208, with the electrolytic fluid 110 disposed therein. In particular, the channel 108 may include the opening 170 between the third layer 1206 and the fourth layer 1208, allowing the electrolytic fluid 110 to flow between the third layer 1206 and the fourth layer 1208. Thus, the channel 108 of FIGS. 12 and 13 is another example of a multilayer channel. The first portion 112 of the channel 108 may be proximate to the first component 102, the second portion 114 of the channel 108 may be proximate to the second component 104, and the third portion 160 of the channel 108 may be proximate to the third component 150. The electrodes 116 may include electrodes 116a and 116b disposed between the first layer 1202 and the third layer 1206 (e.g., printed on the first layer 1202 prior to formation of the third layer 1206) and may include electrodes 116c disposed between the third layer 1206 and the fourth layer 1208 (e.g., printed on the third layer 1206 prior to formation of the fourth layer 1208). The electrodes 116 of FIGS. 12 and 13 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[72] FIGS.
14 and 15 illustrate a seventh example embodiment of the flexible IC package 100. In particular, FIG. 14 is a portion of a side view of an embodiment of the flexible IC package 100, and FIG. 15 is a portion of a top view of the flexible IC package 100 of FIG. 14. In the embodiment of FIGS. 14 and 15, the flexible IC package 100 includes a first layer 1402 of the FSM 106, a second layer 1404 of the FSM 106, and a third layer 1406 of the FSM 106, wherein the first layer 1402 is disposed between the second layer 1404 and the third layer 1406. The first component 102 and the third component 150 may be disposed in the first layer 1402, and the second component 104 may be disposed in the second layer 1404. The channel 108 may be arranged to be disposed in the first layer 1402, the second layer 1404, and the third layer 1406, with the electrolytic fluid 110 disposed therein. In particular, the channel 108 may include an opening 170a between the third layer 1406 and the second layer 1404 (spanning the first layer 1402) and an opening 170b between the second layer 1404 and the first layer 1402, allowing the electrolytic fluid 110 to flow between the first layer 1402, second layer 1404, and third layer 1406. Thus, the channel 108 of FIGS. 14 and 15 is another example of a multilayer channel. The first portion 112 of the channel 108 may be proximate to the first component 102, the second portion 114 of the channel 108 may be proximate to the second component 104, and the third portion 160 of the channel 108 may be proximate to the third component 150. 
The electrodes 116 may include electrodes 116a disposed between the first layer 1402 and the third layer 1406 (e.g., printed on the first layer 1402 prior to formation of the third layer 1406), electrodes 116b disposed "under" the second layer 1404 (e.g., printed on an underlying portion of the FSM 106 prior to formation of the second layer 1404), and electrodes 116c disposed between the second layer 1404 and the first layer 1402 (e.g., printed on the second layer 1404 prior to formation of the first layer 1402). The electrodes 116 of FIGS. 14 and 15 may be positioned at locations proximate to the channel 108 so as to effect movement of the electrolytic fluid 110 via dynamic electric fields under the control of the electrode controller 192 (not shown).[73] Some embodiments of the flexible IC packages 100 disclosed herein may include multiple channels configured as discussed above with reference to the channel 108. For example, FIGS. 16 and 17 are portions of side views of examples of flexible IC packages 100 with multiple channels, in accordance with various embodiments. In particular, FIG. 16 illustrates the flexible IC package 100 having a first layer 1612 of the FSM 106, a second layer 1614 of the FSM 106, and a third layer 1616 of the FSM 106. A number of components, including a first component 1602, a second component 1604, a third component 1606, a fourth component 1608, and a fifth component 1610, may be included in the flexible IC package 100. Any of the components 1602-1610 may take the form of any of the other components disclosed herein (e.g., the first component 102, the second component 104, and the third component 150). The first component 1602, the second component 1604, the fourth component 1608, and the fifth component 1610 may be disposed in the second layer 1614, and the third component 1606 may be disposed in the first layer 1612. 
A first channel 1620 may be disposed in the second layer 1614 and the third layer 1616, and may include an opening 1622 between the second layer 1614 and the third layer 1616. An electrolytic fluid 1624 may be disposed in the first channel 1620. Various portions of the first channel 1620 may be proximate to the first component 1602, the second component 1604, and the third component 1606. Electrodes 1630a may be disposed proximate to the first channel 1620. A second channel 1640 may be disposed in the third layer 1616. An electrolytic fluid 1644 may be disposed in the second channel 1640. Various portions of the second channel 1640 may be proximate to the fourth component 1608 and the fifth component 1610. The electrodes 1630b may be disposed proximate to the second channel 1640. The first channel 1620 and the second channel 1640 may be formed in accordance with any of the embodiments of the channel 108 discussed herein, the electrolytic fluid 1624 and the electrolytic fluid 1644 may be formed in accordance with any of the embodiments of the electrolytic fluid 110 discussed herein, and the electrodes 1630a and 1630b may be formed in accordance with any of the embodiments of the electrodes 116 discussed herein. [74] The channels 1620 and 1640 may include inlets 1626 and 1646, respectively. These inlets may be extensions of the channels 1620 and 1640, respectively, and may extend to an exterior surface of the flexible IC package 100. During manufacture of the flexible IC package 100, the electrolytic fluids 1624 and 1644 may be provided to the channels 1620 and 1640, respectively, via the inlets 1626 and 1646, respectively. After electrolytic fluid has been provided to the channels, seals 1628 and 1648 may be provided to the inlets 1626 and 1646, respectively, to seal the electrolytic fluid 1624 and 1644 within the channels 1620 and 1640, respectively. 
The seals 1628 and 1648 (and any of the other seals disclosed herein) may be formed from any suitable material, such as the same material as the FSM 106 (e.g., PDMS or PET), thermoplastics, or adhesives, among others. Various seals and sealing techniques that may be used in some embodiments of the flexible IC package 100 are discussed in Yuksel et al., Lab-on-a-chip devices: How to close and plug the lab?, Microelectronic Engineering, v. 132, pp. 156-175 (2015). Any of the embodiments of the flexible IC package 100 disclosed herein may include inlets and seals as discussed above with reference to FIG. 16, with these elements omitted from most drawings for ease of illustration.[75] The flexible IC package 100 illustrated in FIG. 16 also includes portions of conductive material 1618 disposed in various layers of the FSM 106. These portions of conductive material 1618 may route electrical signals (e.g., information signals, power, ground, etc.) across layers of the FSM 106 in conjunction with portions of conductive material disposed between layers of the FSM 106 (not shown). In particular, the portions of conductive material 1618 may route signals to and/or from the components 1602-1610. The use of "vertical" and "horizontal" conductive traces within an IC package for signal routing is well known, and thus is not discussed further herein.[76] FIG. 17 illustrates a flexible IC package 100 having a first layer 1712 of the FSM 106, a second layer 1714 of the FSM 106, a third layer 1716 of the FSM 106, and a fourth layer 1718 of the FSM 106. A number of components, including a first component 1702, a second component 1704, a third component 1706, a fourth component 1708, and a fifth component 1710, may be included in the flexible IC package 100. Any of the components 1702-1710 may take the form of any of the other components disclosed herein (e.g., the first component 102, the second component 104, and the third component 150).
The first component 1702, the third component 1706, and the fifth component 1710 may be disposed in the fourth layer 1718. The second component 1704 may be disposed in the first layer 1712, and the fourth component 1708 may be disposed in the second layer 1714. A first channel 1720 may be disposed in the second layer 1714 and the third layer 1716, and may include an opening 1722 between the second layer 1714 and the third layer 1716. An electrolytic fluid 1724 may be disposed in the first channel 1720. Various portions of the first channel 1720 may be proximate to the first component 1702 and the second component 1704. The electrodes 1730a may be disposed proximate to the first channel 1720. A second channel 1740 may be disposed in the third layer 1716. Various portions of the second channel 1740 may be proximate to the third component 1706 and the fourth component 1708. An electrolytic fluid 1744 may be disposed in the second channel 1740. Electrodes 1730b may be disposed proximate to the second channel 1740. In the embodiment illustrated in FIG. 17, no channel may be proximate to the fifth component 1710. The first channel 1720 and the second channel 1740 may be formed in accordance with any of the embodiments of the channel 108 discussed herein, the electrolytic fluid 1724 and the electrolytic fluid 1744 may be formed in accordance with any of the embodiments of the electrolytic fluid 110 discussed herein, and the electrodes 1730a and 1730b may be formed in accordance with any of the embodiments of the electrodes 116 discussed herein.[77] The flexible IC package 100 illustrated in FIG. 17 also includes portions of conductive material 1750 disposed in various layers of the FSM 106. As discussed above with reference to the embodiment of FIG. 16, these portions of conductive material 1750 may route electrical signals (e.g., information signals, power, ground, etc.)
across layers of the FSM 106 in conjunction with portions of conductive material disposed between layers of the FSM 106 (not shown). In particular, the portions of conductive material 1750 may route signals to and/or from the components 1702-1710.[78] The flexible IC packages 100 disclosed herein may be manufactured using any suitable process. For example, FIGS. 18-20 illustrate various assemblies formed during a process of manufacturing the flexible IC package 100 of FIG. 16, in accordance with various embodiments.[79] FIG. 18 illustrates an assembly 1800 including the FSM 106 having one or more components (e.g., the components 1602-1610) disposed therein, along with a channel (e.g., the first channel 1620 and the second channel 1640). In some embodiments, the channel may form a closed circuit having one or more portions proximate to the one or more components (e.g., as discussed above with reference to FIG. 16). Electrodes (e.g., the electrodes 1630) may be disposed proximate to channels (e.g., the electrodes 1630a may be disposed proximate to the first channel 1620 and the electrodes 1630b may be disposed proximate to the second channel 1640). The channel(s) in the assembly 1800 may include inlet(s) for fluid communication with the exterior (e.g., the inlets 1626 and 1646 of the channels 1620 and 1640, respectively).[80] Various ones of the embodiments of the flexible IC package 100 disclosed herein, and subassemblies thereof (like the assembly 1800), may be manufactured using existing soft lithography techniques, which utilize flexible stamps, molds, and/or flexible photomasks. The use of such techniques may enable the ready adoption and manufacture of the IC packages 100 disclosed herein by mitigating the time and expense involved in retooling.
Soft lithography has been used to form channels in flexible materials (e.g., PET and PDMS), and known soft lithography techniques may be used to form any of the channels in the FSM disclosed herein (e.g., the channel 108 in the FSM 106). Examples of such techniques are discussed in, e.g., Qin et al., Soft lithography for micro- and nanoscale patterning, Nature Protocols, v. 5, n. 3, pp. 491-502 (2010); and Wu et al., Construction of microfluidic chips using polydimethylsiloxane bonding, Lab Chip, v. 5, pp. 1393-1398 (2005). Techniques for forming the electrodes 116 in the FSM 106 are also known in the art; examples include the techniques for forming flexible dry copper electrodes discussed in Fernandes et al., Flexible PDMS-based dry electrodes for electro-optic acquisition of ECG signals in wearable devices, Proceedings of the 32nd Annual International Conference of the IEEE EMBS, pp. 3503-3506 (2010); and the techniques for forming flexible patterned metal electrodes discussed in Chou et al., Fabrication of stretchable and flexible electrodes based on PDMS substrate, Proceedings of the 2012 IEEE 25th International Conference on Micro Electro Mechanical Systems, pp. 247-250 (2012). The assembly 1800 may be formed using any such techniques.[81] FIG. 19 illustrates an assembly 1900 subsequent to providing electrolytic fluid to the channel(s) in the assembly 1800 (e.g., providing the electrolytic fluid 1624 to the first channel 1620 and providing the electrolytic fluid 1644 to the second channel 1640 via the inlets 1626 and 1646, respectively). Electrolytic fluid may be provided to a channel using a pipette or any other suitable technique.[82] FIG. 20 illustrates an assembly 2000 subsequent to sealing the inlet(s) of the channel(s) in the assembly 1900 to secure the electrolytic fluid therein (e.g., with the seals 1628 and 1648 in the inlets 1626 and 1646, respectively). The assembly 2000 has the same structure as the flexible IC package 100 of FIG.
16.[83] As discussed above, the flexible IC packages disclosed herein may be advantageously coupled with a support structure to form a wearable IC device. For example, FIG. 21 is a portion of a side view of a wearable IC device 180 including the flexible IC package 100 of FIG. 16 coupled to a support structure 190, in accordance with various embodiments. Although the support structure 190 is shown as coupled to a single planar face of the flexible IC package 100, this need not be the case, and the support structure 190 may surround the flexible IC package 100, partially surround the flexible IC package 100, contact one or more faces of the flexible IC package 100, or be coupled to the flexible IC package 100 in any suitable manner. In some embodiments, the support structure 190 may be coupled to the flexible IC package 100 using an adhesive (e.g., a permanent or removable adhesive). In some embodiments, the support structure 190 may be coupled to the flexible IC package 100 using a mechanical fastener, such as a hook-and-loop fastener (e.g., with hook material coupled to one of the flexible IC package 100 or the support structure 190, and loop material coupled to the other of the flexible IC package 100 or support structure 190), prongs (e.g., as commonly used to secure precious stones in a ring setting), stitches, or snaps (e.g., with the male portion of a snap coupled to the flexible IC package 100 or the support structure 190, and the female portion of the snap coupled to the other of the flexible IC package 100 or the support structure 190). In some embodiments, the support structure 190 may be coupled to the flexible IC package 100 by embedding the flexible IC package 100 in the material of the support structure 190 (e.g., by capturing the flexible IC package 100 within a pocket, or within layers of cloth via stitching, or by embedding the flexible IC package 100 in a flexible mold compound). 
[84] The wearable IC device 180 may be configured for wear on any suitable portion of a user's body. For example, the support structure 190 may be part of a shoe, sock, anklet, orthopedic brace, undergarment, item of clothing, armband, bracelet, ring, glove, necklace, scarf, eyeglasses, ear jewelry, temporary tattoo, sticker, earbud, headset, hat, hair accessory, or any other item worn on the body. For example, FIG. 22 is a perspective view of the wearable IC device 180 having an armband as the support structure 190 coupled to a flexible IC package 100, in accordance with various embodiments. In the embodiment illustrated in FIG. 22, the flexible IC package 100 may be embedded in fabric or other flexible material of the armband support structure 190. The flexible IC package 100 in the embodiment of FIG. 22 may bend and/or stretch with the bending and/or stretching of the armband support structure 190. In another example, FIG. 23 is a side cross-sectional view of the wearable IC device 180 having a shoe as the support structure 190 coupled to a flexible IC package 100, in accordance with various embodiments. In the embodiment of FIG. 23, the flexible IC package 100 may be disposed between a sole 2302 of the shoe support structure 190 (formed from, e.g., an elastomer or other suitable material) and a fabric layer 2304 (which may include, e.g., foam padding or other suitable materials) on which the user's foot rests when the shoe support structure 190 is being worn. The flexible IC package 100 in the embodiment of FIG. 23 may bend and/or stretch with the bending and/or stretching of the shoe support structure 190.[85] As discussed above, the dynamic electric fields generated by the electrodes 116 in the flexible IC packages 100 disclosed herein may be controlled by the electrode controller 192. FIG. 24 is a block diagram of an electrode controller arrangement 2400 including the electrode controller 192 and an exemplary number of electrodes 116.
Each electrode 116 may be coupled to an electrode input 194 of the electrode controller 192. The electrode inputs 194 may be conductive contacts on the electrode controller 192, which may itself be a microcontroller or any other suitable processing device. In some embodiments, the electrode controller 192 may be included in a component that is itself thermally managed in accordance with the techniques disclosed herein (e.g., the first component 102). The electrode controller 192 may also include sensor inputs 198 to which one or more sensors 196 may be coupled. For example, as discussed above, the electrode controller 192 may selectively cause various electrodes 116 to generate an electric field in response to the temperatures of one or more components in the flexible IC package 100; the sensors 196 may include one or more temperature sensors, and the electrode controller 192 may receive the temperature data via the sensor inputs 198. Any other sensor may be coupled to the electrode controller 192 as suitable. In some embodiments, the electrode controller 192 may include timer circuitry for use in timing the dynamic electric fields.[86] As noted above, the channel 108 may have any suitable shape or dimensions. For example, in a number of the embodiments discussed above with reference to FIGS. 2-17, various portions of the channel 108 may have a serpentine structure so as to expose a substantial part of the channel 108 to the components proximate to the channel 108 so that the electrolytic fluid 110 in the channel 108 can absorb heat from the components. FIGS. 25-28 illustrate various example structures that may be used, instead of or in addition to serpentine structures, for the portion(s) of the channel 108 proximate to a component or components in the flexible IC package 100 (e.g., the first portion 112 or the second portion 114), in accordance with various embodiments. For example, FIG.
25 illustrates a linear structure for a portion of the channel 108. FIG. 26 illustrates a boustrophedonic structure for a portion of the channel 108. FIG. 27 illustrates a spiral structure for a portion of the channel 108. FIG. 28 illustrates a zigzag structure for a portion of the channel 108. The structures illustrated in the various drawings are simply illustrative, and any suitable structure may be used.[87] Other dimensions of the channel 108 may be selected using known techniques that may depend on the electrolytic fluid 110, geometric constraints of the flexible IC package 100, the structure of the channel 108 (e.g., the number of FSM layers traversed by the channel 108), the dimensions of the components to be cooled, and other factors known to one of skill in the art. In some embodiments, a cross-sectional area of the channel 108 (e.g., the area of a plane through which the electrolytic fluid 110 may flow) may be approximately 1 to 100 microns by 1 to 100 microns. In some embodiments, the components included in the flexible IC package 100 may have dimensions on the order of millimeters (e.g., a footprint of 2 millimeters by 2 millimeters, and a thickness of 20-100 microns).[88] FIG. 29 is a flow diagram of an illustrative process 2900 for forming a flexible IC package, in accordance with various embodiments. An embodiment of this process may be used to form the assemblies discussed above with reference to FIGS. 18-20. While the operations of the process 2900 are arranged in a particular order in FIG. 29 and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order. Any of the operations of the process 2900 may be performed in accordance with any of the embodiments of the flexible IC packages 100 described herein.[89] At 2902, a flexible IC assembly may be provided. The flexible IC assembly may include an FSM having disposed therein a component, multiple electrodes, and a channel.
In some embodiments, the channel may form a closed circuit having a portion proximate to the component, and the electrodes may be positioned at locations proximate to the channel. In some embodiments, providing the flexible IC assembly may include printing the electrodes on one or more layers of the FSM. In the example of FIG. 1, the flexible IC assembly 100 may include the FSM 106 having disposed therein the first component 102 (and, optionally, the second component 104), multiple electrodes 116, and the channel 108. The first portion 112 of the channel 108 may be proximate to the component 102, and the electrodes 116 may be proximate to the channel 108. An example of a flexible IC assembly that may be provided at 2902 is the assembly 1800 of FIG. 18.[90] At 2904, the electrolytic fluid may be provided to the channel via the inlet of the flexible IC assembly. For example, the electrolytic fluid 110 may be provided to the channel 108 via the inlet (e.g., as discussed above with reference to FIGS. 16 and 19). An example of a flexible IC assembly having electrolytic fluid in the channel is the assembly 1900 of FIG. 19.[91] At 2906, after providing the electrolytic fluid, the inlet may be sealed, thus trapping the electrolytic fluid in the channel. An example of a flexible IC assembly having a sealed inlet is the assembly 2000 of FIG. 20. In some embodiments, after sealing the inlet, the flexible IC assembly (which may be a flexible IC package) may be coupled to a wearable support structure.[92] FIG. 30 is a flow diagram of an illustrative process 3000 for thermally managing a flexible IC package, in accordance with various embodiments. While the operations of the process 3000 are arranged in a particular order in FIG. 30 and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order.
Any of the operations of the process 3000 may be performed in accordance with any of the embodiments of the flexible IC packages 100 described herein.[93] At 3002, an electrode controller may cause a first pair of electrodes, of a set of multiple electrodes, to generate an electric field. The set of multiple electrodes may be disposed in an FSM of a flexible IC package, and may be positioned at locations proximate to a channel in the FSM. An electrolytic fluid may be disposed in the channel. In the example of FIG. 1, the electrode controller 192 may cause a first pair of the electrodes 116 to generate an electric field. The electrodes 116 may be disposed in the FSM 106 and may be positioned at locations proximate to the channel 108. The electrolytic fluid 110 may be disposed in the channel 108. In some embodiments, the first pair of electrodes may be part of a group of three or more electrodes across which various electric fields may be generated at 3002.[94] At 3004, the electrode controller may cause a second pair of electrodes, of the set of multiple electrodes, to generate an electric field to cause the movement of at least some of the electrolytic fluid within the channel. In some embodiments, the second pair of electrodes may be part of a group of three or more electrodes across which various electric fields may be generated at 3004. In some embodiments, the first pair of electrodes and the second pair of electrodes may share an electrode. For example, the electrode controller 192 may cause a second pair of the electrodes 116 to generate an electric field to cause the movement of at least some of the electrolytic fluid 110 within the channel 108.[95] The process 3000 may continue as the electrode controller causes various pairs of electrodes to generate electric fields to cause the movement of at least some of the electrolytic fluid within the channel so as to transport heat absorbed by the electrolytic fluid to other portions of the IC package.
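The pairwise field sequencing of operations 3002 and 3004 can be illustrated with a short control-loop sketch. This is only a minimal illustration of the traveling-field idea, under assumed names: the `Electrode` class, its `energize`/`deenergize` methods, and the `pump_step` helper are hypothetical and not part of the disclosure.

```python
class Electrode:
    """Hypothetical stand-in for one electrode 116 driven via an electrode input 194."""
    def __init__(self, label):
        self.label = label
        self.energized = False

    def energize(self):
        self.energized = True

    def deenergize(self):
        self.energized = False

def pump_step(electrodes, step):
    """Energize one adjacent pair of electrodes along the channel.

    Generating a field across successive pairs (operations 3002 and 3004 of
    process 3000) produces a traveling field that urges the electrolytic fluid
    along the channel; note that consecutive pairs share an electrode, as
    permitted in paragraph [94].
    """
    for e in electrodes:
        e.deenergize()
    first = electrodes[step % len(electrodes)]
    second = electrodes[(step + 1) % len(electrodes)]
    first.energize()
    second.energize()
    return (first.label, second.label)

# One circulation pass over four electrodes positioned along the channel 108.
channel_electrodes = [Electrode(f"116-{i}") for i in range(4)]
sequence = [pump_step(channel_electrodes, s) for s in range(4)]
print(sequence)  # each pair advances by one electrode per step, wrapping at the end
```

Because the channel forms a closed circuit, the wrap-around in `pump_step` lets the sequence repeat indefinitely, carrying heated fluid away from a hot component and back again after it has shed heat elsewhere.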
As discussed above, in some embodiments, the process 3000 may include a determination by the electrode controller that a temperature of a component in the IC package exceeds a threshold, in response to which the first pair of electrodes or the second pair of electrodes may be caused to generate their electric fields. In some embodiments, the temperature threshold associated with a first component is the temperature of a second component; when the temperature of the first component exceeds the temperature of the second component, the electrode controller may cause the generation of electric fields to move the electrolytic fluid. The flexible IC packages disclosed herein may be used to implement any suitable computing device.[96] FIG. 31 is a block diagram of an example computing device 3100 that may include or be included in the flexible IC package 100 (e.g., as a wearable IC device). As shown, the computing device 3100 may include one or more processors 3102 (e.g., one or more processor cores implemented on one or more components) and a system memory 3104 (implemented on one or more components). As used herein, the term "processor" or "processing device" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processor(s) 3102 may include one or more microprocessors, graphics processors, digital signal processors, crypto processors, or other suitable devices. More generally, the computing device 3100 may include any suitable computational circuitry, such as one or more Application Specific Integrated Circuits (ASICs).[97] The computing device 3100 may include one or more mass storage devices 3106 (such as flash memory devices or any other mass storage device suitable for inclusion in a flexible IC package). 
The system memory 3104 and the mass storage device 3106 may include any suitable storage devices, such as volatile memory (e.g., dynamic random access memory (DRAM)), nonvolatile memory (e.g., read-only memory (ROM)), and flash memory. The computing device 3100 may include one or more I/O devices 3108 (such as displays, user input devices, network interface cards, modems, and so forth, suitable for inclusion in a flexible IC device). The elements may be coupled to each other via a system bus 3112, which represents one or more buses.[98] Each of these elements may perform its conventional functions known in the art. In particular, the system memory 3104 and the mass storage device 3106 may be employed to store a working copy and a permanent copy of programming instructions 3122.[99] The permanent copy of the programming instructions 3122 may be placed into permanent mass storage devices 3106 in the factory or through a communication device included in the I/O devices 3108 (e.g., from a distribution server (not shown)). The constitution of the elements 3102-3112 is known, and accordingly will not be further described.[100] Machine-accessible media (including non-transitory computer-readable storage media), methods, systems, and devices for performing the above-described techniques are illustrative examples of embodiments disclosed herein for thermal management of an IC device. For example, a computer-readable medium (e.g., the system memory 3104 and/or the mass storage device 3106) may have stored thereon instructions (e.g., the instructions 3122) such that, when the instructions are executed by one or more of the processors 3102, the instructions cause the computing device 3100 to perform the thermal management method of FIG. 30.
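As one illustration of the temperature-threshold policy of process 3000 (including the relative policy in which the threshold for a first component is the temperature of a second component), the decision of when to run the pumping sequence might be gated on sensed temperatures. The names below (`should_pump`, `control_tick`, `run_pump_cycle`) are hypothetical placeholders, not the disclosed implementation:

```python
def should_pump(temps, thresholds):
    """Return True if any monitored component exceeds its threshold.

    `temps` maps a component name to a sensed temperature (as might be read
    from the sensors 196 via the sensor inputs 198); `thresholds` maps a
    component name to its threshold.
    """
    return any(temps[name] > thresholds[name] for name in thresholds)

def control_tick(temps, run_pump_cycle):
    # Relative policy: pump when the first component runs hotter than the
    # second, moving heated fluid away from the first component.
    thresholds = {"first_component": temps["second_component"]}
    monitored = {"first_component": temps["first_component"]}
    if should_pump(monitored, thresholds):
        run_pump_cycle()  # e.g., sequence electrode pairs as in operations 3002/3004
        return True
    return False

events = []
pumped = control_tick(
    {"first_component": 71.5, "second_component": 64.0},
    run_pump_cycle=lambda: events.append("pump"),
)
print(pumped, events)  # True ['pump']
```

A fixed absolute threshold per component could be substituted for the relative policy simply by populating `thresholds` with constants, and timer circuitry (paragraph [85]) could invoke `control_tick` periodically.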
[101] As noted above, although the thermal management systems and techniques disclosed herein may be particularly advantageous when used to thermally manage flexible IC packages, these systems and techniques may also be implemented to improve thermal management of conventional, rigid IC packages. Thus, any of the embodiments disclosed herein and described as applicable in a flexible IC package may also apply in a conventional, rigid IC package setting. Such a rigid IC package may include, for example, a rigid substrate material and/or a rigid overmold material.[102] Additionally, although the thermal management systems and techniques disclosed herein may be particularly advantageous when used to thermally manage components (or "component sections," as discussed above), these systems and techniques may be used to thermally manage any devices included in an IC package, such as a resistor, capacitor, transistor, inductor, radio, memory, processor, laser, light-emitting diode (LED), sensor, a memory gate, combinational or state logic, or other digital or analog component. 
A device thermally managed by the thermal management systems and techniques disclosed herein may be a packaged component (e.g., a surface mount, flip chip, ball grid array, land grid array, bumpless buildup layer, or other package) or an unpackaged component.[103] The following paragraphs provide examples of various ones of the embodiments disclosed herein.[104] Example 1 is a flexible integrated circuit (IC) package, including: flexible substrate material; a component disposed in the flexible substrate material; a channel disposed in the flexible substrate material forming a closed circuit, wherein a portion of the channel is proximate to the component; a plurality of electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the plurality of electrodes are coupled to an electrode controller to selectively cause two or more of the plurality of electrodes to generate an electric field; and an electrolytic fluid disposed in the channel.[105] Example 2 may include the subject matter of Example 1, wherein the flexible substrate material includes polyethylene terephthalate or polydimethylsiloxane.[106] Example 3 may include the subject matter of Example 1, and may further specify that: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component disposed in the flexible substrate material; and a second portion of the channel is proximate to the second component.[107] Example 4 may include the subject matter of Example 3, and may further specify that the first component is disposed in a first layer of the flexible substrate material, the second component is disposed in a second layer of the flexible substrate material, the first layer is different from the second layer, and adjacent layers of the flexible substrate material are separated by printed circuitry.[108] Example 5 may include the subject matter of Example 
4, and may further specify that the first layer and the second layer are spaced apart by one or more layers of the flexible substrate material. [109] Example 6 may include the subject matter of Example 4, and may further specify that the plurality of electrodes are disposed between the first layer and the second layer.[110] Example 7 may include the subject matter of Example 1, and may further specify that the component is disposed in a first layer of the flexible substrate material, the portion of the channel is disposed in a second layer of the flexible substrate material, and the first and second layers are adjacent layers of the flexible substrate material.[111] Example 8 may include the subject matter of Example 7, and may further specify that: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component disposed in the flexible substrate material; a second portion of the channel is proximate to the second component; and the second component is disposed in a third layer of the flexible substrate material, the second portion of the channel is disposed in a fourth layer of the flexible substrate material, and the third and fourth layers are adjacent layers of the flexible substrate material.[112] Example 9 may include the subject matter of Example 8, and may further specify that the second layer and the fourth layer are different layers of the flexible substrate material.[113] Example 10 may include the subject matter of Example 8, and may further specify that the second layer and the fourth layer are a same layer of the flexible substrate material.[114] Example 11 may include the subject matter of Example 1, and may further specify that the plurality of electrodes are disposed between layers of the flexible substrate material.[115] Example 12 may include the subject matter of Example 1, and may further specify that the channel includes a via extending between 
different layers of the flexible substrate material.[116] Example 13 may include the subject matter of Example 12, and may further specify that the via extends between a first layer of the flexible substrate material and a second layer of the flexible substrate material, and the first layer and second layer are spaced apart by one or more layers of the flexible substrate material.[117] Example 14 may include the subject matter of Example 1, and may further specify that: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component and a third component disposed in the flexible substrate material; a second portion of the channel is proximate to the second component and a third portion of the channel is proximate to the third component; and the first component is disposed in a first layer of the flexible substrate material, the second component is disposed in a second layer of the flexible substrate material, the third component is disposed in a third layer of the flexible substrate material, and the third layer is disposed between the first layer and the second layer.[118] Example 15 may include the subject matter of Example 14, and may further specify that the first portion of the channel is disposed between the first layer and the third layer, and the second portion of the channel is disposed between the third layer and the second layer. 
[119] Example 16 may include the subject matter of Example 1, and may further specify that: the component is a first component; the portion of the channel is a first portion of the channel; the flexible IC package further comprises a second component and a third component disposed in the flexible substrate material; a second portion of the channel is proximate to the second component and a third portion of the channel is proximate to the third component; and the first and second components are disposed in a first layer of the flexible substrate material, the third component is disposed in a second layer of the flexible substrate material, and the first and second layers are different layers of the flexible substrate material.[120] Example 17 may include the subject matter of Example 1, and may further specify that the component is a first component, the channel is a first channel, the plurality of electrodes is a first plurality of electrodes, the electrolytic fluid is a first electrolytic fluid, and the flexible IC package further includes: a second component disposed in the flexible substrate material; a second channel disposed in the flexible substrate material forming a closed circuit, and may further specify that a portion of the second channel is proximate to the second component; a second plurality of electrodes disposed in the flexible substrate material and positioned at locations proximate to the second channel; and a second electrolytic fluid disposed in the second channel.[121] Example 18 may include the subject matter of Example 1, and may further specify that the plurality of electrodes includes a first set of electrodes and a second set of electrodes, the component is disposed in a first layer of the flexible substrate material, and the first layer of the flexible substrate material is disposed between the first set of electrodes and the second set of electrodes.[122] Example 19 may include the subject matter of any of Examples 1-18, and may further 
specify that the portion of the channel has a serpentine structure.[123] Example 20 may include the subject matter of any of Examples 1-18, and may further specify that the electrolytic fluid includes electrolyte droplets in oil.[124] Example 21 may include the subject matter of any of Examples 1-18, and may further include the electrode controller.[125] Example 22 may include the subject matter of any of Examples 1-18, and may further specify that the electrode controller is to selectively cause two or more of the plurality of electrodes to generate the electric field based on one or more indicators of a temperature of the component.[126] Example 23 may include the subject matter of any of Examples 1-18, and may further specify that the electrode controller is to selectively cause two or more of the plurality of electrodes to generate the electric field to circulate the electrolytic fluid in the channel.[127] Example 24 may include the subject matter of any of Examples 1-18, and may further specify that the component is a die or a sensor.[128] Example 25 is a wearable integrated circuit (IC) device, including: a flexible integrated circuit (IC) package, including flexible substrate material, a component disposed in the flexible substrate material, a channel disposed in the flexible substrate material forming a closed circuit, wherein a portion of the channel is proximate to the component, a plurality of electrodes disposed in the flexible substrate material and positioned at locations proximate to the channel, wherein the plurality of electrodes are coupled to an electrode controller to selectively cause two or more of the plurality of electrodes to generate an electric field, and an electrolytic fluid disposed in the channel; and a wearable support structure coupled to the flexible IC package.[129] Example 26 may include the subject matter of Example 25, and may further specify that the wearable support structure comprises an adhesive backing.[130] Example 27 may 
include the subject matter of Example 25, and may further specify that the wearable support structure comprises a fabric.[131] Example 28 may include the subject matter of any of Examples 25-27, and may further specify that the component includes a processing device or a memory device.[132] Example 29 is a method of forming a flexible integrated circuit (IC) package, including: providing a flexible IC assembly including a flexible substrate material having disposed therein a component, a plurality of electrodes, and a channel, wherein the channel forms a closed circuit having a portion proximate to the component, and wherein the plurality of electrodes are positioned at locations proximate to the channel; providing an electrolytic fluid to the channel via an inlet of the flexible IC assembly; and after providing the electrolytic fluid, sealing the inlet.[133] Example 30 may include the subject matter of Example 29, and may further include, after sealing the inlet, coupling the flexible IC assembly to a wearable support structure.[134] Example 31 may include the subject matter of any of Examples 29-30, and may further specify that providing the flexible IC assembly includes printing one or more electrodes of the plurality of electrodes on one or more layers of the flexible substrate material.[135] Example 32 is a method of thermally managing a flexible integrated circuit (IC) package, including: causing, by an electrode controller, a first pair of electrodes to generate an electric field, wherein the first pair of electrodes is disposed in a flexible substrate material of the flexible IC package and positioned at locations proximate to a channel in the flexible substrate material, and wherein an electrolytic fluid is disposed in the channel; causing, by the electrode controller, a second pair of electrodes to generate the electric field to cause the movement of at least some of the electrolytic fluid within the channel, wherein the second pair of electrodes is 
disposed in the flexible substrate material of the flexible IC package and positioned at locations proximate to the channel in the flexible substrate material; wherein the channel forms a closed circuit, a component is disposed in the flexible substrate material, and the channel includes a portion proximate to the component.[136] Example 33 may include the subject matter of Example 32, and may further include, before causing the first pair of electrodes to generate the electric field or causing the second pair of electrodes to generate the electric field, determining, by the electrode controller, that a temperature of the component exceeds a threshold; wherein causing the first pair of electrodes to generate the electric field and causing the second pair of electrodes to generate the electric field are performed in response to the determination.[137] Example 34 may include the subject matter of Example 33, and may further specify that: the component is a first component; the portion of the channel is a first portion of the channel; a second component is disposed in the flexible substrate material; the channel includes a second portion proximate to the second component; and determining that a temperature of the first component exceeds a threshold comprises determining that the temperature of the first component exceeds a temperature of the second component.[138] Example 35 may include the subject matter of any of Examples 33-34, and may further specify that the first pair of electrodes and the second pair of electrodes share an electrode. |
Examples include techniques to mirror a command/address or interpret command/address logic at a memory device. A memory device located on a dual in-line memory module (DIMM) may include circuitry having logic capable of receiving a command/address signal and mirroring a command/address or interpreting command/address logic indicated in the command/address signal based on one or more strap pins of the memory device. |
1. An apparatus for operating a memory device, comprising: a circuit module for a first memory device on a first side of a dual in-line memory module (DIMM), the circuit module including logic, at least a portion of which includes hardware, to: receive a command/address signal indicating a first command/address to the first memory device; determine to mirror the first command/address indicated in the command/address signal based on a strap pin of the first memory device being connected to a power pin of the first memory device; and mirror the first command/address to the first memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a second memory device on a second side of the DIMM.
2. The apparatus of claim 1, the logic to mirror the first command/address to the first memory device including logic to swap a corresponding even-numbered command/address to the first memory device with the corresponding next higher odd-numbered command/address to the first memory device.
3. The apparatus of claim 1, the power pin comprising an output supply voltage (VDDQ) pin.
4. The apparatus of claim 1, the DIMM comprising a registered DIMM (RDIMM), low power DIMM (LPDIMM), load reduced DIMM (LRDIMM), fully buffered DIMM (FB-DIMM), unbuffered DIMM (UDIMM), or small outline DIMM (SODIMM).
5. The apparatus of claim 1, comprising the first memory device and the second memory device, the first memory device and the second memory device each comprising non-volatile memory or volatile memory, wherein the volatile memory includes dynamic random access memory (DRAM) and the non-volatile memory includes three-dimensional cross-point memory, memory using chalcogenide phase change materials, multi-threshold level NAND flash memory, NOR flash memory, single-level or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque MRAM (STT-MRAM).
6. A method for operating a memory device, comprising: receiving, by a circuit module at a target memory device on a first side of a dual in-line memory module (DIMM), a command/address signal indicative of a first command/address to the target memory device; determining to mirror the first command/address indicated in the command/address signal based on a strap pin of the target memory device being connected to a power pin of the target memory device; and mirroring the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM.
7. The method of claim 6, mirroring the first command/address to the target memory device including swapping a corresponding even-numbered command/address to the target memory device with the corresponding next higher odd-numbered command/address to the target memory device.
8. The method of claim 6, the power pin comprising an output supply voltage (VDDQ) pin.
9. 
An apparatus for operating a memory device, comprising: a circuit module for a memory device on a first side of a dual in-line memory module (DIMM), the circuit module including logic, at least a portion of which includes hardware, to: receive a command/address signal; determine, based on a strap pin of the memory device being connected to a power pin of the memory device, whether the command/address logic indicated by the command/address signal has been inverted; and interpret the command/address logic indicated by the command/address signal based on the determination.
10. The apparatus of claim 9, wherein the power pin is an output supply voltage (VDDQ) pin and the command/address logic indicated by the command/address signal is inverted by a register buffer circuit module of the DIMM.
11. The apparatus of claim 10, the DIMM comprising a registered DIMM (RDIMM), low power DIMM (LPDIMM), load reduced DIMM (LRDIMM), fully buffered DIMM (FB-DIMM), unbuffered DIMM (UDIMM), or small outline DIMM (SODIMM).
12. The apparatus of claim 9, comprising the memory device, the memory device to comprise non-volatile memory or volatile memory, the volatile memory including dynamic random access memory (DRAM), and the non-volatile memory including three-dimensional cross-point memory, memory using chalcogenide phase change materials, multi-threshold level NAND flash memory, NOR flash memory, single-level or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque MRAM (STT-MRAM).
13. A method for operating a memory device, comprising: receiving, by a circuit module at a target memory device of a dual in-line memory module (DIMM), a command/address signal; determining, based on a strap pin of the memory device being connected to a power pin of the memory device, whether the command/address logic indicated by the command/address signal has been inverted; and interpreting the command/address logic indicated by the command/address signal based on the determination.
14. The method of claim 13, wherein the power pin is an output supply voltage (VDDQ) pin and the command/address logic indicated by the command/address signal is inverted by a register buffer circuit module of the DIMM.
15. A system for operating a memory device, comprising: a dual in-line memory module (DIMM) including one or more first memory devices on a first side and one or more second memory devices on a second side; and a memory device from among the one or more first memory devices, the memory device having a first strap pin and including logic, at least a portion of which includes hardware, the logic to: receive a first command/address signal indicating a first command/address targeting the memory device; determine whether the first strap pin is connected to a power pin; and mirror the first command/address targeted to the memory device based on the determination such that the first command/address indicated in the first command/address signal is a mirror of a second command/address to a memory device from among the one or more second memory devices on the second side of the DIMM.
16. The system of claim 15, the logic to mirror the first command/address to the memory device from among the one or more first memory devices including logic to swap a corresponding even-numbered command/address to the memory device from among the one or more first memory devices with the corresponding next higher odd-numbered command/address to the memory device from among the one or more first memory devices.
17. The system of claim 15, the power pin comprising an output supply voltage (VDDQ) pin.
18. 
The system of claim 15, the memory device from among the one or more first memory devices having a second strap pin, and further comprising logic to: receive a second command/address signal; and interpret the command/address logic indicated by the second command/address signal based on whether the second strap pin is connected to a same or a different power pin than the power pin to which the first strap pin is connected, such that the command/address logic indicated by the second command/address signal is interpreted as being inverted, wherein the same or different power pin includes a same or different output supply voltage (VDDQ) pin.
19. The system of claim 18, the command/address logic indicated by the second command/address signal being inverted by a register buffer circuit module of the DIMM.
20. The system of claim 15, the DIMM comprising a registered DIMM (RDIMM), low power DIMM (LPDIMM), load reduced DIMM (LRDIMM), fully buffered DIMM (FB-DIMM), unbuffered DIMM (UDIMM), or small outline DIMM (SODIMM).
21. The system of claim 15, comprising the memory device, the memory device to comprise non-volatile memory or volatile memory, the volatile memory including dynamic random access memory (DRAM), and the non-volatile memory including three-dimensional cross-point memory, memory using chalcogenide phase change materials, multi-threshold level NAND flash memory, NOR flash memory, single-level or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque MRAM (STT-MRAM).
22. A computer-readable medium having stored thereon instructions which, when executed, cause a computing device to perform the method of any one of claims 6-8.
23. 
A computer-readable medium having stored thereon instructions which, when executed, cause a computing device to perform the method of any one of claims 13-14. |
Techniques for Mirroring Commands/Addresses or Interpreting Command/Address Logic in a Memory Device

Related Cases

This application claims the benefit under 35 U.S.C. §365(c) of U.S. Patent Application No. 15/266,991, filed September 15, 2016, entitled "Techniques for Mirroring Commands/Addresses or Interpreting Command/Address Logic in a Memory Device," which in turn claims the benefit of U.S. Provisional Application No. 62/304,212, filed March 5, 2016, entitled "Techniques for Mirroring Commands/Addresses or Interpreting Command/Address Logic in a Memory Device." The entire disclosures of these documents are incorporated herein by reference for all purposes.

Technical Field

The examples described herein generally relate to memory devices on dual in-line memory modules (DIMMs).

Background

Memory modules coupled with computing platforms or systems, such as those configured as servers, may include dual in-line memory modules (DIMMs). DIMMs can include various types of memory, including volatile or non-volatile types of memory. As memory technology has advanced to include memory cells with increasingly higher densities, the memory capacity of DIMMs has also increased considerably. Furthermore, advances in data rates for accessing data to be written to or read from memory included in a DIMM enable large amounts of data to flow between a requestor requiring access and the memory devices included in the DIMM. 
Higher data rates may result in an increased frequency of signals transmitted to/from the memory included in the DIMM.

Brief Description of the Drawings

Figure 1 shows an example system.
Figure 2 shows an example first part of a dual in-line memory module (DIMM).
Figure 3 shows an example second part of a DIMM.
Figure 4 shows an example pin diagram.
Figure 5 illustrates example memory device logic.
Figure 6 shows an example device.
Figure 7 illustrates an example first logic flow.
Figure 8 illustrates an example second logic flow.
Figure 9 shows an example storage medium.
Figure 10 illustrates an example computing platform.

Detailed Description

As contemplated by this disclosure, higher data rates for accessing data to be written to or read from memory or memory devices in a DIMM may result in an increased frequency of signals transmitted to/from the memory devices in the DIMM. Techniques may be implemented to improve signal integrity and save power, including command/address signal mirroring or inversion. In some examples, a memory bus that communicates data at increased frequencies may perform best when interconnecting stubs between memory devices on opposite sides of a DIMM are minimized or made as short as possible. Some existing DIMMs may use special "mirror" packages or tolerate long stubs and the associated sub-optimal signal routing. Other DIMMs can handle this without using different mirrored packages. Rather, these other DIMMs can perform mirroring of commands/addresses on those memory device pins that can be swapped without changing functionality. For example, a pin used purely for address bits could be swapped, whereas pins used for command bits may not be swapped. The same constraint applies to this type of exchange for the inversion of command/address signals. 
This can considerably limit the number of pins available for mirroring. Furthermore, in some examples of how current computing systems implement inversion through memory devices in DIMMs, the memory controller may use multiple command cycles during initialization. The first cycle can be issued normally, and the second cycle can issue a copy of the same command with the logic inverted. This can impose very complex requirements on the host memory controller to flip or invert bits. Figure 1 illustrates system 100. As shown in Figure 1, in some examples, system 100 includes host 110 coupled to DIMMs 120-1 through 120-n, where "n" is any positive integer with a value greater than 2. For these examples, DIMMs 120-1 through 120-n may be coupled to host 110 via one or more channels 140-1 through 140-n. As shown in FIG. 1, host 110 may include an operating system (OS) 114, one or more applications (App(s)) 116, and circuitry 112. Circuitry 112 may include one or more processing elements 111 (e.g., processors or processor cores) coupled to memory controller 113. Host 110 may include, but is not limited to, a personal computer, desktop computer, laptop computer, tablet computer, server, server array or server farm, web server, network server, Internet server, workstation, minicomputer, mainframe computer, supercomputer, network equipment, web equipment, distributed computing system, multi-processor system, processor-based system, or combinations thereof. In some examples, as shown in Figure 1, DIMMs 120-1 through 120-n may include corresponding memory dies or devices 122-1 through 122-n. Memory devices 122-1 through 122-n may include various types of volatile and/or non-volatile memory. Volatile memory may include, but is not limited to, random access memory (RAM), dynamic RAM (D-RAM), double data rate synchronous dynamic RAM (DDR SDRAM), static random access memory (SRAM), thyristor RAM (T-RAM), or zero-capacitor RAM (Z-RAM). 
Non-volatile memory may include, but is not limited to, non-volatile types of memory such as byte- or block-addressable 3-dimensional (3-D) cross-point memory. These block-addressable or byte-addressable non-volatile types of memory for memory devices 122-1 through 122-n may include, but are not limited to, memory using chalcogenide phase change materials (e.g., chalcogenide glass), multi-threshold level NAND flash memory, NOR flash memory, single-level or multi-level phase change memory (PCM), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque MRAM (STT-MRAM), or a combination of any of the above, or other non-volatile memory types. According to some examples, memory devices 122-1 through 122-n, including volatile and/or non-volatile types of memory, may be based on a variety of memory technologies, such as new technologies being developed in association with DIMMs, including but not limited to DDR5 (DDR version 5, currently under discussion by JEDEC), LPDDR5 (LPDDR version 5, currently under discussion by JEDEC), HBM2 (HBM version 2, currently under discussion by JEDEC), and/or other new technologies based on derivatives or extensions of such specifications. 
Memory devices 122-1 through 122-n may also be configured according to other memory technologies such as, but not limited to, DDR4 (Double Data Rate (DDR) version 4, initial specification published by JEDEC in September 2012), LPDDR4 (Low Power Double Data Rate (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory DRAM, JESD235, originally published by JEDEC in October 2013), and/or other technologies based on derivatives or extensions of these specifications. According to some examples, DIMMs 120-1 through 120-n may be designed to function as registered DIMMs (RDIMMs), load reduced DIMMs (LRDIMMs), low power DIMMs (LPDIMMs), fully buffered DIMMs (FB-DIMMs), unbuffered DIMMs (UDIMMs), or small outline DIMMs (SODIMMs). Examples are not limited to just these DIMM designs. In some examples, memory devices 122-1 through 122-n in DIMMs 120-1 through 120-n may include all or a combination of types of volatile or non-volatile memory. For example, memory devices 122-1 on DIMM 120-1 may include volatile memory (e.g., DRAM) on the front or first side, and may include non-volatile memory (e.g., 3-D cross-point memory) on the back or second side. In other examples, a hybrid DIMM may include a combination of non-volatile and volatile types of memory for memory devices 122-1 on either side of DIMM 120-1. In other examples, all memory devices 122-1 may be volatile types of memory or non-volatile types of memory. In some examples, multiple channels may be coupled with memory devices maintained on a DIMM, and in some examples, separate channels may be routed to different non-volatile/volatile types and/or groups of memory devices, for example, a first channel to a memory device including non-volatile memory and a second channel to a memory device including volatile memory. 
In other examples, a first channel may be routed to memory devices on a first side of the DIMM and a second channel may be routed to memory devices on a second side of the DIMM. Examples are not limited to the above examples of how multiple channels may be routed to memory devices included on a single DIMM. Figure 2 illustrates an example DIMM section 200. In some examples, DIMM section 200 illustrates how a dual-sided memory module assembly may have memory devices or dies 201 and 202 on opposite sides of a printed circuit board (PCB) 203 and share the same common address bus for command/address buses A and B. For these examples, pins 212 and 214 on memory device 201 become mirror images of pins 222 and 224 on memory device 202 for common command/address buses A and B. In some examples, mirrored or identical connections between pins on either side of PCB 203 result in stub lines (depicted by letters A and B in Figure 2) that may consume PCB routing resources and may impact bus frequency scaling. As described further below, the techniques used to implement mirroring can reduce the length of these stub lines. However, DIMM section 200 shows an example in which mirroring is not implemented. Figure 3 illustrates an example DIMM section 300. In some examples, the command/address signals may be swapped at the target memory device such that the command/address signals may be consistent between memory devices on opposite sides of PCB 303. Therefore, common vias through PCB 303 can be shared, as shown in Figure 3. Now, a command/address signal, such as command/address A, may be connected to pin 322 of memory device 320, and may also be connected to pin 312 of memory device 310 to form a shortest path or stub between these memory devices (which is routed through PCB 303). As described further below, a strap pin may be utilized on a given memory device to indicate that a given command/address pin has been mirrored. 
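The routing benefit of in-device mirroring can be illustrated with a toy model. Everything here (coordinates, the unit of PCB thickness, and the function name) is an assumption for illustration, not taken from the figures:

```python
# Toy model contrasting the Figure 2 and Figure 3 arrangements: without
# mirroring, a signal must detour laterally to reach the mirror-image pin
# on the far side of the board; with in-device mirroring, both devices
# tap the same via and the stub is just the through-PCB path.

PCB_THICKNESS = 1.0  # assumed unit

def stub_length(front_pin_x: float, back_pin_x: float) -> float:
    # Lateral detour on the back side plus the via through the board.
    return abs(front_pin_x - back_pin_x) + PCB_THICKNESS

# Figure 2 style: the back-side device is a mirror image, so the signal's
# pin sits at a different lateral position than the front-side pin.
no_mirroring = stub_length(front_pin_x=0.0, back_pin_x=4.0)

# Figure 3 style: the back-side device swaps CA pins internally, so the
# same via feeds both devices at the same lateral position.
with_mirroring = stub_length(front_pin_x=0.0, back_pin_x=0.0)

assert no_mirroring == 5.0
assert with_mirroring == 1.0  # shortest possible: straight through the PCB
```

This is the motivation for the shared-via arrangement of Figure 3: when the target device can swap command/address pins internally, the stub collapses to the through-board via.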
For example, a first command/address to memory device 320, indicated in the command/address signal received via command/address A at pin 322, may be a mirror of a second command/address to memory device 310 at pin 312, or vice versa.

According to some examples, the DIMM may use circuitry or logic in a register buffer (not shown) to generate additional copies of the command/address bus to reduce bus loading. For these examples, logic and/or circuitry in the register buffer may cause command/address signals to be propagated from the register buffer over multiple bus segments routed to the memory devices on the DIMM. The propagated command/address signals may indicate corresponding command/address logic having logic levels that are inverted relative to each other. Inversion of the logic levels indicated in these propagated command/address signals can improve power efficiency and signal integrity. However, circuitry and/or logic in the memory device and/or in the register buffer needs to know that the command/address logic indicated in a command/address signal has been inverted. In some examples, another strap pin or bit may be utilized so that logic in the memory device and/or in the register buffer may un-invert the command/address logic indicated in the command/address signal for correct command/address logic interpretation.

An example pin diagram 400 is shown in FIG. 4. In some examples, pin diagram 400 may be used for a memory device having DRAM included on a DIMM.
For these examples, the strap pins indicated in pin diagram 400 at boxes F2 (MIRROR) and G2 (CAI) may indicate whether the memory device should mirror the command/address indicated in a command/address signal and/or should interpret the command/address logic indicated in a received command/address signal as being inverted.

According to some examples, the MIRROR pin (F2) of a target memory device designed according to pin diagram 400 may be connected to a power pin, such as an output drain supply voltage (VDDQ) pin (e.g., H1). For these examples, the target memory device may internally swap an even-numbered command/address (CA) with the corresponding next higher odd-numbered CA in order to mirror a given CA to the target memory device. Example swap pairs for mirroring a given CA according to pin diagram 400 may include swapping CA2 with CA3 (not CA1), swapping CA4 with CA5 (not CA3), swapping CA6 with CA7 (not CA5), and so on. In some examples, the MIRROR pin may be tied or connected to a ground pin, such as a VSSQ pin (e.g., G1), if CA swapping is not needed or required.

In some examples, a memory device designed using a pin diagram such as pin diagram 400 may, when the CAI pin (G2) is connected to a power pin, internally invert the command/address logic levels indicated in a received command/address signal (e.g., routed from the register buffer). According to some examples, if the command/address logic is not to be interpreted as inverted, the CAI pin may be connected or tied to a ground pin, such as a VSSQ pin (e.g., H1). Two independent strap pins for MIRROR and CAI allow four different combinations: [No Mirror, No Invert], [No Mirror, Invert], [Mirror, No Invert], or [Mirror, Invert].

Figure 5 illustrates example memory device logic 500.
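The even/odd swap described above can be sketched as follows (a minimal illustration, not from the source; the function name and dict-based CA representation are hypothetical):

```python
def mirror_ca(ca_bits):
    """Mirror a command/address by swapping each even-numbered CA from
    CA2 upward with the next higher odd-numbered CA, as described for a
    MIRROR strap pin tied to VDDQ. CA0 and CA1 are not swapped.

    ca_bits: dict mapping CA index -> logic level (0 or 1).
    """
    mirrored = dict(ca_bits)
    for even in range(2, max(ca_bits) + 1, 2):
        odd = even + 1
        if even in ca_bits and odd in ca_bits:
            # e.g., CA2 takes CA3's level and CA3 takes CA2's level
            mirrored[even], mirrored[odd] = ca_bits[odd], ca_bits[even]
    return mirrored
```

With the MIRROR pin tied to VSSQ instead, the swap would simply be skipped and the received CA levels passed through unchanged.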
In some examples, as shown in Figure 5, one or both of strap pin 501 for MIRROR or strap pin 502 for CAI may be connected to a power/VDDQ pin (producing a 1) or to a ground/VSSQ pin (producing a 0) to activate the circuitry of memory device logic 500. As shown in FIG. 5, if a logic 1 is produced from strap pin 501, a memory device including memory device logic 500 can, via use of multiplexers 530, mirror the command/address signals (for commands/addresses CA0 through CA13) received through CMD/ADD pins 510. Furthermore, if a logic 1 is produced from strap pin 502, a memory device including memory device logic 500 can, via use of XOR gates 520, invert the command/address logic indicated in the command/address signals (for CA0 through CA13) received through CMD/ADD pins 510.

Figure 6 shows an example block diagram of device 600. Although device 600 shown in Figure 6 has a limited number of elements in a certain topology, it can be appreciated that device 600 may include more or fewer elements in alternative topologies as desired for a given implementation.

Device 600 may be supported by circuitry 620 maintained or located at a memory device on a DIMM coupled to a host via one or more channels. Circuitry 620 may be arranged to execute one or more software- or firmware-implemented components or logic 622-a. Notably, "a" and "b" and "c" and similar designators as used herein are intended to be variables representing any positive integer. Thus, for example, if an implementation sets a value of a = 3, the complete set of software or firmware for components or logic 622-a may include components or logic 622-1, 622-2, and 622-3. The examples presented are not limited in this context, and the different variables used throughout may represent the same or different integer values.
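Assuming the behavior shown in Figure 5, the CAI (XOR) and MIRROR (multiplexer) stages might be modeled as below. This is a hedged sketch, not the actual circuit: the function name, the ordering of the two stages, and the list-based CA representation are assumptions.

```python
def device_logic_500(ca_in, mirror_strap, cai_strap):
    """Model of memory device logic 500 for CA0..CA13.

    cai_strap == 1: un-invert each received CA bit (XOR gates 520).
    mirror_strap == 1: swap even CAs from CA2 up with the next higher
    odd CA (multiplexers 530).
    """
    # XOR stage: a strap level of 1 flips every bit back to its
    # un-inverted value; a 0 passes the bits through unchanged.
    ca = [bit ^ cai_strap for bit in ca_in]
    # Multiplexer stage: select swapped or pass-through routing.
    if mirror_strap:
        for even in range(2, len(ca) - 1, 2):
            ca[even], ca[even + 1] = ca[even + 1], ca[even]
    return ca
```

Because the two straps are independent, all four combinations ([No Mirror, No Invert] through [Mirror, Invert]) fall out of the same two-stage pipeline.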
Furthermore, these "components" or "logic" may be software/firmware stored in computer-readable media, and although the components are shown in Figure 6 as discrete boxes, this does not limit these components to storage in distinct computer-readable media components (e.g., separate memories, etc.).

According to some examples, circuitry 620 may include a processor or processor circuitry. The processor or processor circuitry may be any of various commercially available processors, including without limitation AMD® Athlon®, Duron®, and Opteron® processors; ARM® application, embedded, and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Atom®, Celeron®, Core (2) Duo®, Core i3, Core i5, Core i7, Itanium®, Pentium®, Xeon®, Xeon Phi®, and XScale® processors; and similar processors. According to some examples, circuitry 620 may also be an application specific integrated circuit (ASIC), and at least some components or logic 622-a may be implemented as hardware elements of the ASIC.

According to some examples, device 600 may include mirror logic 622-1. Mirror logic 622-1 may be executed by circuitry 620 to receive a first command/address signal indicating a first command/address to a target memory device that may include device 600. The target memory device may be located on a first side of a DIMM. The command/address signal to be mirrored may be included in CMD/ADD 605. Mirror logic 622-1 may then mirror the first command/address such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a memory device on a second side of the DIMM. The mirrored command/address may be included in mirrored CMD/ADD 630.

In some examples, device 600 may also include inversion logic 622-2. Inversion logic 622-2 may be executed by circuitry 620 to receive command/address signals at a memory device including device 600.
Inversion logic 622-2 may determine, based on a strap pin of the memory device, whether the command/address logic indicated by the command/address signal has been inverted and then interpret the command/address logic indicated by the command/address signal based on that determination. Inverted command/address logic may be included in CMD/ADD signal 610, and interpreted command/address logic may be included in interpreted CMD/ADD logic 635.

Figure 7 illustrates an example logic flow 700. As shown in Figure 7, the first logic flow includes logic flow 700. Logic flow 700 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as device 600. More specifically, logic flow 700 may be implemented by mirror logic 622-1.

According to some examples, at block 702, logic flow 700 may receive a command/address signal indicating a first command/address to a target memory device on a first side of a DIMM. For these examples, mirror logic 622-1 may receive the command/address signal.

In some examples, at block 704, logic flow 700 may determine, based on a strap pin of the target memory device, that the first command/address indicated in the command/address signal is to be mirrored. For these examples, mirror logic 622-1 may make this determination.

According to some examples, at block 706, logic flow 700 may mirror the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM. For these examples, mirror logic 622-1 may mirror the first command/address to the target memory device.

Figure 8 illustrates an example logic flow 800. As shown in Figure 8, the first logic flow includes logic flow 800. Logic flow 800 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as device 600.
More specifically, logic flow 800 may be implemented by inversion logic 622-2.

According to some examples, at block 802, logic flow 800 may receive a command/address signal at a memory device on a DIMM. For these examples, inversion logic 622-2 may receive the command/address signal.

In some examples, at block 804, logic flow 800 may determine, based on a strap pin of the memory device, whether the command/address logic indicated by the command/address signal has been inverted. For these examples, inversion logic 622-2 may determine whether the command/address logic has been inverted.

According to some examples, at block 806, logic flow 800 may interpret the command/address logic indicated by the command/address signal based on the determination of whether the command/address logic indicated in the command/address signal has been inverted. For these examples, inversion logic 622-2 may interpret the command/address logic based on the determination.

Figure 9 illustrates an example storage medium 900. As shown in FIG. 9, the first storage medium includes storage medium 900. Storage medium 900 may comprise an article of manufacture. In some examples, storage medium 900 may include any non-transitory computer-readable medium or machine-readable medium, such as an optical, magnetic, or semiconductor storage device. Storage medium 900 may store various types of computer-executable instructions, such as instructions to implement logic flow 700 or 800. Examples of a computer-readable or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or rewriteable memory, and so forth. Examples of computer-executable instructions may include any suitable types of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like.
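The three blocks of logic flow 800 can be sketched as a single function (illustrative only; the helper name and boolean representation of the strap pin are assumptions, not from the source):

```python
def logic_flow_800(ca_signal, cai_tied_to_power):
    """Receive a command/address signal (block 802), decide from the
    strap pin whether its logic was inverted (block 804), and interpret
    it accordingly (block 806)."""
    received = list(ca_signal)       # block 802: receive the signal
    inverted = cai_tied_to_power     # block 804: power pin -> inverted
    if inverted:                     # block 806: un-invert if needed
        return [bit ^ 1 for bit in received]
    return received
```

Logic flow 700 would differ only in block 806, applying the even/odd CA swap instead of the bitwise un-inversion.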
Examples are not limited in this context.

Figure 10 illustrates an example computing platform 1000. In some examples, as shown in Figure 10, computing platform 1000 may include a memory system 1030, a processing component 1040, other platform components 1050, or a communications interface 1060. According to some examples, computing platform 1000 may be implemented in a computing device.

According to some examples, memory system 1030 may include a controller 1032 and one or more memory devices 1034. For these examples, logic and/or features resident at or located with controller 1032 may execute at least some processing operations or logic for device 600 and may include storage media that includes storage medium 900. Also, one or more memory devices 1034 may include similar types of volatile or non-volatile memory (not shown) as described above for memory devices 122, 201, 202, 310, or 320 shown in FIGS. 1-3. In some examples, controller 1032 may be part of a same die with one or more memory devices 1034. In other examples, controller 1032 and the one or more memory devices 1034 may be located on a same die or integrated circuit as a processor (e.g., included in processing component 1040). In yet other examples, controller 1032 may be in a separate die or integrated circuit coupled with the one or more memory devices 1034.

According to some examples, processing component 1040 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices (PLDs), digital signal processors (DSPs), FPGAs/programmable logic, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given example.

In some examples, other platform components 1050 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia I/O components (e.g., digital displays), power supplies, and so forth. Examples of memory units associated with either other platform components 1050 or storage system 1030 may include without limitation various types of computer-readable and machine-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), RAM, DRAM, DDR DRAM, synchronous DRAM (SDRAM), DDR SDRAM, SRAM, programmable ROM (PROM), EPROM, EEPROM, flash memory, ferroelectric memory, SONOS memory, polymer memory (e.g.,
ferroelectric polymer memory), nanowire memory, FeTRAM or FeRAM, ovonic memory, phase change memory, memristors, STT-MRAM, magnetic or optical cards, and any other type of storage media suitable for storing information.

In some examples, communications interface 1060 may include logic and/or features to support a communications interface. For these examples, communications interface 1060 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants), such as those associated with the SMBus specification, the PCIe specification, the NVMe specification, the SATA specification, the SAS specification, or the USB specification. Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the IEEE. For example, one such Ethernet standard may include IEEE 802.3-2012, "Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications," published in December 2012 (hereinafter "IEEE 802.3").

Computing platform 1000 may be part of a computing device that may be, for example, a user device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet, a smart phone, an embedded electronic device, a game console, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system,
a processor-based system, or a combination thereof. Accordingly, functions and/or specific configurations of computing platform 1000 described herein may be included or omitted in various embodiments of computing platform 1000, as suitably desired.

The components and features of computing platform 1000 may be implemented using any combination of discrete circuitry, ASICs, logic gates, and/or single chip architectures. Further, the features of computing platform 1000 may be implemented using microcontrollers, programmable logic arrays, and/or microprocessors, or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware, and/or software elements may be collectively or individually referred to herein as "logic," "circuit," or "circuitry."

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium that represents various logic within a processor, which, when read by a machine, computing device, or system, cause the machine, computing device, or system to fabricate logic to perform the techniques described herein. Such representations may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given example.

Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or rewriteable memory, and so forth.
In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that, when executed by a machine, computing device, or system, cause the machine, computing device, or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner, or syntax for instructing a machine, computing device, or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other.
For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.

The following examples pertain to additional examples of technologies disclosed herein.

Example 1. An example apparatus may include circuitry for a memory device on a first side of a DIMM. The circuitry may include logic, at least a portion of which is hardware, that may receive a command/address signal indicating a first command/address to the target memory device. The logic may also determine, based on a strap pin of the memory device, that the first command/address indicated in the command/address signal is to be mirrored. The logic may also mirror the first command/address to the memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a memory device on a second side of the DIMM.

Example 2. The apparatus of Example 1, the logic to mirror the first command/address to the target memory device may include logic to swap a corresponding even-numbered command/address to the target memory device with the corresponding next higher odd-numbered command/address to the target memory device.

Example 3. The apparatus of Example 1, the logic to determine, based on the strap pin, that the first command/address indicated in the command/address signal is to be the mirror of the second command/address may include logic to determine that the strap pin is connected to a power pin of the target memory device.

Example 4. The apparatus of Example 3, the power pin may include a VDDQ pin.

Example 5. The apparatus of Example 1, the DIMM may be an RDIMM, LPDIMM, LRDIMM, FB-DIMM, UDIMM, or SODIMM.

Example 6.
The apparatus of Example 1, the memory device may include non-volatile memory or volatile memory.

Example 7. The apparatus of Example 6, the volatile memory may be DRAM.

Example 8. The apparatus of Example 6, the non-volatile memory may be three-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM that incorporates memristor technology, or STT-MRAM.

Example 9. An example method may include receiving, by circuitry at a target memory device on a first side of a DIMM, a command/address signal indicating a first command/address to the target memory device. The method may also include determining, based on a strap pin of the target memory device, that the first command/address indicated in the command/address signal is to be mirrored. The method may also include mirroring the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM.

Example 10. The method of Example 9, mirroring the first command/address to the target memory device may include swapping a corresponding even-numbered command/address to the target memory device with the corresponding next higher odd-numbered command/address to the target memory device.

Example 11. The method of Example 9, determining, based on the strap pin, that the first command/address indicated in the command/address signal is to be the mirror of the second command/address may include determining that the strap pin is connected to a power pin of the target memory device.

Example 12. The method of Example 11, the power pin may be a VDDQ pin.

Example 13. The method of Example 9, the DIMM may be an RDIMM, LPDIMM, LRDIMM, FB-DIMM, UDIMM, or SODIMM.

Example 14.
The method of Example 9, the memory device may include non-volatile memory or volatile memory.

Example 15. The method of Example 14, the volatile memory may be DRAM.

Example 16. The method of Example 14, the non-volatile memory may be three-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM that incorporates memristor technology, or STT-MRAM.

Example 17. An example at least one machine-readable medium may include a plurality of instructions that, in response to being executed by a system, may cause the system to carry out a method according to any one of Examples 9 to 16.

Example 18. An example apparatus may include means for performing the methods of any one of Examples 9 to 16.

Example 19. An example apparatus may include circuitry for a memory device on a first side of a DIMM, the circuitry including logic, at least a portion of which may be hardware, that may receive a command/address signal. The logic may also determine, based on a strap pin of the memory device, whether command/address logic indicated by the command/address signal has been inverted. The logic may also interpret the command/address logic indicated by the command/address signal based on the determination.

Example 20. The apparatus of Example 19, the logic may determine, based on the strap pin being connected to a power pin of the memory device, that the command/address logic indicated by the command/address signal has been inverted.

Example 21. The apparatus of Example 20, the power pin may be a VDDQ pin.

Example 22. The apparatus of Example 19, the command/address logic indicated by the command/address signal may have been inverted by circuitry of a register buffer of the DIMM.

Example 23. The apparatus of Example 19, the DIMM may be an RDIMM, LPDIMM, LRDIMM, FB-DIMM, UDIMM, or SODIMM.

Example 24. The apparatus of Example 19, the memory device may include non-volatile memory or volatile memory.

Example 25.
The apparatus of Example 24, the volatile memory may be DRAM.

Example 26. The apparatus of Example 24, the non-volatile memory may be three-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM that incorporates memristor technology, or STT-MRAM.

Example 27. An example method may include receiving, by a target memory device on a DIMM, a command/address signal. The method may also include determining, based on a strap pin of the memory device, whether command/address logic indicated by the command/address signal has been inverted. The method may also include interpreting the command/address logic indicated by the command/address signal based on the determination.

Example 28. The method of Example 27 may also include determining, based on the strap pin being connected to a power pin of the target memory device, that the command/address logic indicated by the command/address signal has been inverted.

Example 29. The method of Example 28, the power pin may be a VDDQ pin.

Example 30. The method of Example 27, the command/address logic indicated by the command/address signal may have been inverted by circuitry of a register buffer of the DIMM.

Example 31. The method of Example 27, the DIMM may be an RDIMM, LPDIMM, LRDIMM, FB-DIMM, UDIMM, or SODIMM.

Example 32. The method of Example 27, the memory device may include non-volatile memory or volatile memory.

Example 33. The method of Example 32, the volatile memory may be DRAM.

Example 34.
The method of Example 32, the non-volatile memory may be three-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM that incorporates memristor technology, or STT-MRAM.

Example 35. An example at least one machine-readable medium may include a plurality of instructions that, in response to being executed by a system, may cause the system to carry out the method of any one of Examples 27 to 34.

Example 36. An example apparatus may include means for performing the methods of any one of Examples 27 to 34.

Example 37. An example system may include a DIMM including one or more first memory devices on a first side and one or more second memory devices on a second side. The system may also include a memory device from among the one or more first memory devices, the memory device having a first strap pin and including logic, at least a portion of which may be hardware. For these examples, the logic may receive a first command/address signal indicating a first command/address targeted to the memory device. The logic may also determine whether the first strap pin is connected to a power pin. The logic may also mirror, based on the determination, the first command/address targeted to the memory device such that the first command/address indicated in the first command/address signal is a mirror of a second command/address to a memory device from among the one or more second memory devices on the second side of the DIMM.

Example 38. The system of Example 37, the logic to mirror the first command/address to the memory device from among the one or more first memory devices may include logic to swap a corresponding even-numbered command/address to the memory device from among the one or more first memory devices with the corresponding next higher odd-numbered command/address to the memory device from among the one or more first memory devices.

Example 39. The system of Example 37, the power pin may be a VDDQ pin.

Example 40. The system of Example 37, the memory device from among the one or more first memory devices may have a second strap pin. For these examples, the memory device may also include logic that may receive a second command/address signal and determine whether the second strap pin is connected to a power pin that is the same as or different from the power pin to which the first strap pin is connected, in order to interpret the command/address logic indicated by the second command/address signal such that the command/address logic indicated by the second command/address signal is interpreted as being inverted.

Example 41. The system of Example 40, the power pin that is the same as or different from the power pin to which the first strap pin is connected may be a same or different VDDQ pin.

Example 42. The system of Example 40, the command/address logic indicated by the second command/address signal may have been inverted by circuitry of a register buffer of the DIMM.

Example 43. The system of Example 37, the DIMM may be an RDIMM, LPDIMM, LRDIMM, FB-DIMM, UDIMM, or SODIMM.

Example 44. The system of Example 37, the memory device may include non-volatile memory or volatile memory.

Example 45. The system of Example 44, the volatile memory may be DRAM.

Example 46.
The system of Example 44, the non-volatile memory may be a three-dimensional cross-point memory, a memory using chalcogenide phase change materials, a multi-threshold level NAND flash memory, a NOR flash memory, a single-level or multi-level PCM, resistive memory, Ovonic memory, nanowire memory, FeTRAM, MRAM memory combined with memristor technology, or STT-MRAM.

It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the examples. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each example. Rather, as the following examples reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus, the following examples are hereby incorporated into the Detailed Description, with each example standing on its own as a separate example. In the appended examples, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," "third," and so forth are used merely as labels and are not intended to impose numerical requirements on their objects.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended examples is not necessarily limited to the specific features or acts described above.
Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. |
A write request is determined to comprise at least a partial translation unit. A size of the partial translation unit is smaller than a size of a predefined translation unit. A first entry in a translation map is identified. The translation map maps a plurality of translation units to a plurality of physical blocks. The first entry identifies a first physical block corresponding to the predefined translation unit. A second entry in the translation map is created. The second entry identifies a second physical block. An association between the first entry and the second entry is created, such that the second entry corresponds to the predefined translation unit. A write operation is performed to write a set of data corresponding to the partial translation unit to the second physical block. |
1. A system comprising: a memory device; and a processing device operatively coupled to the memory device to perform operations comprising: determining that a write request includes at least a partial translation unit, wherein a size of the partial translation unit is less than a size of a predefined translation unit; identifying a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; creating a second entry in the translation map, wherein the second entry identifies a second physical block; creating an association between the first entry and the second entry such that the second entry corresponds to the predefined translation unit; and performing a write operation to write a data set corresponding to the partial translation unit to the second physical block. 2. The system of claim 1, wherein the predefined translation unit includes a predefined number of logical pages and represents a base granularity of data managed by the memory device. 3. The system of claim 1, wherein determining that the write request includes at least a partial translation unit further comprises: determining that a starting logical address indicated in the write request does not correspond to a starting address of the predefined translation unit. 4.
The system of claim 1, wherein the partial translation unit is a last element of a set of translation units specified by the write request. 5. The system of claim 1, wherein the processing device is further configured to perform operations comprising: determining that the first physical block includes existing valid data. 6. The system of claim 1, wherein the processing device is further configured to perform operations comprising: designating a first portion of the first entry as having valid data; designating a second portion of the first entry as having invalid data; designating a first portion of the second entry as having valid data; and designating a second portion of the second entry as having invalid data. 7. The system of claim 1, wherein the processing device is further configured to perform operations comprising: assigning the first physical block and the second physical block priority for garbage collection. 8. A method comprising: determining that a write request includes at least a partial translation unit, wherein a starting logical address indicated in the write request does not correspond to a starting address of a predefined translation unit; identifying a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; associating a second entry in the translation map with the first entry, wherein the second entry identifies a second physical block; and performing, by a processing device, a write operation to write a data set corresponding to the partial translation unit to the second physical block. 9.
The method of claim 8, wherein the predefined translation unit includes a predefined number of logical pages and represents a base granularity of data managed by a memory device coupled to the processing device. 10. The method of claim 8, further comprising: creating, by the processing device, the second entry in the translation map prior to associating the second entry with the first entry. 11. The method of claim 8, wherein the second entry corresponds to the predefined translation unit. 12. The method of claim 8, further comprising: designating a first portion of the first entry as having valid data; designating a second portion of the first entry as having invalid data; designating a first portion of the second entry as having valid data; and designating a second portion of the second entry as having invalid data. 13. The method of claim 8, further comprising: determining that the first physical block includes existing valid data. 14. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: determine that a write request includes at least a partial translation unit, wherein a size of the partial translation unit is less than a size of a predefined translation unit, and the partial translation unit is a last element of a set of translation units specified by the write request; identify a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; associate a second entry in the translation map with the first entry, wherein the second entry identifies a second physical block; and perform a write operation to write a data set corresponding to the partial translation unit to the second physical block. 15.
The non-transitory computer-readable storage medium of claim 14, wherein the predefined translation unit includes a predefined number of logical pages and represents a base granularity of data managed by a memory device coupled to the processing device. 16. The non-transitory computer-readable storage medium of claim 14, wherein the processing device is further to: create the second entry in the translation map before associating the second entry with the first entry. 17. The non-transitory computer-readable storage medium of claim 14, wherein the second entry corresponds to the predefined translation unit. 18. The non-transitory computer-readable storage medium of claim 14, wherein the first physical block includes existing valid data. 19. The non-transitory computer-readable storage medium of claim 14, wherein the processing device is further to: designate a first portion of the first entry as having valid data; designate a second portion of the first entry as having invalid data; designate a first portion of the second entry as having valid data; and designate a second portion of the second entry as having invalid data. 20. The non-transitory computer-readable storage medium of claim 14, wherein the processing device is further to: determine that the first physical block includes existing valid data.
Write Request with Partial Translation Unit

TECHNICAL FIELD

Embodiments of the present disclosure relate generally to memory subsystems, and more particularly, to handling write requests with partial translation units in a memory subsystem.

BACKGROUND

A memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at and retrieve data from the memory devices.

SUMMARY OF THE INVENTION

One embodiment of the present disclosure provides a system comprising: a memory device; and a processing device operatively coupled to the memory device to perform operations comprising: determining that a write request includes at least a partial translation unit, wherein a size of the partial translation unit is less than a size of a predefined translation unit; identifying a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; creating a second entry in the translation map, wherein the second entry identifies a second physical block; creating an association between the first entry and the second entry such that the second entry corresponds to the predefined translation unit; and performing a write operation to write a data set corresponding to the partial translation unit to the second physical block.

Another embodiment of the present disclosure provides a method comprising: determining that a write request includes at least a partial translation unit, wherein a starting logical address indicated in the write request does not correspond to a starting address of a predefined translation unit; identifying a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; associating a second entry in the translation map with the first entry, wherein the second entry identifies a second physical block; and performing, by a processing device, a write operation to write a data set corresponding to the partial translation unit to the second physical block.

Another embodiment of the present disclosure provides a non-transitory computer-readable storage medium including instructions that, when executed by a processing device, cause the processing device to: determine that a write request includes at least a partial translation unit, wherein a size of the partial translation unit is less than a size of a predefined translation unit, and the partial translation unit is a last element of a set of translation units specified by the write request; identify a first entry in a translation map that maps a plurality of translation units to a plurality of physical blocks, wherein the first entry identifies a first physical block corresponding to the predefined translation unit; associate a second entry in the translation map with the first entry, wherein the second entry identifies a second physical block; and perform a write operation to write a data set corresponding to the partial translation unit to the second physical block.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will be more fully understood from the embodiments given below and from the accompanying drawings of various embodiments of the present disclosure. However, the drawings should not be construed as limiting the present disclosure to specific embodiments, but are for explanation and understanding only.

FIG.
1 illustrates an example computing system including a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 2 depicts examples of full and partial translation units in a write request in accordance with some embodiments of the present disclosure.

FIG. 3 depicts an example of handling write requests with partial translation units in accordance with some embodiments of the present disclosure.

FIG. 4 is a flowchart of an example method to handle write requests with partial translation units in accordance with some embodiments of the present disclosure.

FIG. 5 is a flowchart of an example method for executing a write request including a partial translation unit in accordance with some embodiments of the present disclosure.

FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure are directed to handling write requests with partial translation units in a memory subsystem. The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more memory components, such as memory devices that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

The memory subsystem may include non-volatile memory devices that can store data from the host system. One example of a non-volatile memory device is a NAND memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. A memory device may include a set of physical pages for storing binary data bits corresponding to data received from the host system. A physical page is a group of memory cells that store bits of data.
For some types of memory devices (e.g., NAND), a physical page is the smallest unit of storage ("write unit") that can be written. Physical pages can be grouped together to form physical blocks. For some types of memory devices (e.g., NAND), a physical block is the smallest unit of storage ("erase unit") that can be erased. Physical pages within a block cannot be individually erased. If a page needs to be overwritten, the page must be erased before it can be written to again, and the erase operation can only be performed on an entire block, even if only a single page of data needs to be erased.

The host system can use a logical address space to access the memory devices. A logical address space may identify groups of logical units, such as logical blocks. For some types of memory devices (e.g., NAND), a logical block is the smallest unit of erase. For example, the size of data in a logical block may be 512 bytes, 4096 bytes (4 KB), etc., depending on the memory device specification. In some instances, a logical block may be a set of logical pages. A logical page is an abstraction of a physical page. A memory subsystem may define a logical page as being equal to a particular unit of physical memory (e.g., a physical page, a physical block, etc.). A logical block address (LBA) is an identifier of a logical block. In one addressing scheme for logical blocks, logical blocks may be located at integer indices, where the first block is LBA 0, the second block is LBA 1, and so on.

The logical address space may be managed using translation units (TUs). For some types of memory devices (e.g., NAND), a TU is the base granularity of data managed by the memory device. A TU may include a predefined number of logical units (e.g., logical pages, logical blocks, etc.). In some instances, a TU is predefined to contain one logical block, so the size of the TU is equal to the size of the logical block. In some instances, a TU is predefined to contain multiple logical blocks.
In that case, the size of the TU is an integer multiple of the size of the logical block. In one example, a TU may be predefined to contain one logical block of 512 bytes, so the size of the TU is 512 bytes. In another example, a TU may be predefined to include one logical block of 4 KB (which may contain multiple logical pages), so the size of the TU is 4 KB. In yet another example, a TU may be predefined to include eight logical blocks of 512 bytes, totaling (8*512) bytes, or 4096 bytes (4 KB). In this last instance, the size of the TU is 4 KB.

The logical address space may start at LBA 0 and end at LBA max. The logical space may be partitioned into several TUs (e.g., TUs of size 4 KB), where each TU includes eight logical blocks. In one addressing scheme for TUs, TUs may be located at integer indices, where the first TU is TU 0, the second TU is TU 1, and so on. In an example, TU 0 may include eight LBAs, starting at LBA 0 and ending at LBA 7. TU 1 may contain the next eight LBAs, starting at LBA 8 and ending at LBA 15, and so on. The start and end addresses of the logical units (e.g., logical blocks, logical pages, etc.) in a TU may define the boundaries of the TU.

When the host system requests to access data (e.g., read data, write data), the host system sends the data access request to the memory subsystem in terms of the logical address space. For example, the host system may provide a logical address (e.g., an LBA, an LBA with an offset, etc.) that identifies the location where data is to be stored or from which data is to be read. Since data from the host system will ultimately be stored at physical addresses within the memory device, the memory subsystem maintains a logical-to-physical (L2P) translation map or table to identify the physical location where the data corresponding to each logical address resides. The L2P map may contain a number of L2P entries. Each entry in the L2P map may identify a physical location corresponding to a particular TU.
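The TU addressing scheme above (TU 0 covering LBA 0-7, TU 1 covering LBA 8-15, and so on) reduces to simple integer arithmetic. A minimal sketch, assuming eight 512-byte logical blocks per 4 KB TU as in the example; the function names are illustrative, not part of the disclosure:

```python
LBAS_PER_TU = 8  # assumed: eight 512-byte logical blocks per 4 KB TU

def tu_index(lba: int) -> int:
    """Return the index of the TU that contains the given LBA."""
    return lba // LBAS_PER_TU

def tu_boundaries(tu: int) -> tuple[int, int]:
    """Return the first and last LBA covered by a TU (inclusive)."""
    start = tu * LBAS_PER_TU
    return start, start + LBAS_PER_TU - 1
```

With this scheme, `tu_boundaries(0)` yields the LBA 0-7 range of TU 0 and `tu_boundaries(1)` the LBA 8-15 range of TU 1, matching the example in the text.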
The L2P map keeps track of each TU that has been written to the memory subsystem by maintaining its physical address. For example, an L2P entry may include an index of a TU (e.g., TU 0, TU 1, etc.), a corresponding range of physical addresses, and some metadata, such as a flag indicating whether the data at the address is valid or invalid.

The host system can send a write request to write data to the memory device. A write request can contain various information, such as a data set, the logical address where the data is to be stored, and so on. In one example, the write request may include a starting logical address at which to begin storing the data set, and the length or size of the data. In one example, the starting logical address may include an LBA and an offset. A segment of data received in a write request may be referred to as a received TU, and a write request may contain multiple received TUs. The write request may include data having the same size as the size of a TU in the L2P map (e.g., 4 KB), or an integer multiple thereof, and starting at a logical address that is the starting logical address of a TU. Since the boundaries of the received TUs align with (e.g., match) the boundaries of the TUs in the L2P map, such write requests are referred to as "aligned write requests."

When the host system requests to overwrite existing data with new data at a logical location, the existing data is marked as invalid at the physical location where the existing data is stored, and the new data is stored at a new physical location. The memory subsystem updates the L2P map to indicate that the logical location corresponds to the new physical location. For an aligned write request, since each received TU is aligned with a whole TU in the L2P map, the existing data corresponding to the whole TU is overwritten with the new data in the received TU.
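An "aligned write request" as defined above can be checked with a small predicate. A hedged sketch, again assuming eight LBAs per TU; the entry structure shown merely illustrates the fields an L2P entry "may include" per the text, not an actual on-device format:

```python
from dataclasses import dataclass

LBAS_PER_TU = 8  # assumed: eight logical blocks per TU

@dataclass
class L2PEntry:
    tu_index: int        # e.g., TU 0, TU 1, ...
    physical_addr: int   # start of the corresponding physical location
    valid: bool = True   # metadata flag: is the data at the address valid?

def is_aligned_write(start_lba: int, num_lbas: int) -> bool:
    """A write is aligned when it starts on a TU boundary and its length
    is an integer multiple of the TU size."""
    return start_lba % LBAS_PER_TU == 0 and num_lbas % LBAS_PER_TU == 0
```

For example, a write of 16 LBAs starting at LBA 8 is aligned, while a write of 8 LBAs starting at LBA 3 is not.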
The new data is stored at the new physical location, and the whole TU in the L2P map points to the new physical location.

A write request may contain a received TU whose size is smaller than the size of a TU in the L2P map. Such write requests are referred to as "unaligned write requests," and such received TUs are referred to herein as partial TUs. An unaligned write request may have multiple parts, some of which correspond to received TUs that are aligned to TUs in the L2P map, and some of which correspond to received TUs that are not aligned to TUs in the L2P map. For example, a write request may include a starting logical address that is not aligned with a logical TU boundary. In that case, the beginning portion of the write request contains a partial TU with a size smaller than the size of the corresponding TU in the L2P map. A partial TU at the beginning of a write request is referred to as a "head-unaligned TU." In another example, a partial TU may occur at the end portion of the write request, where the ending logical address is not aligned with a logical TU boundary. A partial TU at the end of a write request is referred to as a "tail-unaligned TU." An unaligned write request may contain a head-unaligned TU, a tail-unaligned TU, or both.

For an unaligned write request, since a partial TU is not aligned with a whole TU in the L2P map, only a portion of the existing data at the physical location corresponding to that TU is overwritten with the new data in the partial TU. That is, a portion of the TU in the L2P map overlaps with the partial TU, while the existing data associated with the remainder of the TU remains unchanged because the remainder is outside the write request. In some implementations, to handle writing data received in an unaligned write request, the memory subsystem may use a mechanism known as a read-modify-write (RMW) mechanism.
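The head-unaligned / tail-unaligned decomposition described above can be sketched as follows, returning the LBA ranges (inclusive start, exclusive end) of the head partial TU, the fully aligned middle, and the tail partial TU. All names are illustrative and the eight-LBA TU size is an assumption:

```python
LBAS_PER_TU = 8  # assumed: eight logical blocks per TU

def classify_write(start_lba: int, num_lbas: int):
    """Split a write request into (head partial TU, aligned middle,
    tail partial TU); each element is an (inclusive start, exclusive end)
    LBA range, or None if that part is absent."""
    end_lba = start_lba + num_lbas
    head = aligned = tail = None
    pos = start_lba
    if pos % LBAS_PER_TU != 0:
        # Head-unaligned TU: from start_lba up to the next TU boundary
        # (or the end of the request, whichever comes first).
        boundary = min(end_lba, (pos // LBAS_PER_TU + 1) * LBAS_PER_TU)
        head = (pos, boundary)
        pos = boundary
    full_end = (end_lba // LBAS_PER_TU) * LBAS_PER_TU
    if full_end > pos:
        # Received TUs whose boundaries match TUs in the L2P map.
        aligned = (pos, full_end)
        pos = full_end
    if pos < end_lba:
        # Tail-unaligned TU: the leftover past the last TU boundary.
        tail = (pos, end_lba)
    return head, aligned, tail
```

For instance, a 20-LBA write starting at LBA 0 has an aligned middle of two full TUs and a 4-LBA tail-unaligned TU, while a 10-LBA write starting at LBA 3 has both a head and a tail partial TU and no fully aligned TU.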
Under the RMW mechanism, the memory subsystem generates an internal read request to read the existing data from the physical location into a memory buffer. The memory subsystem marks the portion of the TU in the L2P map corresponding to the partial TU as invalid. The memory subsystem reads the remaining valid existing data associated with the remainder of the TU (e.g., the data that is to remain unchanged) that does not overlap with the partial TU. The memory subsystem modifies the existing data by merging the valid existing data with the requested new data in the partial TU received from the host system. The memory subsystem then writes the merged data, containing the valid existing data and the valid new data, to a new physical location, and updates the L2P map to indicate that the TU corresponds to the new physical location.

The RMW mechanism for unaligned write requests can degrade the performance of the memory subsystem. The additional read and modify operations performed by the memory subsystem incur a performance penalty and waste extra resources. The efficiency of the memory subsystem suffers when unaligned write requests are handled using the RMW mechanism.

Aspects of the present disclosure address the above and other deficiencies associated with executing write requests that include partial TUs by writing the partial TU (together with padding data) to a new location, and splitting the corresponding L2P entry into two L2P entries that reference both physical locations for the TU (i.e., the previously existing physical location where the data for the logical addresses is stored, and the newly created physical location where the data for the partial TU is stored).
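For contrast, the conventional read-modify-write path described above can be sketched in a few lines. This is a hedged illustration, assuming a 4 KB TU of eight 512-byte logical blocks and hypothetical `read_tu`/`write_tu` callbacks standing in for the media operations; real firmware would also update the L2P map and invalidity metadata:

```python
TU_BYTES = 4096   # assumed TU size
LBA_BYTES = 512   # assumed logical block size

def rmw_partial_tu(read_tu, write_tu, old_phys, new_phys,
                   offset_lbas: int, new_data: bytes) -> None:
    """Merge the partial-TU data with the existing TU data and write the
    merged TU to a new physical location (the costly RMW path)."""
    buf = bytearray(read_tu(old_phys))            # 1. internal read of the TU
    start = offset_lbas * LBA_BYTES
    buf[start:start + len(new_data)] = new_data   # 2. modify: merge new data
    write_tu(new_phys, bytes(buf))                # 3. write merged TU elsewhere
    # 4. caller then updates the L2P map so the TU points at new_phys
```

The extra read in step 1 and the merge in step 2 are exactly the overhead the disclosed approach avoids.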
In one embodiment, when the memory subsystem determines that a write request contains a partial TU, the memory subsystem may identify the existing L2P entry that identifies the original physical location corresponding to the boundary-aligned complete TU containing the partial TU. For the same TU index in the L2P map, the memory subsystem may create an additional L2P entry identifying a new physical location to store the data for the partial TU. The memory subsystem may create an association (e.g., a link) between the existing L2P entry and the additional L2P entry, such that both L2P entries correspond to the TU. The memory subsystem may then write the new data received in the partial TU at the new physical location. In this way, a TU can point to the original physical location holding valid existing data that has not been requested to be overwritten, and can point to a new physical location holding the new data received in the partial TU, without having to perform additional read and modify operations.

Advantages of the present disclosure include, but are not limited to, improved performance of the memory subsystem, including improved random write performance with minimal impact on read performance, reduced power consumption, fewer resources and less computational power required, and/or freeing up system resources for other functions. Since no additional read and modify operations are performed to handle unaligned write requests, valuable resources are saved and the memory subsystem does not have to slow down to execute unaligned write requests. Implementation is also simpler than with conventional mechanisms, because aligned and unaligned write operations are handled using the same primary technique of writing to a new physical location, rather than having to read and modify existing data first. Eliminating the extra internal read and modify operations results in reduced power consumption and an overall reduction in resource usage.
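The split-entry flow of the embodiment above can be sketched as follows. Everything here is illustrative (the real entry format, validity tracking, and linkage are implementation-specific, and padding of the unwritten remainder of the new physical block is omitted), but it shows how one TU index ends up backed by two linked entries with complementary valid LBA ranges, with no internal read of the old data:

```python
from dataclasses import dataclass, field

@dataclass
class TuEntry:
    """One of possibly several linked L2P entries backing a single TU
    (hypothetical structure)."""
    phys_block: int
    valid_lbas: set = field(default_factory=set)   # LBA slots valid here
    linked: "TuEntry | None" = None                # association to old entry

def write_partial_tu(l2p: dict, tu: int, offset: int, length: int,
                     new_phys: int, write_fn, data: bytes) -> None:
    old = l2p[tu]                        # existing entry for the whole TU
    new_lbas = set(range(offset, offset + length))
    old.valid_lbas -= new_lbas           # old copy of these LBAs is invalid
    new = TuEntry(phys_block=new_phys, valid_lbas=new_lbas, linked=old)
    l2p[tu] = new                        # both entries now back the same TU
    write_fn(new_phys, data)             # write only the partial TU's data
```

After the call, reads of the overwritten LBAs resolve to the new physical block while the untouched LBAs still resolve through the linked old entry, which is the behavior the disclosure describes.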
The time and resources saved can be used to perform other functions.

FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof.

Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).

Computing system 100 may be a computing device such as a desktop computer, laptop computer, network server, mobile device, vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device.

Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller, a Serial Advanced Technology Attachment (SATA) controller). Host system 120 uses memory subsystem 110, for example, to write data to and read data from memory subsystem 110.

Host system 120 may be a computing device, such as a desktop computer, laptop computer, network server, mobile device, or such a computing device that includes memory and a processing device. Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Double Data Rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 through a PCIe interface, host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 shows a memory subsystem 110 as an example.
In general, host system 120 may access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each memory device 130 may include one or more arrays of memory cells. One type of memory cell, such as a single level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each memory device 130 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device may include an SLC portion of memory cells, as well as an MLC portion, a TLC portion, a QLC portion, or a PLC portion.
The memory cells of memory device 130 may be grouped into pages, which may refer to logical units of the memory device used to store data. For some types of memory (e.g., NAND), pages may be grouped to form blocks. Some types of memory, such as 3D cross-point, may group pages across dies and channels. Although non-volatile memory components such as 3D cross-point arrays of non-volatile memory cells and NAND-type flash memory (e.g., 2D NAND, 3D NAND) are described, memory device 130 may be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). Memory subsystem controller 115 (or controller 115, for simplicity) may communicate with memory device 130 to perform operations such as reading data, writing data, or erasing data at memory device 130, and other such operations. The memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. Memory subsystem controller 115 may be a processing device that includes one or more processors (e.g., processor 117) configured to execute instructions stored in local memory 119.
In the example shown, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, local memory 119 may include memory registers that store memory pointers, fetched data, and the like. Local memory 119 may also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, the memory subsystem 110 does not include the memory subsystem controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem). In general, memory subsystem controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory device 130 and/or memory device 140. Memory subsystem controller 115 may be responsible for other operations, such as wear leveling operations, garbage collection operations, error detection and error correction code (ECC) operations, encryption operations, caching operations, and address translation between logical addresses (e.g., logical block addresses (LBAs), namespaces) and physical addresses (e.g., physical block addresses) that are associated with memory devices 130. Memory subsystem controller 115 may further include host interface circuitry to communicate with host system 120 via the physical host interface.
The host interface circuitry may convert commands received from the host system into command instructions to access memory device 130 and/or memory device 140, and convert responses associated with memory device 130 and/or memory device 140 into information for host system 120. Memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., row and column decoders) that may receive an address from the memory subsystem controller 115 and decode the address to access the memory device 130. In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) may externally manage memory device 130 (e.g., perform media management operations on memory device 130). In some embodiments, memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory subsystem 110 includes a partial TU handling component 113 that can be used to handle write requests with partial TUs, where data from the partial TUs is stored on blocks of memory devices 130 and 140. In some embodiments, memory subsystem controller 115 includes at least a portion of partial TU handling component 113. For example, memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein.
In some embodiments, the partial TU handling component 113 is part of the host system 120, an application, or an operating system. In one embodiment, the memory subsystem 110 may receive a write request including a data set and a starting logical address at which to begin storing the data. Partial TU handling component 113 can determine that the write request includes at least one partial translation unit (TU). A translation unit may include one or more logical blocks. The size of a partial TU is smaller than the predefined TU utilized in a logical-to-physical (L2P) translation map. The L2P map, which maps multiple translation units to multiple physical blocks, is used to identify the physical location where the data corresponding to a logical address in the write request resides. In some instances, a TU includes a predefined number of logical units, such as logical pages, logical blocks, and the like. In some instances, a TU represents the underlying granularity of data managed by a memory device. In one example, the size of a TU may be 4KB. A TU may have a boundary defined by a starting logical address and an ending logical address. In one example, the write request is determined to contain a partial TU by determining that the starting logical address indicated in the write request does not correspond to the starting address of a TU in the L2P map. In another example, the partial TU is the last element of several TUs included in the write request. When partial TU handling component 113 determines that the write request contains a partial TU, partial TU handling component 113 can identify an existing L2P entry that identifies the original physical location corresponding to the TU. The partial TU handling component 113 can create an additional L2P entry that identifies a new physical location.
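The two detection cases described above — a starting logical address that does not fall on a TU boundary, and a trailing partial TU at the end of the request — reduce to simple modular checks. The sketch below assumes byte-addressed logical offsets and a 4KB TU; the function names are illustrative, not taken from this disclosure:

```python
TU_SIZE = 4096  # assumed TU size in bytes (e.g., 4KB)

def has_header_partial(start: int) -> bool:
    # The starting logical address does not correspond to a TU boundary.
    return start % TU_SIZE != 0

def has_tail_partial(start: int, size: int) -> bool:
    # The request ends short of a TU boundary, so its last element
    # is a partial TU.
    return (start + size) % TU_SIZE != 0

# A request starting at 2KB with 4KB of data is unaligned at both ends.
print(has_header_partial(2048), has_tail_partial(2048, 4096))  # → True True
```

A request is an aligned write request exactly when both checks return False.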
The partial TU handling component 113 can create an association (e.g., a link) between the existing L2P entry and the additional L2P entry, such that both of the L2P entries correspond to the TU in the L2P map. The memory subsystem may then write the new data received in the partial TU at the new physical location. Further details with regard to the operations of the partial TU handling component 113 are described below. FIG. 2 depicts examples of full and partial translation units in write requests in memory subsystem 200, in accordance with some embodiments of the present disclosure. Six example write requests 201-206 are shown. Write requests 201-206 are received from the host system and may indicate the data set to be stored, the starting logical address at which to begin the write operation, and the size of the data set. In this particular memory subsystem, axis 210 represents the size in KB for a logical unit of a portion of the logical address space in memory subsystem 200. The portion of the logical address space can hold data from 0 to 12KB. The portion of the logical address space is divided into three TUs: TU 0, TU 1, and TU 2. Each TU is predefined to contain four logical units (e.g., logical pages, logical blocks, etc.). The size of each logical unit is 1KB. Therefore, the size of each TU is (4*1KB), or 4KB. TU 0 begins at a logical address corresponding to 0KB and ends at a logical address corresponding to 4KB of the logical address space, TU 1 begins at 4KB and ends at 8KB, and TU 2 begins at 8KB and ends at 12KB. For write request 201, the host system indicates a starting logical address corresponding to 0KB, and the size of data sets D1 to D12 in write request 201 is 12KB. The size of the data in write request 201 (e.g., 12KB) is an integer multiple of the size of each TU, and the write request starts at 0KB, which is also the starting address of TU 0. Thus, the write request contains three data segments, each completely overlapping one of the TUs. These data segments are referred to as received TUs.
Since each of the received TUs in write request 201 is aligned with a TU in the logical address space, the write request is an aligned write request. For example, the boundaries of the received TU with data D1-D4 are aligned with the boundaries of TU 0. For write request 202, the starting logical address corresponds to 4KB, and the size of data sets D21 to D24 in write request 202 is 4KB. The size of the data in write request 202 (e.g., 4KB) is the same as the size of TU 1, and the write request begins at 4KB, which is also the starting address of TU 1. Therefore, the write request contains one data segment that completely overlaps TU 1. Since the only received TU in write request 202 is aligned with a TU, write request 202 is also an aligned write request. The boundaries of the received TU with data D21 to D24 are aligned with the boundaries of TU 1. For write request 203, the starting logical address corresponds to 2KB, and the size of data sets D31 to D34 in write request 203 is 4KB. The size of the data in write request 203 (e.g., 4KB) is the same as the size of each TU, but the write request starts at 2KB, which does not correspond to the starting address of any TU. Thus, the write request contains two data segments, each partially overlapping one of the TUs. For addresses corresponding to 2KB to 4KB, the data segment with data D31 to D32 partially overlaps TU 0. This data segment with data D31 to D32 is a partial TU, where the size of the partial TU is 2KB, which is less than the 4KB size of TU 0. The partial TU is at the beginning of the write request 203, so the partial TU is a header-unaligned TU. Similarly, the data segment with data D33 to D34 partially overlaps TU 1 and is a partial TU having a size of 2KB, which is smaller than the 4KB size of TU 1.
The partial TU is at the end of the write request 203, so the partial TU is a tail-unaligned TU. Write request 204 has a starting logical address corresponding to 1KB, and the size of data sets D41-D47 in write request 204 is 7KB. Since the write request begins at 1KB, which is not the starting address of any TU, and the size of the data in write request 204 (e.g., 7KB) is not the same as or an integer multiple of the size of each TU, the write request is an unaligned write request. The write request contains two data segments. One data segment partially overlaps one of the TUs (TU 0). For addresses corresponding to 1KB to 4KB, the data segment with data D41 to D43 partially overlaps TU 0. The size of the partial TU with data D41 to D43 is 3KB, which is smaller than the 4KB size of TU 0. The partial TU is at the beginning of the write request 204, so the partial TU is a header-unaligned TU. On the other hand, the data segment with data D44 to D47 completely overlaps TU 1, has a 4KB size like TU 1, and the starting address of this segment is at 4KB, which is the starting address of TU 1. This received TU is the aligned TU in the unaligned write request 204. Write request 205 has a starting logical address corresponding to 4KB, and the size of data sets D51-D55 in write request 205 is 5KB. The size of the data in write request 205 (e.g., 5KB) is not the same as or an integer multiple of the size of each TU, so the write request is an unaligned write request. However, since the write request starts at 4KB, the starting address of TU 1, and the size of the received TU with data D51 to D54 is 4KB, that received TU is an aligned TU. The second data segment, with data D55, partially overlaps one of the TUs (TU 2) for the addresses corresponding to 8KB to 9KB, and its 1KB size is smaller than the size of TU 2.
The partial TU is at the end of the write request 205, so the partial TU is a tail-unaligned TU. Finally, write request 206 contains a starting address corresponding to 2KB and has a size of 9KB. The starting address does not correspond to the starting address of any TU, so the write request is an unaligned write request. The unaligned write request contains a header-unaligned TU with data D61 to D62 that partially overlaps TU 0, an aligned TU with data D63 to D66 that completely overlaps TU 1, and a tail-unaligned TU with data D67 to D69 that partially overlaps TU 2. The tail-unaligned TU is the last element in the plurality of received TUs in the write request 206. The partial TU handling component 113 of FIG. 1 can be used to handle write requests with unaligned TUs, as shown in the example of FIG. 2. FIG. 3 depicts an example of handling write requests with partial translation units for memory device 300, in accordance with some embodiments of the present disclosure. Memory device 300 may correspond to memory devices 130 and/or 140 depicted in FIG. 1. In one embodiment, a logical address space 320 identifying logical units may be used by host system 120 to access memory device 300. A logical unit may include a logical page, a logical block, and the like. The granularity of the logical units shown in FIG. 3 for logical address space 320 is logical blocks. A logical block may be a set of logical pages (not shown). In an example, each logical block has a size of 512 bytes. Logical block address space 320 may use logical block addresses (LBAs) to identify groups of logical blocks. Using one addressing scheme, logical blocks are shown at integer indices, where the first block is LBA 0, the second is LBA 1, and so on. Logical address space 320 may be managed using translation units (TUs). A TU is the underlying granularity of data managed by memory device 300. A TU is a group of logical units.
A TU may include a predefined number of logical units (e.g., logical pages, logical blocks, etc.). Here, a TU is predefined to contain eight logical blocks, so the size of a TU is equal to eight times the size of a logical block, i.e., (8*512 bytes), or 4096 bytes (4KB). Using one addressing scheme, TUs can be located at integer indices, where the first TU is TU 0, the second TU is TU 1, and so on. In an example, TU 0 may include eight LBAs starting at LBA 0 and ending at LBA 7. TU 1 may contain the next eight LBAs, starting at LBA 8 and ending at LBA 15, and so on. The starting and ending addresses of the logical blocks define the boundaries of a TU. Although a TU is shown here as containing eight logical blocks, in another embodiment, a TU may contain one logical block that contains several logical pages. Host system 120 may send data access requests, such as read requests or write requests, to memory device 300 for logical address space 320. The host system 120 may provide the LBAs on which the data access is to be performed. For example, host system 120 may provide the starting LBA (or the starting LBA plus a logical page offset) and the size of the requested data access. In one embodiment, memory device 300 receives write request 310. Write request 310 indicates a starting logical address 312 (e.g., LBA 4, LBA 4 plus a page offset, etc.). Write request 310 includes data sets D1-D12 to be stored starting at address 312, and the size of the data is indicated as 6KB. Thus, 12 logical blocks, equal to 6KB (e.g., 12*512 bytes), are covered by write request 310, which requests that storing begin at starting address 312, or LBA 4. Since a TU contains eight logical blocks, write request 310 contains at least one segment that is smaller than a TU in the logical address space. The segment smaller than a TU represents a partial TU included in the multiple TUs (e.g., logical block groups) received in write request 310.
The size of the partial TU is smaller than the size of a TU in the logical address space. Since the write request starts at LBA 4, which is not the starting address of any TU (e.g., LBA 0, LBA 8, etc.), and the size of the data in write request 310 is 6KB, which is not the same as or an integer multiple of the size of each TU (e.g., 4KB), the write request is an unaligned write request. Each partial TU in the write request represents an unaligned TU in the write request. In one embodiment, partial TU handling component 113 determines whether write request 310 contains at least one unaligned TU. If it is determined that the write request does not contain any unaligned TUs, then the aligned TUs in the write request are processed to write the data in the received TUs to memory device 300. Any existing data corresponding to the requested logical blocks is marked invalid at the physical location where the existing data is stored, and the new data is stored in a new physical location, whereupon the L2P map is updated with the new physical location. In one embodiment, partial TU handling component 113 determines that write request 310 contains at least one unaligned TU (e.g., a partial TU). Segment 314 in write request 310 is determined to be a partial TU. Segment 314 begins at LBA 4 and contains four logical blocks. Segment 314 partially overlaps the logical blocks (e.g., LBA 4 through LBA 7) contained in TU 0, which spans LBA 0 through LBA 7. The size of the partial TU represented by segment 314 is 2KB, which is smaller than the size (e.g., 4KB) of TU 0, which the partial TU partially overlaps. The partial TU handling component 113 can determine that the size of the partial TU is smaller than the size of the TU in the logical address space (e.g., TU 0) when the starting logical address 312 indicated in the write request 310 does not correspond to the starting address of a TU (e.g., LBA 0).
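The way write request 310 decomposes into received TUs can be sketched using the FIG. 3 geometry (512-byte logical blocks, eight blocks per TU). This is an illustrative helper under those assumptions, not an implementation taken from the disclosure:

```python
BLOCKS_PER_TU = 8  # assumed geometry: eight 512-byte logical blocks per TU

def split_into_tus(start_lba, num_blocks):
    """Split a write request into received TUs, each tagged 'full' or
    'partial'. Returns (kind, tu_index, first_lba, block_count) tuples."""
    segments = []
    lba = start_lba
    end = start_lba + num_blocks
    while lba < end:
        tu = lba // BLOCKS_PER_TU
        tu_end = (tu + 1) * BLOCKS_PER_TU
        seg_end = min(end, tu_end)
        # A segment is a full TU only if it covers the whole TU range.
        kind = "full" if (lba % BLOCKS_PER_TU == 0 and seg_end == tu_end) else "partial"
        segments.append((kind, tu, lba, seg_end - lba))
        lba = seg_end
    return segments

# Write request 310: start LBA 4, 12 blocks (6KB).
print(split_into_tus(4, 12))
# → [('partial', 0, 4, 4), ('full', 1, 8, 8)]
```

The output reproduces segment 314 (a header-unaligned partial TU of four blocks against TU 0) and the aligned TU covering TU 1.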
In this context, since the partial TU 314 is at the beginning of the write request 310, the partial TU is considered a header-unaligned TU. In another example not shown in FIG. 3, a partial TU may fall in the end portion of write request 310 (e.g., spanning LBA 16 to LBA 17), where the ending logical address (e.g., LBA 17) is not aligned with the ending logical address of a TU in the logical address space (e.g., TU 2). That partial TU would be the last element of the several received TUs contained in the write request. In that case, the partial TU at the end of the write request would be a tail-unaligned TU. In an example not shown in FIG. 3, unaligned write request 310 may include both a header-unaligned TU and a tail-unaligned TU. In an embodiment, when partial TU handling component 113 determines that write request 310 contains partial TU 314, partial TU handling component 113 identifies an existing entry 322 corresponding to TU 0 in logical-to-physical (L2P) translation map 320. L2P map 320 is shown mapping logical units to physical units at the granularity of physical blocks. Thus, L2P map 320 maps multiple translation units to multiple physical blocks. The L2P map 320 is used to identify the physical location where the data corresponding to a logical address in the write request resides. Entry 322 identifies physical block P4 corresponding to TU 0. In an embodiment, partial TU handling component 113 determines whether entry 322 is valid. For example, entry 322 may include metadata or a flag that indicates whether the entry is valid or invalid. In an example, if entry 322 is determined to be invalid, then physical block P4 contains no existing data.
In that case, partial TU handling component 113 can designate entry 322 as valid and update the entry at TU 0 to identify the new physical block to which data D1-D4 of partial TU 314 is to be written, and the memory subsystem writes data D1 to D4 in the new physical block. In an example, if entry 322 is determined to be valid, it is determined that physical block P4 identified in entry 322 contains existing valid data. A portion of this existing valid data will remain unchanged, because the partial TU does not cover this portion, and another portion of this existing valid data will be overwritten with data D1 to D4 in partial TU 314. The partial TU handling component 113 creates an additional L2P entry that identifies a new physical location. The partial TU handling component 113 creates an entry 324 in the L2P map 320 that identifies the new physical block P9. The new physical block P9 will store the data D1 to D4 in partial TU 314. The partial TU handling component 113 creates an association (e.g., a link) between the existing L2P entry 322 and the additional L2P entry 324. The association is depicted by arrow 326. Association 326 is created such that both L2P entries 322 and 324 correspond to TU 0 in L2P map 320. Thus, the map 320 contains L2P entries at TU 0 that point to two different physical blocks, i.e., physical block P4, which contains the portion of the existing data left unchanged by the host write request 310 (e.g., the values A to D in block P4), and physical block P9, which will contain the new data D1 to D4 that the host requests to write.
In an example, the association 326 can be created using a linked list, a pointer, an array, or any other mechanism that can relate two entries. In an example, partial TU handling component 113 designates the first part of entry 322 as having valid data, the second part of entry 322 as having invalid data, the first part of entry 324 as having valid data, and the second part of entry 324 as having invalid data. For example, a flag may be maintained to indicate whether a portion of an L2P entry has valid or invalid data. In one example, an L2P entry may contain a physical address corresponding to the physical location of a particular TU. Depending on the specifications used for the memory subsystem, physical addresses may be used at different levels of granularity. Each physical address in the L2P entry may have a corresponding flag. If valid data exists at the first physical address, then a first flag corresponding to the first physical address in the L2P entry may be set to indicate that valid data exists. Similarly, if invalid data exists at the second physical address, a second flag corresponding to the second physical address in the L2P entry may be set to indicate that invalid data exists. In an embodiment, memory subsystem 110 performs a write operation to write the new data D1 through D4 received in partial TU 314 at the new physical block P9. In the example, the new block P9 is also written with some padding data (e.g., zeros). In some instances, pages with padding data may be designated as invalid. In an embodiment, the partial TU handling component 113 designates physical blocks P4 and P9 as having priority for garbage collection. Since blocks P4 and P9 contain some invalid data that was introduced to handle unaligned write requests more efficiently, some additional physical block locations are allocated to accommodate this mechanism. Physical blocks with invalid data are freed for other uses, as indicated by the prioritized garbage collection.
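One way to picture the linked-entry bookkeeping described above is the following sketch. It keeps a per-offset valid flag in each entry and chains a new entry off the existing one, so a later read of TU 0 merges live data from both P4 and P9. Names such as `Entry` and `write_partial` are hypothetical, and real L2P metadata is far more compact than Python objects; this is only an illustration of the mechanism:

```python
BLOCKS_PER_TU = 8  # matches the FIG. 3 geometry

class Entry:
    """One L2P entry: a physical block plus per-offset valid flags."""
    def __init__(self, phys_block, valid):
        self.phys_block = phys_block   # e.g., "P4"
        self.valid = valid             # valid[i]: offset i holds live data
        self.link = None               # association to an additional entry

def write_partial(l2p, tu, offset, data, new_block, media):
    """Store a partial TU in a fresh block and link a new entry to the
    existing entry for this TU, avoiding a read-modify-write."""
    new = Entry(new_block, [False] * BLOCKS_PER_TU)
    for i, d in enumerate(data):
        media.setdefault(new_block, {})[offset + i] = d
        new.valid[offset + i] = True
    old = l2p.get(tu)
    if old is None:
        l2p[tu] = new
    else:
        for i in range(offset, offset + len(data)):
            old.valid[i] = False       # overwritten range goes stale
        old.link = new                 # the association (arrow 326)

def read_tu(l2p, tu, media):
    """Reading a TU merges valid data from every linked entry."""
    out = [None] * BLOCKS_PER_TU
    entry = l2p.get(tu)
    while entry is not None:
        for i in range(BLOCKS_PER_TU):
            if entry.valid[i]:
                out[i] = media[entry.phys_block][i]
        entry = entry.link
    return out

# TU 0 initially lives in P4; the host rewrites offsets 4-7 (data D1-D4).
media = {"P4": {i: v for i, v in enumerate("ABCDWXYZ")}}
l2p = {0: Entry("P4", [True] * 8)}
write_partial(l2p, 0, 4, ["D1", "D2", "D3", "D4"], "P9", media)
print(read_tu(l2p, 0, media))
# → ['A', 'B', 'C', 'D', 'D1', 'D2', 'D3', 'D4']
```

The read path walks the chain, which is why both P4 and P9 stay live until garbage collection folds them back into a single block.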
By linking entries 322 and 324, the partial TU in the write request can be handled without having to perform the additional read and modify operations required by the RMW mechanism. When reading data corresponding to TU 0, the memory subsystem can read valid data from both P4 and P9, since both blocks are contained in the L2P map at TU 0. In one embodiment, data D5 through D12 from aligned TU 316 in write request 310 are written in physical block P5. Before writing this data, map 320 contained an old entry (not shown) pointing TU 1 to the previous physical block P2 containing existing data (e.g., values 1 to P). Since aligned TU 316 is perfectly aligned with TU 1, the data previously pointed to for TU 1 can be completely overwritten and the new data stored in new block P5, thereby marking the old data in block P2 as invalid. FIG. 4 is a flow diagram of an example method 400 to handle write requests with partial translation units, in accordance with some embodiments of the present disclosure. The method 400 may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 400 is performed by the partial TU handling component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation 410, processing logic determines that a write request includes at least a partial translation unit.
In some examples, the size of the partial translation unit is smaller than the size of a predefined translation unit. In some examples, the predefined translation unit includes a predefined number of logical pages. In some instances, the predefined translation unit represents the underlying granularity of data managed by a memory device associated with the processing logic. In one example, the write request is determined to contain a partial TU by determining that the starting logical address indicated in the write request does not correspond to the starting address of the predefined translation unit. In another example, the partial translation unit is the last element of a set of translation units specified by the write request. At operation 420, processing logic identifies a first entry in a translation map. In an example, the translation map maps multiple translation units to multiple physical blocks. In an example, the first entry identifies a first physical block corresponding to the predefined translation unit. In an example, processing logic determines that the first physical block contains existing valid data. At operation 430, processing logic creates a second entry in the translation map. In an example, the second entry identifies a second physical block. In some instances, the processing logic further designates the first portion of the first entry as having valid data, the second portion of the first entry as having invalid data, the first portion of the second entry as having valid data, and the second portion of the second entry as having invalid data. At operation 440, processing logic creates an association between the first entry and the second entry. In an example, the second entry corresponds to the predefined translation unit. At operation 450, processing logic performs a write operation to write the data set corresponding to the partial translation unit to the second physical block.
In some instances, the processing logic further designates the first physical block and the second physical block as having priority for garbage collection. FIG. 5 is a flow diagram of an example method 500 for executing a write request including a partial translation unit, in accordance with some embodiments of the present disclosure. The method 500 may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 500 is performed by the partial TU handling component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation 510, processing logic determines that a write request includes at least a partial translation unit. In one example, the starting logical address indicated in the write request does not correspond to the starting address of a predefined translation unit. In another example, the size of the partial translation unit is less than the size of the predefined translation unit, and the partial translation unit is the last element of a set of translation units specified by the write request. In some examples, the predefined translation unit includes a predefined number of logical pages.
In some instances, the predefined translation unit represents the underlying granularity of data managed by a memory device associated with the processing logic. At operation 520, processing logic identifies a first entry in a translation map. In an example, the translation map maps multiple translation units to multiple physical blocks. In some instances, the first entry identifies a first physical block corresponding to the predefined translation unit. At operation 530, processing logic associates a second entry in the translation map with the first entry. In some instances, processing logic creates the second entry in the translation map before associating the second entry with the first entry. In some instances, the second entry identifies a second physical block. In some instances, the second entry corresponds to the predefined translation unit. In some instances, the processing logic further designates the first portion of the first entry as having valid data, the second portion of the first entry as having invalid data, the first portion of the second entry as having valid data, and the second portion of the second entry as having invalid data. At operation 540, processing logic performs a write operation to write the data set corresponding to the partial translation unit to the second physical block. In some instances, the processing logic further designates the first physical block and the second physical block as having priority for garbage collection. FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 600 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG.
1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the partial TU handling component 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" should also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630. Processing device 602 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like.
More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. Processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. Computer system 600 may further include a network interface device 608 to communicate over a network 620.

The data storage system 618 may include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. Instructions 626 may also reside, completely or at least partially, within main memory 604 and/or within processing device 602 during execution thereof by computer system 600, the main memory 604 and processing device 602 also constituting machine-readable storage media. Machine-readable storage medium 624, data storage system 618, and/or main memory 604 may correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 626 include instructions to implement functionality corresponding to a partial TU handling component (e.g., partial TU handling component 113 of FIG. 1). While machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store one or more sets of instructions.
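The partial-TU handling functionality described in operations 520-540 above can be sketched as follows. All data structures and names here (the dictionary-based translation map, the set of blocks prioritized for garbage collection, the function name) are illustrative assumptions; the actual controller firmware is not specified in this disclosure.

```python
# Illustrative sketch of operations 520-540: handling a write to a
# partial translation unit (TU). All structures and names are
# hypothetical; the actual controller implementation is not disclosed.

def handle_partial_tu_write(translation_map, tu_index, data, new_block,
                            physical_blocks, gc_priority):
    # Operation 520: identify the first entry in the translation map; it
    # names the first physical block that holds the predefined TU.
    first_entry = translation_map[tu_index]
    first_block = first_entry["block"]

    # Operation 530: create a second entry identifying the second physical
    # block and associate it with the first entry. Each entry designates
    # one portion as holding valid data and the other as invalid.
    second_entry = {"block": new_block,
                    "valid_portion": "first", "invalid_portion": "second"}
    first_entry.update(valid_portion="first", invalid_portion="second",
                       linked_entry=second_entry)

    # Operation 540: write the data set for the partial TU to the second
    # physical block, then give both blocks priority for garbage collection.
    physical_blocks[new_block] = data
    gc_priority.update({first_block, new_block})
    return second_entry

# Example: TU 7 currently maps to physical block 3; the partial write
# lands in a freshly allocated physical block 9.
tmap = {7: {"block": 3}}
blocks = {3: b"old-data"}
gc = set()
handle_partial_tu_write(tmap, 7, b"partial-tu-data", 9, blocks, gc)
print(blocks[9], sorted(gc))  # -> b'partial-tu-data' [3, 9]
```

Note that both physical blocks end up holding one valid and one invalid portion of the TU, which is why the flow marks both for prioritized garbage collection.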
The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
The present disclosure can refer to the action and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the disclosed embodiments as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
Disclosed herein are transistor arrangements of field-effect transistors with dual-thickness gate dielectrics. An example transistor arrangement includes a semiconductor channel material, a source region and a drain region provided in the semiconductor material, and a gate stack provided over a portion of the semiconductor material that is between the source region and the drain region. The gate stack has a thinner gate dielectric in a portion that is closer to the source region and a thicker gate dielectric in a portion that is closer to the drain region, which may effectively realize a tunable ballast resistance integrated with the transistor arrangement and may help increase the breakdown voltage and/or decrease the gate leakage of the transistor. |
1. A transistor device, comprising: a semiconductor material; a source region and a drain region in the semiconductor material; and a gate stack over a portion of the semiconductor material between the source region and the drain region, wherein the portion includes a first portion and a second portion, and the gate stack includes: one or more gate electrode materials, a first gate dielectric between the first portion of the semiconductor material and the one or more gate electrode materials, and a second gate dielectric between the second portion of the semiconductor material and the one or more gate electrode materials, wherein a thickness of the first gate dielectric is different from a thickness of the second gate dielectric.

2. The transistor device of claim 1, wherein: the first portion of the semiconductor material is closer to the source region than the second portion of the semiconductor material, and the second portion of the semiconductor material is closer to the drain region than the first portion of the semiconductor material.

3. The transistor device of claim 2, wherein a distance between the second portion of the semiconductor material and the drain region is between 10 and 1000 nanometers.

4. The transistor device of claim 2, wherein the thickness of the second gate dielectric is greater than the thickness of the first gate dielectric.

5. The transistor device of claim 4, wherein a dielectric constant of the second gate dielectric is at least 3 times smaller than a dielectric constant of the first gate dielectric.

6. The transistor device according to any one of claims 1-5, wherein: the first portion of the semiconductor material includes a dopant of a first type, and the second portion of the semiconductor material includes a dopant of a second type.

7. The transistor device according to any one of claims 1-5, wherein: each of the first portion and the second portion of the semiconductor material includes a dopant of a first type, and a portion of the semiconductor material between the second portion and the drain region includes a dopant of a second type.

8. The transistor device according to any one of claims 1-5, wherein: the first portion of the semiconductor material includes a dopant of a first type, a portion of the second portion of the semiconductor material closest to the first portion includes the dopant of the first type, and a portion of the second portion of the semiconductor material between the portion of the second portion closest to the first portion and the drain region includes a dopant of a second type.

9. The transistor device according to any one of claims 1-5, wherein: a portion of the first portion of the semiconductor material closest to the source region includes a dopant of a first type, a portion of the first portion of the semiconductor material between the portion of the first portion closest to the source region and the second portion of the semiconductor material includes a dopant of a second type, and the second portion of the semiconductor material includes the dopant of the second type.

10. The transistor device of claim 9, wherein: the dopant of the first type is at a dopant concentration between 1×10^16 and 1×10^18 dopant atoms per cubic centimeter, and the dopant of the second type is at a dopant concentration between 1×10^16 and 1×10^18 dopant atoms per cubic centimeter.

11. The transistor device of claim 10, wherein: each of the source region and the drain region includes the dopant of the second type, and a dopant concentration of the dopant of the second type in each of the source region and the drain region is at least 1×10^21 dopant atoms per cubic centimeter.

12. The transistor device according to any one of claims 1-5, wherein: the one or more gate electrode materials over the first gate dielectric include a work function (WF) material and a gate electrode material, such that the WF material is between the gate electrode material and the first gate dielectric, and the one or more gate electrode materials over the second gate dielectric include a gate electrode material in contact with the second gate dielectric.

13. The transistor device according to any one of claims 1-5, wherein: each of the source region and the gate stack is coupled to a ground potential, and the drain region is coupled to each of an input/output port and a further circuit.

14. The transistor device according to claim 13, wherein the further circuit is a receiver circuit.

15. An electronic device, comprising: an input/output (I/O) port; a receiver circuit having an input coupled to the I/O port; and an electrostatic discharge (ESD) protection circuit, coupled to the I/O port and to the input of the receiver circuit, wherein: the ESD protection circuit includes a transistor having a source region, a drain region, and a gate stack, each of the source region and the gate stack is coupled to a ground potential, the ESD protection circuit is coupled to the I/O port and to the input of the receiver circuit by the drain region being coupled to the I/O port and to the input of the receiver circuit, a first portion of the gate stack includes a first gate dielectric, a second portion of the gate stack includes a second gate dielectric, a thickness of the first gate dielectric is smaller than a thickness of the second gate dielectric, and the first portion of the gate stack is closer to the source region than the second portion of the gate stack.

16. The electronic device according to claim 15, further comprising a diode coupled between the ground potential and the I/O port.

17. The electronic device according to claim 16, wherein: the electronic device further includes a silicon controlled rectifier (SCR) circuit, and the drain region is coupled to the I/O port and to the input of the receiver circuit by being coupled to the SCR circuit, the SCR circuit being coupled to the I/O port and to the input of the receiver circuit.

18. The electronic device according to any one of claims 15-17, wherein the transistor is an extended-drain transistor.

19. A method of forming a transistor device, the method comprising: providing a source region and a drain region in a semiconductor material; and providing a gate stack over a portion of the semiconductor material between the source region and the drain region, wherein the portion includes a first portion and a second portion, and the gate stack includes: one or more gate electrode materials, a first gate dielectric between the first portion of the semiconductor material and the one or more gate electrode materials, and a second gate dielectric between the second portion of the semiconductor material and the one or more gate electrode materials, wherein a thickness of the first gate dielectric is different from a thickness of the second gate dielectric.

20. The method of claim 19, wherein: the first portion of the semiconductor material is closer to the source region than the second portion of the semiconductor material, the second portion of the semiconductor material is closer to the drain region than the first portion of the semiconductor material, and the thickness of the second gate dielectric is greater than the thickness of the first gate dielectric. |
Field-Effect Transistor with Dual-Thickness Gate Dielectric

Technical Field

The present disclosure generally relates to the field of semiconductor devices and, more specifically, to field-effect transistors (FETs).

Background

A FET, such as a metal oxide semiconductor (MOS) FET (MOSFET), is a three-terminal device that includes source, drain, and gate terminals and uses an electric field to control the current flowing through the device. A FET generally includes a semiconductor channel material, source and drain regions provided in the channel material, and a gate stack that includes a gate dielectric material and a gate electrode material, the gate stack being provided over a portion of the channel material between the source and drain regions.

FETs can be used as electrostatic discharge (ESD) protection devices in high-speed input/output (I/O) designs. For example, a FET implemented as a grounded-gate N-type MOSFET (GGNMOS) can be used as a fast snapback-mode ESD protection device. In this implementation, during an ESD event, the high current/voltage at the drain of the GGNMOS causes it to snap back quickly and, ideally, the transistor starts to shunt the ESD current to ground, protecting the core circuit from the ESD stress.

The ideal operation described above is not always achievable because conventional GGNMOS devices are very susceptible to gate dielectric breakdown and may fail prematurely during an ESD event. For example, this ideal operation may be compromised by high-voltage ESD spikes at the drain of the GGNMOS, which can induce gate dielectric leakage and/or breakdown in the corner region between the drain and the gate dielectric.

Adding a series ballast resistor at the drain node of the GGNMOS device can help distribute the ESD current more evenly over all fingers of the device to achieve higher ESD protection (i.e., to withstand a higher ESD current).
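The benefit of the series ballast resistor described above can be illustrated with a toy snapback model of a multi-finger GGNMOS. The trigger voltage Vt1, holding voltage Vh, ballast value, and current ramp below are arbitrary illustrative numbers, not device data from this disclosure; the model only captures the qualitative mechanism.

```python
# Toy model of why drain-side ballast resistance helps a multi-finger
# GGNMOS conduct ESD current uniformly. Vt1 (trigger voltage), Vh
# (post-snapback holding voltage), and R values are arbitrary examples.

def fingers_triggered(i_step, steps, vt1, vh, r_ballast, n_fingers):
    """Ramp the ESD current and count how many fingers turn on."""
    on = 1                      # the weakest finger snaps back first
    i_total = 0.0
    for _ in range(steps):
        i_total += i_step
        # Conducting fingers share the current equally; the pad voltage is
        # the holding voltage plus the IR drop across one finger's ballast.
        v_pad = vh + (i_total / on) * r_ballast
        # A further finger triggers once the pad voltage climbs back to Vt1.
        if v_pad >= vt1 and on < n_fingers:
            on += 1
    return on

# Without ballast the pad voltage stays clamped near Vh (below Vt1), so a
# single finger takes all the current; with ballast, the IR drop re-arms
# the remaining fingers and conduction spreads out.
print(fingers_triggered(0.05, 40, vt1=8.0, vh=4.0, r_ballast=0.0, n_fingers=4))  # -> 1
print(fingers_triggered(0.05, 40, vt1=8.0, vh=4.0, r_ballast=5.0, n_fingers=4))  # -> 3
```

With zero ballast only one finger ever conducts in this model, while a modest ballast resistance lets additional fingers trigger as the current ramps, which is the uniform current distribution the paragraph above describes.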
However, adding external ballast resistors has the drawbacks of significantly increased layout complexity, footprint, and capacitive coupling.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 is a perspective view of an example FinFET that may include a dual-thickness gate dielectric, according to some embodiments of the present disclosure.

Figures 2A-2C are cross-sectional side views of an integrated circuit (IC) structure having an example FinFET including a dual-thickness gate dielectric, according to some embodiments of the present disclosure.

Figures 3-7 are cross-sectional side views of IC structures having example FinFETs including dual-thickness gate dielectrics, according to other embodiments of the present disclosure.

Figure 8 is a perspective view of an example nanowire FET that may include a dual-thickness gate dielectric, according to some embodiments of the present disclosure.

Figures 9A-9C are schematic circuit diagrams of electronic devices implementing FETs with dual-thickness gate dielectrics, according to some embodiments of the present disclosure.

FIG.
10 is a flowchart of an example method of manufacturing an IC structure having a FET with a dual-thickness gate dielectric, according to some embodiments of the present disclosure.

Figures 11A-11B are top views of a wafer and dies that may include one or more FETs with dual-thickness gate dielectrics, according to any embodiment of the present disclosure.

Figure 12 is a cross-sectional side view of an IC package that may include one or more FETs with dual-thickness gate dielectrics, according to any embodiment of the present disclosure.

Figure 13 is a cross-sectional side view of an IC device assembly that may include one or more FETs with dual-thickness gate dielectrics, according to any embodiment of the present disclosure.

Figure 14 is a block diagram of an example computing device that may include one or more FETs with dual-thickness gate dielectrics, according to any embodiment of the present disclosure.

Figure 15 is a block diagram of an example RF device that may include one or more FETs with dual-thickness gate dielectrics, according to any embodiment of the present disclosure.

Detailed Description

Overview

To illustrate the FETs with dual-thickness gate dielectrics described herein, it may be useful to first understand phenomena that may come into play in a transistor. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Such information is offered for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications.

The performance of a FET may depend on a number of factors. The breakdown voltage of a FET is one such factor.
Breakdown voltage, usually abbreviated as BVDS, refers to the drain-source voltage VDS that causes the FET to enter the breakdown region (i.e., the region in which the voltage applied across the transistor's drain-source terminals becomes so high that the drain-source junction breaks down and the drain current ID increases sharply). The gate leakage of a FET is another such factor. Gate leakage, sometimes referred to as stress-induced leakage current (SILC), refers to an increase in the gate leakage current of a MOSFET, which may occur as a result of defects created in the gate dielectric (typically a gate oxide) during electrical stressing.

Increasing the breakdown voltage of a FET and decreasing the gate leakage of a FET may be desirable for various applications. One example application is I/O design, where FETs may implement ESD protection devices, as described above. Another example application is in wireless radio frequency (RF) communications, in particular for millimeter-wave wireless technologies such as fifth-generation (5G) wireless (i.e., the high-frequency/short-wavelength end of the RF spectrum, e.g., with frequencies in the range between about 20 and 60 GHz, corresponding to wavelengths in the range between about 5 and 15 millimeters), where FETs may implement circuits such as power amplifiers.

However, increasing the breakdown voltage of a FET and decreasing the gate leakage of a FET, in particular while still maintaining a sufficiently high operating speed, are not trivial tasks. Therefore, applications that require high-breakdown, low-leakage, and high-speed circuits typically rely on technologies other than silicon (such as GaN or other III-N materials). Although III-N materials are very promising, it may still be desirable to implement FETs in silicon because of the cost advantages of using known silicon processing techniques.
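The frequency-to-wavelength correspondence quoted above for the millimeter-wave band (about 20-60 GHz mapping to about 5-15 mm) follows directly from the free-space relation lambda = c / f, which a quick check confirms:

```python
# Free-space wavelength for the millimeter-wave band mentioned above:
# lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz):
    return C / freq_hz * 1e3  # metres -> millimetres

print(round(wavelength_mm(20e9), 1))  # -> 15.0 (mm, at 20 GHz)
print(round(wavelength_mm(60e9), 1))  # -> 5.0 (mm, at 60 GHz)
```

So the lower edge of the band corresponds to roughly 15 mm wavelengths and the upper edge to roughly 5 mm, matching the range stated in the text.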
In addition, for many applications, such as high-power applications or millimeter-wave RF communications, implementing FETs in silicon may advantageously enable form-factor reduction due to the ability to integrate high-power or RF circuits with logic circuits. Integration has many further benefits for high-power or millimeter-wave RF circuits because such circuits increasingly rely on digital circuitry to improve performance while supporting low latency. Therefore, there is a great need for FET devices that can withstand higher breakdown voltages, achieve sufficiently low leakage and sufficiently high speed, and be manufactured using complementary metal oxide semiconductor (CMOS) technology.

Disclosed herein are transistor arrangements of FETs with dual-thickness gate dielectrics, which may advantageously allow increasing the breakdown voltage and/or decreasing the gate leakage. An example transistor device includes a semiconductor channel material (which may interchangeably be referred to as a "channel material" or a "semiconductor material"), source and drain regions provided in the semiconductor material, and a gate stack provided over a portion of the semiconductor material between the source and drain regions. The gate stack has a thinner gate dielectric in a portion closer to the source region and a thicker gate dielectric in a portion closer to the drain region; hence, the gate dielectric of such a gate stack may be referred to as a "dual-thickness gate dielectric." Realizing a dual-thickness gate dielectric renders the gate stack asymmetric, because the thickness of the gate dielectric in the portion of the gate stack closer to one of the source/drain (S/D) regions is different from the thickness of the gate dielectric in the portion of the gate stack closer to the other S/D region.
Implementing a dual-thickness gate dielectric in a transistor device may effectively realize a tunable ballast resistance integrated with the transistor device and may help increase the breakdown voltage of the FET and/or decrease the gate leakage of the FET. In various embodiments, the integrated ballast resistance, and thus the breakdown voltage and the gate leakage, may be further tuned and optimized by selectively including or excluding work function (WF) materials in various portions of the gate stack, by changing the doping concentrations of the P-well and the N-well between the source and drain regions, and by changing the dimensions of the P-well and the N-well along the channel length (e.g., along a line between the source region and the drain region).

As used herein, the term "WF material" refers to any material that may be used to control the threshold voltage of a FET. The term "WF material" is used to indicate that the WF of the material (i.e., the physical property of the material that specifies the minimum thermodynamic work (i.e., energy) needed to move an electron from a solid to a point in vacuum immediately outside the solid surface) can affect the threshold voltage of the final FET. In turn, the term "threshold voltage," often abbreviated as Vth, refers to the minimum gate-to-source voltage (gate electrode bias) needed to create a conductive path (i.e., a conductive channel) between the source and drain terminals of a transistor.

While some embodiments described herein refer to FinFETs (i.e., FETs having a non-planar architecture in which a fin formed of one or more semiconductor materials extends away from a substrate), these embodiments are not limited to FinFETs.
Any other non-planar FET (e.g., a nanowire or nanoribbon transistor) may equally implement a dual-thickness gate dielectric, as may FETs having a planar architecture.

Each of the structures, packages, methods, devices, and systems of the present disclosure may have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings.

In the following detailed description, various aspects of the illustrative implementations may be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. For example, the term "connected" means a direct electrical or magnetic connection between the things that are connected, without any intermediary devices, while the term "coupled" means either a direct electrical or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "circuit" means one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. If used, the terms "oxide," "carbide," "nitride," etc. refer to compounds containing, respectively, oxygen, carbon, nitrogen, etc. Similarly, terms naming various compounds refer to materials having any combination of the individual elements within a compound (e.g., "gallium arsenide" or "GaAs" may refer to a material including gallium and arsenic). In addition, the term "high-k dielectric" refers to a material having a higher dielectric constant (k) than silicon oxide, while the term "low-k dielectric" refers to a material having a lower k than silicon oxide.
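One way to picture the asymmetry of the dual-thickness gate stack is with a per-unit-area parallel-plate estimate, C/A = k·ε0/t, for each gate-dielectric segment. The thickness and k values below are illustrative assumptions only; the claims merely require the drain-side dielectric to be thicker, with a dielectric constant at least 3 times smaller.

```python
# Parallel-plate estimate of the gate-to-channel coupling of each segment
# of a dual-thickness gate dielectric: C/A = k * eps0 / t. The thickness
# and k values are illustrative assumptions, not values from this patent.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cap_per_area(k, thickness_nm):
    return k * EPS0 / (thickness_nm * 1e-9)  # F/m^2

# Source side: thin, high-k dielectric; drain side: thicker, lower-k one.
c_source = cap_per_area(k=12.0, thickness_nm=2.0)
c_drain = cap_per_area(k=3.9, thickness_nm=6.0)

# The drain-side segment couples the gate to the channel far more weakly,
# which is what effectively realizes the integrated ballast region.
print(round(c_source / c_drain, 1))  # -> 9.2
```

Under these assumed values, the source-side segment couples the gate to the channel roughly an order of magnitude more strongly than the drain-side segment, consistent with the thicker, lower-k drain-side dielectric acting as the weakly gated ballast region described above.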
The terms "substantially," "close," "approximately," "near," and "about" generally refer to being within +/-20%, preferably within +/-10%, of a target value, based on the context of a particular value as described herein or as known in the art. Similarly, terms indicating orientation of various elements, e.g., "coplanar," "perpendicular," "orthogonal," "parallel," or any other angle between the elements, generally refer to being within +/-5-20% of a target value, based on the context of a particular value as described herein or as known in the art.

As used herein, terms such as "over," "under," "between," and "on" refer to a relative position of one material layer or component with respect to other layers or components. For example, one layer disposed over or under another layer may be directly in contact with the other layer or may have one or more intervening layers. Moreover, one layer disposed between two layers may be directly in contact with one or both of the two layers or may have one or more intervening layers. In contrast, a first layer described as being "on" a second layer refers to a layer that is in direct contact with that second layer. Similarly, unless explicitly stated otherwise, one feature disposed between two features may be in direct contact with the adjacent features or may have one or more intervening layers.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The term "between," when used with reference to measurement ranges, is inclusive of the ends of the measurement ranges. As used herein, the notation "A/B/C" means (A), (B), and/or (C).

The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments.
Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The disclosure may use perspective-based descriptions such as "above," "below," "top," "bottom," and "side"; such descriptions are used to facilitate the discussion and are not intended to restrict the application of the disclosed embodiments. The accompanying drawings are not necessarily drawn to scale. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made, without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For convenience, if a collection of drawings designated with different letters is present, e.g., Figures 2A-2C, such a collection may be referred to herein without the letters, e.g., as "Figure 2." In the drawings, the same reference numerals refer to the same or analogous elements/materials shown, so that, unless stated otherwise, explanations of an element/material with a given reference numeral provided in the context of one of the drawings are applicable to other drawings in which an element/material with the same reference numeral may be shown.

In the drawings, some schematic illustrations of example structures of the various structures, devices, and assemblies described herein may be shown with precise right angles and straight lines, but it is to be understood that such schematic illustrations may not reflect real-life process limitations, which may cause the features to not look so "ideal" when any of the structures described herein are examined using, e.g., scanning electron microscopy (SEM) images or transmission electron microscopy (TEM) images. In such images of real structures, possible processing defects could also be visible, e.g., not-perfectly-straight edges of materials, tapered vias or other openings, variations in thicknesses of different material layers or inadvertent rounding, occasional screw, edge, or combination dislocations within the crystalline region, and/or occasional dislocation defects of single atoms or clusters of atoms. There may be other defects not listed here but that are common within the field of device fabrication.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order-dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.

Various IC structures that include at least one FET with a dual-thickness gate dielectric as described herein may be implemented in one or more components associated with an IC and/or between various such components.
In various embodiments, components associated with an IC include, for example, transistors, diodes, power sources, resistors, capacitors, inductors, sensors, transceivers, transmitters, receivers, antennas, etc. Components associated with an IC may include those that are mounted on an IC, provided as an integral part of an IC, or those connected to an IC. The IC may be either analog or digital, or may include a combination of analog and digital circuitry, and may be employed in a number of applications, such as microprocessors, optoelectronics, logic blocks, audio amplifiers, etc., depending on the components associated with the IC. In some embodiments, IC structures that include at least one FET with a double-thickness gate dielectric as described herein may be included in an RFIC, which may, in turn, be included in an RF receiver, an RF transmitter, or an RF transceiver, or in any component associated with an IC of, for example, any other RF device used for telecommunications, such as in a base station (BS) or a user equipment (UE) device. Such components may include, but are not limited to, power amplifiers, RF switches, RF filters (including arrays of RF filters, or RF filter banks), or impedance tuners. In some embodiments, IC structures that include at least one FET with a double-thickness gate dielectric as described herein may be employed as part of a chipset for executing one or more related functions in a computer.

Example FinFET with double-thickness gate dielectric

A transistor may have a planar or non-planar architecture. Recently, non-planar transistors have been extensively explored as alternatives to transistors with planar architectures. A FinFET refers to a transistor having a non-planar architecture in which a fin formed of one or more semiconductor materials extends away from a substrate (where the term "substrate" may refer to any suitable support structure on which a transistor may be built, e.g., a base, a die, a wafer, or a chip).
The portion of the fin closest to the substrate may be surrounded by an insulator material. This insulator material, typically an oxide, is commonly referred to as "shallow trench isolation" (STI), and the portion of the fin surrounded by the STI is commonly referred to as the "sub-fin portion" or simply the "sub-fin". A gate stack that includes at least a layer of a gate electrode material and, optionally, a gate dielectric layer may be provided over the top and sides of the remaining upper portion of the fin (e.g., the portion above, and not surrounded by, the STI), thus wrapping around the uppermost portion of the fin. The portion of the fin around which the gate stack wraps may be referred to as the "channel portion" of the fin because this is where a conductive channel may form during operation of the transistor, and it is a part of the active region of the fin. A source region and a drain region are provided on opposite sides of the gate stack, forming the source and drain terminals of the transistor, respectively.

A FinFET may be implemented as a "tri-gate transistor", where the name "tri-gate" stems from the fact that, in use, such a transistor may form conductive channels on three "sides" of the fin. FinFETs potentially improve performance relative to single-gate transistors and double-gate transistors.

FIG. 1 is a perspective view of an IC structure with an example FinFET 100 in which a double-thickness gate dielectric may be implemented, according to some embodiments of the present disclosure. Note that the FinFET 100 shown in FIG. 1 is intended to show the relative arrangement of some of its components, and that the FinFET 100, or portions thereof, may include other components that are not illustrated (e.g., any further materials surrounding the gate stack of the FinFET 100, such as spacer materials, electrical contacts to the S/D regions of the FinFET 100, etc.).
As shown in FIG. 1, the FinFET 100 may include a substrate 102, a fin 104, an STI material 106 surrounding the sub-fin portion of the fin 104, and S/D regions (also commonly referred to as "diffusion regions") 114-1 and 114-2. As also shown, the FinFET 100 further includes a gate stack 108 that includes a gate dielectric 110 and a gate electrode 112. Although not specifically shown in FIG. 1, as can be seen, for example, in FIGS. 2-7, the gate dielectric 110 may include two portions of different thicknesses, and each portion may include a stack of one or more gate dielectric materials.

In general, implementations of the present disclosure may be formed or carried out on a support structure such as a semiconductor substrate, composed of a semiconductor material system including, for example, N-type or P-type material systems. In one implementation, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the semiconductor substrate may be formed using alternative materials, which may or may not be combined with silicon, that include, but are not limited to, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V, group II-VI, or group IV materials. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device implementing any of the FETs with a double-thickness gate dielectric as described herein may be built falls within the spirit and scope of the present disclosure. In various embodiments, the substrate 102 may include any such substrate material that provides a suitable surface for forming the FinFET 100.

As shown in FIG. 1, the fin 104 may extend away from the substrate 102 and may be substantially perpendicular to the substrate 102.
The fin 104 may include one or more semiconductor materials, e.g., a stack of semiconductor materials, so that the uppermost portion of the fin (namely, the portion of the fin 104 enclosed by the gate stack 108) may serve as the channel region of the FinFET 100. Thus, as used herein, the term "channel material" of a transistor may refer to this uppermost portion of the fin 104, or, more generally, to any portion of the one or more semiconductor materials in which a conductive channel between the source and drain regions may be formed during operation of the transistor.

As shown in FIG. 1, the STI material 106 may surround the sides of the fin 104. The portion of the fin 104 surrounded by the STI 106 forms the sub-fin. In various embodiments, the STI material 106 may be a low-k or high-k dielectric including, but not limited to, elements such as hafnium, silicon, oxygen, nitrogen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Further examples of dielectric materials that may be used in the STI material 106 may include, but are not limited to, silicon nitride, silicon oxide, silicon dioxide, silicon carbide, silicon nitride doped with carbon, silicon oxynitride, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate.

Above the sub-fin portion of the fin 104, the gate stack 108 may wrap around the fin 104 as shown in FIG. 1, with the channel portion of the fin 104 corresponding to the portion of the fin 104 wrapped by the gate stack 108. In particular, the gate dielectric 110 may wrap around the uppermost portion of the fin 104, and the gate electrode 112 may wrap around the gate dielectric 110.
The interface between the channel portion and the sub-fin portion of the fin 104 is located proximate to where the gate electrode 112 ends.

The gate electrode 112 may include one or more gate electrode materials, where the choice of the gate electrode materials may depend on whether the FinFET 100 is a P-type metal oxide semiconductor (PMOS) transistor or an N-type metal oxide semiconductor (NMOS) transistor. For a PMOS transistor, gate electrode materials that may be used in different portions of the gate electrode 112 may include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides (e.g., ruthenium oxide). For an NMOS transistor, gate electrode materials that may be used in different portions of the gate electrode 112 include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide). In some embodiments, the gate electrode 112 may include a stack of a plurality of gate electrode materials, where zero or more materials of the stack are WF materials as described herein and at least one material of the stack is a fill metal layer. Further materials/layers may be included next to the gate electrode 112 for other purposes, for example to act as diffusion barrier layers or/and adhesion layers.

Although not specifically shown in FIG. 1 (but shown in greater detail with reference to FIGS. 2-7), in accordance with various embodiments of the present disclosure, the gate dielectric 110 includes at least two portions of different thicknesses (where the thickness of the gate dielectric refers to the dimension measured in the direction of the y-axis on the sidewalls of the fin 104 and the dimension measured in the direction of the z-axis on the top of the fin 104, the y-axis and the z-axis being different axes of the reference coordinate system x-y-z shown in FIG. 1), where each portion may include a stack of one or more gate dielectric materials.
In some embodiments, the gate dielectric 110 may include one or more high-k dielectric materials. In various embodiments, the high-k dielectric materials of the gate dielectric 110 may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric 110 may include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric 110 during manufacture of the FinFET 100 to improve the quality of the gate dielectric 110.

In some embodiments, the gate stack 108 may be surrounded by a dielectric spacer, not specifically shown in FIG. 1. The dielectric spacer may be configured to provide separation between the gate stacks 108 of different FinFETs 100 that may be arranged along a single fin (e.g., different FinFETs arranged along the fin 104, although FIG. 1 shows only one such FinFET), as well as between the gate stack 108 and the source/drain contacts provided on either side of the gate stack 108. Such a dielectric spacer may include one or more low-k dielectric materials.
Examples of low-k dielectric materials that may be used as the dielectric spacer include, but are not limited to, silicon dioxide, carbon-doped oxide, silicon nitride, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, and organosilicate glass. Other examples of low-k dielectric materials that may be used as the dielectric spacer include organic polymers such as polyimide, polynorbornene, benzocyclobutene, perfluorocyclobutane, or polytetrafluoroethylene (PTFE). Still other examples of low-k dielectric materials that may be used as the dielectric spacer include silicon-based polymeric dielectrics such as hydrogen silsesquioxane (HSQ) and methylsilsesquioxane (MSQ). Other examples of low-k materials that may be used in the dielectric spacer include various porous dielectric materials, such as porous silicon dioxide or porous carbon-doped silicon dioxide, where large voids or pores are created in a dielectric in order to reduce the overall dielectric constant of the layer, since voids can have a dielectric constant of nearly 1. When such a dielectric spacer is used, the lower portion of the fin 104, e.g., the sub-fin portion of the fin 104, may be surrounded by the STI material 106, which may, for example, include any of the high-k dielectric materials described herein.

In some embodiments, the fin 104 may be composed of semiconductor material systems including, for example, N-type or P-type material systems. In some embodiments, the fin 104 may include a high-mobility oxide semiconductor material, such as tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, indium zinc oxide, gallium oxide, titanium oxynitride, ruthenium oxide, or tungsten oxide. In some embodiments, the fin 104 may include a combination of semiconductor materials, where one semiconductor material is used for the channel portion and another material, sometimes referred to as a "blocking material", is used for at least a part of the sub-fin portion of the fin 104.
In some embodiments, the sub-fin and the channel portion of the fin 104 are each formed of monocrystalline semiconductors, such as Si or Ge. In a first embodiment, the sub-fin and the channel portion of the fin 104 are each formed of compound semiconductors with a first sub-lattice of at least one element from group III of the periodic table (e.g., Al, Ga, In) and a second sub-lattice of at least one element from group V of the periodic table (e.g., P, As, Sb). The sub-fin may be a binary, ternary, or quaternary III-V compound semiconductor that is an alloy of two, three, or even four elements from groups III and V of the periodic table, including boron, aluminum, indium, gallium, nitrogen, arsenic, phosphorus, antimony, and bismuth.

For some example N-type transistor embodiments (i.e., for embodiments where the FinFET 100 is an NMOS), the channel portion of the fin 104 may advantageously include a III-V material having a high electron mobility, such as, but not limited to, InGaAs, InP, InSb, and InAs. For some such embodiments, the channel portion of the fin 104 may be a ternary III-V alloy, such as InGaAs, GaAsSb, or InAsP. For some InxGa1-xAs fin embodiments, the In content (x) may be between 0.6 and 0.9, and may advantageously be at least 0.7 (e.g., In0.7Ga0.3As). In some embodiments with highest mobility, the channel portion of the fin 104 may be an intrinsic III-V material, i.e., a III-V semiconductor material that is not intentionally doped with any electrically active impurity. In alternative embodiments, a nominal impurity dopant level may be present within the channel portion of the fin 104, for example to further fine-tune the threshold voltage Vt, or to provide HALO pocket implants, etc.
However, even for impurity-doped embodiments, the impurity dopant level within the channel portion of the fin 104 may be relatively low, for example below 10^15 dopant atoms per cubic centimeter (cm^-3), and advantageously below 10^13 cm^-3. The sub-fin portion of the fin 104 may be a III-V material having a band offset (e.g., a conduction band offset for N-type devices) from the channel portion. Example materials include, but are not limited to, GaAs, GaSb, GaAsSb, GaP, InAlAs, AlAs, AlP, AlSb, and AlGaAs. In some N-type transistor embodiments of the FinFET 100 in which the channel portion of the fin 104 is InGaAs, the sub-fin may be GaAs, and at least a portion of the sub-fin may also be doped with impurities (e.g., P-type) to a greater impurity level than the channel portion. In an alternative heterojunction embodiment, the sub-fin and the channel portion of the fin 104 are each, or include, group IV semiconductors (e.g., Si, Ge, SiGe). The sub-fin of the fin 104 may be a first elemental semiconductor (e.g., Si or Ge) or a first SiGe alloy (e.g., having a wide band gap).

For some example P-type transistor embodiments (i.e., for embodiments where the FinFET 100 is a PMOS), the channel portion of the fin 104 may advantageously be a group IV material having a high hole mobility, such as, but not limited to, Ge or a Ge-rich SiGe alloy. For some example embodiments, the channel portion of the fin 104 may have a Ge content between 0.6 and 0.9, and advantageously may be at least 0.7. In some embodiments with highest mobility, the channel portion may be an intrinsic group IV material, i.e., one not intentionally doped with any electrically active impurity. In alternative embodiments, one or more nominal impurity dopant levels may be present within the channel portion of the fin 104, for example to further set the threshold voltage Vt, or to provide HALO pocket implants, etc.
However, even for impurity-doped embodiments, the impurity dopant level within the channel portion is relatively low, for example below 10^15 cm^-3, and advantageously below 10^13 cm^-3. The sub-fin of the fin 104 may be a group IV material having a band offset (e.g., a valence band offset for P-type devices) from the channel portion. Example materials include, but are not limited to, Si or Si-rich SiGe. In some P-type transistor embodiments, the sub-fin of the fin 104 is Si, and at least a portion of the sub-fin may also be doped with impurities (e.g., N-type) to a greater impurity level than the channel portion.

The fin 104 may include a first source or drain (S/D) region 114-1 and a second S/D region 114-2 on respective different sides of the gate stack 108, as shown in FIG. 1, thereby realizing a transistor. In some embodiments, the first S/D region 114-1 may be a source region and the second S/D region 114-2 may be a drain region, although in other embodiments this designation of source and drain may be interchanged. Although not specifically shown in FIG. 1, the FinFET 100 may further include S/D electrodes (also commonly referred to as "S/D contacts"), formed of one or more electrically conductive materials, for providing electrical connectivity to the source and drain regions 114, respectively. In some embodiments, the S/D regions 114 (sometimes interchangeably referred to as "diffusion regions") of the FinFET 100 may be regions of doped semiconductors, e.g., doped regions of the channel material of the fin 104, so as to supply charge carriers for the transistor channel. In some embodiments, the S/D regions 114 may be highly doped, e.g., with dopant concentrations of about 1×10^21 cm^-3, in order to advantageously form ohmic contacts with the respective S/D electrodes, although in some implementations these regions may also have lower dopant concentrations and may form Schottky contacts.
Regardless of the exact doping levels, the S/D regions 114 of the FinFET 100 are regions having a dopant concentration higher than in other regions, e.g., higher than the dopant concentration in the region of the semiconductor channel material between the source region 114-1 and the drain region 114-2, and may therefore be referred to as "highly doped" (HD) regions.

In some embodiments, the source and drain regions 114 may generally be formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the one or more semiconductor materials of the upper portion of the fin 104 to form the source and drain regions 114. An annealing process that activates the dopants and causes them to diffuse farther into the fin 104 may follow the ion implantation process. In the latter process, the one or more semiconductor materials of the fin 104 may first be etched to form recesses at the locations of the future source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material (which may include a combination of different materials) that is used to fabricate the source and drain regions 114. In some implementations, the source and drain regions 114 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In further embodiments, one or more alternative semiconductor materials such as germanium or a group III-V material or alloy may be used to form the source and drain regions 114. Although not specifically shown in the perspective view of FIG. 1, in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source and drain contacts (i.e., electrical contacts to each of the source and drain regions 114).
The FinFET 100 may have a gate length GL (i.e., a distance between the source region 114-1 and the drain region 114-2), a dimension of the fin 104 measured along the x-axis of the example reference coordinate system x-y-z shown in FIG. 1, which, in some embodiments, may be between about 5 and 40 nanometers, including all values and ranges therein (e.g., between about 22 and 35 nanometers, or between about 20 and 30 nanometers). The fin 104 may have a thickness, a dimension measured along the y-axis of the reference coordinate system x-y-z shown in FIG. 1, which, in some embodiments, may be between about 5 and 30 nanometers, including all values and ranges therein (e.g., between about 7 and 20 nanometers, or between about 10 and 15 nanometers). The fin 104 may have a height, a dimension measured along the z-axis of the reference coordinate system x-y-z shown in FIG. 1, which, in some embodiments, may be between about 30 and 350 nanometers, including all values and ranges therein (e.g., between about 30 and 200 nanometers, between about 75 and 250 nanometers, or between about 150 and 300 nanometers).

Although the fin 104 illustrated in FIG. 1 is shown as having a rectangular cross-section in the z-y plane of the reference coordinate system shown in FIG. 1, the fin 104 may instead have a cross-section that is rounded or sloped at the "top" of the fin 104, and the gate stack 108 (including the different portions of the gate dielectric 110) may conform to this rounded or sloped fin 104.
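As a quick aid to reading the recited dimension ranges, they can be collected into a small consistency check. This is an illustrative sketch only; the helper names and the example geometry are hypothetical and are not part of the disclosure.

```python
# Approximate dimension ranges recited above, in nanometers (hypothetical
# helper; names are illustrative, not from the patent).
RANGES_NM = {
    "gate_length": (5, 40),    # GL, measured along the x-axis
    "fin_thickness": (5, 30),  # measured along the y-axis
    "fin_height": (30, 350),   # measured along the z-axis
}

def check_fin_geometry(dims_nm: dict) -> dict:
    """Return {dimension: bool} indicating which values fall in range."""
    return {k: lo <= dims_nm[k] <= hi for k, (lo, hi) in RANGES_NM.items()}

# An example geometry well inside all three ranges:
result = check_fin_geometry(
    {"gate_length": 25, "fin_thickness": 12, "fin_height": 200}
)
print(result)
```

A geometry with, say, a 50 nm gate length would be flagged as outside the recited GL range by the same check.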
In use, the FinFET 100 may form conductive channels on three "sides" of the channel portion of the fin 104, potentially improving performance relative to single-gate transistors (which may form a conductive channel on one "side" of the channel material or substrate) and double-gate transistors (which may form conductive channels on two "sides" of the channel material or substrate). Although FIG. 1 shows a single FinFET 100, in some embodiments, a plurality of FinFETs may be arranged next to one another (with some spacing in between) along the fin 104.

Example FinFET with double-thickness gate dielectric

FIGS. 2A-2C are cross-sectional side views of a transistor 200 that provides a first example of the FinFET 100 shown in FIG. 1. The descriptions provided with respect to FIG. 1 are therefore applicable to FIGS. 2A-2C and, in the interest of brevity, are not repeated. The cross-sectional side view of FIG. 2A is a view in the x-z plane of the example coordinate system shown in FIG. 1, with the cross-section taken along the fin 104 (e.g., along the plane shown as plane A-A in FIGS. 1, 2B, and 2C). The cross-sectional side view of FIG. 2B is a view in the y-z plane of the example coordinate system shown in FIG. 1, with the cross-section taken across one example portion of the gate stack 108 through the fin 104 (e.g., along the plane shown as plane B-B in FIGS. 1 and 2A). The cross-sectional side view of FIG. 2C is a view in the y-z plane of the example coordinate system shown in FIG. 1, with the cross-section taken across another example portion of the gate stack 108 through the fin 104 (e.g., along the plane shown as plane C-C in FIGS. 1 and 2A). The legend provided within the dashed box at the bottom of FIG. 2 shows the colors/patterns used to indicate some elements shown in FIG. 2, so that FIG. 2 is not cluttered by too many reference numerals.
For example, FIG. 2 uses different colors/patterns to identify the substrate 102, the fin 104, the STI 106, the S/D regions 114, the S/D contacts 216 (which may include a first S/D contact 216-1 to the source region 114-1 and a second S/D contact 216-2 to the drain region 114-2), and so on.

As shown in FIG. 2, the gate dielectric 110 of the gate stack 108 may include a first gate dielectric 110-1 and a second gate dielectric 110-2. As used herein, the distinction between the first gate dielectric 110-1 and the second gate dielectric 110-2 is that these dielectric materials have different thicknesses; in particular, the first gate dielectric 110-1 is the portion of the gate dielectric 110 of the gate stack 108 that is thinner than the portion of the gate dielectric 110 represented by the second gate dielectric 110-2. In various embodiments, the material compositions of the first gate dielectric 110-1 and the second gate dielectric 110-2 may be the same or different, or the thicker portion of the gate dielectric (i.e., the second gate dielectric 110-2) may include a layer of one or more dielectric materials having one composition, while the thinner portion of the gate dielectric (i.e., the first gate dielectric 110-1) includes a layer of one or more dielectric materials having another composition. Thus, in various embodiments, each of the first gate dielectric 110-1 and the second gate dielectric 110-2 may include any of the gate dielectric materials described above, and the two may be seen as being in contact with one another, e.g., as a continuous layer of one or more gate dielectric materials in which different portions of the layer have different thicknesses. In some embodiments, the dielectric constant of the second gate dielectric 110-2 (i.e., the thicker gate dielectric) may be smaller than the dielectric constant of the first gate dielectric 110-1 (i.e., the thinner gate dielectric), e.g., at least about 2 times smaller, at least about 5 times smaller, or even about 6-7 times smaller.
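The combined effect of thickness and dielectric constant can be made concrete with the standard parallel-plate relation C/A = k·ε0/t. The sketch below uses assumed values of k and t (they are not taken from the disclosure) to show how a drain-side portion that is both thicker and lower-k ends up with a much smaller areal capacitance, consistent with providing a more robust dielectric where breakdown is most likely.

```python
# Illustrative arithmetic with assumed k and t values (not from the patent):
# areal gate capacitance of a dielectric layer is C/A = k * eps0 / t.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def areal_capacitance(k: float, t_nm: float) -> float:
    """Capacitance per unit area (F/m^2) for relative permittivity k
    and thickness t_nm in nanometers."""
    return k * EPS0 / (t_nm * 1e-9)

# Assumed source-side portion: thin, high-k.
c_thin = areal_capacitance(k=20.0, t_nm=2.0)
# Assumed drain-side portion: 3x thicker and 5x lower k.
c_thick = areal_capacitance(k=4.0, t_nm=6.0)

print(f"thin/thick areal capacitance ratio = {c_thin / c_thick:.1f}")
```

With these assumed numbers the thin source-side portion has 15 times the areal capacitance of the thick drain-side portion, illustrating the trade-off between channel control (source side) and dielectric robustness (drain side).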
Such embodiments may be advantageous in terms of channel control and performance, while providing a more robust gate dielectric in the area most prone to breakdown.

In some embodiments, as shown in FIG. 2A, the first gate dielectric 110-1 may be closer to the source region 114-1 than the second gate dielectric 110-2, and the second gate dielectric 110-2 may be closer to the drain region 114-2 than the first gate dielectric 110-1. Thus, the thickness of the portion of the gate dielectric 110 closest to the drain region 114-2 may be greater than the thickness of the portion of the gate dielectric 110 closest to the source region 114-1. For example, in some embodiments, the thickness of the second gate dielectric 110-2 may be between about 1.1 and 5 times greater than the thickness of the first gate dielectric 110-1 (e.g., about 2 times or about 3 times greater).

As also shown in FIG. 2, the gate electrode 112 of the gate stack 108 may be seen as including a first gate electrode material/portion 112-1 and a second gate electrode material/portion 112-2. As used herein, the distinction between the first gate electrode material/portion 112-1 and the second gate electrode material/portion 112-2 is that these gate electrode materials/portions are provided over the first gate dielectric 110-1 and the second gate dielectric 110-2, respectively. In some embodiments, the gate electrode portions 112-1 and 112-2 may include materials of the same or different compositions. For example, in some embodiments, both the gate electrode material 112-1 over the thinner gate dielectric 110-1 and the gate electrode material 112-2 over the thicker gate dielectric 110-2 may include a WF material layer provided over the respective gate dielectric 110 and a fill metal layer over the WF material layer (so that the WF material layer may be between the fill metal layer and the respective gate dielectric 110).
In some such embodiments, the WF material layer may include the same WF material over both the first gate dielectric 110-1 and the second gate dielectric 110-2. In other such embodiments, the WF material layers over the first gate dielectric 110-1 and the second gate dielectric 110-2 may include WF materials of different material compositions. For example, in some embodiments, the WF materials over the first gate dielectric 110-1 and the second gate dielectric 110-2 may be WF materials associated with different threshold voltages, meaning that if a given transistor were to include the first WF material as the gate electrode material 112-1, such a transistor would have a first threshold voltage, and if an otherwise identical transistor were to include the second WF material as the gate electrode material 112-2, such a transistor would have a second threshold voltage, different from the first threshold voltage. In some such embodiments, the WF material provided over the first gate dielectric 110-1 (i.e., the WF material closer to the source region 114-1) may be a WF material associated with a lower threshold voltage, while the WF material provided over the second gate dielectric 110-2 (i.e., the WF material closer to the drain region 114-2) may be a WF material associated with a higher threshold voltage. This may help provide more drive current for the transistor, which can offset the higher drain ballast resistance from the extended drain portion. For example, in some embodiments, the WF material of the gate electrode portion 112-1 may be associated with a threshold voltage between about 0.1 and 0.25 volts, and the WF material of the gate electrode portion 112-2 may be associated with a threshold voltage between about 0.4 and 0.6 volts.
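The drive-current benefit of a lower-threshold WF material on the source side can be illustrated with the textbook long-channel square-law model, in which saturation current scales as (V_GS - V_t)^2. The model, the bias point, and the two V_t values below are assumptions chosen for illustration; they are not part of the disclosure.

```python
# Back-of-the-envelope sketch using the long-channel square-law model
# (I_D proportional to (V_GS - V_t)^2 in saturation). All values assumed.
def relative_drive(v_gs: float, v_t: float) -> float:
    """Relative saturation drive current (arbitrary units), square-law model."""
    overdrive = v_gs - v_t
    return overdrive * overdrive if overdrive > 0 else 0.0

V_GS = 0.8  # assumed gate bias, volts

low_vt_drive = relative_drive(V_GS, v_t=0.2)   # assumed source-side WF material
high_vt_drive = relative_drive(V_GS, v_t=0.5)  # assumed drain-side WF material

print(f"drive ratio (low Vt / high Vt) = {low_vt_drive / high_vt_drive:.1f}")
```

With these assumed values the lower-V_t material delivers four times the drive of the higher-V_t one at the same bias, which is the extra current headroom available to offset the series ballast resistance of the extended drain.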
In other embodiments, the second gate electrode portion 112-2, i.e., the portion closer to the drain region 114-2, may include only a fill metal layer, e.g., copper, with no WF material, while the gate electrode portion 112-1 may include a WF material (e.g., any of the materials described above) with a fill metal layer over the WF material. Such embodiments may be advantageous in terms of, e.g., reduced process complexity in the gate stack and better WF metal uniformity obtained at the transition between the 112-1 and 112-2 regions.

Turning to further details of the double-thickness gate dielectric, the first gate dielectric 110-1 may be seen as being provided over a first portion of the semiconductor material of the fin 104, and the second gate dielectric 110-2 may be seen as being provided over a second portion of the semiconductor material of the fin 104. The first portion of the semiconductor material of the fin 104 is labeled in FIG. 2A as a portion 204-1, which may, for example, be an uppermost portion of the fin 104. The second portion of the semiconductor material of the fin 104 is labeled in FIG. 2A as a portion 204-2, which may also, for example, be an uppermost portion of the fin 104. Thus, the first portion 204-1 may be between the source region 114-1 and the second portion 204-2, while the second portion 204-2 may be between the first portion 204-1 and the drain region 114-2. The dimension 228-1 shown in FIG. 2A refers to the length of the first portion 204-1, while the dimension 228-2 shown in FIG. 2A refers to the length of the second portion 204-2.

FIG. 2A illustrates an extended-drain transistor embodiment of the FET 100, meaning that there is also a third portion 204-3 of the semiconductor material of the fin 104, having the length 228-3 shown in FIG. 2A, where the third portion 204-3 is the portion of the fin 104 between the second portion 204-2 and the drain region 114-2.
2A, wherein the third portion 204 -3 is the portion of the fin 104 between the second portion 204-2 and the drain region 114-2. That is, the third portion 204-3 refers to a portion of the fin 104 that is not under the gate stack 108 and is closest to the drain region 114-2.In some embodiments, the length of the second portion of the channel material (ie, the dimension 228-2 measured along the x-axis (e.g., along the length of the fin 104) shown in FIG. 2A) and the channel The ratio between the length of the first part of the material (ie, dimension 228-1 also measured along the x-axis shown in FIG. 2A) may be equal to or less than about 1, for example, equal to or less than about 1/2, or equal to or Less than about 1/3. That is, in some embodiments, the length of the thick gate dielectric (ie, the length 228-2 of the second gate dielectric 110-2) and the length of the thin gate dielectric (ie, the length 228 of the first gate dielectric 110-1) The ratio of -1) may be equal to or less than about 1, for example, equal to or less than about 1/2, or equal to or less than about 1/3. In the gate stack 108, in some deployment scenarios, it includes a shorter portion near the drain than the thinner portion of the gate dielectric near the source region 114-1 (ie, the portion of the gate dielectric with a length of 228-1). The thicker portion of the gate dielectric of the region 114-2 (ie, the portion of the gate dielectric of the length 228-2) may be advantageous. 
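The length-ratio guideline above can be expressed as a simple design-rule check. The following sketch is purely illustrative (the helper name and the example lengths are ours, not from the disclosure); the bounds come from the text, where the thick-dielectric length 228-2 divided by the thin-dielectric length 228-1 is equal to or less than about 1, 1/2, or 1/3.

```python
# Illustrative design-rule check for the double-thickness gate dielectric.
# thin_len_nm corresponds to dimension 228-1, thick_len_nm to 228-2.

def dielectric_length_ratio_ok(thin_len_nm: float, thick_len_nm: float,
                               max_ratio: float = 1.0) -> bool:
    """Return True if the thick/thin gate-dielectric length ratio is within bound."""
    if thin_len_nm <= 0:
        raise ValueError("thin-dielectric length must be positive")
    return (thick_len_nm / thin_len_nm) <= max_ratio

# Example: a 30 nm thin portion with a 10 nm thick portion satisfies even
# the tightest bound mentioned (ratio of 1/3); a 20 nm thick portion does not.
print(dielectric_length_ratio_ok(30.0, 10.0, max_ratio=1/3))  # True
print(dielectric_length_ratio_ok(30.0, 20.0, max_ratio=1/3))  # False
```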
Turning to the extended-drain aspect of the transistor 200, the length 228-3 may be between about 0 and 1000 nanometers, including all values and ranges therein, for example between about 10 and 500 nanometers.

In addition to providing a double-thickness gate dielectric, another way to adjust the ballast resistance integrated in the FET is to modify the extents of the P-well and the N-well of the semiconductor channel material, where the terms "P-well" and "N-well" refer to the portions of the channel material between the source region 114-1 and the drain region 114-2 having P-type dopants and N-type dopants, respectively, at a dopant concentration between about 1×10^16 and about 1×10^18 dopant atoms per cubic centimeter. The P-well and the N-well are shown in FIG. 2A as the portions of the fin 104 with patterns 224-1 and 224-2. If the transistor 200 is an NMOS transistor, the first well 224-1 is a P-well (i.e., has P-type dopants), the second well 224-2 is an N-well (i.e., has N-type dopants), and the source and drain regions 114 include N-type dopants at a dopant concentration higher than that of the wells 224 (as is clear from the example dopant concentrations provided above). If the transistor 200 is a PMOS transistor, the first well 224-1 is an N-well, the second well 224-2 is a P-well, and the source and drain regions 114 include P-type dopants at a dopant concentration higher than that of the wells 224. In general, for a given type of transistor (e.g., an NMOS or PMOS transistor), the first well 224-1 is the portion of the semiconductor material of the fin 104 that includes a first type of dopant (e.g., a P-type dopant) at a dopant concentration between about 1×10^16 and about 1×10^18 dopant atoms per cubic centimeter.
The second well 224-2 is the portion of the semiconductor material of the fin 104 that includes a second type of dopant (e.g., an N-type dopant) at a dopant concentration between about 1×10^16 and about 1×10^18 dopant atoms per cubic centimeter, and the source and drain regions 114 include the second type of dopant (i.e., the same type as the second well 224-2) but at a higher dopant concentration (for example, at least about 1×10^21 dopant atoms per cubic centimeter or more).

In the example of the transistor 200 shown in FIGS. 2A-2C, the first well 224-1 and the second well 224-2 are substantially aligned with the first gate dielectric 110-1 and the second gate dielectric 110-2, respectively. That is, for the transistor 200, the first well 224-1 extends from the source region 114-1 below the first gate dielectric 110-1 and terminates where the second gate dielectric 110-2 begins. The second well 224-2 may then start where the first well 224-1 ends and extend under the second gate dielectric 110-2; for the extended-drain embodiment shown in FIG. 2A, the second well 224-2 may extend all the way to the drain region 114-2. That is, in the example of the transistor 200, the portion of the semiconductor material of the fin 104 under the thinner gate dielectric 110-1 (i.e., the portion 204-1) has the first type of dopant (e.g., P-type dopants), while the portion of the semiconductor material of the fin 104 under the thicker gate dielectric 110-2 (i.e., the portion 204-2) has the second type of dopant (e.g., N-type dopants) (again, the source and drain regions 114 include the second type of dopant). In some such embodiments, the portion of the semiconductor material between the second portion 204-2 and the drain region 114-2 (i.e., the third portion 204-3) may also include the second type of dopant (e.g., N-type dopants); that is, the second well 224-2 extends to the extended drain 114-2 of the transistor 200.
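The dopant-concentration ranges just given distinguish the wells from the source/drain regions. The following sketch is a hypothetical illustration of that classification (the helper name and thresholds-as-code are ours); the numeric ranges come from the text: roughly 1×10^16 to 1×10^18 atoms per cubic centimeter for a well, and on the order of 1×10^21 or more for a source/drain region.

```python
# Hypothetical classifier for a region of the fin by dopant concentration,
# using the ranges stated in the description.

def classify_region(conc_per_cm3: float) -> str:
    if 1e16 <= conc_per_cm3 <= 1e18:
        return "well"            # e.g., well 224-1 or 224-2
    if conc_per_cm3 >= 1e21:
        return "source/drain"    # e.g., regions 114-1 / 114-2
    return "other"

print(classify_region(5e17))  # well
print(classify_region(2e21))  # source/drain
```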
The embodiment of the transistor 200 in which the first well 224-1 and the second well 224-2 are substantially aligned with the first gate dielectric 110-1 and the second gate dielectric 110-2, respectively, may be advantageous in terms of ease of manufacture and process cost.

FIGS. 3 to 7 show transistors 300, 400, 500, 600, and 700, which are further examples of the FinFET 100 shown in FIG. 1. In particular, each of FIGS. 3 to 5 shows a transistor similar to the transistor 200 shown in FIG. 2 but with a different alignment of the first well 224-1 and the second well 224-2, while each of FIGS. 6 and 7 shows a transistor similar to the transistor 200 shown in FIG. 2 but with a different alignment of the drain region 114-2 relative to the end of the second gate dielectric 110-2. The description provided with respect to FIG. 2 is therefore applicable to FIGS. 3-7 and, in the interest of brevity, is not repeated; only the differences are described.

FIG. 3 shows an embodiment in which the first well 224-1 extends from the source region 114-1 under the first gate dielectric 110-1 but, rather than terminating where the second gate dielectric 110-2 begins as in FIG. 2, continues under the second gate dielectric 110-2 and terminates where the second gate dielectric 110-2 terminates. Since the transistor 300 is also an extended-drain transistor, the second well 224-2 may then start at the end of the first well 224-1, that is, where the second gate dielectric 110-2 ends, and extend all the way to the drain region 114-2.
That is, in the example of the transistor 300, the portion of the semiconductor material of the fin 104 under the thinner gate dielectric 110-1 (i.e., the portion 204-1) and the portion under the thicker gate dielectric 110-2 (i.e., the portion 204-2) together form a well with the first type of dopant (e.g., P-type dopants), while the portion of the semiconductor material between the second portion 204-2 and the drain region 114-2 (i.e., the third portion 204-3) includes the second type of dopant (e.g., N-type dopants) (again, the source and drain regions 114 include the second type of dopant). The embodiment of the transistor 300, in which the first well 224-1 extends under both the first gate dielectric 110-1 and the second gate dielectric 110-2, may be advantageous where a longer channel is desired, for example where a smaller polysilicon pitch is selected. This can enable higher drain voltages to be applied without causing high drain-source leakage.

FIG. 4 shows an embodiment of a transistor 400 similar to the transistor 300, in which the first well 224-1 extends from the source region 114-1 under the first gate dielectric 110-1 and, rather than terminating where the second gate dielectric 110-2 begins as in FIG. 2, continues under the second gate dielectric 110-2. In contrast to FIG. 3, however, the first well 224-1 terminates before the second gate dielectric 110-2 does. For example, in some embodiments, the first well 224-1 may extend through all of the first portion 204-1 and through about 50% of the portion 204-2 of the channel length under the second gate dielectric 110-2. Since the transistor 400 is also an extended-drain transistor, the second well 224-2 may then start where the first well 224-1 ends, that is, under the second gate dielectric 110-2, and extend all the way to the drain region 114-2.
That is, in the example of the transistor 400, the portion of the semiconductor material of the fin 104 under the thinner gate dielectric 110-1 (i.e., the portion 204-1) and part of the portion under the thicker gate dielectric 110-2 (i.e., part of the portion 204-2) form a well with the first type of dopant (e.g., P-type dopants), while the remainder of the portion 204-2 and the portion of the semiconductor material between the second portion 204-2 and the drain region 114-2 (i.e., the third portion 204-3) include the second type of dopant (e.g., N-type dopants) (again, the source and drain regions 114 include the second type of dopant). Depending on the use case, the embodiment of the transistor 400, in which the first well 224-1 extends under both the first gate dielectric 110-1 and the second gate dielectric 110-2 but terminates under the second gate dielectric 110-2, may be advantageous when the device must conduct a sufficiently high current during an ESD event while providing enough series resistance to reduce the drain voltage and thereby maintain the integrity of the gate oxide.

FIG. 5 shows an embodiment of a transistor 500 similar to the transistor 400 in that the first well 224-1 terminates under a gate dielectric (i.e., it is not aligned with one of the gate dielectrics as in the transistor 200 or 300). In contrast to the transistor 400, in the transistor 500 the first well 224-1 terminates under the first gate dielectric 110-1. Thus, as shown in FIG. 5, in the transistor 500 the first well 224-1 extends from the source region 114-1 under the first gate dielectric 110-1 and terminates before the first gate dielectric 110-1 does, at which point the second well 224-2 begins and continues to the drain region 114-2.
For example, in some embodiments, the first well 224-1 may extend under the first gate dielectric 110-1 through about 50% of the first portion 204-1 of the channel length. The second well 224-2 may then extend through the remainder of the first portion 204-1 and through the entire second portion 204-2 of the channel length under the second gate dielectric 110-2. Since the transistor 500 is also an extended-drain transistor, the second well 224-2 may then continue all the way to the drain region 114-2. That is, in the example of the transistor 500, part of the semiconductor material of the fin 104 under the thinner gate dielectric 110-1 (i.e., part of the portion 204-1) has the first type of dopant (e.g., P-type dopants), while the remainder of the first portion 204-1, all of the semiconductor material of the fin 104 under the thicker gate dielectric 110-2 (i.e., the entire portion 204-2), and the portion of the semiconductor material between the second portion 204-2 and the drain region 114-2 (i.e., the third portion 204-3) include the second type of dopant (e.g., N-type dopants) (again, the source and drain regions 114 include the second type of dopant). Depending on the use case, the embodiment of the transistor 500 may be advantageous when the device must conduct a sufficiently high current during an ESD event while providing enough series resistance to reduce the drain voltage and thereby maintain the integrity of the gate oxide.

FIG. 6 shows an embodiment of a transistor 600 similar to the transistor 200 shown in FIG. 2, except that in the transistor 600 the drain region 114-2 extends to the end of the second gate dielectric 110-2. The third portion 204-3 of the transistor 200 is therefore effectively replaced by the drain region 114-2 in the transistor 600. In the embodiment of the transistor 600 shown in FIG. 6, the first well 224-1 and the second well 224-2 are aligned with the first portion 204-1 and the second portion 204-2, as can be seen in FIG. 6. In other embodiments of the transistor 600, not shown in this figure, the first well 224-1 or the second well 224-2 may instead end below one of the first gate dielectric 110-1 or the second gate dielectric 110-2, as described with reference to FIGS. 4 and 5; these embodiments are also within the scope of the present disclosure. One advantage of the extended, heavily doped drain epitaxial material 114-2 is that it allows the device to conduct a sufficiently high current before breakdown during an ESD event.

FIG. 7 shows an embodiment of a transistor 700 similar to the transistor 200 shown in FIG. 2, except that in the transistor 700 the second gate dielectric 110-2, and therefore the second portion 204-2, extends to the drain region 114-2. The third portion 204-3 of the transistor 200 is therefore effectively replaced by the second portion 204-2 in the transistor 700. In the embodiment of the transistor 700 shown in FIG. 7, the first well 224-1 and the second well 224-2 are aligned with the first portion 204-1 and the second portion 204-2, as can be seen in FIG. 7. In other embodiments of the transistor 700, not shown in this figure, the first well 224-1 or the second well 224-2 may instead end below one of the first gate dielectric 110-1 or the second gate dielectric 110-2, as described with reference to FIGS. 4 and 5; these embodiments are also within the scope of the present disclosure.

Other FETs with double-thickness gate dielectric

As briefly mentioned above, the double-thickness gate dielectrics described herein can be implemented in FETs of any desired architecture. Wrap-around or gate-all-around transistors, such as nanoribbon and nanowire transistors, provide further examples of transistors with non-planar architectures.

FIG. 8 is a perspective view of an example gate-all-around transistor 800 that may include a double-thickness gate dielectric according to various embodiments described herein. The transistor 800 may include a channel material formed as a nanowire 804 made of one or more semiconductor materials, the nanowire 804 being disposed above a substrate 802, and may take the form of, for example, a nanowire or a nanoribbon. The gate stack 808, including a gate electrode 812 and a gate dielectric 810, may wrap entirely or almost entirely around the nanowire 804; as shown in FIG. 8, the active region of the channel material of the nanowire 804 corresponds to the portion of the nanowire 804 wrapped by the gate stack 808. Specifically, the gate dielectric 810 may wrap around the nanowire 804, and the gate electrode 812 may wrap around the gate dielectric 810. In some embodiments, the gate stack 808 may completely surround the nanowire 804. In some embodiments, a layer of oxide material (not specifically shown in FIG. 8) may be provided between the substrate 802 and the gate electrode 812. The nanowire 804 may include a source region 814-1 and a drain region 814-2 on opposite sides of the gate stack 808, as shown in FIG. 8. The substrate 802, the channel material of the nanowire 804, the gate stack 808, the gate dielectric 810, the gate electrode 812, the source region 814-1, and the drain region 814-2 of the transistor 800 shown in FIG. 8 are analogous to, respectively, the substrate 102, the channel material of the fin 104, the gate stack 108, the gate dielectric 110, the gate electrode 112, the source region 114-1, and the drain region 114-2 of the FinFET 100 shown in FIG. 1 and of the example embodiments of the FinFET 100 discussed with reference to FIGS. 2-7, the difference being that in FIG. 8 the nanowire 804 is used instead of the fin 104. The descriptions of these elements provided with reference to FIGS. 1-7 therefore apply to FIG. 8 and, in the interest of brevity, are not repeated.

Although not specifically shown in FIG. 8, dielectric spacers may be provided between the source electrode and the gate stack of the gate-all-around transistor 800, and between the drain electrode and the gate stack, to provide electrical isolation between the source, gate, and drain electrodes, similar to the spacers described above for the FinFET 100. In addition, although the nanowire 804 shown in FIG. 8 has a rectangular cross-section, the nanowire 804 may instead have a circular or otherwise irregularly shaped cross-section, and the gate stack may conform to the shape of the nanowire 804. In use, the gate-all-around transistor 800 can form conductive channels on more than three "sides" of the nanowire 804, potentially improving performance relative to FinFETs. Although FIG. 8 shows an embodiment in which the longitudinal axis of the nanowire 804 extends substantially parallel to the plane of the substrate 802, this need not be the case; in other embodiments, the nanowire 804 may be oriented "vertically," i.e., perpendicular to the plane of the substrate 802.

In some embodiments, multiple gate-all-around transistors similar to the one shown in FIG. 8 may be provided along a single wire such as the nanowire 804; the considerations involved in providing multiple devices on a single wire are known in the art and, in the interest of brevity, are not described in detail here. Transistor devices such as the FinFET 100 shown in FIGS. 1-7 and the gate-all-around transistor 800 shown in FIG. 8, together with the variations of such devices described above, are not an exhaustive set of the transistor devices in which a double-thickness gate dielectric can be realized, but merely provide examples of such devices.
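The well-placement variants described above for the transistors 200, 300, 400, and 500 can be summarized by where the first well 224-1 ends, measured from the source edge of the channel. The sketch below is a purely illustrative reading of the text (the helper and the example lengths are ours); L1 and L2 stand for the lengths 228-1 and 228-2 of the thin- and thick-dielectric channel portions, and the 50% figures for the transistors 400 and 500 are the examples given in the description.

```python
# Hypothetical summary of the first-well extents in the variants above.
# first_well_end returns the distance from the source at which well 224-1 ends.

def first_well_end(variant: int, L1: float, L2: float) -> float:
    ends = {
        200: L1,             # ends where the thick dielectric begins (FIG. 2)
        300: L1 + L2,        # ends where the thick dielectric ends (FIG. 3)
        400: L1 + 0.5 * L2,  # ends partway under the thick dielectric (FIG. 4)
        500: 0.5 * L1,       # ends partway under the thin dielectric (FIG. 5)
    }
    return ends[variant]

# With an assumed 30 nm thin portion and 10 nm thick portion:
for t in (200, 300, 400, 500):
    print(t, first_well_end(t, 30.0, 10.0))
```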
In another embodiment, for example, a transistor with a double-thickness gate dielectric may have a planar architecture. Although specific arrangements of materials are discussed with reference to FIGS. 1-8, intermediate materials may be included in the transistor devices of these figures. Note that FIGS. 1-8 are intended to show the relative arrangements of the components therein, and the transistor devices of these figures may include other components that are not shown (for example, various spacer materials or various interfacial layers). In addition, although various components of the transistor devices are shown in FIGS. 1-8 as planar rectangles or formed of rectangular solids, this is simply for ease of illustration; embodiments of these transistors may be curved, rounded, or otherwise irregularly shaped, as dictated by, and sometimes unavoidable due to, the manufacturing processes used to fabricate the various components.

Example device implementing FET with double-thickness gate dielectric

FIGS. 9A-9C are schematic circuit diagrams of electronic devices 900A, 900B, and 900C according to some embodiments of the present disclosure, where each electronic device implements an FET 930 with a double-thickness gate dielectric configured for ESD protection. The FET 930 may be any transistor having a double-thickness gate dielectric as described herein, for example any embodiment of the FinFET 100 described with reference to FIGS. 1-7, any embodiment of the nanowire transistor 800 shown in FIG. 8, any further embodiments of the FinFET 100 and/or the nanowire FET 800 described herein, or any other (e.g., planar) implementation of the FETs described herein. Each of FIGS. 9A-9C provides a schematic diagram of a FET 930 having a double-thickness gate dielectric according to some embodiments of the present disclosure, for example a non-planar MOSFET such as the FinFET 100 or the nanowire transistor 800 described herein, configured for ESD protection of a CMOS circuit 920 coupled to an I/O pad/driver 910. The CMOS circuit 920 may include any core CMOS circuit, such as, but not limited to, microprocessor logic gates, memory cells, and so on. As shown in FIG. 9A, the CMOS circuit 920 is electrically connected to the I/O 910, through which the CMOS circuit 920 can interface with devices outside the IC chip on which the CMOS circuit 920 is implemented. The I/O 910 can be any conventional I/O pad, pin, terminal, wire, etc. The FET 930 may serve as an ESD protection device by being electrically connected to a circuit node 915 provided between the CMOS circuit 920 and the I/O 910. In the example embodiments shown in FIGS. 9A-9C, the FET 930 is in a grounded-gate NMOS (GGNMOS) configuration. In this configuration, in the normal operating mode, the FET 930 acting as the ESD protection device is maintained in the "off" state, in which the channel region of the FET 930 conducts only a very small leakage current because the gate electrode 112 is grounded. As shown in each of FIGS. 9A-9C, the source region 914-1 of the FET 930 may be electrically connected to the gate of the FET 930, with both connected to the ground potential 940 (e.g., Vss), and the drain region 914-2 of the FET 930 may be electrically coupled to the circuit node 915 provided between the CMOS circuit 920 and the I/O 910. The source region 914-1 and the drain region 914-2 of the FET 930 are analogous to the source region 114-1 and the drain region 114-2 of the FinFET 100 described above, or to the source region 814-1 and the drain region 814-2 of the nanowire transistor 800 described above. FIGS. 9B and 9C show embodiments in which the CMOS circuit 920 may be a receiver (for example, an RF receiver). In the embodiment of FIG.
9B, the FET 930 can be considered a standalone GGNMOS, used by connecting its gate and source to the ground node and its drain to the exposed I/O pin/pad 910. In the embodiment of FIG. 9C, the FET 930 is a GGNMOS that also serves as a trigger for a silicon-controlled rectifier (SCR) 960 that can be used for ESD protection. In some embodiments, the combination of the FET 930 and the SCR 960 can be used together with a diode 970 for ESD protection. As shown in FIG. 9C, one port of the SCR 960 can be coupled to the drain region 914-2 of the FET 930, another port of the SCR 960 can be coupled to each of the CMOS circuit 920 and the I/O pin/pad 910, and a third port of the SCR 960 can be coupled to the ground node/potential 940. As also shown in FIG. 9C, the diode 970 may be coupled between the I/O pin/pad 910 and the ground node/potential 940. Each of FIGS. 9B and 9C also shows an optional rail clamp 950, which can be configured to draw current to ground in the event of an ESD spike, and an optional capacitor 955, which can likewise be configured to draw current to ground in the event of an ESD spike.

Example manufacturing method

Any suitable technique may be used to fabricate an IC structure that implements one or more transistor devices having at least one FET with a double-thickness gate dielectric according to the various embodiments described herein. FIG. 10 shows one example of such a method; however, other methods of fabricating any of the FETs with a double-thickness gate dielectric described herein, as well as larger devices and assemblies that include such structures (e.g., as shown in FIGS. 11-15), are also within the scope of the present disclosure. FIG.
10 is a flowchart of an example method 1000 of manufacturing a transistor device including an FET with a double-thickness gate dielectric, in accordance with various embodiments of the present disclosure. Although the operations of the method 1000 are each shown once and in a particular order, the operations may be performed in any suitable order and repeated as desired. For example, one or more operations may be performed in parallel to manufacture multiple FETs with double-thickness gate dielectrics substantially simultaneously. In another example, the operations may be performed in a different order to reflect the structure of a particular device assembly in which one or more FETs with double-thickness gate dielectrics will be included.

In addition, the example manufacturing method 1000 may include other operations not specifically shown in FIG. 10, such as various cleaning or planarization operations known in the art. For example, in some embodiments, the substrate 102/802, as well as the various other material layers subsequently deposited thereon, may be cleaned before, after, or during any of the processes of the method 1000 described herein, for example to remove oxides, surface-bound organic and metallic contaminants, and subsurface contaminants. In some embodiments, cleaning may be performed using, for example, chemical solutions (e.g., peroxides), ultraviolet (UV) radiation combined with ozone, and/or oxidizing the surface (e.g., using thermal oxidation) and then removing the oxide (e.g., using hydrofluoric acid (HF)). In another example, the transistor structures/assemblies described herein may be planarized before, after, or during any of the processes of the method 1000 described herein, for example to remove overburden or excess material. In some embodiments, a wet or dry planarization process may be used for planarization.
For example, the planarization may be chemical mechanical planarization (CMP), which can be understood as a process that uses a polishing surface, an abrasive, and a slurry to remove overburden and planarize the surface.

In various embodiments, any process of the method 1000 may include any suitable patterning technique, such as photolithographic or electron-beam (e-beam) patterning, possibly in combination with a suitable etching technique, such as dry etching, e.g., reactive ion etching (RIE) or inductively coupled plasma (ICP) RIE. In various embodiments, any of the etches performed in the method 1000 may include anisotropic etching using, for example, chemically active ionized gas (i.e., plasma) etchants based on bromine (Br) and chlorine (Cl) chemistries. In some embodiments, during any of the etches of the method 1000, the IC structure may be heated to an elevated temperature, for example to a temperature between about room temperature and 200 degrees Celsius, including all values and ranges therein, to promote sufficient volatilization of etch by-products so that they can be removed from the surface.

The method 1000 may begin with providing the semiconductor channel material of the future transistor (process 1002 shown in FIG. 10). The channel material provided at 1002 may be the one or more semiconductor materials used for the channel portion of the fin 104, or the one or more semiconductor materials used for the nanowire 804, described above. The process 1002 may include corresponding processes to shape the channel material according to the particular transistor architecture of the FET being manufactured, for example shaping the channel material into a fin extending away from the substrate, or shaping the channel material into a nanowire/nanoribbon. In some embodiments, the process 1002 may include epitaxially growing one or more semiconductor materials to provide the channel material.
In this context, "epitaxial growth" refers to the deposition of a crystalline overlayer in the form of the desired material. The epitaxial growth of the one or more layers of the process 1002 may be performed using, for example, any known gaseous or liquid precursors for forming the desired material layers.

The method 1000 may then continue with providing the S/D regions and the P/N wells in the channel material provided at 1002 (process 1004 shown in FIG. 10). The S/D regions provided at 1004 may be the source region 114-1 and the drain region 114-2 described above for the FinFET 100, or the source region 814-1 and the drain region 814-2 described above for the nanowire transistor 800. Various techniques for providing the S/D regions have been described above; these include, for example, implantation/diffusion processes or etching/deposition processes. The P/N wells provided at 1004 may be the first well 224-1 and the second well 224-2 described above for the FinFET 100. In some embodiments, aside from the difference in dopant concentration between the P/N wells and the S/D regions described above, any of the techniques described above for providing the S/D regions may be used at 1004 to provide the P/N wells. In other embodiments, the P/N wells may instead be provided at 1002, when the channel material is provided.

The method 1000 may then include providing the gate dielectric of the future gate stack over the portion of the channel material provided at 1002 that is between the S/D regions provided at 1004, where the gate dielectric includes different portions having different thicknesses (process 1006 shown in FIG. 10). The gate dielectric provided at 1006 may include the first gate dielectric 110-1 and the second gate dielectric 110-2 described above.
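The ordering constraints among the processes described so far (1002 before 1004, both before 1006) can be sketched as a small dependency check. This is an illustrative reading of FIG. 10 as described in the text, not part of the disclosure; the text itself notes that operations may be reordered or repeated where appropriate, so the dependency map below is an assumption for the nominal flow.

```python
# Hypothetical model of the nominal ordering of method 1000's early steps.
STEPS = {
    1002: "provide semiconductor channel material (fin or nanowire)",
    1004: "provide S/D regions and P/N wells in the channel material",
    1006: "provide the double-thickness gate dielectric over the channel",
}

# Each step lists the steps that must have been performed before it.
DEPENDS_ON = {1002: [], 1004: [1002], 1006: [1002, 1004]}

def valid_order(order):
    """Check that every step appears after all of its prerequisites."""
    seen = set()
    for step in order:
        if any(dep not in seen for dep in DEPENDS_ON[step]):
            return False
        seen.add(step)
    return True

print(valid_order([1002, 1004, 1006]))  # True
print(valid_order([1004, 1002, 1006]))  # False
```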
Any suitable deposition technique may be used to deposit the one or more dielectric materials of the first gate dielectric 110-1 and the second gate dielectric 110-2, such as, but not limited to, spin coating, dip coating, atomic layer deposition (ALD), physical vapor deposition (PVD) (for example, evaporative deposition, magnetron sputtering, or e-beam deposition), or chemical vapor deposition (CVD).

The method 1000 may then include providing the gate electrode of the future gate stack over the gate dielectric provided at 1006 (process 1008 shown in FIG. 10). The gate electrode provided at 1008 may include the first gate electrode material 112-1 and the second gate electrode material 112-2 described above. Any suitable deposition technique may be used to deposit the one or more gate electrode materials at 1008, such as, but not limited to, ALD, PVD, CVD, or electroplating.

Example structure and device

An IC structure or transistor device including one or more FETs with the double-thickness gate dielectric disclosed herein may be included in any suitable electronic device. FIGS. 11-15 illustrate various examples of devices and assemblies that may include at least one FET with a double-thickness gate dielectric as disclosed herein.

FIGS. 11A-11B are top views of a wafer 2000 and dies 2002 that may include at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. In some embodiments, the die 2002 may be included in an IC package according to any of the embodiments disclosed herein. For example, any die 2002 may serve as any die 2256 in the IC package 2200 shown in FIG. 12. The wafer 2000 may be composed of a semiconductor material and may include one or more dies 2002 having IC structures formed on a surface of the wafer 2000. Each die 2002 may be a repeating unit of a semiconductor product that includes any suitable IC (for example, an IC including at least one FET with a double-thickness gate dielectric as described herein).
After fabrication of the semiconductor product is complete (for example, after fabrication of at least one FET with a double-thickness gate dielectric as described herein, e.g., after fabrication of any embodiment of the IC structures of the transistor devices shown in FIGS. 1-9 or any other embodiment of these structures described herein), the wafer 2000 may undergo a singulation process in which the dies 2002 are separated from one another to provide discrete "chips" of the semiconductor product. In particular, a device including at least one FET with a double-thickness gate dielectric as disclosed herein may take the form of the wafer 2000 (e.g., not singulated) or the die 2002 (e.g., singulated). The die 2002 may include at least one FET with a double-thickness gate dielectric (e.g., one or more FinFETs 100 or one or more nanowire transistors 800 as described herein) and, optionally, supporting circuitry to route electrical signals to the at least one FET with a double-thickness gate dielectric, as well as any other IC components. In some embodiments, the wafer 2000 or the die 2002 may implement an RF FE device, a memory device (for example, a static random access memory (SRAM) device), a logic device (for example, an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple of these devices may be combined on a single die 2002.

FIG. 12 is a side, cross-sectional view of an example IC package 2200 that may include one or more IC structures having at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. In some embodiments, the IC package 2200 may be a system-in-package (SiP). As shown in FIG. 12, the IC package 2200 may include a package substrate 2252.
The package substrate 2252 may be formed of a dielectric material (for example, a ceramic, a glass, a combination of organic and inorganic materials, a buildup film, an epoxy film having filler particles therein, etc., and may have embedded portions formed of different materials) and may have conductive pathways extending through the dielectric material between the face 2272 and the face 2274, or between different locations on the face 2272, and/or between different locations on the face 2274.

The package substrate 2252 may include conductive contacts 2263 that are coupled to conductive pathways 2262 through the package substrate 2252, allowing circuitry within the dies 2256 and/or the interposer 2257 to be electrically coupled to respective ones of the conductive contacts 2264 (or to other devices included in the package substrate 2252, not shown).

The IC package 2200 may include an interposer 2257 coupled to the package substrate 2252 via conductive contacts 2261 of the interposer 2257, first-level interconnects 2265, and the conductive contacts 2263 of the package substrate 2252. The first-level interconnects 2265 illustrated in FIG. 12 are solder bumps, but any suitable first-level interconnects 2265 may be used. In some embodiments, no interposer 2257 may be included in the IC package 2200; instead, the dies 2256 may be coupled directly to the conductive contacts 2263 at the face 2272 by first-level interconnects 2265.

The IC package 2200 may include one or more dies 2256 coupled to the interposer 2257 via conductive contacts 2254 of the dies 2256, first-level interconnects 2258, and conductive contacts 2260 of the interposer 2257.
The conductive contacts 2260 may be coupled to conductive pathways (not shown) through the interposer 2257, allowing circuitry within the dies 2256 to be electrically coupled to various ones of the conductive contacts 2261 (or to other devices included in the interposer 2257, not shown). The first-level interconnects 2258 illustrated in FIG. 12 are solder bumps, but any suitable first-level interconnects 2258 may be used. As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an interface between different components; conductive contacts may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or a socket).

In some embodiments, an underfill material 2266 may be disposed between the package substrate 2252 and the interposer 2257 around the first-level interconnects 2265, and a mold compound 2268 may be disposed around the dies 2256 and the interposer 2257 and in contact with the package substrate 2252. In some embodiments, the underfill material 2266 may be the same as the mold compound 2268. Example materials that may be used for the underfill material 2266 and the mold compound 2268 are epoxy mold materials, as suitable. Second-level interconnects 2270 may be coupled to the conductive contacts 2264. The second-level interconnects 2270 illustrated in FIG. 12 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 2270 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement).
The second-level interconnects 2270 may be used to couple the IC package 2200 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art and as discussed below with reference to FIG. 13.

The dies 2256 may take the form of any of the embodiments of the die 2002 discussed herein and may include any of the embodiments of an IC structure having at least one FET with a double-thickness gate dielectric, e.g., any of the IC structures/transistor devices shown in FIGS. 1-9, or any other embodiments of at least one FET with a double-thickness gate dielectric described herein. In embodiments in which the IC package 2200 includes multiple dies 2256, the IC package 2200 may be referred to as a multi-chip package (MCP). The dies 2256 may include circuitry to perform any desired functionality. For example, one or more of the dies 2256 may be RF FE dies and/or logic dies including at least one FET with a double-thickness gate dielectric as described herein, one or more of the dies 2256 may be memory dies (e.g., high-bandwidth memory), etc. In some embodiments, any of the dies 2256 may include, e.g., at least one FET with a double-thickness gate dielectric as described above; in some embodiments, at least some of the dies 2256 may not include any FETs with a double-thickness gate dielectric.

The IC package 2200 illustrated in FIG. 12 is a flip-chip package, although other package architectures may be used. For example, the IC package 2200 may be a ball grid array (BGA) package, such as an embedded wafer-level ball grid array (eWLB) package. In another example, the IC package 2200 may be a wafer-level chip-scale package (WLCSP) or a panel fan-out (FO) package. Although two dies 2256 are illustrated in the IC package 2200 of FIG. 12, the IC package 2200 may include any desired number of dies 2256.
The IC package 2200 may include additional passive components, such as surface-mount resistors, capacitors, and inductors disposed on the first face 2272 or the second face 2274 of the package substrate 2252, or on either face of the interposer 2257. More generally, the IC package 2200 may include any other active or passive components known in the art.

FIG. 13 is a cross-sectional side view of an IC device assembly 2300 that may include one or more components having one or more IC structures with at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. The IC device assembly 2300 includes a number of components disposed on a circuit board 2302 (which may be, e.g., a motherboard). The IC device assembly 2300 includes components disposed on a first face 2340 of the circuit board 2302 and an opposing second face 2342 of the circuit board 2302; generally, components may be disposed on one or both faces 2340 and 2342. In particular, any suitable ones of the components of the IC device assembly 2300 may include any of the IC structures implementing at least one FET with a double-thickness gate dielectric in accordance with any of the embodiments disclosed herein; e.g., any of the IC packages discussed below with reference to the IC device assembly 2300 may take the form of any of the embodiments of the IC package 2200 discussed above with reference to FIG. 12 (e.g., may include at least one FET with a double-thickness gate dielectric provided in/on a die 2256).

In some embodiments, the circuit board 2302 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 2302.
In other embodiments, the circuit board 2302 may be a non-PCB substrate.

The IC device assembly 2300 illustrated in FIG. 13 includes a package-on-interposer structure 2336 coupled to the first face 2340 of the circuit board 2302 by coupling components 2316. The coupling components 2316 may electrically and mechanically couple the package-on-interposer structure 2336 to the circuit board 2302 and may include solder balls (as shown in FIG. 13), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.

The package-on-interposer structure 2336 may include an IC package 2320 coupled to an interposer 2304 by coupling components 2318. The coupling components 2318 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 2316. The IC package 2320 may be or include, for example, a die (e.g., the die 2002 of FIG. 11B), an IC device (e.g., the IC structures or transistor devices of FIGS. 1-9), or any other suitable component. In particular, the IC package 2320 may include at least one FET with a double-thickness gate dielectric as described herein. Although a single IC package 2320 is shown in FIG. 13, multiple IC packages may be coupled to the interposer 2304; indeed, additional interposers may be coupled to the interposer 2304. The interposer 2304 may provide an intervening substrate used to bridge the circuit board 2302 and the IC package 2320. Generally, the interposer 2304 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 2304 may couple the IC package 2320 (e.g., a die) to a BGA of the coupling components 2316 for coupling to the circuit board 2302. In the embodiment shown in FIG.
13, the IC package 2320 and the circuit board 2302 are attached to opposing sides of the interposer 2304; in other embodiments, the IC package 2320 and the circuit board 2302 may be attached to a same side of the interposer 2304. In some embodiments, three or more components may be interconnected by way of the interposer 2304.

The interposer 2304 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some embodiments, the interposer 2304 may be formed of alternate rigid or flexible materials, which may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 2304 may include metal interconnects 2310 and vias 2308, including but not limited to through-silicon vias (TSVs) 2306. The interposer 2304 may further include embedded devices 2314, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, ESD protection devices, and memory devices. More complex devices such as further RF devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 2304. In some embodiments, IC structures implementing at least one FET with a double-thickness gate dielectric as described herein may also be implemented in/on the interposer 2304. The package-on-interposer structure 2336 may take the form of any of the package-on-interposer structures known in the art.

The IC device assembly 2300 may include an IC package 2324 coupled to the first face 2340 of the circuit board 2302 by coupling components 2322.
The coupling components 2322 may take the form of any of the embodiments discussed above with reference to the coupling components 2316, and the IC package 2324 may take the form of any of the embodiments discussed above with reference to the IC package 2320.

The IC device assembly 2300 illustrated in FIG. 13 includes a package-on-package (stacked package) structure 2334 coupled to the second face 2342 of the circuit board 2302 by coupling components 2328. The package-on-package structure 2334 may include an IC package 2326 and an IC package 2332 coupled together by coupling components 2330 such that the IC package 2326 is disposed between the circuit board 2302 and the IC package 2332. The coupling components 2328 and 2330 may take the form of any of the embodiments of the coupling components 2316 discussed above, and the IC packages 2326 and 2332 may take the form of any of the embodiments of the IC package 2320 discussed above. The package-on-package structure 2334 may be configured in accordance with any of the package-on-package structures known in the art.

FIG. 14 is a block diagram of an example computing device 2400 that may include one or more components with one or more IC structures having at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. For example, any suitable ones of the components of the computing device 2400 may include a die (e.g., the die 2002 (FIG. 11B)) including at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. Any of the components of the computing device 2400 may include an IC device (e.g., any embodiment of the IC structures or transistor devices of FIGS. 1-9) and/or the IC package 2200 (FIG. 12). Any of the components of the computing device 2400 may include an IC device assembly 2300 (FIG. 13).

A number of components are illustrated in FIG. 14 as included in the computing device 2400, but any one or more of these components may be omitted or duplicated, as suitable for the application.
In some embodiments, some or all of the components included in the computing device 2400 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-chip (SoC) die.

Additionally, in various embodiments, the computing device 2400 may not include one or more of the components illustrated in FIG. 14, but the computing device 2400 may include interface circuitry for coupling to the one or more components. For example, the computing device 2400 may not include a display device 2406, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 2406 may be coupled. In another set of examples, the computing device 2400 may not include an audio input device 2418 or an audio output device 2408, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 2418 or an audio output device 2408 may be coupled.

The computing device 2400 may include a processing device 2402 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 2402 may include one or more digital signal processors (DSPs), application-specific ICs (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. The computing device 2400 may include a memory 2404, which may itself include one or more memory devices such as volatile memory (e.g., DRAM), non-volatile memory (e.g., read-only memory (ROM)), flash memory, solid-state memory, and/or a hard drive.
In some embodiments, the memory 2404 may include memory that shares a die with the processing device 2402. This memory may be used as cache memory and may include, e.g., eDRAM or spin-transfer-torque magnetic random-access memory (STT-MRAM).

In some embodiments, the computing device 2400 may include a communication chip 2412 (e.g., one or more communication chips). For example, the communication chip 2412 may be configured for managing wireless communications for the transfer of data to and from the computing device 2400. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.

The communication chip 2412 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment) and other Institute of Electrical and Electronics Engineers (IEEE) standards, and the Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra-Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16-compatible broadband wireless access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 2412 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network.
The communication chip 2412 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 2412 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In other embodiments, the communication chip 2412 may operate in accordance with other wireless protocols. The computing device 2400 may include an antenna 2422 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).

In some embodiments, the communication chip 2412 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., Ethernet). As noted above, the communication chip 2412 may include multiple communication chips. For instance, a first communication chip 2412 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 2412 may be dedicated to longer-range wireless communications such as Global Positioning System (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, and others. In some embodiments, a first communication chip 2412 may be dedicated to wireless communications, and a second communication chip 2412 may be dedicated to wired communications.

In various embodiments, IC structures with at least one FET with a double-thickness gate dielectric as described herein may be particularly advantageous for use within the one or more communication chips 2412 described above.
For example, such IC structures with at least one FET with a double-thickness gate dielectric may be used to implement one or more of power amplifiers, low-noise amplifiers, filters (including arrays of filters and filter banks), switches, upconverters, downconverters, and duplexers, e.g., as parts of implementing an RF transmitter, an RF receiver, or an RF transceiver.

The computing device 2400 may include battery/power circuitry 2414. The battery/power circuitry 2414 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 2400 to an energy source separate from the computing device 2400 (e.g., AC line power).

The computing device 2400 may include a display device 2406 (or corresponding interface circuitry, as discussed above). The display device 2406 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display.

The computing device 2400 may include an audio output device 2408 (or corresponding interface circuitry, as discussed above). The audio output device 2408 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.

The computing device 2400 may include an audio input device 2418 (or corresponding interface circuitry, as discussed above). The audio input device 2418 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).

The computing device 2400 may include a GPS device 2416 (or corresponding interface circuitry, as discussed above).
The GPS device 2416 may be in communication with a satellite-based system and may receive a location of the computing device 2400, as known in the art.

The computing device 2400 may include other output devices 2410 (or corresponding interface circuitry, as discussed above). Examples of the other output devices 2410 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.

The computing device 2400 may include other input devices 2420 (or corresponding interface circuitry, as discussed above). Examples of the other input devices 2420 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a barcode reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.

The computing device 2400 may have any desired form factor, such as a handheld or mobile computing device (e.g., a cell phone, a smartphone, a mobile Internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra-mobile personal computer, etc.), a desktop computing device, a server or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, the computing device 2400 may be any other electronic device that processes data.

FIG. 15 is a block diagram of an example RF device 2500 that may include one or more components with one or more IC structures having at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. For example, any suitable ones of the components of the RF device 2500 may include a die (e.g., the die 2002 as described with reference to FIG.
11, or a die implementing any embodiment of the IC structures or transistor devices of FIGS. 1-9), the die including at least one FET with a double-thickness gate dielectric according to any of the embodiments disclosed herein. Any of the components of the RF device 2500 may include an IC device (e.g., an IC device including any embodiment of the IC structures or transistor devices of FIGS. 1-9) and/or the IC package 2200 as described with reference to FIG. 12. Any of the components of the RF device 2500 may include the IC device assembly 2300 as described with reference to FIG. 13. In some embodiments, the RF device 2500 may be included within any of the components of the computing device 2400 as described with reference to FIG. 14, or may be coupled to any of the components of the computing device 2400, e.g., to the memory 2404 and/or the processing device 2402. In still other embodiments, the RF device 2500 may further include any of the components described with reference to FIG. 14, such as, but not limited to, the battery/power circuitry 2414, the memory 2404, and various input and output devices as shown in FIG. 14.

In general, the RF device 2500 may be any device or system that may support wireless transmission and/or reception of signals in the form of electromagnetic waves in the RF range of approximately 3 kilohertz (kHz) to approximately 300 gigahertz (GHz). In some embodiments, the RF device 2500 may be used for wireless communications, e.g., in a BS or a UE device of any suitable cellular wireless communications technology such as GSM, WCDMA, or LTE. In a further example, the RF device 2500 may be used as, or in, a BS or a UE device of a millimeter-wave wireless technology such as fifth-generation (5G) wireless (i.e., high-frequency/short-wavelength spectrum, e.g., with frequencies in a range between about 20 and 60 GHz, corresponding to wavelengths in a range between about 5 and 15 millimeters).
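The frequency-to-wavelength correspondence cited above (about 20-60 GHz corresponding to about 5-15 mm) follows directly from the free-space relation λ = c/f. As a minimal illustrative check (the band edges are the values quoted above, not additional disclosure):

```python
# Free-space wavelength for an RF signal: lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, meters per second

def wavelength_m(freq_hz: float) -> float:
    """Return the free-space wavelength in meters for a frequency in hertz."""
    return C / freq_hz

# Millimeter-wave 5G band edges: ~20 GHz -> ~15 mm, ~60 GHz -> ~5 mm.
print(round(wavelength_m(20e9) * 1e3, 1))  # ~15.0 (mm)
print(round(wavelength_m(60e9) * 1e3, 1))  # ~5.0 (mm)
```

The same relation reproduces the Wi-Fi and Bluetooth figures in the examples that follow (2.4 GHz corresponds to roughly 12 cm, and 5.8 GHz to roughly 5 cm).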
In yet another example, the RF device 2500 may be used for wireless communications using Wi-Fi technology (e.g., a frequency band of 2.4 GHz, corresponding to a wavelength of about 12 cm, or a frequency band of 5.8 GHz, corresponding to a wavelength of about 5 cm), e.g., in a Wi-Fi-enabled device such as a desktop, a laptop, a video game console, a smartphone, a tablet, a smart TV, a digital audio player, a car, a printer, etc. In some implementations, a Wi-Fi-enabled device may, e.g., be a node in a smart system configured for data communication with other nodes, e.g., smart sensors. In still another example, the RF device 2500 may be used for wireless communications using Bluetooth technology (e.g., a frequency band from about 2.4 GHz to about 2.485 GHz, corresponding to a wavelength of about 12 cm). In other embodiments, the RF device 2500 may be used for transmitting and/or receiving RF signals for purposes other than communication, e.g., in automotive radar systems, or in medical applications such as magnetic resonance imaging (MRI).

In various embodiments, the RF device 2500 may be included in frequency-division duplex (FDD) or time-domain duplex (TDD) variants of frequency allocations that may be used in a cellular network. In an FDD system, the uplink (i.e., RF signals transmitted from the UE devices to a BS) and the downlink (i.e., RF signals transmitted from the BS to the UE devices) may use separate frequency bands at the same time. In a TDD system, the uplink and the downlink may use the same frequencies but at different times.

A number of components are illustrated in FIG. 15 as included in the RF device 2500, but any one or more of these components may be omitted or duplicated, as suitable for the application.
For example, in some embodiments, the RF device 2500 may be an RF device supporting both wireless transmission and reception of RF signals (e.g., an RF transceiver), in which case it may include both the components of what is referred to herein as a transmit (TX) path and the components of what is referred to herein as a receive (RX) path. However, in other embodiments, the RF device 2500 may be an RF device supporting only wireless reception (e.g., an RF receiver), in which case it may include the components of the RX path but not the components of the TX path; or the RF device 2500 may be an RF device supporting only wireless transmission (e.g., an RF transmitter), in which case it may include the components of the TX path but not the components of the RX path.

In some embodiments, some or all of the components included in the RF device 2500 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated on a single die, e.g., on a single SoC die.

Additionally, in various embodiments, the RF device 2500 may not include one or more of the components illustrated in FIG. 15, but the RF device 2500 may include interface circuitry for coupling to the one or more components. For example, the RF device 2500 may not include the antenna 2502, but may include antenna interface circuitry (e.g., a matching circuit, a connector, and driver circuitry) to which the antenna 2502 may be coupled. In another set of examples, the RF device 2500 may not include the digital processing unit 2508 or the local oscillator 2506, but may include device interface circuitry (e.g., connectors and supporting circuitry) to which the digital processing unit 2508 or the local oscillator 2506 may be coupled.

As shown in FIG. 15, the RF device 2500 may include an antenna 2502, a duplexer 2504, a local oscillator 2506, and a digital processing unit 2508. As also shown in FIG.
15, the RF device 2500 may include an RX path, which may include an RX path amplifier 2512, an RX path pre-mix filter 2514, an RX path mixer 2516, an RX path post-mix filter 2518, and an analog-to-digital converter (ADC) 2520. As further shown in FIG. 15, the RF device 2500 may include a TX path, which may include a TX path amplifier 2522, a TX path post-mix filter 2524, a TX path mixer 2526, a TX path pre-mix filter 2528, and a digital-to-analog converter (DAC) 2530. Further, the RF device 2500 may also include an impedance tuner 2532 and an RF switch 2534. In various embodiments, the RF device 2500 may include multiple instances of any of the components shown in FIG. 15. The RX path amplifier 2512, the TX path amplifier 2522, the duplexer 2504, and the RF switch 2534 may be considered to form, or to be parts of, an RF FE of the RF device 2500. The RX path mixer 2516 and the TX path mixer 2526 (possibly with their associated pre-mix and post-mix filters shown in FIG. 15) may be considered to form, or to be parts of, an RF transceiver of the RF device 2500 (or an RF receiver or an RF transmitter if only RX path or TX path components, respectively, are included in the RF device 2500).

The antenna 2502 may be configured to wirelessly transmit and/or receive RF signals in accordance with any wireless standards or protocols, e.g., Wi-Fi, LTE, or GSM, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. If the RF device 2500 is an FDD transceiver, the antenna 2502 may be configured for concurrent reception and transmission of communication signals in separate, i.e., non-overlapping and non-adjacent, frequency bands, e.g., frequency bands separated by, for example, 20 MHz from one another.
If the RF device 2500 is a TDD transceiver, the antenna 2502 may be configured for sequential reception and transmission of communication signals in frequency bands that may be the same or overlapping for the TX and RX paths. In some embodiments, the RF device 2500 may be a multi-band RF device, in which case the antenna 2502 may be configured for concurrent reception of signals having multiple RF components in separate frequency bands and/or configured for concurrent transmission of signals having multiple RF components in separate frequency bands. In such embodiments, the antenna 2502 may be a single wide-band antenna or a plurality of band-specific antennas (i.e., a plurality of antennas each configured to receive and/or transmit signals in a specific band of frequencies). In various embodiments, the antenna 2502 may include a plurality of antenna elements, e.g., a plurality of antenna elements forming a phased antenna array (i.e., a communication system or an array of antennas that may use a plurality of antenna elements and phase shifting to transmit and receive RF signals). Compared to a single-antenna system, a phased antenna array may offer advantages such as increased gain, the ability to perform directional steering, and simultaneous communication. In some embodiments, the RF device 2500 may include more than one antenna 2502 to implement antenna diversity. In some such embodiments, the RF switch 2534 may be deployed to switch between the different antennas.

An output of the antenna 2502 may be coupled to an input of the duplexer 2504. The duplexer 2504 may be any suitable component configured for filtering multiple signals to allow bidirectional communication over a single path between the duplexer 2504 and the antenna 2502.
The duplexer 2504 may be configured for providing RX signals to the RX path of the RF device 2500 and for receiving TX signals from the TX path of the RF device 2500.

The RF device 2500 may include one or more local oscillators 2506 configured to provide local oscillator signals that may be used for downconversion of the RF signals received by the antenna 2502 and/or for upconversion of the signals to be transmitted by the antenna 2502.

The RF device 2500 may include the digital processing unit 2508, which may include one or more processing devices. In some embodiments, the digital processing unit 2508 may be implemented as the processing device 2402 shown in FIG. 14, descriptions of which are provided above (when used as the digital processing unit 2508, the processing device 2402 may, but does not have to, implement any of the IC structures and/or electronic devices as described herein, e.g., any of the IC structures and/or electronic devices with at least one FET with a double-thickness gate dielectric in accordance with any of the embodiments disclosed herein). The digital processing unit 2508 may be configured to perform various functions related to digital processing of the RX and/or TX signals. Examples of such functions include, but are not limited to, decimation/downsampling, error correction, digital downconversion or upconversion, DC offset cancellation, and automatic gain control. Although not illustrated in FIG. 15, in some embodiments, the RF device 2500 may further include a memory device, e.g., the memory device 2404 as described with reference to FIG. 14, configured to cooperate with the digital processing unit 2508.
When used in or coupled to the RF device 2500, the memory device 2404 may, but does not have to, implement any of the IC structures described herein, for example, an IC structure having at least one FET with a double-thickness gate dielectric according to any embodiment disclosed herein. Turning to the details of the RX path that may be included in the RF device 2500, the RX path amplifier 2512 may include an LNA. The input of the RX path amplifier 2512 may be coupled to an antenna port (not shown) of the antenna 2502, for example via the duplexer 2504. The RX path amplifier 2512 may amplify the RF signal received by the antenna 2502. The output of the RX path amplifier 2512 may be coupled to the input of the RX path pre-mix filter 2514, which may be, for example, a harmonic or band-pass filter configured to filter the amplified received RF signal. The output of the RX path pre-mix filter 2514 may be coupled to the input of the RX path mixer 2516 (also referred to as a downconverter). The RX path mixer 2516 may include two inputs and one output. The first input may be configured to receive an RX signal, which may be a current signal indicative of the signal received by the antenna 2502 (e.g., the first input may receive the output of the RX path pre-mix filter 2514). The second input may be configured to receive a local oscillator signal from one of the local oscillators 2506. The RX path mixer 2516 may then mix the signals received at its two inputs to generate a down-converted RX signal, which is provided at the output of the RX path mixer 2516. As used herein, down-conversion refers to the process of mixing a received RF signal with a local oscillator signal to produce a lower-frequency signal. In particular, the downconverter 2516 may be configured to generate sum and/or difference frequencies at the output port when two input frequencies are provided at the two input ports.
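The sum- and difference-frequency behavior of an ideal multiplying downconverter described above can be sketched numerically. The following NumPy snippet is purely illustrative (the function name and the tone frequencies are arbitrary choices, not part of the disclosure):

```python
import numpy as np

def mix(f_rf, f_lo, fs=100_000, duration=1.0):
    """Multiply (mix) two tones and report the dominant output frequencies.

    An ideal multiplying mixer obeys cos(a)*cos(b) = 0.5*cos(a+b) + 0.5*cos(a-b),
    so its output contains the sum and the difference of the input frequencies.
    """
    t = np.arange(0, duration, 1 / fs)
    product = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)
    spectrum = np.abs(np.fft.rfft(product))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    # Report the two strongest spectral peaks, lowest first.
    return sorted(round(f) for f in freqs[np.argsort(spectrum)[-2:]])

# A 10 kHz "RF" tone mixed with a 9 kHz "LO" yields the 1 kHz difference
# frequency (down-conversion) and the 19 kHz sum frequency.
print(mix(10_000, 9_000))  # [1000, 19000]
```

In a receiver, a post-mix low-pass filter then keeps only the lower of the two products.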
In some embodiments, the RF device 2500 may implement a direct-conversion receiver (DCR), also known as a homodyne, synchrodyne, or zero-IF receiver. In this case, the RX path mixer 2516 may be configured to demodulate the incoming radio signal using a local oscillator signal whose frequency is identical to, or very close to, the carrier frequency of the radio signal. In other embodiments, the RF device 2500 may utilize down-conversion to an intermediate frequency (IF). An IF may be used in a superheterodyne radio receiver, in which the received RF signal is shifted to an IF before the final detection of the information in the received signal is done. Conversion to an IF may be useful for several reasons. For example, when several stages of filters are used, they can all be set to a fixed frequency, which makes them easier to build and to tune. In some embodiments, the RX path mixer 2516 may include several such IF conversion stages. Although a single RX path mixer 2516 is shown in the RX path of FIG. 15, in some embodiments the RX path mixer 2516 may be implemented as a quadrature downconverter, in which case it would include a first RX path mixer and a second RX path mixer. The first RX path mixer may be configured to perform down-conversion to generate an in-phase (I) down-converted RX signal by mixing the RX signal received by the antenna 2502 with the in-phase component of the local oscillator signal provided by the local oscillator 2506. The second RX path mixer may be configured to perform down-conversion to generate a quadrature (Q) down-converted RX signal by mixing the RX signal received by the antenna 2502 with the quadrature component of the local oscillator signal provided by the local oscillator 2506 (the quadrature component is a component that is offset, in phase, from the in-phase component of the local oscillator signal by 90 degrees).
The output of the first RX path mixer may be provided to an I-signal path, and the output of the second RX path mixer may be provided to a Q-signal path, where the Q-signal path and the I-signal path differ in phase by substantially 90 degrees. The output of the RX path mixer 2516 may optionally be coupled to an RX path post-mix filter 2518, which may be a low-pass filter. In case the RX path mixer 2516 is a quadrature mixer implementing the first and second mixers as described above, the in-phase and quadrature components provided at the outputs of the first and second mixers, respectively, may be coupled to respective individual first and second RX path post-mix filters included in the filter 2518. The ADC 2520 may be configured to convert the mixed RX signal from the RX path mixer 2516 from the analog domain to the digital domain. The ADC 2520 may be a quadrature ADC that, similar to the RX path quadrature mixer 2516, may include two ADCs configured to digitize the down-converted RX path signal separated into in-phase and quadrature components. The output of the ADC 2520 may be provided to the digital processing unit 2508, configured to perform various functions related to the digital processing of the RX signal so that the information encoded in the RX signal can be extracted. Turning to the details of the TX path that may be included in the RF device 2500, the digital signal to be later transmitted (the TX signal) by the antenna 2502 may be provided from the digital processing unit 2508 to the DAC 2530.
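A rough numerical sketch of the quadrature down-conversion described above is given below. It is idealized: plain multiplication models each mixer, and averaging over an integer number of carrier cycles stands in for the post-mix low-pass filter; all frequencies and the test phase are arbitrary illustrative values, not taken from the disclosure:

```python
import numpy as np

def iq_downconvert(rf, f_lo, fs):
    """Mix an RF signal with the in-phase (cos) and quadrature (-sin, i.e.
    90-degree-shifted) components of the LO, then low-pass filter.  Averaging
    over an integer number of LO cycles serves as a crude low-pass filter."""
    t = np.arange(len(rf)) / fs
    i = rf * np.cos(2 * np.pi * f_lo * t)
    q = rf * -np.sin(2 * np.pi * f_lo * t)
    return 2 * i.mean(), 2 * q.mean()  # factor 2 undoes the mixing loss

fs, f_c = 1_000_000, 100_000            # sample rate and carrier, in Hz
t = np.arange(0, 0.01, 1 / fs)
phase = np.pi / 3                       # "unknown" phase of the received carrier
rf = np.cos(2 * np.pi * f_c * t + phase)
i, q = iq_downconvert(rf, f_c, fs)
print(round(float(np.arctan2(q, i)), 3))  # 1.047, i.e. the phase pi/3 is recovered
```

The point of the I/Q pair is exactly this: together they preserve both the amplitude and the phase of the received carrier, which a single mixer output cannot.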
Similar to the ADC 2520, the DAC 2530 may include two DACs configured to convert, respectively, the digital I-path and Q-path TX signal components into analog form. Optionally, the output of the DAC 2530 may be coupled to the TX path pre-mix filter 2528, which may be a low-pass filter (or a pair of low-pass filters, in the case of quadrature processing) configured to filter out, from the analog TX signal output by the DAC 2530, signal components outside of the desired band. The analog TX signal may then be provided to the TX path mixer 2526, which may also be referred to as an upconverter. Similar to the RX path mixer 2516, the TX path mixer 2526 may include a pair of TX path mixers for in-phase and quadrature component mixing. Similar to the first and second RX path mixers that may be included in the RX path, each of the TX path mixers of the TX path mixer 2526 may include two inputs and one output. A first input may receive the TX signal components converted into analog form by the respective DAC 2530, which are to be up-converted to generate the RF signal to be transmitted. The first TX path mixer may generate an in-phase (I) upconverted signal by mixing the TX signal component converted into analog form by the DAC 2530 with the in-phase component of the TX path local oscillator signal provided by the local oscillator 2506 (in various embodiments, the local oscillator 2506 may include a plurality of different local oscillators, or may be configured to provide different local oscillator frequencies to the mixer 2516 in the RX path and to the mixer 2526 in the TX path). The second TX path mixer may generate a quadrature-phase (Q) upconverted signal by mixing the TX signal component converted into analog form by the DAC 2530 with the quadrature component of the TX path local oscillator signal. The output of the second TX path mixer may be added to the output of the first TX path mixer to create the actual RF signal.
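The effect of combining the two TX mixer outputs can be illustrated with a short sketch. One common sign convention for the quadrature sum is assumed (the disclosure does not fix one), and the frequencies are arbitrary illustrative values; with that convention, the combination cancels the unwanted image at f_lo - f_bb and leaves a single tone at f_lo + f_bb:

```python
import numpy as np

fs = 1_000_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
f_lo, f_bb = 100_000, 5_000          # LO and baseband tone frequencies, Hz

# Quadrature up-conversion: combine the I mixer output with the Q mixer
# output (here with a minus sign, one common convention).
i_bb = np.cos(2 * np.pi * f_bb * t)
q_bb = np.sin(2 * np.pi * f_bb * t)
rf = i_bb * np.cos(2 * np.pi * f_lo * t) - q_bb * np.sin(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(rf))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spectrum.argmax()])      # 105000.0 -- only f_lo + f_bb survives
```

This follows from cos(a)cos(b) - sin(a)sin(b) = cos(a + b): the difference-frequency (image) terms of the two mixers cancel on addition.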
The second input of each of the TX path mixers may be coupled to the local oscillator 2506. Optionally, the RF device 2500 may include a TX path post-mix filter 2524 configured to filter the output of the TX path mixer 2526. The TX path amplifier 2522 may be a PA configured to amplify the upconverted RF signal before providing it to the antenna 2502 for transmission. In various embodiments, any of the RX path pre-mix filter 2514, the RX path post-mix filter 2518, the TX path post-mix filter 2524, and the TX path pre-mix filter 2528 may be implemented as RF filters. In some embodiments, each such RF filter may include one or more, typically a plurality of, resonators (e.g., film bulk acoustic resonators (FBARs), Lamb wave resonators, and/or contour-wave resonators), arranged, for example, in a ladder configuration. An individual resonator of an RF filter may include a layer of a piezoelectric material such as aluminum nitride (AlN), enclosed between a bottom electrode and a top electrode, with a cavity provided around a portion of each electrode to allow a portion of the piezoelectric material to vibrate during operation of the filter. In some embodiments, an RF filter may be implemented as a plurality of RF filters, or a filter bank. A filter bank may include a plurality of RF resonators that may be coupled to a switch, for example, the RF switch 2534, configured to selectively switch any one of the plurality of RF resonators on and off (i.e., activate any one of the plurality of RF resonators) in order to achieve a desired filtering characteristic of the filter bank (i.e., to program the filter bank). For example, such a filter bank may be used to switch between different RF frequency ranges when the RF device 2500 is, or is included in, a BS or a UE device.
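At a very high level, a switchable filter bank of the kind described can be modeled as a set of passbands behind on/off switches. The class name, method names, and band names below are purely illustrative assumptions (the frequency ranges echo 5G NR bands n77 and n79, which are not taken from the text):

```python
class FilterBank:
    """Toy behavioral model of a programmable RF filter bank: each "resonator"
    contributes one passband that an RF switch can enable or disable."""

    def __init__(self, passbands):
        self.passbands = passbands   # name -> (low_hz, high_hz)
        self.enabled = set()

    def switch_on(self, name):
        self.enabled.add(name)

    def switch_off(self, name):
        self.enabled.discard(name)

    def passes(self, freq_hz):
        """True if freq_hz lies inside any enabled passband."""
        return any(lo <= freq_hz <= hi
                   for name, (lo, hi) in self.passbands.items()
                   if name in self.enabled)

# Program the bank for one frequency range, then add a second one.
bank = FilterBank({"band_n77": (3.3e9, 4.2e9), "band_n79": (4.4e9, 5.0e9)})
bank.switch_on("band_n77")
print(bank.passes(3.5e9), bank.passes(4.5e9))  # True False
bank.switch_on("band_n79")
print(bank.passes(4.5e9))                      # True
```

A real filter bank is of course an analog network; the sketch only captures the programmability aspect, i.e. that the switch state selects which passbands are active.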
In another example, such a filter bank may be programmable to suppress TX leakage at different duplex distances. The impedance tuner 2532 may include any suitable circuitry configured to match the input and output impedances of the different RF circuits to minimize signal losses in the RF device 2500. For example, the impedance tuner 2532 may include an antenna impedance tuner. Being able to tune the impedance of the antenna 2502 may be particularly advantageous because the antenna's impedance is a function of the environment in which the RF device 2500 is located, e.g., the antenna's impedance changes depending on whether the antenna is held in a hand, placed on a roof, etc. As described above, the RF switch 2534 may be used to selectively switch between multiple instances of any one of the components shown in FIG. 15 in order to achieve the desired behavior and characteristics of the RF device 2500. For example, in some embodiments, an RF switch may be used to switch between different antennas 2502. In other embodiments, an RF switch may be used to switch between the multiple RF resonators of any of the filters included in the RF device 2500 (e.g., by selectively switching RF resonators on and off). In various embodiments, FETs with double-thickness gate dielectrics as described herein may be particularly advantageous when used in any of the duplexer 2504, the RX path amplifier 2512, the RX path pre-mix filter 2514, the RX path post-mix filter 2518, the TX path amplifier 2522, the TX path pre-mix filter 2528, the TX path post-mix filter 2524, the impedance tuner 2532, and/or the RF switch 2534. The RF device 2500 as illustrated provides a simplified version, and in further embodiments it may include other components not specifically shown in FIG. 15. For example, the RX path of the RF device 2500 may include a current-to-voltage amplifier between the RX path mixer 2516 and the ADC 2520, which may be configured to amplify and convert the down-converted signals to voltage signals.
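The benefit of impedance tuning mentioned above can be quantified through the reflection coefficient. The sketch below uses illustrative values and assumes a purely resistive load in a 50-ohm system (assumptions, not figures from the text):

```python
import math

def reflection_coefficient(z_load, z_source=50.0):
    """Gamma = (ZL - Z0) / (ZL + Z0): 0 means a perfect match, so all power
    is delivered; magnitude 1 means total reflection."""
    return (z_load - z_source) / (z_load + z_source)

def mismatch_loss_db(z_load, z_source=50.0):
    """Power lost to the impedance mismatch, in dB."""
    gamma = abs(reflection_coefficient(z_load, z_source))
    return 10 * math.log10(1 / (1 - gamma ** 2))

# An antenna that drifts from a matched 50 ohm to 20 ohm (e.g., a hand effect):
print(round(mismatch_loss_db(50.0), 3))  # 0.0
print(round(mismatch_loss_db(20.0), 3))  # 0.881
```

An antenna impedance tuner effectively drives the load seen by the rest of the chain back toward the matched value, reducing this loss to near zero.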
In another example, the RX path of the RF device 2500 may include a balun for generating balanced signals. In yet another example, the RF device 2500 may also include a clock generator, which may, for example, include a suitable PLL configured to receive a reference clock signal and use it to generate different clock signals that may then be used to time the operation of the ADC 2520 and/or the DAC 2530, and which may also be used by the local oscillator 2506 to generate the local oscillator signals to be used in the RX path or the TX path.

SELECT EXAMPLES

The following paragraphs provide various examples of the embodiments disclosed herein.

Example 1 provides a transistor device that includes a semiconductor (channel) material provided over a portion of a support structure (e.g., a substrate, a die, or a chip); a source region and a drain region provided in the semiconductor material; and a gate stack provided over a portion of the semiconductor material between the source region and the drain region, where the portion includes a first portion and a second portion.
The gate stack includes one or more gate electrode materials; a first gate dielectric provided between the first portion of the semiconductor material and the one or more gate electrode materials; and a second gate dielectric provided between the second portion of the semiconductor material and the one or more gate electrode materials, where a thickness of the first gate dielectric is different from a thickness of the second gate dielectric.

Example 2 provides the transistor device according to Example 1, where the first portion of the semiconductor material is closer to the source region than the second portion of the semiconductor material, and the second portion of the semiconductor material is closer to the drain region than the first portion of the semiconductor material.

Example 3 provides the transistor device according to Example 2, where a distance between the second portion of the semiconductor material and the drain region is between about 10 and 1000 nanometers.

Example 4 provides the transistor device according to Example 2 or 3, where the thickness of the second gate dielectric (i.e., the gate dielectric closest to the drain region) is greater than the thickness of the first gate dielectric (i.e., the gate dielectric closest to the source region); for example, the thickness of the second gate dielectric may be between about 1.1 and 5 times greater (e.g., about 2 or about 3 times greater) than the thickness of the first gate dielectric.

Example 5 provides the transistor device according to Example 4, where a dielectric constant of the second gate dielectric (i.e., the thicker gate dielectric) is at least 3 times smaller than a dielectric constant of the first gate dielectric (i.e., the thinner gate dielectric).

Example 6 provides the transistor device according to any one of Examples 1-5, where the first portion of the semiconductor material includes dopants of a first type (e.g., P-type dopants) and the second portion of the semiconductor material includes dopants of a second type (e.g., N-type dopants). In some such examples, a portion of the semiconductor material between the second portion and the drain region may also include dopants of the second type (e.g., N-type dopants).

Example 7 provides the transistor device according to any one of Examples 1-5, where each of the first portion and the second portion of the semiconductor material includes dopants of a first type (e.g., P-type dopants), and a portion of the semiconductor material between the second portion and the drain region includes dopants of a second type (e.g., N-type dopants).

Example 8 provides the transistor device according to any one of Examples 1-5, where the first portion of the semiconductor material includes dopants of a first type (e.g., P-type dopants), a portion of the second portion of the semiconductor material that is closest to the first portion includes dopants of the first type (e.g., P-type dopants), and a portion of the second portion of the semiconductor material that is between the portion of the second portion closest to the first portion and the drain region
(e.g., the remainder of the second portion of the semiconductor material) includes dopants of a second type (e.g., N-type dopants).

Example 9 provides the transistor device according to any one of Examples 1-5, where a portion of the first portion of the semiconductor material that is closest to the source region includes dopants of a first type (e.g., P-type dopants), a portion of the first portion of the semiconductor material that is between the portion of the first portion closest to the source region and the second portion of the semiconductor material (i.e., the remainder of the first portion of the semiconductor material) includes dopants of a second type (e.g., N-type dopants), and the second portion of the semiconductor material includes dopants of the second type (e.g., N-type dopants). In some such examples, a portion of the semiconductor material between the second portion and the drain region may also include dopants of the second type (e.g., N-type dopants).

Example 10 provides the transistor device according to any one of Examples 6-9, where the dopants of the first type are at a dopant concentration between about 1×10^16 and about 1×10^18 dopant atoms per cubic centimeter, and/or the dopants of the second type are at a dopant concentration between about 1×10^16 and about 1×10^18 dopant atoms per cubic centimeter.

Example 11 provides the transistor device according to any one of Examples 6-10, where each of the source region and the drain region includes dopants of the second type, and a dopant concentration of the dopants of the second type in each of the source region and the drain region is at least about 1×10^21 dopant atoms per cubic centimeter.

Example 12 provides the transistor device according to any one of the preceding Examples, where the one or more gate electrode materials above the first gate dielectric include a work function (WF) material and a gate electrode material such that the WF material is between the gate electrode material and the first gate dielectric, and the one or more gate electrode materials above the second gate dielectric include the gate electrode material in contact with the second gate dielectric. Thus, in some examples, the WF material may be provided over the first gate dielectric (e.g., the gate dielectric closest to the source region) and not over the second gate dielectric. In other examples, the same or different WF materials may be provided over the first and second gate dielectrics.

Example 13 provides the transistor device according to any one of the preceding Examples, where each of the source region and the gate stack is electrically coupled to a ground potential, and the drain region is electrically coupled to each of an input/output port and a further circuit to be protected by the transistor device.

Example 14 provides the transistor device according to Example 13, where the further circuit is a receiver circuit.

Example 15 provides an electronic device that includes an input/output (I/O) port; a receiver circuit having an input coupled to the I/O port; and an electrostatic discharge (ESD) protection circuit coupled to the I/O port and to the input of the receiver circuit, where the ESD circuit includes a transistor having a source region, a drain region, and a gate stack. Each of the source region and the gate stack is coupled to a ground potential.
The drain region is coupled to the I/O port and to the input of the receiver circuit, a first portion of the gate stack includes a first gate dielectric, a second portion of the gate stack includes a second gate dielectric, a thickness of the first gate dielectric is smaller than a thickness of the second gate dielectric, and the first portion of the gate stack is closer to the source region than the second portion of the gate stack (and the second portion of the gate stack is closer to the drain region than the first portion of the gate stack).

Example 16 provides the electronic device according to Example 15, further including a diode coupled between the ground potential and the I/O port.

Example 17 provides the electronic device according to Example 16, where the electronic device further includes a silicon-controlled rectifier (SCR) circuit, the drain region is coupled to the I/O port and to the input of the receiver circuit by being coupled to the SCR circuit, and the SCR circuit is coupled to the I/O port and to the input of the receiver circuit.

Example 18 provides the electronic device according to any one of Examples 15-17, where the transistor is an extended-drain transistor.

In various further examples, the transistor of the electronic device according to any one of Examples 15-18 may be implemented as a transistor device according to any one of the preceding examples (e.g., a transistor device according to any one of Examples 1-14).

Example 19 provides a method of forming a transistor device, the method including providing a source region and a drain region in a semiconductor (channel) material provided over a portion of a support structure (e.g., a substrate, a die, or a chip); and providing a gate stack over a portion of the semiconductor material between the source region and the drain region, where the portion includes a first portion and a second portion, and the gate stack
includes one or more gate electrode materials, a first gate dielectric provided between the first portion of the semiconductor material and the one or more gate electrode materials, and a second gate dielectric provided between the second portion of the semiconductor material and the one or more gate electrode materials, where a thickness of the first gate dielectric is different from a thickness of the second gate dielectric.

Example 20 provides the method according to Example 19, where the first portion of the semiconductor material is closer to the source region than the second portion of the semiconductor material, the second portion of the semiconductor material is closer to the drain region than the first portion of the semiconductor material, and the thickness of the second gate dielectric (i.e., the gate dielectric closest to the drain region) is greater than the thickness of the first gate dielectric.

Example 21 provides an IC package that includes an IC die including one or more of the transistor devices and/or electronic devices according to any one of the preceding examples (e.g., one or more of the transistor devices according to any one of Examples 1-14 and/or one or more of the electronic devices according to any one of Examples 15-18); and a further component coupled to the IC die.

Example 22 provides the IC package according to Example 21, where the further component is one of a package substrate, a flexible substrate, or an interposer.

Example 23 provides the IC package according to Example 21 or 22, where the further component is coupled to the IC die via one or more first-level interconnects.

Example 24 provides the IC package according to Example 23, where the one or more first-level interconnects include one or more solder bumps, solder posts, or bond wires.

Example 25 provides a computing device that includes a circuit board; and an integrated circuit (IC) die coupled to the circuit board, wherein the IC die
includes one or more of the transistor devices and/or electronic devices according to any one of the preceding examples (e.g., one or more of the transistor devices according to any one of Examples 1-14 and/or one or more of the electronic devices according to any one of Examples 15-18), and/or is included in an IC package according to any one of the preceding examples (e.g., an IC package according to any one of Examples 21-24).

Example 26 provides the computing device according to Example 25, where the computing device is a wearable computing device (e.g., a smart watch) or a handheld computing device (e.g., a mobile phone).

Example 27 provides the computing device according to Example 25 or 26, where the computing device is a server processor.

Example 28 provides the computing device according to Example 25 or 26, where the computing device is a motherboard.

Example 29 provides the computing device according to any one of Examples 25-28, where the computing device further includes one or more communication chips and an antenna.

The above description of the illustrated embodiments of the present disclosure, including what is described in the abstract, is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. While specific embodiments of, and examples for, the present disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize. These modifications may be made to the present disclosure in light of the above description.
A method of forming an insulating structure, comprising forming an insulating region comprising at least one electrical or electronic component or part thereof embedded within the insulating region, and forming a surface structure in a surface of the insulating region. |
1. A method of forming an insulating structure, comprising forming an insulating region (62, 90, 112, 114) comprising at least one electrical or electronic component or part thereof (100, 270, 360) embedded within the insulating region, and forming a surface structure (120) in a surface of the insulating region.

2. A method as claimed in claim 1, in which the step of forming the insulating region comprises depositing M layers of insulator (82, 90, 92) over a substrate (50), where M is an integer greater than or equal to one.

3. A method as claimed in claim 2, further comprising forming a component, or part of a component (100), over the Mth layer (92) of insulator.

4. A method as claimed in claim 3, further comprising depositing N layers (112, 114) of insulator over the M layers (90, 92) of insulator and the or each component or part of a component formed thereon.

5. A method as claimed in claim 4, further comprising selective etching of an uppermost layer of insulator so as to form a plurality of trenches therein.

6. A method as claimed in claim 4 or 5, in which at least one of the following apply: a) the method further comprises forming a component, or part of a component, prior to depositing the M layers of insulator; b) the method further comprises forming a component or part of a component on a substrate, and depositing the M layers of insulator over a portion of the substrate containing the component or the part of the component.

7. A method as claimed in claim 4, 5 or 6, in which the component is a transformer comprising a first coil (52) formed beneath the M layers of insulator and a second coil (100) formed between the Mth layer and the N layers of insulator.

8. A method of forming an isolator for receiving an input at a first potential and converting it to an output at a second potential, comprising forming an input circuit on a first semiconductor substrate, forming an output circuit on a second semiconductor substrate, and interconnecting the input and output
circuits by a transformer as claimed in claim 7.

9. A method of forming an isolator as claimed in claim 8, in which the transformer is formed on one of the first and second semiconductor substrates, or is formed on a third substrate, and preferably in which the isolator is formed using fabrication techniques compatible with or used for the formation of integrated circuits, and the isolator is packaged within a chip scale package.

10. A method as claimed in claim 2 or any claim dependent on claim 2, in which the layers of insulator are formed by depositing and curing layers of polyimide, or of other insulating polymers or oxides.

11. A method as claimed in any of claims 1 to 8, further comprising packaging the insulating structure within an integrated circuit package (160), wherein the insulating structure is embedded in a packaging mould compound (170) within the integrated circuit package (160).

12. An insulating structure within an integrated circuit package (160), the insulating structure comprising an electrical or electronic component, or at least a part thereof (100, 270, 360), embedded within the insulating structure (114), wherein the insulating structure has a textured or ribbed surface.

13. An insulating structure as claimed in claim 12, further comprising a mould compound (170) holding the insulating structure within the integrated circuit package, where the mould compound engages with the textured or ribbed surface of the insulating structure.

14. An insulating structure as claimed in claim 12 or 13, wherein the insulating structure is formed over a first substrate, and a first part of a signal conveying component is formed on or over the first substrate, and a second part of the signal conveying component is embedded within the insulating structure, and optionally in which the signal conveying component is a transformer having first and second coils.

15. An isolator comprising a first circuit for at least receiving an input signal and a second circuit for at least outputting an output
signal, and a transformer formed within an insulating structure as claimed in claim 14, and in which at least the first and second circuits are formed on respective substrates carried on respective lead frames within the integrated circuit package.

16. A method of manufacturing an electrical device comprising at least one electrical or electronic circuit provided within a package that carries connectors for establishing electrical connection to the circuit, and wherein the circuit comprises at least one component encapsulated in an insulator, the method comprising forming a plurality of structures extending substantially perpendicularly to a surface of the insulator, and embedding the circuit encapsulated within the insulator in a mould compound within the package, wherein the mould compound engages with the plurality of structures at the surface of the insulator.
FIELD OF THE INVENTION

The present disclosure relates to the formation of insulating structures at integrated circuit or chip scale dimensions, and the inclusion of such structures within typical integrated circuit packages.

BACKGROUND

Most electronic circuits are implemented within microelectronic circuits, commonly referred to as "chips". Such a chip comprises a semiconductor die carrying the microelectronic circuit encapsulated within a plastics case. This enables the chip to be bonded or soldered to circuit boards and the like for incorporation into more complex products. Many applications of microelectronic circuitry may require interfacing from a relatively low voltage side, where for example the supply rails may differ from each other by only a few volts, to higher voltage components as might be found in the energy, signaling, automation, communications or motor control arenas. There are also safety critical applications, such as medical applications, where high voltages must not be allowed to propagate from the circuit towards a patient being monitored. Although these high voltages may not be generated deliberately, they might occur in certain fault modes, for example if a power supply were to develop a fault. It is known to isolate the low voltage and high voltage sides of a circuit from one another using "isolators". These have typically involved discrete components, such as signal transformers, being mounted on a circuit board between the low voltage side of the board and the high voltage side of the board. More recently "chip scale" isolators have become available. Within a "chip scale" isolator the low voltage and high voltage sides of the circuit are provided within a plastics package of the type known in the provision of integrated circuits, such as a dual in-line package. The reduced dimensions in chip scale isolators start to give rise to breakdown mechanisms not seen in non-chip scale, i.e.
discrete component, isolators.

SUMMARY

According to a first aspect of the present invention there is provided a method of forming an insulating structure, comprising forming an insulating region comprising at least one electrical or electronic component, or part thereof, embedded within the insulating region, and forming a surface structure in a surface of the insulating region.

The surface structure may comprise features that extend from, or are recessed into, what would otherwise be a planar surface of the insulating region. The use of a non-planar surface increases the length of a discharge path between two points on the surface compared to the use of a planar surface. This increased discharge path length acts to inhibit breakdown mechanisms operating at the surface of the insulator, or at the boundary between two insulators. Provision of a non-planar surface also gives the possibility of greater mechanical or chemical adhesion between the surfaces.

Preferably the surface structure comprises a plurality of trenches or a plurality of walls, or a combination thereof. Trenches can be regarded as being bounded by walls. Such a structure gives rise to a reasonably abrupt change in the direction of the surface and, without being bound by any particular theory, such abrupt changes seem to be effective at inhibiting current flow along the surface of the insulator. Such structures also provide enhanced mechanical keying between the insulating region and the insulating mould compounds that are routinely provided in the field of semiconductor packaging when placing semiconductor dies within plastic "chip" packages.

The insulating region may be formed by depositing a plurality of insulating layers over a substrate. The insulating layers may be formed of polyimide or any other suitable material.
Other suitable insulators may include benzocyclobutene (BCB), epoxy and/or polymers such as SU8 (which is used in semiconductor fabrication as a photoresist), or insulating oxides such as silicon dioxide. The substrate might be an insulating substrate, such as glass, polymer or plastics, or may be a semiconductor substrate such as silicon. Part of the coupling component may be provided on the silicon substrate. Thus, using conventional integrated circuit processing techniques, a coil such as an aluminum coil may be fabricated over a silicon substrate. Other metals may also be used. The coil may itself be covered by a passivation layer, such as silicon dioxide. One or more layers of a suitable insulating material, such as polyimide, may then be deposited above the coil. This allows the thickness of the polyimide, and hence the breakdown voltage of the insulator, to be controlled. Then a second coil, for example of aluminum, gold or another suitable metal, may be deposited over an upper surface of the polyimide in order that the two coils cooperate to form a transformer. One or more further layers of insulation, such as polyimide, BCB, SU8 etc., may then be placed over the additional coil. Each layer of insulation may lie above, and extend around the sides of, each preceding layer. If desired, further components embedded in further layers of insulator may be fabricated. Finally the outermost layer of insulator may be masked and selectively etched so as to form a pattern of trenches therein which, in use, can inter-engage with a mould compound, or indeed a further insulating compound, in order to form a secure bond over the uppermost layer of polyimide.
Such a structure inhibits the breakdown mechanisms resulting from "tracking" across the surface of the polyimide.

According to a further aspect there is provided an insulating structure within an integrated circuit package, the insulating structure comprising an electrical or electronic component, or at least part thereof, embedded within an insulating region, wherein the insulating structure has a textured or ribbed surface.

Advantageously the insulating structure may be used to form a signal conveying component, such as a transformer. The insulating structure may be used to form an isolator wherein a first circuit that performs at least an input function is formed on a first die and a second circuit which performs at least an output function is formed on a second die, and where the first and second circuits are interconnected via the insulating structure. The first and second dies may be formed on respective lead frames within the integrated circuit package, i.e. within the chip. The insulating structure may be provided on either of the first or second dies, or in a further arrangement may be provided on a third die or substrate. Where a third substrate is used, it does not have to be a semiconducting substrate, and may be formed of any suitable insulating material such as glass, plastics, or polymers. The third die may be carried on its own lead frame, or may be secured to one of the other lead frames by way of an insulating layer between the third substrate and the lead frame.
The two, or three, substrates may then be embedded within a known mould compound during insertion into a plastics integrated circuit package.

According to a further embodiment there is provided a method of manufacturing an electrical device comprising at least one electrical or electronic component provided within a package that carries connectors for establishing electrical connection thereto, and wherein the device comprises at least one component encapsulated in an insulator, the method comprising forming a plurality of structures extending substantially perpendicularly to the surface of the insulator, and embedding the component, encapsulated within the insulator, in a mould compound within the package, wherein the mould compound engages with the plurality of structures at the surface of the insulator.

As used herein, the term "embedded" allows for the deliberate formation of connection regions to or from the at least one component such that electrical connection can be made to the component.

BRIEF DESCRIPTION OF THE FIGURES

Embodiments of the present invention will now be described, by way of non-limiting example only, with reference to the accompanying Figures, in which:

Figure 1 is a schematic diagram of an isolator providing galvanic isolation between an input circuit and an output circuit, by way of an intermediate transformer formed on a substrate using techniques comparable with very large scale integration (VLSI) formation of electronic circuits;
Figure 2 is a cross section through an isolating transformer having one coil formed over a substrate and a second coil embedded within an insulator, and having a surface structure formed in an uppermost layer of insulator;
Figure 3 is a plan view of a first surface structure configuration;
Figure 4 shows a modification to the plan view of Figure 3;
Figure 5 is a plan view of a second surface structure arrangement;
Figure 6 shows a modification to the arrangement of Figure 5;
Figure 7 is a plan view of a further surface
pattern;
Figure 8 is a plan view of a further surface pattern;
Figure 9 is a schematic representation of a transformer formed, in part, within an insulating structure formed over a semiconductor substrate and embedded within a mould compound within an integrated circuit package;
Figure 10 is a plan view of a dual in line packaged isolator comprising three dies carried on two lead frames;
Figure 11 is a schematic diagram showing a further configuration of an insulating structure;
Figure 12 is a schematic diagram illustrating a further variation of an insulating structure;
Figure 13 is a schematic diagram of a further insulating structure and a transformer based isolator therein; and
Figure 14 is a schematic diagram of an insulating structure having a capacitor based isolator therein.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Figure 1 schematically represents the components within a signal isolator which acts to receive an input signal at a first voltage or in a first voltage range, which may be a relatively high voltage, and to convey it at a lower voltage for processing by further components, not shown, such as a microprocessor. Such an isolator 10 typically comprises a receive circuit 12 that has input terminals 14 and 16 for receiving an input signal, and processing electronics 18 which act to convert the input signal into a form suitable for transmission across an isolation circuit 20. The processing electronics 18 may, for example, encode a voltage by converting it to the frequency domain, or may encode a logic signal by providing a high frequency sinusoid to the isolation circuit when the logic signal is asserted, and inhibiting provision of the sinusoid to the isolation circuit when the logic signal is not asserted. The isolation circuit 20 in this example comprises a first transformer coil 22 magnetically coupled to a second transformer coil 24. The magnetic coupling may simply be a result of the relative proximity of the coils to one another.
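The on/off logic-signal encoding described for the processing electronics 18 can be sketched in a few lines. This is a hedged illustration of the principle only, not the patent's implementation; the function names, sample counts and detection threshold are my own assumptions.

```python
# Hedged sketch: an asserted logic bit drives a high frequency sinusoid onto
# the isolation circuit; a deasserted bit inhibits it. The far side recovers
# the bit by detecting sinusoid energy. All names and the threshold are
# illustrative assumptions, not details taken from the disclosure.
import math

def encode(bit, n_samples=64, cycles=8):
    """Samples driven onto the transformer coil for one bit period."""
    if not bit:
        return [0.0] * n_samples                      # sinusoid inhibited
    return [math.sin(2 * math.pi * cycles * i / n_samples)
            for i in range(n_samples)]                # sinusoid provided

def decode(samples, threshold=0.1):
    """Recover the bit: significant AC energy means the bit was asserted."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

bits = [True, False, True]
assert [decode(encode(b)) for b in bits] == bits
```

A full sinusoid burst has a mean-square energy of 0.5, comfortably above the silence level, which is why a crude energy threshold suffices in this sketch.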
The coils are separated by an insulating material 23. Advantageously one or both of the coils are also embedded in an insulating material. An output of the second coil 24 is provided to an output circuit 30 where a further electronic circuit 32 processes the signals received from the second coil 24 in order to reconstitute a representation of the input signal provided to the receive circuit 12. The arrangement shown in Figure 1 is highly simplified, and, for example, a single channel may include two transformers such that the signal can be conveyed in a differential manner, or in a phase or frequency modulated manner.

Additionally, it may be desirable to send signals back from the low voltage side of the circuit 30 to the higher voltage side 12, and therefore each element may be provided in a bi-directional manner, and the isolator may be used to convey signals in a bi-directional manner, or additional isolators may be provided such that some of the isolators may be dedicated to the transmission of data in one direction and others of the isolators may be dedicated to the transmission of data in a second direction. Furthermore, if the input receive circuitry 12 is unable to derive power from the equipment that it is connected to, then it is also possible to use the transformers to provide power to run the receive circuit.

As shown in Figure 1, the receive circuit 12, the isolator 20, and the output circuit 30 have been provided on respective substrates. It is desirable that the receive circuit at the high voltage side and the low voltage output circuit 30 are provided on respective substrates, but either of those substrates may optionally incorporate the isolator 20.

Figure 2 is a cross section through an embodiment of an isolator 20. The diagram is not to scale, and in particular the thickness of the substrate 50 is much greater than shown in Figure 2.
In the arrangement shown in Figure 2 a substrate 50, such as a semiconductor wafer, acts as a carrier for the insulating structure used to form a transformer based signal isolator. A first coil 52 formed as a spiraling metal track is provided at the surface of the substrate 50. A layer of insulator 53, such as silicon dioxide, insulates the metal track from the substrate. The metal track may be formed of aluminum, gold or any other suitable metal. Other conducting materials may also be used. The nature of a spiral track is that a connection is made to a radially outermost part 54 of the spiral 52 and that a connection must also be made to a radially innermost part 56 of the spiral 52. The connection to the outermost part 54 can be easily accomplished by extending the metal layer used to form the spiral such that it forms a track 60 extending towards a bond pad region 62. A connection to the innermost portion 56 of the spiral has to be made in a plane above or below the plane of the spiral. In the arrangement shown in Figure 2 a decision has been made to make the interconnection 70 below the plane of the spiral conductor 52, for example by forming a highly doped region or a further metal layer 70 which connects to the region 56 by way of a first via 72 and which connects to a further section of metal track 74 by way of a further via 76. Thus a further insulating oxide layer (not shown) may lie beneath the metal layer 70 so as to insulate it from the substrate. The further section of metal track 74 extends towards a bond pad region 80. The metal tracks may be covered by a thin layer of passivation 82, such as silicon dioxide or some other insulator, except in the regions of the bond pads 62 and 80 where the passivation is etched away.
The fabrication of such structures is known to the person skilled in the art and need not be described further here.

It is known to the person skilled in the art that an insulator can typically withstand a maximum electric field across it before dielectric breakdown occurs and a conductive path opens through the insulator. The electric field is expressed in volts per unit distance, and hence the key to obtaining high breakdown voltages is to be able to control the thickness of the insulator. Polyimide is a compound which is suitable for use as an insulator as it has a breakdown strength of around 800 to 900 volts per µm, is relatively easy to work with within the context of semiconductor fabrication processes, and is largely self-planarising. Other insulating materials that are commonly used in integrated circuit fabrication include BCB and SU8. Other insulating polymers and oxides may also be used.

As shown in Figure 2, a first layer of insulator 90, for example of polyimide, is deposited over the region of the substrate 50 and the passivation 82 in which the first coil 52 is formed. Then a second overlapping layer 92 of insulator, such as polyimide, is formed over the first region 90 so as to build up an additional thickness of the insulator. The ends of the region 92 are allowed to wrap around the ends of the region 90, such that the insulating structure increases in both depth and lateral extent. Each deposition step generally increases the thickness of the insulator by, in the case of polyimide, between 10 and 16 microns. Thus after two deposition steps the insulator is typically between 20 and 32 microns thick. If necessary or desirable, further layers can be deposited to form thicker structures. These thicknesses are relatively well controlled and selectable at the choice of the fabricator.
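The relationship between deposition count, insulator thickness and bulk withstand voltage described above is simple arithmetic. A minimal sketch, using only the illustrative figures quoted in the text (10 to 16 µm of polyimide per deposition step, roughly 800 to 900 V/µm dielectric strength); the function names are my own:

```python
# Illustrative arithmetic only -- the ranges are the figures quoted in the
# text; this is a sketch, not a design rule.

def thickness_range_um(deposition_steps, per_step_um=(10, 16)):
    """(min, max) insulator thickness in microns after n deposition steps."""
    lo, hi = per_step_um
    return deposition_steps * lo, deposition_steps * hi

def breakdown_range_v(thickness_um, strength_v_per_um=(800, 900)):
    """(min, max) bulk breakdown-voltage estimate for a thickness range."""
    t_lo, t_hi = thickness_um
    s_lo, s_hi = strength_v_per_um
    return t_lo * s_lo, t_hi * s_hi

# Two deposition steps give 20-32 microns, as stated in the text,
# i.e. a bulk withstand voltage on the order of 16 kV to 28.8 kV.
```

This is why the layer count gives the fabricator direct, selectable control over the isolator's bulk breakdown voltage; the surface structures described below address the separate, surface-tracking breakdown path.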
Next a second metallic layer 100 is deposited over the layer 92 and patterned, for example to form a second spiral track which co-operates with the first spiral track to form a transformer. The second metal layer 100 may be of aluminum or another suitable metal such as gold. As with the first conductive spiral track, connections need to be made to both a centermost portion of the spiral and an edge portion. For diagrammatic simplicity the connection to the outer edge portion has been omitted and can be assumed to lie either above or below the plane of Figure 2, whereas the central portion has been shown and is represented as a bond pad 110.

Following formation of the second spiral conductive track 100, a third layer 112 of insulator, such as polyimide, is deposited over the second layer 92 and over the spiral track 100. The layer 112 extends beyond and overlaps the layer 92. After formation of the layer 112 it is masked and then selectively etched so as to open up a connection aperture 113 to the bond pad 110. This step may be omitted where the spatial extent of the bond pad is such that the fabricator can reliably form deeper etches so as to form the aperture 113 to reach the bond pad in subsequent etching steps, which will be described later.

Following formation of the third layer of insulator 112, a fourth insulating layer, such as a polyimide layer 114, may be deposited over it. The fourth layer 114 may also advantageously reach around the edges of the preceding layer, i.e. the third layer 112. Following deposition and curing of the fourth layer 114, it is selectively masked in order that a surface structure can be formed within the layer 114, and then it is etched, for example by plasma etching or another suitable etching process, to form a series of trenches 120 extending partially or wholly through the depth of the layer 114.
This etching may also be sufficient to reach through to reveal the bond pad 110 by virtue of acting on a wider aperture within the mask.

Figure 3 is a plan view of the etching, and hence the trench pattern, that may be formed in the fourth insulating layer 114. As viewed from above, the layer is etched to form a pattern of concentric squares or rectangles. The rectangles may have rounded corners as shown in Figure 3, or may be formed by trenches intersecting at substantially 90°. A pattern of trenches 120 encircles the bond pads 110 such that any breakdown path trying to form along the surface of the insulating layer 114 has to travel a much greater distance and, as will be explained later, this structure also provides features to facilitate keying with a bond or a mould compound.

Figure 4 illustrates a variation of the arrangement shown in Figure 3 where either the walls are discontinuous or the trenches are discontinuous. Either of these choices is freely available to the designer. In this example the walls are discontinuous such that one trench can extend into a radially adjacent trench by virtue of linking sections 132.

The wall or trench pattern need not be formed of basically rectangular features, and Figures 5 and 6 show alternative patterns formed of substantially circular or elliptical walls or trenches, which may include bridges extending across a trench or apertures through a wall, similar to those described with respect to Figures 3 and 4. As shown, substantially planar regions 140 exist around the outermost trench or wall, and these can either be left unpatterned or further patterns can be formed in them, for example arcuate trenches or walls.

Further patterning structures are available, for example a honeycomb structure as shown in Figure 7 or a block paving configuration as shown in Figure 8.
These structures are only given as examples, and other structures, whether geometric or substantially irregular, can also be provided.

Figure 9 schematically illustrates the transformer based isolator discussed with respect to Figure 2 when packaged within an integrated circuit package 160. The components are not shown to scale, for diagrammatic clarity, and when compared to Figure 1 only the isolator 20 has been illustrated; the receive circuit 12 and output circuit 30 have not been illustrated, although it is to be understood that corresponding components are included within the integrated circuit package 160. In this example the package 160 is of a dual in line type having legs 162 provided on opposing sides of the package 160. For diagrammatic simplicity, only some of the bonds with the bond pads have been shown, and it is to be understood that these extend from the bond pads to lead frames, as will be described later with respect to Figure 10.

The various dies within the package 160 are held within a mould compound 170 which serves to hold them in place, acts as an insulator, and provides environmental protection. As shown in greater detail in the expanded section 174, the mould compound 170 extends into the trenches 120 formed within the fourth layer of insulator 114, thereby enhancing the mechanical and chemical bonds between the mould compound 170 and the insulating structure. This is advantageous since any gap between the surface of the insulating structure, i.e. the fourth layer 114, and the mould compound 170 can give rise to a breakdown path. Additionally, in use, as the semiconductor component ages it is possible for delamination to occur between the mould compound and the semiconductor dies formed therein. The delamination gives rise to the formation of a gap which could give rise to a surface breakdown mechanism.
The enhanced mechanical bonding resulting from the surface structure mitigates the risk of such delamination occurring between the insulating component and the mould compound 170, thereby enhancing the longevity of the component and allowing the manufacturer to provide a component that can be guaranteed to withstand a higher voltage between its high voltage and low voltage sides before breakdown might occur.

As noted with respect to Figure 9, the various dies within the integrated circuit package need to connect with one another and with the legs 162. Figure 10 shows a configuration of a three die isolator in which the receive circuit 12 is carried on a first die 200 mounted on a first lead frame 202. Wire interconnections, for example 204, are made between bond pads on the die 200 and respective bond regions of the lead frame 202. In the arrangement shown in Figure 10 the output circuit 30 is formed on a die 210 which is carried on a second lead frame 212 together with the isolator 20. Wire connections extend between the first die 200 and the corresponding high voltage connections to the first transformer coil of the isolator 20. An example of one such connection is designated 220. Similarly, connections, for example 230, extend between the second transformer coil and the die 210. The entire arrangement is embedded within the mould compound 170 described hereinbefore with respect to Figure 9. In the arrangement shown in Figure 10, a common return connection is provided for the transformer coils on the high voltage side, and a separate common return connection is provided for the transformer coils on the low voltage side, so as to reduce the number of interconnections made to the isolator 20.

Figure 11 shows a further variation on the arrangement shown in Figure 2 in which a lowermost transformer coil 230 is formed over an insulating layer, such as the passivation 82, above a substrate 50.
The first coil 230 may be embedded within a first layer of insulator 240, such as polyimide, BCB, SU8 or another suitable insulator. As shown, an outermost connection to the first coil 230 may be formed by extending the metallization to a first bond pad 242, and a connection to the innermost part of the coil 230 may be made by a further metallic strip 250 formed over the substrate 50 (and on top of an insulating layer 53 placed over the substrate if the substrate is silicon), extending between a further bond pad 252 and a via 254 that extends through the passivation 82 (or other insulating layer) to reach the innermost portion of the coil 230. A second layer of insulator 260 is formed over the first layer 240 and wraps around the edges of the layer 240 so as to enclose it. The layer 260 defines the minimum inter-coil distance, and hence the breakdown voltage between the coils of the transformer. Next a second coil 270 is formed over the second insulating layer 260 by metal deposition and patterning. A connection to the radially innermost part of the coil is represented by bond pad 272, whereas the outermost part of the coil has not been shown and is assumed to be above or below the plane of the diagram. Next, a third layer of insulator 280 is deposited over the second coil 270. Thus, comparing this arrangement with that shown in Figure 2, it can be seen that a surface pattern in the form of walls 290 surrounded by corresponding trenches 292 is formed in the third layer 280 rather than in a further (fourth) layer of polyimide. For diagrammatic simplicity only one wall 290 has been shown surrounded by corresponding trenches, but it is to be assumed that multiple walls can be fabricated.
For simplicity, the first, second and third (or indeed further) insulating layers may be made of the same material, but this is not a requirement.

The arrangement described herein is not limited to the fabrication of transformers, and as shown in Figure 12 only a single metallic spiral 300 might be formed, so as to fabricate an inductor. Thus, the arrangement shown in Figure 12 is similar to that shown in Figure 11 except that the first spiral coil 230, and the metal layers used to connect to it, have been omitted. Furthermore, to illustrate further possible variations, a fourth layer 310 of insulator has been deposited so as to overlie and surround the other layers, and then masked and etched so as to open up a connection to a bond pad 320 and also to form a surface structure 330 to facilitate engagement with the mould compound once the insulating structure has been placed within an integrated circuit package. The layers 240 and 260 beneath the component 300 may be regarded as forming M layers over the substrate, and the or each layer over the component 300 may be regarded as forming a further N layers.

Figure 13 shows a further variation which is similar to Figure 2, and like parts have been identified with like reference numbers. Comparing Figure 13 with Figure 2, the third and fourth layers 112 and 114 have been replaced by a single layer of insulator 350, for example of polyimide, having trenches 120 formed therein. Connections to the bond pads 80 and 110 are shown in Figure 13.

Figure 14 is similar to Figure 13 to the extent that the uppermost layer of insulator 350 covers an uppermost metal layer 360, and similar to Figure 11 in that a lower metal layer 370 has been formed over the passivation 82. However, here the metal layers 360 and 370 have not been patterned to form inductors, but instead form the plates of a capacitor.
A lowermost metal layer 74 beneath the passivation 82 and over the oxide layer 53 can be used to provide a connection to the lowermost plate of the capacitor. Each of these variations may, in use, be embedded in mould compound.

It is thus possible to improve the breakdown voltage of an insulating structure by at least one of: a) lengthening the surface breakdown path across an uppermost surface of the insulating structure; and b) improving the adhesion between the insulating structure and a further insulating material such as mould compound.

A further advantage of the arrangement described herein is that the surface structure can be fabricated at the same time as the polyimide is masked and etched to open up apertures for making electrical contact to the bond pads. Thus, the enhanced breakdown voltage provided by such a structure can effectively be obtained at no additional fabrication cost.
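The path-lengthening effect summarized in point a) above can be illustrated with a deliberately simplified geometric model. This is my own assumption for illustration, not a formula from the disclosure: a surface discharge path that must descend and climb the walls of each rectangular trench it crosses gains roughly twice the trench depth per trench.

```python
# Simplified creepage model (an illustrative assumption): crossing n
# rectangular trenches of depth h adds roughly 2*n*h of wall surface
# to the straight-line (planar) separation d between two points.

def surface_path_um(planar_distance_um, n_trenches, trench_depth_um):
    """Approximate surface creepage path length over n rectangular trenches."""
    return planar_distance_um + 2 * n_trenches * trench_depth_um

# With no trenches the path equals the planar separation; five trenches
# etched 12 microns deep more than double a 100 micron surface path.
```

The model ignores trench width, rounded corners and field concentration effects, but it captures why even a few trenches of full-layer depth substantially lengthen the tracking path around a bond pad.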
A base having a plurality of pins extending upwardly therefrom is provided. A support is provided over the base. The support has an upper surface and a plurality of holes extending therethrough. The pins extend through the holes and upwardly beyond the upper surface of the support. An actuator is provided beneath the support. A board having a plurality of integrated circuits bonded thereto is provided. The integrated circuits form a repeating pattern of integrated circuit packages across the board, and the board has a plurality of holes extending through it. The board is placed over the support upper surface with the pins extending into the holes in the board. While the board is over the support upper surface, it is cut to separate the integrated circuit packages from one another. |
What is claimed is: 1. An integrated circuit package separator for separating integrated circuit packages from a board comprising a plurality of integrated circuit components bonded thereto, the components extending outwardly from the board, the board having a plurality of holes extending within it, the separator comprising:a base having a plurality of pins extending upwardly therefrom; a support over the base and having an upper planar surface, the support having a plurality of holes extending therethrough and a pair of opposing ends, the pins extending through the holes and upwardly beyond the upper planar surface of the support; the support and pins being configured such that the pins extend into the holes in the board when the board is placed over the support upper planar surface; a pair of actuators beneath the support and configured to vertically displace the support and lift the support off the pins, the actuators comprising a first actuator proximate one of said opposing ends and a second actuator proximate the other of said opposing ends; a panel over the support; a plurality of blocks over the panel, the blocks having upper surfaces and being configured to support the board while leaving the integrated circuit chip components extending between the block upper surfaces and the panel; and a cutting mechanism configured to cut the board while the board is over the panel and to thereby separate the integrated circuit packages from one another. 2. The separator of claim 1 wherein the pins align with the board such that each of the separated integrated circuit packages is retained to the support by at least one pin.3. The separator of claim 1 wherein the pins align with the board such that each of the separated integrated circuit packages is retained to the support by at least two pins.4. The separator of claim 1 wherein the actuators are pneumatically powered.5.
The separator of claim 1 wherein the actuators are coupled to the support through first and second lift members, respectively; the lift members having substantially planar upper surfaces and the base having a substantially planar upper surface, the substantially planar upper surfaces of the lift members being substantially flush with the base substantially planar upper surface.6. The separator of claim 1 wherein the actuators are coupled to the support through first and second lift members, respectively; at least one of the lift members having at least one post extending upwardly therefrom, the at least one post extending through a hole in the support.7. The separator of claim 1 wherein the actuators are coupled to the support through first and second lift members, respectively; the first and second lift members each having at least two posts extending upwardly therefrom, the posts extending through holes in the support.8. The separator of claim 7 wherein the posts are tapered, the tapered posts being wider at respective lift members than above respective lift members.9. The separator of claim 1 wherein the actuators are pneumatically powered; the actuators each comprising a pair of gas ports, one of each pair of ports being a gas inlet when the actuator lifts the support and the other port of each pair of ports being a gas outlet when the actuator lifts the support; the separator further comprising at least one pressure release valve in fluid communication with the gas outlets.10.
The separator of claim 1 wherein the actuators are pneumatically powered; the actuators each comprising a pair of gas ports, one of each pair of ports being a gas inlet when the actuator lifts the support and the other port of each pair of ports being a gas outlet when the actuator lifts the support; the separator further comprising at least two pressure release valves, one of the pressure release valves being in fluid communication with one of the gas outlets, and an other of the pressure release valves being in fluid communication with the other of the gas outlets.11. The separator of claim 1 wherein the actuators comprise respective substantially co-planar surfaces positioned beneath respective opposing ends of the support.12. The separator of claim 1 wherein the panel is fastened to the support.13. The separator of claim 1 wherein the blocks are in a one-to-one correspondence with the integrated circuit packages on the board.14. The separator of claim 1 comprising more than one panel over the support, each panel having blocks associated therewith.15. The separator of claim 1 wherein the blocks are fastened to the panel.16. The separator of claim 1 wherein the blocks are one-piece with the panel.17. The separator of claim 1 wherein the pins do not extend through the panel.18. The separator of claim 1 wherein the base comprises a substantially planar upper surface, and wherein the support defines a substantially planar upper surface over an entirety of the upper surface of the base.19. The separator of claim 1 wherein the support is devoid of pins.20. The separator of claim 1 wherein the panel is devoid of pins.21. The separator of claim 1 wherein the base comprises a substantially planar upper surface terminating to define outermost lateral edges, and wherein the support defines a substantially planar upper surface over an entirety of the upper surface of the base and extending laterally beyond the outermost lateral edges of the base. |
RELATED PATENT DATA

This patent resulted from a divisional application of U.S. patent application Ser. No. 09/533,058, filed Mar. 22, 2000, and titled "Integrated Circuit Package Separators", which is a divisional application of U.S. patent application Ser. No. 09/176,479, which was filed on Oct. 20, 1998, now U.S. Pat. No. 6,277,671.

TECHNICAL FIELD

The invention pertains to methods of forming integrated circuit packages, as well as to devices for separating integrated circuit packages.

BACKGROUND OF THE INVENTION

Circuit constructions having integrated circuit (IC) chips bonded to circuit boards (such as SIMMs and DIMMs) can be fabricated by joining IC chips on a single large circuit board comprising a plurality of the constructions. The circuit board can be subsequently cut to separate discrete constructions from one another. The discrete constructions are referred to herein as integrated circuit packages. The smaller the individual circuit packages, the more likely it is that industry processing will utilize the above-described method of forming the packages on a single large board and subsequently cutting individual packages from the board.

An exemplary prior art process of separating integrated circuit packages is described with reference to FIG. 1. FIG. 1 illustrates a board assembly 10 having a plurality of IC chips 12 (only some of which are labeled) bonded thereto. Chips 12 are aligned into individual IC package configurations 14 (only some of which are labeled) to form a repeating pattern of integrated circuit packages 14 across the board assembly 10. Dashed lines 16 are shown to illustrate the boundaries between individual IC packages 14. In the shown exemplary embodiment, assembly 10 comprises three separate circuit boards 11, 13 and 15.
The number and size of individual circuit boards can vary depending on the number and size of IC packages that are ultimately to be formed.
Each of boards 11, 13 and 15 comprises a pair of lateral waste sections 21, 23 and 25, respectively. The lateral waste sections 21, 23 and 25 are separated from the remainder of boards 11, 13 and 15, respectively, by imaginary dashed lines 20, 22 and 24. In further processing, the individual IC packages 14 are separated from one another by cutting through boards 11, 13 and 15 along the regions illustrated by dashed lines 16. During the cutting to separate IC packages 14 from one another, boards 11, 13 and 15 are also cut along regions illustrated by dashed lines 20, 22 and 24 to remove waste portions 21, 23 and 25 from the lateral sides of the boards, and accordingly from lateral edges of the ultimately formed IC packages.
Orifices 19 (only some of which are labeled) are provided throughout circuit boards 11, 13 and 15. Specifically, pairs of orifices 19 are provided in each IC package 14, and at least two orifices 19 are provided in each of waste portions 21, 23 and 25.
FIG. 1 further illustrates an IC package separator 40 comprising a cutting mechanism 42 (shown schematically as a cutting wheel, although other cutting mechanisms, such as, for example, router bits or linear blades, are known to persons of ordinary skill in the art), a retaining table 44, and a control mechanism 45 configured to control orientation of cutting wheel 42 relative to table 44. Retaining table 44 can comprise, for example, an x-y table (i.e., a table horizontally adjustable in x and y directions; an "X", "Y" and "Z" axis system is illustrated in a lower corner of FIG. 1). Control mechanism 45 can control the x and y orientation of table 44 and the z (i.e., vertical) orientation of cutting mechanism 42 to precisely cut a board retained on table 44.
Table 44, cutting mechanism 42, and control mechanism 45 can be comprised by commercially available cutting systems, such as, for example, Advanced Technology Incorporated's CM101 single spindle router (or, more generally, a circuit board depanelization router).
FIG. 1 also illustrates that table 44 comprises an upper platform 46. A subplate 48 is provided over platform 46, and a stripper plate 50 is provided over subplate 48. Subplate 48 comprises a plurality of upwardly extending pins 60 (only some of which are labeled), and stripper plate 50 comprises a number of orifices 62 configured to slide over pins 60. Subplate 48 is retained on table 44 by downwardly extending pins (not shown) which are aligned with and precisely received within orifices (not shown) extending within platform 46 of table 44.
Orifices 19 of boards 11, 13 and 15 align with pins 60. In operation, boards 11, 13 and 15 are slid over pins 60 until the pins protrude through orifices 19. Typically, orifices 19 are only about 0.003 inches wider than pins 60 to insure tight alignment of boards 11, 13 and 15 with subplate 48. After boards 11, 13 and 15 are retained on table 44 by pins 60, cutting mechanism 42 is utilized to cut along the regions illustrated by dashed lines 16, 20, 22 and 24. Such cutting separates discrete integrated circuit packages 14 from one another, as well as from waste regions 21, 23 and 25. The separated circuit packages are retained on table 44 by pins 60 extending through the packages. Specifically, each of individual packages 14 comprises a pair of orifices 19 and is thereby retained on table 44 by a pair of pins 60.
After the IC packages are separated from one another, stripper plate 50 is manually lifted off of subplate 48 to lift the IC packages 14 from pins 60. Once stripper plate 50 is lifted off from pins 60, the individual IC packages can be separated from stripper plate 50.
An exemplary method of removing the IC packages from stripper plate 50 is to tilt plate 50 and allow the packages to slide off plate 50. After the packages 14 are removed, plate 50 can be returned to over subplate 48 and used again for separating IC packages.
Difficulties can occur in utilizing the assembly of FIG. 1 for separating IC packages. For instance, separated IC packages can be broken as stripper plate 50 is lifted from subplate 48. It would be desirable to reduce or eliminate such problems.
SUMMARY OF THE INVENTION
In one aspect, the invention encompasses a method of forming integrated circuit packages. A base having a plurality of pins extending upwardly therefrom is provided. A support is provided over the base. The support has an upper surface and a plurality of holes extending therethrough. The pins extend through the holes and upwardly beyond the upper surface of the support. An actuator is provided beneath the support. A board having a plurality of integrated circuits bonded thereto is provided. The integrated circuits form a repeating pattern of integrated circuit packages across the board, and the board has a plurality of holes extending through it. The board is placed over the support upper surface with the pins extending into the holes in the board. While the board is over the support upper surface, it is cut to separate the integrated circuit packages from one another. After the cutting, the support is vertically displaced by the actuator to lift the support off the pins.
In another aspect, the invention encompasses an integrated circuit package separator for separating integrated circuit packages from a board. The board comprises a plurality of integrated circuits bonded thereto, and has a plurality of holes extending within it. The separator includes a base having a plurality of pins extending upwardly therefrom and a support over the base. The support has an upper surface, a plurality of holes extending therethrough, and a pair of opposing ends.
The pins extend through the holes in the support and upwardly beyond the upper surface of the support. The support and pins are configured such that the pins extend into the holes in the board when the board is placed over the support upper planar surface. The separator further includes a pair of actuators beneath the support and configured to vertically displace the support and lift the support off the pins. Additionally, the separator includes a cutting mechanism configured to cut the board while the board is over the support upper planar surface and thereby separate the integrated circuit packages from one another.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are described below with reference to the following accompanying drawings.
FIG. 1 is a diagrammatic, perspective, exploded view of a prior art IC package separator and circuit board assembly.
FIG. 2 is a diagrammatic top view of an IC package separator of the present invention.
FIG. 3 is a diagrammatic, perspective, exploded view of an IC package separator of the present invention with a stripper plate of the present invention and a circuit board.
FIG. 4 is a view of the FIG. 3 assembly with the circuit board retained on the IC separator.
FIG. 5 is a view of the FIG. 4 assembly after the retained circuit board is cut to separate individual IC packages from one another.
FIG. 6 is a view of the FIG. 5 assembly after a stripper plate is lifted to release separated IC packages from retaining pins.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).
An IC package separator of the present invention and a method of operation of such separator are described below with reference to FIGS. 2-6. In referring to FIGS. 2-6, similar numbering to that utilized above in describing prior art FIG. 1 will be used, with differences indicated by the suffix "a" or by different numerals.
Referring to FIG. 2, a separator 100 of the present invention is shown in top view. Separator 100 comprises a table 44a and a subplate 48a provided over table 44a. Table 44a can comprise, for example, an x-y table similar to the table 44 described above with reference to FIG. 1. Subplate 48a, like the above-described subplate 48 of FIG. 1, can be joined to table 44a through a plurality of downwardly extending pins (not shown), and comprises a plurality of upwardly extending pins 60 (only some of which are labeled) configured to retain a circuit board assembly (not shown).
Subplate 48a differs from subplate 48 of FIG. 1 in that subplate 48a comprises notches 102 at its ends. Notches 102 are provided to allow room for a pair of forcer plates 104 and 106 to move vertically (in and out of the page of FIG. 2) relative to subplate 48a. Forcer plates 104 and 106 comprise upwardly extending pins 108 and 110, respectively. Base plate 48a comprises an upper planar surface 115, and forcer plates 104 and 106 comprise upper planar surfaces 117 and 119, respectively. Upper planar surfaces 115, 117 and 119 ultimately support a circuit board assembly (not shown in FIG. 2). Planar surfaces 115, 117 and 119 are preferably substantially coplanar with one another to avoid distorting (e.g., bending) a supported circuit board assembly.
Forcer plates 104 and 106 are connected to actuators 112 and 114, respectively, configured to vertically displace forcer plates 104 and 106. In the exemplary shown embodiment, forcer plates 104 and 106 are connected to the actuators with screws 116. It is to be understood, however, that other mechanisms could be utilized for joining forcer plates 104 and 106 to actuators 112 and 114, including, for example, welding.
Actuators 112 and 114 are pneumatic (preferably air-powered) and connected to a gas source 120.
An advantage of utilizing air powered actuators is that most wafer fabrication plants have a source of clean dry air available. Accordingly, it is relatively convenient to couple air powered actuators 112 and 114 into existing fabrication plants by simply connecting them to existing air lines. However, it is to be understood that the actuators can be powered by other sources besides air, including, for example, other fluids, such as liquids, as well as non-pneumatic and non-hydraulic sources, such as, for example, electricity.
Separator apparatus 100 comprises a cutting assembly (not shown in FIG. 2) and a controller (not shown in FIG. 2), analogous to the cutting assembly 42 and controller 45 of FIG. 1.
Referring to FIG. 3, IC circuit package separator 100 is shown in exploded view with a circuit board assembly 10 identical to the assembly described above with reference to FIG. 1.
A stripper plate 50a is provided between subplate 48a and circuit board assembly 10. Stripper plate 50a is similar to the stripper plate 50 of FIG. 1 in that plate 50a comprises a plurality of orifices 62 configured for receipt of pins 60. However, stripper plate 50a differs from plate 50 of FIG. 1 in that plate 50a also comprises orifices 122 configured for receipt of upwardly extending pins 108 and 110 of forcer plates 104 and 106. Pins 108 and 110 are preferably tapered pins, such as can be obtained from McMaster-Carr. Exemplary pins have a dimension of 0.248 inches at base, 0.2324 inches at top, and a length of 0.75 inches. The taper of the pins can assist in aligning support 50a over the pins during placement of support 50a onto base 48a. Stripper plate 50a further differs from plate 50 of FIG. 1 in that plate 50a is configured for receipt of a series of panels 132, 134 and 136. Stripper plate 50a can comprise, for example, static-reduced plastic having a thickness of greater than 3/16 inches, and panels 132, 134 and 136 can comprise, for example, aluminum.
In the shown embodiment, panels 132, 134 and 136 are held to stripper plate 50a by a plurality of screws 138 (only some of which are labeled). It will be recognized, however, that other mechanisms can be utilized for holding panels 132, 134 and 136 to stripper plate 50a, including riveting. Alternatively, panels 132, 134 and 136 can be molded as part of stripper plate 50a. Panels 132, 134 and 136 comprise ribs 140, 142 and 144, respectively (only some of which are labeled). Ribs 140, 142 and 144 can assist in supporting board assembly 10. Specifically, IC chips 12 are frequently provided on both an upper surface of circuit board assembly 10, and a bottom surface (not shown). Ribs 140, 142 and 144 (also referred to as blocks) have upper surfaces 141, 143 and 145, respectively, which contact the bottom surfaces of circuit boards 11, 13 and 15 at locations between the IC chips 12 on the bottom of the board. Preferably, such upper surfaces are provided at a height approximately equal to a thickness of integrated circuit chip components 12. Accordingly, when boards 11, 13 and 15 are rested on panels 132, 134 and 136, respectively, the boards rest on the upper surfaces of blocks 140, 142 and 144 while leaving integrated circuit chip components on the underside of boards 11, 13 and 15 extending between block upper surfaces 141, 143 and 145 and panels 132, 134 and 136. An exemplary block height (or thickness) of blocks 140, 142 and 144 for a DRAM having IC chips 12 with a TSOP dimensional package is 0.040 inches ±0.005 inches. As another example, if IC chips 12 have a SOJ dimensional package, the block height is preferably 0.140 inches ±0.005 inches.
Blocks 140, 142 and 144 can be formed as one piece with panels 132, 134 and 136.
Alternatively, blocks 140, 142 and 144 can be formed as discrete pieces from panels 132, 134 and 136 that are subsequently fastened to the panels.
In the shown embodiment, blocks 140, 142 and 144 are provided in a one-to-one correspondence with integrated chip packages 14. Also, in the shown exemplary embodiment, each of panels 132, 134 and 136 is identical to one another, and in a one-to-one correspondence with individual boards 11, 13 and 15. It is to be understood, however, that the invention encompasses other embodiments (not shown) wherein the blocks are not provided in a one-to-one correspondence with packages 14, wherein the panels are not identical to one another, and wherein the panels are not in a one-to-one correspondence with the individual boards.
Pins 60 extend upwardly beyond upper surfaces 141, 143 and 145 of blocks 140, 142 and 144, and are configured to retain circuit board assembly 10 over stripper panel 50a. In the shown embodiment, pins 60 do not extend through panels 132, 134 and 136. However, it is to be understood that the invention encompasses other embodiments wherein pins 60 do extend through such panels.
FIG. 3 shows a side perspective view of actuator 112. In such view it can be seen that several ports 150, 152, 153, 154, 155 and 156 are provided between actuator 112 and gas source 120. Valves (not shown) are provided between source 120 and one or more of ports 150, 152, 153, 154, 155 and 156. Such valves enable fluid to be selectively flowed from source 120 into one or more of ports 150, 152, 153, 154, 155 and 156 to selectively control raising and lowering of forcer plate 104 with actuator 112.
For instance, flow of gas into port 152 can force a pneumatic cylinder to lift forcer plate 104, and flow of gas into port 150 can force the pneumatic cylinder to lower forcer plate 104.
Ports 154 and 156 are connected to release valves 163 and 165, respectively, which enable a pressure on at least one side of the pneumatic cylinder of actuator 112 to be maintained at ambient pressure (generally, about 1 atmosphere). Specifically, release valves 163 and 165 comprise outlet ports 157 and 159, respectively, which vent to a surrounding environment. Persons of ordinary skill in the art will recognize that one or more of ports 150, 157 and 159 are utilized as gas outlet ports during lifting of forcer plate 104, and port 152 comprises a gas inlet port during such lifting. In preferred embodiments of the present invention, the release valves are associated with an outlet side of actuator 112 to enable equilibration of a pressure at such outlet side to ambient prior to (and/or during) lifting of forcer plate 104. Specifically, the release valves enable gas to be drained from outlet lines (more specifically, the gas is drained through ports 157 and 159, which are open to ambient conditions) prior to, and/or during, lifting with the actuator. Actuator 114 (FIG. 2) is preferably identical to actuator 112 and connected to an identical valve and port assembly as that shown connected to actuator 112. Accordingly, actuator 114 is also connected with release valves configured to equilibrate a back-pressure of the actuator to ambient prior to, and/or during, lifting of stripper panel 50a. The equilibration of pressure at the outlet ends of both of actuators 112 and 114 to ambient during a lifting operation can enable both actuators to have an identical back-pressure during the lifting operation. This can facilitate having both actuators lift simultaneously and in unison.
Such simultaneous lifting can avoid distortion (such as, for example, bending) of circuit board assembly 10 during the lifting.
Stripper plate 50a has an upper planar surface 160 and a pair of opposing ends 162 and 164. Opposing ends 162 and 164 overlie forcer plates 104 and 106, respectively. In operation, actuators 112 and 114 are utilized to lift opposing ends 162 and 164 simultaneously and in unison. Such can be accomplished by, for example, maintaining approximately equal gas pressure at both of actuators 112 and 114 during lifting, and is found to reduce breakage of integrated circuit packages relative to prior art methods. The term "approximately" in the previous sentence is utilized to indicate the gas pressure at both actuators is equal within operational parameters.
A method of operation of separator 100 is described with reference to FIGS. 4-6. In referring to FIGS. 4-6, subplate 48a is referred to as a base, and stripper plate 50a is referred to as a support. Referring first to FIG. 4, circuit board assembly 10 is shown retained on support 50a. Specifically, circuit board assembly 10 is placed over support upper surface 160 with pins 60 extending through orifices 19 of the circuit boards 11, 13 and 15. Pins 60 and board assembly 10 are aligned such that each of the integrated circuit packages 14 is retained to the support 50a by at least one pin, and, in the shown embodiment, is retained by two pins. In the FIG. 4 processing step, actuators 112 and 114 (FIG. 2) are in a lowered position.
Referring to FIG. 5, the individual integrated circuit packages 14 are separated from one another by cutting through boards 11, 13 and 15.
Referring to FIG. 6, actuators 112 and 114 (FIG. 2) are utilized to vertically displace support 50a from base 48a. Preferably, such vertical displacement comprises lifting both of ends 162 and 164 of support 50a substantially simultaneously and substantially in unison with one another.
(As used in the preceding sentence, the term "substantially" indicates that the lifting of both ends is simultaneous and in unison within operational parameters.) In exemplary applications the upper surface 160 of support 50a is level prior to the lifting and remains level during the lifting. The lifting of support 50a releases separated circuit packages 14 from pins 60. After such release, support 50a can be, for example, manually lifted from pins 108 and 110, and the separated packages removed from support 50a. In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents. |
A data cache region prefetcher creates a region when a data cache miss occurs. Each region includes a predetermined range of data lines proximate to each data cache miss and is tagged with an associated instruction pointer register (RIP). The data cache region prefetcher compares subsequent memory requests against the predetermined range of data lines for each of the existing regions. For each match, the data cache region prefetcher sets an access bit and attempts to identify a pseudo-random access pattern based on the set access bits. The data cache region prefetcher increments or decrements appropriate counters to track how often the pseudo-random access pattern occurs. If the pseudo-random access pattern occurs frequently, then the next time a memory request is processed with the same RIP and pattern, the data cache region prefetcher prefetches the data lines in accordance with the pseudo-random access pattern for that RIP. |
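The training-and-prefetch flow summarized in the abstract can be sketched in software. This is a minimal illustrative model only, not the patented implementation: the region size, the flat dictionaries, the threshold, and all names (`train`, `prefetch_lines`, `REGION_LINES`) are assumptions introduced here for clarity.

```python
# Illustrative model of the abstract's flow: a miss opens a region tagged
# with the requesting RIP; later accesses inside the region set access
# bits; each observed (RIP, bit-pattern) pair is counted; a pattern seen
# often enough drives prefetches for the next request with the same RIP.
REGION_LINES = 8  # data lines tracked per region (assumed size)

regions = {}         # region base line -> {"rip": rip, "bits": access bitmask}
pattern_counts = {}  # (rip, bits) -> how often this pattern has been seen

def train(rip, line):
    """Record one demand access (identified by cache line number and RIP)."""
    for base, r in regions.items():
        if base <= line < base + REGION_LINES:
            r["bits"] |= 1 << (line - base)          # set the access bit
            key = (r["rip"], r["bits"])
            pattern_counts[key] = pattern_counts.get(key, 0) + 1
            return
    regions[line] = {"rip": rip, "bits": 1}          # miss outside all regions

def prefetch_lines(rip, line, threshold=2):
    """Lines to prefetch around `line` if a frequent pattern exists for this RIP."""
    frequent = [bits for (r, bits), n in pattern_counts.items()
                if r == rip and n >= threshold]
    if not frequent:
        return []
    bits = max(frequent, key=lambda b: pattern_counts[(rip, b)])
    return [line + i for i in range(REGION_LINES) if bits >> i & 1]

# Example: an instruction at RIP 7 repeatedly touches lines 100, 101 and 103.
train(7, 100); train(7, 101); train(7, 103); train(7, 103)
assert prefetch_lines(7, 200, threshold=2) == [200, 201, 203]
```

A real prefetcher would bound both tables and age out stale regions; the unbounded dictionaries here are purely for exposition.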
A data cache region prefetcher, comprising: a line entry data table including a plurality of line entries, wherein each line entry includes a region defined by a predetermined number of access bits, and an access bit for a given line entry is set if a cache line is requested within the region; and a region history table configured to receive evictions from the line entry data table, wherein the data cache region prefetcher is configured to determine an access pattern from certain access bits in an evictable line entry and exclude line entries having predetermined access patterns from eviction to the region history table.
The data cache region prefetcher of claim 1, wherein the data cache region prefetcher is configured to evict the line entries having pseudo-random access patterns to the region history table.
The data cache region prefetcher of claim 1, wherein the region history table is indexed using at least an instruction pointer register (RIP).
The data cache region prefetcher of claim 3, wherein the region history table is further indexed using an offset to support multiple pseudo-random access patterns, for the same RIP, depending on whether an initial access to a region is at a beginning, end or middle of a cache line.
The data cache region prefetcher of claim 1, wherein: the region history table includes a plurality of region history entries, each region history entry including the predetermined number of access bits, each region history entry including counters for certain access bits in the predetermined number of access bits, and the counters are incremented or decremented depending on whether the access bit is set for the evictable line entry.
The data cache region prefetcher of claim 1, further comprising: a region prefetch generator configured to receive prefetch requests from the region history table on a condition that counters associated with specific access bits in a specific region history entry in the region history table have reached a threshold.
The data cache region prefetcher of claim 1, wherein the data cache region prefetcher is configured to block other prefetchers from processing streams that are pending with the data cache region prefetcher.
The data cache region prefetcher of claim 1, wherein each line entry further includes second access bits which are set when a subsequent cache line request is within one access bit of a home bit in the predetermined number of access bits and which are used to determine the predetermined access patterns that are excluded from eviction to the region history table.
A processing system, comprising: a stream prefetcher; and a data cache region prefetcher according to claim 1, wherein the data cache region prefetcher is configured to block the stream prefetcher from processing streams that are pending with the data cache region prefetcher.
A method for data cache region prefetching, the method comprising: receiving a cache line request at a line entry table, the line entry table including a plurality of line entries, wherein each line entry includes a region defined by a predetermined number of access bits; setting an access bit for a given line entry if the cache line request is within the region; determining an access pattern from certain access bits in an evictable line entry; excluding line entries having predetermined access patterns from eviction to a region history table; and evicting line entries having pseudo-random access patterns to a region history table.
The method of claim 10, further comprising: indexing the region history table using at least an instruction pointer register (RIP).
The method of claim 11, further comprising: indexing the region history table using the RIP and an offset to support multiple pseudo-random access patterns for the same RIP, depending on whether an initial access to a region is at a beginning, end or middle of a cache line.
The method of claim 10, wherein the region history table includes a plurality of region history entries, each history line entry including counters for certain access bits in the predetermined number of access bits, the method further comprising: incrementing or decrementing the counters depending on whether respective access bits are set; and sending prefetch requests to a region prefetch generator on a condition that counters associated with specific access bits in a specific region history entry in the region history table meet or exceed a threshold.
The method of claim 10, further comprising: blocking other prefetchers from processing streams that are pending with the data cache region prefetcher.
The method of claim 10, wherein each line entry further includes second access bits, the method further comprising: setting the second access bits when a subsequent cache line request is within one access bit of a home bit in the predetermined number of access bits; and using the set second access bits to determine the predetermined access patterns that are excluded from eviction to the region history table. |
CROSS REFERENCE TO RELATED APPLICATION
This application claims the benefit of U.S. provisional application no. 62/377,314, having a filing date of August 19, 2016, which is incorporated by reference as if fully set forth herein.
BACKGROUND
Many processing devices utilize caches to reduce the average time required to access information stored in a memory. A cache is a smaller and faster memory that stores copies of instructions or data that are expected to be used relatively frequently. For example, central processing units (CPUs), one type of processor that uses caches, are generally associated with a cache or a hierarchy of cache memory elements. Other processors, such as graphics processing units, also implement cache systems. Instructions or data that are expected to be used by the CPU are moved from (relatively large and slow) main memory into the cache. When the CPU needs to read or write a location in the main memory, the CPU first checks to see whether a copy of the desired memory location is included in the cache memory. If this location is included in the cache (a cache hit), then the CPU can perform the read or write operation on the copy in the cache memory location. If this location is not included in the cache (a cache miss), then the CPU needs to access the information stored in the main memory and, in some cases, the information can be copied from the main memory and added to the cache. Proper configuration and operation of the cache can reduce the average latency of memory accesses to a value below the main memory latency and close to the cache access latency.
A prefetcher is used to populate the lines in the cache before the information in these lines has been requested.
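The hit/miss check described in the background can be sketched with a toy direct-mapped cache model. Everything here is an illustrative assumption (the `Cache` class, the 64-byte line size, the direct-mapped organization); it is not drawn from the disclosure, which concerns associative caches and prefetching.

```python
# Toy direct-mapped cache: each address maps to exactly one slot, and a
# lookup either hits (tag matches) or misses (line is filled from memory).
LINE_SIZE = 64  # bytes per cache line (a common, assumed value)

class Cache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.tags = [None] * num_lines  # one stored tag per slot

    def access(self, address):
        """Return True on a cache hit, False on a miss (which fills the line)."""
        line_addr = address // LINE_SIZE        # which memory line is wanted
        index = line_addr % self.num_lines      # which slot it maps to
        if self.tags[index] == line_addr:
            return True                         # hit: serve from the cache
        self.tags[index] = line_addr            # miss: copy line from memory
        return False

cache = Cache(num_lines=4)
assert cache.access(0x1000) is False  # first touch of a line misses
assert cache.access(0x1008) is True   # same 64-byte line now hits
```

A prefetcher's goal, in these terms, is to call the fill path before the program's own `access` would miss.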
The prefetcher monitors memory requests associated with applications running in the processor and uses the monitored requests to determine or predict that the processor is likely to access a particular sequence of memory addresses in a memory region, where the latter is generally referred to as a stream. Prefetchers keep track of multiple streams and independently prefetch data for the different streams.
BRIEF DESCRIPTION OF THE DRAWINGS
A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings wherein:
Figure 1 is a high level block diagram of a system that uses a data cache region prefetcher in accordance with certain implementations;
Figure 2 is a high level block diagram of a data cache region prefetcher in accordance with certain implementations;
Figure 3 is a block diagram of and a flow diagram for a line entry in a line entry table structure for a data cache region prefetcher in accordance with certain implementations;
Figure 4 is a flow diagram for a region history table structure in a data cache region prefetcher in accordance with certain implementations;
Figures 5A1, 5A2 and 5B are example flow diagrams of the methods for use with a data cache region prefetcher in accordance with certain implementations; and
Figure 6 is a block diagram of an example device in which one or more disclosed implementations may be implemented.
DETAILED DESCRIPTION
Described herein is a data cache region prefetcher. The data cache region prefetcher recognizes cache access patterns generated by a program (e.g., in response to load or store instructions), and issues prefetch requests to copy data from main memory to the data cache in anticipation of possible future requests for this data.
In particular, the data cache region prefetcher attempts to detect patterns where, after a given instruction accesses a data line, other data lines that are within a predetermined range of the initial accessed data line are subsequently accessed. The predetermined range of data lines including the initial accessed data line is termed a region, and each region is tagged with an instruction pointer register (RIP). The patterns associated with each region are then used to prefetch data lines for subsequent accesses by the same RIP.
Figure 1 is a high level block diagram of a processing system 100 that uses a data cache region prefetcher 160 in accordance with certain implementations. The processing system 100 includes a processor 105 that is configured to access instructions or data that are stored in a main memory 110. The processor 105 includes at least one core 115 that is used to execute the instructions or manipulate the data and a hierarchical (or multilevel) cache system 117 that speeds access to the instructions or data by storing selected instructions or data in the cache system 117. The described processing system 100 is illustrative and other architectures and configurations can be implemented without departing from the scope of the disclosure.
The cache system 117 includes a level 2 (L2) cache 120 for storing copies of instructions or data that are stored in the main memory 110. In an implementation, the L2 cache 120 is 16-way associative to the main memory 110 so that each line in the main memory 110 can potentially be copied to and from 16 particular lines (which are conventionally referred to as "ways") in the L2 cache 120. Relative to the main memory 110, the L2 cache 120 is implemented using smaller and faster memory elements.
The L2 cache 120 is deployed logically or physically closer to the core 115 (relative to the main memory 110) so that information can be exchanged between the core 115 and the L2 cache 120 more rapidly or with less latency.
The cache system 117 also includes an L1 cache 125 for storing copies of instructions or data that are stored in the main memory 110 or the L2 cache 120. Relative to the L2 cache 120, the L1 cache 125 is implemented using smaller and faster memory elements so that information stored in the lines of the L1 cache 125 can be retrieved quickly by the processor 105. The L1 cache 125 may also be deployed logically or physically closer to the core 115 (relative to the main memory 110 and the L2 cache 120) so that information may be exchanged between the core 115 and the L1 cache 125 more rapidly or with less latency (relative to communication with the main memory 110 and the L2 cache 120). In an implementation, different multilevel caches including elements such as L0 caches, L1 caches, L2 caches, L3 caches, and the like are used. In some implementations, higher-level caches are inclusive of one or more lower-level caches so that lines in the lower-level caches are also stored in the inclusive higher-level caches.
The L1 cache 125 is separated into level 1 (L1) caches for storing instructions and data, which are referred to as the L1-I cache 130 and the L1-D cache 135. Separating or partitioning the L1 cache 125 into the L1-I cache 130 for storing only instructions and the L1-D cache 135 for storing only data allows these caches to be deployed closer to the entities that are likely to request instructions or data, respectively. Consequently, this arrangement reduces contention, wire delays, and generally decreases latency associated with instructions and data.
In one implementation, a replacement policy dictates that the lines in the L1-I cache 130 are replaced with instructions from the L2 cache 120 and the lines in the L1-D cache 135 are replaced with data from the L2 cache 120.

The processor 105 also includes a stream prefetcher 150 and the data cache region prefetcher 160 that are used to populate data lines in one or more of the caches 125, 130, 135. Although the stream prefetcher 150 and data cache region prefetcher 160 are depicted as separate elements within the processor 105, the stream prefetcher 150 and data cache region prefetcher 160 can be implemented as a part of other elements. In an implementation, the stream prefetcher 150 and data cache region prefetcher 160 are hardware prefetchers. In an implementation, the stream prefetcher 150 and data cache region prefetcher 160 monitor memory requests associated with applications running in the core 115. For example, the stream prefetcher 150 and data cache region prefetcher 160 monitor memory requests (e.g., data line accesses) that result in cache hits or misses, which are recorded in a miss address buffer (MAB) 145. Although the stream prefetcher 150 and data cache region prefetcher 160 both determine or predict that the core 115 is likely to access a particular sequence of memory addresses in the main memory 110 (nominally called a stream), each prefetcher handles accesses differently.

The stream prefetcher 150 detects two or more contiguous and sequential memory accesses by the core 115. A direction of a sequence is determined based on a temporal sequence of the sequential memory accesses and the core 115 uses this direction to predict future memory accesses by extrapolating based upon the current or previous sequential memory accesses.
The stream prefetcher 150 then fetches the information in the predicted locations from the main memory 110 and stores this information in an appropriate cache so that the information is available before it is requested by the core 115.

In general, the data cache region prefetcher 160 creates a region when a data cache miss occurs. Each region includes a predetermined range of data lines proximate to each data cache miss and is tagged with an associated RIP. The data cache region prefetcher 160 then compares subsequent memory requests against the predetermined range of data lines for each of the existing regions. For each match, the data cache region prefetcher 160 sets an access bit and attempts to identify a pseudo-random access pattern based on the set access bits. The data cache region prefetcher 160 later increments or decrements appropriate counters to track how often the pseudo-random access pattern occurs. If the pseudo-random access pattern occurs frequently (e.g., based on preset thresholds), then the next time a memory request is processed with the same RIP, the data cache region prefetcher 160 prefetches the data lines in accordance with the pseudo-random access pattern for that RIP.

In an implementation, there is feedback between the stream prefetcher 150 and data cache region prefetcher 160. This feedback is used to throttle the stream prefetcher 150. For example, the enabling of a flag allows the data cache region prefetcher 160 to block the stream prefetcher 150 from acting on newly created streams with pending data cache region prefetch requests as described herein below.

Figure 2 is a high level block diagram of a data cache region prefetcher 200 in accordance with certain implementations. The data cache region prefetcher 200 includes a line entry table 205 (which is a training structure) coupled to a region history table 210 (which is a backing structure populated by the training structure).
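The general region-training behavior described above for the data cache region prefetcher 160 (create a region on a data cache miss, set access bits for subsequent in-range accesses, train per-bit counters, and prefetch once a counter crosses a threshold) can be sketched as a small software model. The sketch is illustrative only: the class and method names are invented, while the +6/-4 range, the 2 bit saturating up/down counters, and the threshold of 2 follow values given in the text.

```python
# Hypothetical software model of the region-prefetcher training flow described
# in the text. Names are invented for the sketch; this is not hardware detail.

LINE_SIZE = 64          # bytes per cache line
OFFSETS = range(-4, 7)  # tracked region: -4 .. +6 lines around "home" (bit 0)

class LineEntry:
    """One training entry: a region created by a data cache miss."""
    def __init__(self, rip, addr):
        self.rip = rip
        self.home = addr // LINE_SIZE          # line address of the miss
        self.access = {off: False for off in OFFSETS}
        self.access[0] = True                  # home bit

    def record(self, addr):
        """Set the access bit if addr falls inside this region."""
        off = addr // LINE_SIZE - self.home
        if off in self.access:
            self.access[off] = True
            return True
        return False

class RegionHistory:
    """Backing table: a 2 bit saturating up/down counter per access bit."""
    def __init__(self):
        self.counters = {}                     # (rip, offset) -> {off: 0..3}

    def train(self, key, entry):
        ctrs = self.counters.setdefault(key, {off: 0 for off in OFFSETS})
        for off in OFFSETS:
            if off == 0:
                continue                       # home bit has no counter
            if entry.access[off]:
                ctrs[off] = min(3, ctrs[off] + 1)   # saturate at 3
            else:
                ctrs[off] = max(0, ctrs[off] - 1)   # saturate at 0

    def prefetch_offsets(self, key, threshold=2):
        """Offsets whose counters meet the threshold, i.e. lines to prefetch."""
        ctrs = self.counters.get(key, {})
        return sorted(off for off, c in ctrs.items() if c >= threshold)
```

After a region with accesses at lines +1 and +3 is trained twice under the same key, `prefetch_offsets` reports those two offsets for prefetching relative to the next miss's home line.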
The number of table entries and the size of the fields described herein are illustrative only and other values can be used without departing from the scope of the disclosure.

The line entry table 205 includes a predetermined number of line entries 215. In an implementation, the line entry table 205 includes 32 entries. Each line entry 215 includes a RIP field 220, an address field 222 for the data line, an access bits field 224 and a second line access bits field 226. In an implementation, the RIP field 220 is a 20 bit field, the address field 222 is a 44 bit field, the access bits field 224 is an 11 bit field and the second line access bits field 226 is a 3 bit field. In an implementation, the line entry table 205 is implemented using a content addressable memory (CAM).

The region history table 210 includes a predetermined number of region history entries 230 that are indexed in one implementation using a 9-bit hash of RIP[19:0] and Addr[5:4] (where the latter is also referred to as an offset). In an implementation, the region history table 210 includes 512 entries. Each region history table entry 230 has an access bits/counter field 238, where each bit (excluding bit 0) in the access bits/counter field 238 has a 2 bit counter. In an implementation, the access bits/counter field 238 is a 22 bit two-dimensional array or data structure with 11 entries and a 2 bit counter per entry. In an implementation, the 2 bit counters are up/down counters.

Memory requests or data line accesses from a processor are inserted into the line entry table 205 on data cache misses to create regions. The RIP field 220 and address field 222 of each region are populated with the RIP and address associated with each missed memory request. Each region is defined by a predetermined range of data lines proximate the memory request that missed the data cache. The access bits field 224 includes a bit for each data line in the predetermined range of data lines.
A predetermined position or bit in the access bits field 224 is designated as a home position or home bit. The home bit corresponds to the memory request that missed the data cache and created the specific region. In the illustrative implementation, the predetermined range is 10 data lines and the range is +6 data lines and -4 data lines from the home bit, where the home bit is bit 0 or position 0. Subsequent memory requests are compared (using for example a CAM) to determine if the subsequent memory requests are within the predetermined range of data lines. A corresponding bit is set in the access bits field 224 of the region for each subsequent memory request that is within the predetermined range.

The setting of the access bits in the access bits field 224 establishes pseudo-random patterns that are used by the region history table 210 to potentially prefetch data lines. In particular, when a memory request in the line entry table 205 ages out and has a valid pattern established by the setting of some bits in the access bits field 224, the memory request is evicted to the region history table 210 and the fields as described above are populated. The second line access bits field 226 is used to determine if the pseudo-random pattern indicates two or more contiguous and sequential memory accesses (i.e., a non-valid pattern), in which case the region is not moved to the region history table 210 and is instead handled by the stream prefetcher 150 as shown in Figure 1.

The region history table 210 tracks the number of times a memory request with a given RIP and offset was followed by requests to surrounding data lines in accordance with the established pattern. The tracking information is kept using the 2 bit counters in the access bits/counter field 238.
In an implementation, when updating the region history table entry 230, each individual 2 bit up/down counter in the access bits/counter field 238 is either incremented (if the corresponding access bit in the line entry is 1) or decremented (if the corresponding access bit in the line entry is 0). When decrementing, these 2 bit up/down counters saturate at 0. When incrementing, these 2 bit up/down counters saturate at 3. When a subsequent data cache miss creates a new line entry, the associated RIP and offset are used to select one of the entries in the region history table 210, and then the 2 bit counters in the access bits/counter field 238 are used to determine if a prefetch is appropriate by comparing against a threshold (e.g., 2). If a prefetch is warranted, the appropriate or relevant information is sent to a region prefetch generation unit 250 to generate a prefetch request, which in turn sends the prefetch request to a prefetch request first in, first out (FIFO) buffer (not shown).

Figure 3 is a block diagram of, and a flow diagram for, a line entry 300 in a line entry table structure for a data cache region prefetcher in accordance with certain implementations. Each line entry 300 includes a RIP field 305, an address field 310 for the data cache miss, an access bits field 315 and a second (2nd) line access bits field 320. In an implementation, the RIP field 305 is a 20 bit field, the address field 310 is a 44 bit field, the access bits field 315 is an 11 bit field and the second line access bits field 320 is a 3 bit field. In an implementation, the access bits field 315 represents the range of the data cache region prefetcher from +6 to -4 data lines, where bit 0 is the data line or address associated with the data cache miss (which is designated "home" as stated above).

The second line access bits field 320 is used to determine if there are two or more contiguous and sequential memory accesses relative to home.
That is, the second line access bits field 320 is used to differentiate between sequential (stride = +1 or -1 cache lines) streams and other, non-sequential access patterns. Sequential streams train on the second access to the stream/region if that access is to the next sequential (+/-1) cache line. The stream prefetcher handles sequential streams, which are excluded from the region history table. In particular, if second line access bits +1 and -1 are set, then the corresponding stream or associated region is not moved to the region history table. If the second access to the region is not to the next sequential (+/-1) cache line, then the second line access bit 0 is set. The second line access bit 0 indicates that the second access to the region was not to the next sequential (+/-1) cache line. These line entries, with non-sequential access patterns, are candidates for inclusion in the region history table.

Operationally, a data cache (Dc) miss status is used as an input to the line entry table (step 350). Each data cache miss which does not update an existing region creates a new region that is entered into a new line entry 300 and the appropriate fields are populated as discussed herein (step 352). The old line entry 300 is evicted in accordance with a least-recently-used replacement algorithm. If a valid pattern exists in the access bits field 315 and second line access bits field 320 in the old line entry 300, the old line entry 300 is used to update the region history table (step 354).

Figure 4 is a flow diagram 400 for a region history table 405 in a data cache region prefetcher in accordance with certain implementations. The region history table 405 includes multiple region history table entries 410 which are RIP and offset-indexed. Each region history table entry 410 includes an access bits/counter field 416 that includes 2 bit counters for each bit in the access bits/counter field 416.
In an implementation, the access bits/counter field 416 is a 22 bit two-dimensional array or data structure with 11 entries and a 2 bit counter per entry. An address offset 414 (shown as an octo-word offset with address bits 5 and 4) is used to allow multiple different line access patterns to be stored in the region history table 405 so that multiple different data line access patterns can be prefetched for a given RIP based on where within the 64-byte cache line the initial data line access (i.e., home bit) is located. If the initial data access within a region is near the beginning or the end of a data line, additional data lines or a different pattern of data lines may need to be prefetched. More specifically, the region prefetcher tracks a pseudo-random sequence of load/store memory accesses made by a program to a region of system memory. These load/store memory accesses are typically 4, 8 or 16 bytes, much smaller than a cache line, which is typically 64 bytes. The region prefetcher maps these load/store memory accesses onto a second, coarser pseudo-random pattern of 64B cache lines surrounding the initial memory access cache miss which created the region.
This second, coarser pseudo-random pattern is the line access bits.

Even assuming the pseudo-random sequence of load/store memory accesses is consistent (i.e., the same address offsets are used from memory region to memory region), the mapping of these 4, 8 or 16 byte memory accesses onto 64B cache lines (the line access bits) varies depending on whether the initial memory access cache miss which created the region was to the beginning, middle or end of a cache line.

Including the address offset 414 (Addr[5:4]) of the initial memory access into the index used to access the region history table allows multiple, different line access patterns to be stored in the region history table for the same RIP based on the alignment of the region within system memory relative to a 64B cache line boundary.

Operationally, when an old line entry 300 is evicted from the line entry table and if a valid pattern exists in the access bits field 315 and second line access bits field 320 in the old line entry 300, the old line entry 300 is used to update the region history table (step 420). In particular, the given RIP and address offset for the old line entry 300 are used as an index to read out a region history table entry 410 from the region history table 405. The 2 bit counters in the access bits/counter field 416 are used to track the number of times the given RIP and address offset follow the established pattern. Each bit in the access bits field 315 in the old line entry 300 is examined. If a bit in the access bits field 315 is 1, then the data cache region prefetcher increments the corresponding 2 bit counter in the access bits/counter field 416 in the region history table entry 410.
If a bit in the access bits field 315 is 0, then the data cache region prefetcher decrements the corresponding 2 bit counter in the access bits/counter field 416 in the region history table entry 410.

When a subsequent data cache miss creates a new line entry 300, the RIP and address offset associated with the new line entry 300 are used as an index to read out a region history table entry 410 from the region history table 405. The data cache region prefetcher then examines each 2 bit counter in the access bits/counter field 416. If a counter is above a threshold (e.g., 2), the data cache region prefetcher generates a region prefetch request (step 424) for the corresponding cache line offset. These cache line offsets are relative to the home address of the new line entry 300. The generated region prefetch request is placed in the data cache prefetch request queue (step 426).

Figures 5A1 and 5A2 are an example flow diagram 500 of a method for use with a data cache region prefetcher in accordance with certain implementations. The data cache region prefetcher receives a memory request upon a data cache miss (step 505). The memory request is compared against all line entries in the line entry table (step 510). If there is a match, the appropriate bits in the line entry table are set (step 512). If there is no match, a new region is created and entered into a line entry in the line entry table (step 515). Two different process branches occur at this point: 1) updating the region history table as described in Figures 5A1 and 5A2, and 2) region prefetch request generation as described in Figure 5B (denoted as "B" in Figure 5A1). Referring still to Figures 5A1 and 5A2, a home bit is set to the address of the memory request and the RIP is stored in the line entry (step 517). Subsequent memory requests are reviewed to determine if they are within a predetermined range of the memory request (step 519).
If subsequent memory requests are within the predetermined range, then specific line access bits are set in the line entry for the specific region (step 521). If subsequent memory requests are not within the predetermined range, then a new region is created (step 522).

At a given time, each line entry will age out as new line entries are being created (step 523). At this time, the line access bits are reviewed to determine what pattern exists (step 525). If the detected pattern is contiguous and sequential (e.g., there is an ascending or descending pattern relative to the home bit), then the line entry is discarded (step 527). In an implementation, the data cache region prefetcher discards a line entry when (second line access bit [+1] AND (line access bits [+6:+1] all set to 1)) equals 1 OR (second line access bit [-1] AND (line access bits [-1:-4] all set to 1)) equals 1. If the detected pattern is pseudo-random (step 529) (e.g., bits 6, 2 and 3 are set), the line entry is prepared for moving to the region history table using the RIP and address offset of the memory request as an index (step 531). That is, the RIP and address offset of the line entry are used as an index to read an entry out of the region history table. If the corresponding access bit in the line entry is set to 1, then the data cache region prefetcher increments the specific counters (step 537). If the corresponding access bit in the line entry is set to 0, then the data cache region prefetcher decrements the specific counters. If the detected pattern is not pseudo-random, the line entry is handled by other prefetchers or modules for other processing (step 532).

Referring now to Figure 5B, the RIP and the offset (shown as an octo-word offset with address bits 5 and 4 in Figure 4) for the new entry are used as an index to read an entry from the region history table (step 550).
The data cache region prefetcher then examines each 2 bit counter in the access field of the region history table entry (step 555). If a counter is above a threshold, the data cache region prefetcher generates a region prefetch request (step 570). The generated region prefetch request is placed in the data cache prefetch request queue (step 575). If the counter is not above the threshold, processing continues (step 560); that is, a region prefetch request is not generated at this time.

Figure 6 is a block diagram of an example device 600 in which one or more portions of one or more disclosed embodiments may be implemented. The device 600 may include, for example, a head mounted device, a server, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The device 600 includes a processor 602, a memory 604, a storage 606, one or more input devices 608, and one or more output devices 610. The device 600 may also optionally include an input driver 612 and an output driver 614. It is understood that the device 600 may include additional components not shown in Figure 6.

The processor 602 may include a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core may be a CPU or a GPU. The memory 604 may be located on the same die as the processor 602, or may be located separately from the processor 602. The memory 604 may include a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.

The storage 606 may include a fixed or removable storage, for example, a hard disk drive, a solid state drive, an optical disk, or a flash drive.
The input devices 608 may include a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 610 may include a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).

The input driver 612 communicates with the processor 602 and the input devices 608, and permits the processor 602 to receive input from the input devices 608. The output driver 614 communicates with the processor 602 and the output devices 610, and permits the processor 602 to send output to the output devices 610. It is noted that the input driver 612 and the output driver 614 are optional components, and that the device 600 will operate in the same manner if the input driver 612 and the output driver 614 are not present.

In general, in an implementation, a data cache region prefetcher includes a line entry data table having a plurality of line entries, where each line entry includes a region defined by a predetermined number of access bits and where an access bit for a given line entry is set if a cache line is requested within the region. The data cache region prefetcher further includes a region history table configured to receive evictions from the line entry data table. The data cache region prefetcher determines an access pattern from certain access bits in an evictable line entry and excludes line entries having predetermined access patterns from eviction to the region history table. In an implementation, the data cache region prefetcher evicts the line entries having pseudo-random access patterns to the region history table.
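The exclusion of sequential streams from the region history table can be sketched as a predicate over a line entry's bits. The sketch follows the discard condition given earlier for the line entry table (a second line access bit combined with fully set ascending or descending line access bits); the function names and the dict-based bit representation are illustrative assumptions, not implementation detail from the text.

```python
# Hypothetical check mirroring the discard condition described in the text:
# a line entry is treated as a sequential stream (and excluded from the
# region history table) when the second access went to the +1 or -1 line
# and every line access bit on that side of home is set.

def is_sequential_stream(second_line_bits, access_bits):
    """second_line_bits: dict with keys +1, -1 and 0 ('neither neighbor').
    access_bits: dict mapping offsets -4..+6 to booleans (0 is home)."""
    ascending = second_line_bits.get(+1, False) and all(
        access_bits[off] for off in range(1, 7))       # +1 .. +6 all set
    descending = second_line_bits.get(-1, False) and all(
        access_bits[off] for off in range(-4, 0))      # -4 .. -1 all set
    return ascending or descending

def should_evict_to_history(second_line_bits, access_bits):
    # Pseudo-random patterns go to the region history table; sequential
    # streams are left to the stream prefetcher instead.
    return not is_sequential_stream(second_line_bits, access_bits)
```

A region whose accesses hit only home and a scattered subset of lines (e.g., +2, +3, +6) passes `should_evict_to_history`, while a fully ascending run does not.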
In an implementation, the region history table is indexed using at least an instruction pointer register (RIP). In an implementation, the region history table is further indexed using an offset to support multiple pseudo-random access patterns, for the same RIP, depending on whether an initial access to a region is at a beginning, end or middle of a cache line. In an implementation, each region history entry includes the predetermined number of access bits, each region history entry includes counters for certain access bits in the predetermined number of access bits, and the counters are incremented or decremented depending on whether the access bit is set for the evictable line entry. In an implementation, the data cache region prefetcher further includes a region prefetch generator configured to receive prefetch requests from the region history table on a condition that counters associated with specific access bits in a specific region history entry in the region history table have reached a threshold. In an implementation, the data cache region prefetcher blocks other prefetchers from processing streams that are pending with the data cache region prefetcher. In an implementation, each line entry further includes second access bits which are set when a subsequent cache line request is within one access bit of a home bit in the predetermined number of access bits and which are used to determine the predetermined access patterns that are excluded from eviction to the region history table.

In an implementation, a processing system includes a stream prefetcher and a data cache region prefetcher. The data cache region prefetcher includes a line entry data table having a plurality of line entries and a region history table which receives evictions from the line entry data table. Each line entry includes a region defined by a predetermined number of access bits, and an access bit for a given line entry is set if a cache line is requested within the region.
The data cache region prefetcher determines an access pattern from certain access bits in an evictable line entry, excludes line entries having predetermined access patterns from eviction to the region history table and blocks the stream prefetcher from processing streams that are pending with the data cache region prefetcher. In an implementation, the data cache region prefetcher evicts line entries having pseudo-random access patterns to the region history table. In an implementation, the region history table is indexed using at least an instruction pointer register (RIP). In an implementation, the region history table is further indexed using an offset to support multiple pseudo-random access patterns, for the same RIP, depending on whether an initial access to a region is at a beginning, end or middle of a cache line. In an implementation, each region history entry includes the predetermined number of access bits, each history line entry includes counters for certain access bits in the predetermined number of access bits, and the counters are incremented or decremented depending on whether the respective access bit is set. In an implementation, the system includes a region prefetch generator configured to receive prefetch requests from the region history table on a condition that counters associated with specific access bits in a specific region history entry in the region history table have reached a threshold.
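Because the region history table is indexed by both the RIP and the offset, a separate entry can be trained per (RIP, Addr[5:4]) pair. The text specifies a 9-bit hash of RIP[19:0] and Addr[5:4] but not the hash function itself, so the XOR-fold below is purely an illustrative assumption.

```python
# Illustrative 9-bit region history table index built from RIP[19:0] and
# Addr[5:4]. The actual hash function is not specified in the text; this
# XOR-fold is an assumption made for the sketch.

def region_history_index(rip, addr):
    rip20 = rip & 0xFFFFF            # RIP[19:0]
    offset = (addr >> 4) & 0x3       # Addr[5:4]: quarter of the 64-byte line
    folded = rip20 ^ (rip20 >> 9)    # fold the 20 RIP bits together
    return (folded ^ (folded >> 9) ^ (offset << 7)) & 0x1FF  # 9-bit index
```

Folding Addr[5:4] into the index lets the same RIP train up to four different line access patterns, one per alignment of the initial access within the cache line.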
In an implementation, each line entry further includes second access bits which are set when a subsequent cache line request is within one access bit of a home bit in the predetermined number of access bits and which are used to determine the predetermined access patterns that are excluded from eviction to the region history table.

In an implementation, a method for data cache region prefetching includes a cache line request being received at a line entry table, the line entry table having a plurality of line entries, where each line entry includes a region defined by a predetermined number of access bits. An access bit is set for a given line entry if the cache line request is within the region. An access pattern is determined from certain access bits in an evictable line entry. Line entries having predetermined access patterns are excluded from eviction to a region history table and line entries having pseudo-random access patterns are evicted to a region history table. In an implementation, the region history table is indexed using at least an instruction pointer register (RIP). In an implementation, the region history table is indexed using the RIP and an offset to support multiple pseudo-random access patterns, for the same RIP, depending on whether an initial access to a region is at a beginning, end or middle of a cache line. In an implementation, each history line entry includes counters for certain access bits in the predetermined number of access bits and the counters are incremented or decremented depending on whether respective access bits are set. In an implementation, prefetch requests are sent to a region prefetch generator on a condition that counters associated with specific access bits in a specific history line entry meet or exceed a threshold. In an implementation, other prefetchers are blocked from processing streams that are pending with the data cache region prefetcher.
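The blocking of other prefetchers mentioned above (the feedback flag described with Figure 1 that throttles the stream prefetcher while region prefetch requests are pending) can be modeled as a small arbiter. The class below is a hypothetical software illustration; its names and per-stream granularity are assumptions, not hardware detail from the text.

```python
# Hypothetical coordination flag between the two prefetchers: while region
# prefetch requests for a stream are pending, the stream prefetcher skips
# acting on that newly created stream.

class PrefetchArbiter:
    def __init__(self):
        self.pending_region_streams = set()

    def region_request_issued(self, stream_id):
        # Data cache region prefetcher raises the flag for this stream.
        self.pending_region_streams.add(stream_id)

    def region_requests_drained(self, stream_id):
        # Flag is cleared once the region prefetch requests complete.
        self.pending_region_streams.discard(stream_id)

    def stream_prefetcher_may_act(self, stream_id):
        # Stream prefetcher consults the flag before training on a stream.
        return stream_id not in self.pending_region_streams
```

The stream prefetcher resumes handling a stream as soon as the region prefetcher's pending requests for it drain.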
In an implementation, each line entry further includes second access bits and the second access bits are set when a subsequent cache line request is within one access bit of a home bit in the predetermined number of access bits and the set second access bits are used to determine the predetermined access patterns that are excluded from eviction to the region history table.

In general and without limiting embodiments described herein, a computer readable non-transitory medium including instructions which when executed in a processing system cause the processing system to execute a method for data cache region prefetching.

It should be understood that many variations are possible based on the disclosure herein. Although features and elements are described above in particular combinations, each feature or element may be used alone without the other features and elements or in various combinations with or without other features and elements.

The methods provided may be implemented in a general purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors may be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media).
The results of such processing may be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements aspects of the implementations.

The methods or flow charts provided herein may be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
In an embodiment of a transactional memory system, an apparatus includes a processor and an execution logic to enable concurrent execution of at least one first software transaction of a first software transaction mode and a second software transaction of a second software transaction mode and at least one hardware transaction of a first hardware transaction mode and at least one second hardware transaction of a second hardware transaction mode. In one example, the execution logic may be implemented within the processor. Other embodiments are described and claimed. |
1. A device comprising: a processor; execution logic to enable, in a transactional memory system, concurrent execution of at least one first software transaction of a first software transaction mode and a second software transaction of a second software transaction mode, and at least one first hardware transaction of a first hardware transaction mode and at least one second hardware transaction of a second hardware transaction mode; tracking logic to activate a flag to indicate that at least one software transaction is undergoing execution in the first software transaction mode or the second software transaction mode; intersection logic to determine, at an end of a first hardware transaction of the second hardware transaction mode, whether a filter set of the first hardware transaction conflicts with a filter set of the at least one software transaction undergoing execution; and final completion logic to commit the first hardware transaction when no conflict exists and to abort the first hardware transaction when a conflict exists.

2. The device of claim 1, wherein in the second hardware transaction mode the first hardware transaction is to update the filter set of the first hardware transaction for each memory access of the first hardware transaction.

3. The device of claim 1, wherein in the first software transaction mode the first software transaction is to acquire a first lock and a second lock at an end of the first software transaction and to update a transactional memory of the transactional memory system with write data stored in a hash table.

4. The device of claim 3, wherein in the first software transaction mode the first software transaction is to invalidate another software transaction of the first software transaction mode after the commit of the first software transaction.

5. The device of claim 4, wherein in the second hardware transaction mode the second hardware transaction is to acquire a commit lock and a transaction lock prior to the commit of the second hardware transaction.

6. The device of claim 4, wherein the first software transaction is to invalidate the another software transaction if an intersection occurs between a filter set of the first software transaction and a filter set of the another software transaction.

7. A method comprising: concurrently executing, by a processor in a transactional memory system, a software transaction of a first thread and a hardware transaction of a second thread; activating a global lock to indicate execution of the software transaction; determining a state of the global lock at an end of the hardware transaction; and if the global lock is active, determining whether a filter set of the first thread intersects a filter set of the second thread, and if not, committing the hardware transaction.

8. The method of claim 7, further comprising committing the software transaction and deactivating the global lock at an end of the software transaction.

9. The method of claim 7, further comprising committing the hardware transaction without determining whether the filter sets intersect when the global lock is inactive at the end of the hardware transaction.

10. The method of claim 7, further comprising: inserting, into the filter set of the first thread, an address of an access of the hardware transaction to a transactional memory of the transactional memory system; and updating one or more fields of the filter set of the first thread based on hashing the accessed address with one or more hash functions.

11. The method of claim 7, further comprising: re-hashing a hash table from a first size to a second size concurrently in the software transaction; and accessing the hash table in the hardware transaction and enabling the hardware transaction to commit during the concurrent re-hashing.

12. A method comprising: executing a second hardware transaction in a second hardware transaction mode of a transactional memory system; committing the second hardware transaction at an end of the second hardware transaction; and after the commit of the second hardware transaction, if a conflict exists between the second hardware transaction and at least one software transaction executed concurrently with the second hardware transaction, invalidating the at least one software transaction.

13. The method of claim 12, further comprising determining whether a commit lock was acquired prior to the commit of the second hardware transaction, and if so, determining whether a conflict exists between the second hardware transaction and a first software transaction that acquired the commit lock.

14. The method of claim 12, further comprising aborting the second hardware transaction when a conflict exists between the second hardware transaction and a first software transaction, wherein the conflict is determined to exist if a filter set of the second hardware transaction intersects a filter set of the first software transaction.

15. The method of claim 12, further comprising determining whether one or more transaction locks were acquired by one or more hardware transactions after a first software transaction acquired a commit lock, and if so, delaying the commit of the first software transaction until the one or more transaction locks are released.

16. The method of claim 12, further comprising: executing a first hardware transaction in a first hardware transaction mode of the transactional memory system; determining, at an end of the first hardware transaction, whether at least one software transaction is executing concurrently; and if so, aborting the first hardware transaction, and otherwise committing the first hardware transaction.

17. The method of claim 12, further comprising: validating a read operation by a first software transaction on a transactional memory of the transactional memory system during execution of the first software transaction; and if the read operation is validated, adding a location of the read operation to a filter set of the first software transaction.

18. The method of claim 12, further comprising: executing a second software transaction in a second software transaction mode, including acquiring a first lock and a commit lock at a beginning of execution of the second software transaction and directly updating one or more memory locations during execution of the second software transaction; and at an end of the second software transaction, committing the second software transaction, invalidating one or more concurrently executing software transactions of a first software transaction mode, and thereafter releasing the first lock and the commit lock.

19. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method of any one of claims 12-18.

20. A system comprising: a processor comprising hybrid transactional memory logic to concurrently execute at least one hardware transaction and at least one software transaction, wherein the hybrid transactional memory logic is to execute a first transaction in a first hardware transaction mode until the first transaction commits or the first transaction retries a first threshold number of times in the first hardware transaction mode, and thereafter, if the first transaction has not committed, to execute the first transaction in a first software transaction mode, wherein the hybrid transactional memory logic includes intersection logic to determine whether a filter set associated with the first transaction executed in the first hardware transaction mode conflicts with a filter set associated with a second transaction executed in the first software transaction mode, and responsive to the conflict, the hybrid transactional memory logic is to prevent the first transaction from committing in the first hardware transaction mode; and a transactional memory coupled to the processor.

21. The system of claim 20, wherein the hybrid transactional memory logic is to execute the first transaction in the first software transaction mode until the first transaction commits or the first transaction retries a second threshold number of times in the first software transaction mode, and after the second threshold number of times, to execute the first transaction in a second software transaction mode in which the first transaction directly updates the transactional memory.

22. The system of claim 20, wherein the hybrid transactional memory logic is to execute the first transaction in a second hardware transaction mode prior to execution in the first hardware transaction mode, wherein the hybrid transactional memory logic is to execute the first transaction a third threshold number of times in the second hardware transaction mode prior to executing the first transaction in the first hardware transaction mode.

23. The system of claim 20, wherein the hybrid transactional memory logic is to cause the first transaction to validate read data during execution in the first software transaction mode, to update a filter set associated with the first transaction executed in the first software transaction mode based on an address associated with the read data, and to update a hash table with write data.

24. The system of claim 23, wherein the hybrid transactional memory logic is to cause: the second transaction, in the second software transaction mode, to acquire a first lock and a second lock at a beginning of the second transaction, and thereafter to directly update the transactional memory during execution of the second transaction; and the first transaction, in the first software transaction mode, to acquire the first lock and the second lock upon commit of the first transaction, and thereafter to update the transactional memory with the write data from the hash table and to invalidate at least one other software transaction concurrently executing in the first software transaction mode.
Enabling Maximum Concurrency in a Hybrid Transactional Memory System

Background

In a parallel programming environment, accesses to shared memory locations must be properly managed and synchronized, which can be difficult to do correctly. Traditionally, synchronization between threads accessing shared memory is achieved using locks to protect shared data from simultaneous access. However, locking is often overly conservative in serializing accesses to shared data that may not actually conflict at execution time, yet determining this when the code is written is often tricky or impossible. As an alternative, transactional memory has been proposed to allow threads to speculatively execute critical sections (called transactions) in parallel. If a conflict occurs during execution, a thread stops or aborts its transaction and executes it again to resolve the conflict. In a transactional memory system, a thread can speculatively execute a transaction without changing the contents of shared memory locations until the transaction is later committed.
If a conflict is detected between two transactions, one of the transactions can be aborted so that the other can commit, at which point the committing transaction can change the contents of the shared memory locations.

Brief Description of the Drawings

FIG. 1 is a block diagram of a system in accordance with an embodiment.
FIG. 2 is a high level flow diagram of the execution of a transaction in accordance with an embodiment.
FIG. 3 illustrates possible timings between hardware transactions and software transactions in accordance with an embodiment.
FIG. 4 is a block diagram of a flow of a hybrid transactional memory system in accordance with an embodiment of the present invention.
FIG. 5 is a flow diagram of execution of a first hardware transaction in accordance with an embodiment.
FIG. 6 shows details of a phase of a first hardware transaction in accordance with an embodiment.
FIG. 7 is a flow diagram of the execution of a second transaction in accordance with an embodiment.
FIG. 8 shows details of a hardware transaction based on a basic Bloom filter in accordance with an embodiment.
FIG. 9 shows details of a hardware transaction based on an optimized Bloom filter in accordance with an embodiment.
FIG. 10 is a flow diagram of the execution of a speculative software transaction in accordance with an embodiment.
FIG. 11 illustrates details of software transaction execution in accordance with an embodiment.
FIG. 12 is a flow diagram of the execution of an irrevocable software transaction in accordance with an embodiment.
FIG. 13 illustrates details of an irrevocable software transaction in accordance with an embodiment.
FIG. 14 is a block diagram of a system in accordance with another embodiment.

Detailed Description

In various embodiments of a transactional memory system, information about the locations of accessed memory can be used to detect conflicts between one or more hardware transactions executing concurrently with one or more software transactions.
In some implementations, this information can be maintained in a filter set associated with the thread executing the transaction. More specifically, embodiments may implement these filter sets as so-called Bloom filters in which information about the locations of accessed memory is stored.

In general, a Bloom filter can be implemented as a bit vector including a plurality of fields, each providing a value associated with one or more memory locations. In operation, the address of an accessed memory location (or a portion thereof) is hashed with one or more hash functions, and the hash results are used to index corresponding entries of the bit vector. More specifically, upon an access and the hash computation, the indicated fields of the bit vector are set to a logic one or valid value to indicate that the corresponding address has been accessed. Conversely, any field with a logic zero or invalid value indicates that the one or more addresses mapping to that field have not been accessed.

Conflict detection can be performed at least in part using the values of multiple Bloom filters. More specifically, the contents of the Bloom filter of a first thread can be compared to the contents of the Bloom filter of a second thread having a concurrently executing transaction. If this intersection comparison indicates that the accessed memory locations overlap in one or more places, a conflict is detected and one or more operations to terminate or abort a transaction may occur. If instead the comparison indicates that the accessed locations do not intersect, one or both of the transactions can proceed to commit without a conflict.

Embodiments thus can be used to determine conflicts between hardware transactions and concurrently executing software transactions.
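As a concrete illustration, a per-thread filter set of this kind might be modeled as follows. This is a minimal Python sketch, not the patented implementation: the bit-vector width, the number of hash functions, and the use of SHA-256 to derive the indices are all illustrative assumptions.

```python
import hashlib


class BloomFilter:
    """Bit-vector filter recording which memory addresses a transaction touched."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # the bit vector, held in a Python int

    def _indices(self, addr):
        # Derive num_hashes field indices from a digest of the address
        # (an illustrative choice of hash functions).
        digest = hashlib.sha256(str(addr).encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.num_bits
                for i in range(self.num_hashes)]

    def insert(self, addr):
        # Set each indicated field to logic one: "this address was accessed".
        for i in self._indices(addr):
            self.bits |= 1 << i

    def may_contain(self, addr):
        # No false negatives are possible; false positives are allowed.
        return all((self.bits >> i) & 1 for i in self._indices(addr))

    def intersects(self, other):
        # Any shared set bit signals at least a potential conflict.
        return (self.bits & other.bits) != 0

    def clear(self):
        self.bits = 0
```

Two threads whose filters share no set bits can both commit; any shared bit forces the hardware transaction to abort, even though the overlap may be a false positive.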
Using an embodiment that provides a Bloom filter per thread, a hardware transaction that completes execution while the single global lock is held by a software transaction is forced to abort only when a conflict is found. Bloom filters can report false positives, so spurious aborts can still occur; nevertheless, the use of Bloom filters can improve the commit rate of hardware transactions.

Embodiments may be used in a hybrid transactional memory (HyTM) system that uses a single global lock, acquired by a given software transaction, to provide both software transactions and hardware transactions. Hardware transactional memory is implemented by the processor hardware alone, which makes a best effort to complete a transaction to be committed. Software transactional memory is implemented entirely in software to synchronize shared memory in a multi-threaded program.

At the end of a hardware transaction, the hardware transaction consults the single global lock. If the lock is free, the hardware transaction can commit successfully. If the single global lock is held, conflict detection can be performed using per-thread Bloom filters, which represent the read and write sets of each transaction. In this way, non-conflicting hardware transactions can commit even while the single global lock is occupied by a software transaction.

Thus, embodiments increase the amount of concurrency achieved in a hybrid transactional memory system. To detect conflicts between software transactions and hardware transactions, each thread is associated with a Bloom filter. During the execution of a transaction in a thread, each read and write is annotated to add the memory location to the Bloom filter. In an embodiment, this annotation can be made through a library call. However, other embodiments may instrument read and write memory accesses to embed such annotations.
Alternatively, the compiler can insert the instructions that perform the Bloom filter insertion.

When a hardware transaction (i.e., the critical section of a transaction) completes, the transaction consults the global lock before committing, and if the lock is free, the transaction can commit successfully. If instead the lock is held, the Bloom filter contents of the hardware transaction and of the software transaction that holds the global lock are compared in an intersection operation to determine whether a conflict exists. A Bloom filter allows false positives but never false negatives. Therefore, a conflict may be detected even though the transactions have no actual conflict, but if the transactions do access the same memory location, the intersection comparison will never report zero conflicts. Consequently, even while the lock is held, a hardware transaction can commit successfully as long as the Bloom filters do not report a conflict.

In a particular hybrid transactional memory system, a single software transaction can execute concurrently with one or more hardware transactions. At the beginning of a software transaction, the transaction acquires the single global lock to ensure exclusivity. Each hardware transaction reads this lock at the end of its critical section to determine whether it can simply commit or must consult the Bloom filters. In an embodiment, the single global lock can store the identifier of the owning thread, thereby indicating to a hardware transaction which Bloom filter to check for conflicts.

In an embodiment, the Bloom filters can be implemented as software Bloom filters. Using these filters, each transaction (hardware or software) adds each memory location it reads or writes to its own Bloom filter at the time the location is read or written.
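The per-access annotation just described, where each transactional read or write also inserts its address into the thread's own filter, might be sketched as follows. The `tm_*` helper names are hypothetical, and a plain set stands in for the Bloom filter, since only its insert interface matters for the annotation itself.

```python
import threading

# Per-thread transactional state; real systems would pair this with the
# processor's speculative buffering rather than a plain dict for memory.
_state = threading.local()


def tm_begin():
    """Start a transaction: give this thread a fresh, empty filter set."""
    _state.filter = set()


def tm_read(memory, addr):
    # Annotate the read: record the address in this thread's filter set,
    # then perform the actual load.
    _state.filter.add(addr)
    return memory.get(addr, 0)


def tm_write(memory, addr, value):
    # Annotate the write the same way, then perform the store.
    _state.filter.add(addr)
    memory[addr] = value


def tm_filter():
    """Expose this thread's filter so a committing peer can intersect it."""
    return _state.filter
```

A compiler-based embodiment would emit the equivalent of these calls around each transactional load and store automatically.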
At the end of a hardware transaction, the Bloom filters are used to identify conflicts with the software transaction, if any, that currently holds the single global lock.

Note that hardware transactions are performed primarily by hardware, but their read and write accesses are annotated so that the read/write locations are entered into the per-thread software Bloom filter. At commit time, a hardware transaction checks the global lock; if the lock is free the transaction can commit, and otherwise it computes the set intersection between its own Bloom filter and the software transaction's Bloom filter. If there is no conflict, the hardware transaction can commit successfully. At commit time (after confirming that there are no conflicts or filter intersections), the updates performed by the hardware transaction become visible to other threads by writing the updated values to memory, so that all updates become visible at once. If the transaction is aborted, all updates are rolled back to their original state.

An aborted hardware transaction is retried multiple times. After N retries (where N is a configurable parameter), the hardware transaction transitions to a software transaction and seeks to acquire the single global lock. In an embodiment in which software transactions are never aborted, this transition ensures forward progress.

In this embodiment, only one software transaction can execute at any given time. A software transaction can execute when its thread holds the single global lock. It acquires the lock by writing its thread identifier (ID) into the lock location and then begins executing its critical section. All updates performed by the software transaction are performed in place (in other words, the software transaction directly updates memory). In addition, the software transaction also stores its read/write locations in its thread's Bloom filter to allow any concurrent hardware transaction to check for conflicts.
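The commit-time decision and retry fallback just described can be sketched as follows. This is an illustrative Python model only: filter sets are modeled as plain sets of addresses, `EMPTY` marks a free lock, and the function names are assumptions rather than the patent's.

```python
EMPTY = -1  # sentinel: no software transaction holds the global lock


def try_commit_hw(global_lock_owner, filters, my_tid):
    """Decide whether a completed hardware transaction may commit.

    global_lock_owner: thread ID of the software transaction holding the
    single global lock, or EMPTY if the lock is free.
    filters: per-thread filter sets, modeled here as plain address sets.
    """
    if global_lock_owner == EMPTY:
        return True  # no concurrent software transaction: commit
    # Lock held: intersect our filter with the lock owner's filter.
    if filters[my_tid] & filters[global_lock_owner]:
        return False  # at least a potential conflict: abort
    return True  # disjoint access sets: commit despite the held lock


def run_transaction(body, n_retries):
    """Retry in hardware up to n_retries times, then fall back to software."""
    for _ in range(n_retries):
        if body(mode="hw"):
            return "hw"  # hardware attempt committed
    # In this model the software mode holds the global lock and always commits.
    body(mode="sw")
    return "sw"
```

Because the software fallback cannot be aborted, `run_transaction` guarantees forward progress after at most `n_retries` hardware attempts.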
In an embodiment, software transactions are never aborted.

The hybrid transactional memory approach can be used to achieve the faster transaction execution and reduced overhead associated with hardware transactional memory while ensuring forward progress of the handled transactions. In a hybrid transactional memory mode, transactions are initially handled in hardware and are then handled in software when forward progress cannot be achieved in hardware. In various embodiments, a hybrid transactional memory system is provided in which a global lock is used to enable concurrent execution of a software transaction and one or more hardware transactions.

FIG. 1 is a block diagram of a device 100. As shown in FIG. 1, device 100 includes a plurality of elements, including a processor element 102, a memory element 104, and a transaction management module 106. However, embodiments are not limited to the type, number, or arrangement of elements shown.

In various embodiments, processor element 102 can be implemented using any processor or logic device capable of task-level parallelism. In some embodiments, processor element 102 can be a multi-core processor. In another example embodiment, processor element 102 may be a plurality of processors configured to perform tasks in parallel. Memory element 104 can be implemented using any machine-readable or computer-readable medium capable of storing data, including both volatile and non-volatile memory. In some embodiments, memory element 104 can include a cache of processor element 102.
In various embodiments, additionally or alternatively, memory element 104 may include other types of data storage media, such as read-only memory (ROM), random access memory (RAM), dynamic RAM (DRAM), double data rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. Some or all of memory element 104 may be included on the same integrated circuit as processor element 102, or alternatively some or all of memory element 104 may be disposed on an integrated circuit or other medium external to the integrated circuit of processor element 102 (e.g., a hard drive).

In some embodiments, transaction management module 106 can include circuitry, logic, other hardware, and/or instructions to manage the execution of transactions in accordance with a transactional memory paradigm. In various embodiments, transaction management module 106 can cause execution of hardware transactions and software transactions. A hardware transaction may be a transaction that is executed directly by logic device circuitry in processor element 102. A software transaction may be a transaction that is executed indirectly by programmed logic executing on processor element 102.

As further shown in FIG. 1, a system 140 is provided that includes device 100 and a transceiver 144. Transceiver 144 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communication techniques. Such techniques may involve communication across one or more wireless networks.
Exemplary wireless networks include, but are not limited to, wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks.

In some embodiments, processor element 102 can host one or more threads 108. Each thread 108 may correspond to an application or program executing on processor element 102, and any particular application or program may have more than one associated thread 108. An application or program may use a particular thread 108 to request execution of one or more transactions 110. A transaction 110 may comprise various calculations or other tasks to be performed by processor element 102.

In various embodiments, when a thread 108 requests execution of a transaction, transaction management module 106 manages the transaction in accordance with a hybrid transactional memory algorithm. In some embodiments, a hybrid transactional memory algorithm may implement multiple execution phases or modes during which an attempt is made to execute and commit a transaction. In various embodiments, the hybrid transactional memory algorithm can include a hardware phase and a software phase. In some embodiments, transaction management module 106 can use the software phase for a transaction only after the hardware phase has been unsuccessful.

In some embodiments, transaction management module 106 can utilize a global lock 112 to enable concurrent execution of a software transaction and one or more hardware transactions. In various embodiments, transaction management module 106 may cause global lock 112 to be set or activated when a software transaction is undergoing execution, and cause global lock 112 to be cleared or deactivated when no software transaction is executing. In some embodiments, global lock 112 can be a spin lock. In other embodiments, a Mellor-Crummey-Scott (MCS) lock can be used for global lock 112 to reduce contention on the lock's cache line.
In various such embodiments, hardware transactions can be used to accelerate the MCS_acquire and MCS_release methods, which otherwise rely on compare-and-swap (CAS) instructions. In some embodiments, this global locking can be further implemented using a filtering mechanism as described herein.

In some embodiments, if global lock 112 is inactive at the end of a transaction and no other conflicts occurred during transaction execution, transaction management module 106 can commit a hardware transaction. And if global lock 112 is active or occupied when a hardware transaction seeks to commit, transaction management module 106 can determine whether a conflict exists between the hardware transaction and the pending software transaction by referring to the information stored in the Bloom filters associated with the threads that initiated the transactions.

In various embodiments, transaction management module 106 can include execution logic 114. In some embodiments, execution logic 114 may be circuitry, other hardware, and/or instructions to execute transactions 110. In various embodiments, execution logic 114 may perform one or more executions of a transaction each time a thread 108 requests execution of a new transaction. In some embodiments, execution logic 114 may initially execute a transaction as a hardware transaction one or more times, and then execute the transaction as a software transaction when the transaction fails to commit when executed in hardware. Thus, in some embodiments, the software transaction mode can be a fallback execution phase during which the transaction is assigned the highest priority to ensure that it will commit and make forward progress. In some embodiments, execution logic 114 may also check global lock 112 at the end of a hardware transaction.

In some embodiments, transaction management module 106 can include tracking logic 116.
In various embodiments, tracking logic 116 may include circuitry, other hardware, and/or instructions to manage global lock 112, a retry counter 118, and a retry threshold 120. In some embodiments, tracking logic 116 can set global lock 112 based on instructions from execution logic 114. For example, when execution logic 114 begins execution of a transaction in the software phase, execution logic 114 may instruct tracking logic 116 to set global lock 112. In various embodiments, retry counter 118 may hold the total number of execution attempts that have been performed for a transaction in hardware transaction mode. In some embodiments, retry threshold 120 may hold the number of attempts after which execution logic 114 should move from execution as a hardware transaction to execution as a software transaction. In various embodiments, tracking logic 116 may reset the retry counter 118 corresponding to a transaction to zero when a new transaction is received. In some embodiments, tracking logic 116 may increment retry counter 118 after each unsuccessful execution of the transaction.

As further shown in FIG. 1, memory element 104 includes a per-thread read set storage 126 and a per-thread write set storage 128. In an embodiment, these storages can store information related to values read or written during a transaction. In addition, each thread may have corresponding Bloom filters 134 and 136 associated with it, each associated with a given read set storage or write set storage (and thread). As will be further described herein, during execution of a transaction, each read and write can be annotated into a corresponding Bloom filter to indicate that a given memory address has been accessed during the transaction. This information can later be used to determine whether at least a potential conflict exists between concurrently executing transactions.

In various embodiments, transaction management module 106 can include final completion logic 128.
In some embodiments, final completion logic 128 may include circuitry, other hardware, and/or instructions to determine whether to commit or abort a transaction after the transaction has been executed by execution logic 114. In various embodiments, final completion logic 128 may determine that a particular transaction is to be aborted when the transaction conflicts, or potentially conflicts, with another transaction. In some embodiments, final completion logic 128 may determine whether a transaction potentially conflicts with a concurrent software transaction by examining global lock 112. In various embodiments, if global lock 112 is set and the transaction is a hardware transaction, final completion logic 128 may then invoke intersection logic 124 to determine whether at least a potential conflict exists between the hardware transaction and the software transaction. To this end, intersection logic 124 can access the respective Bloom filters 134 and 136 of the threads that initiated the transactions to determine whether the filter sets intersect. If so, at least a potential conflict exists, and intersection logic 124 reports a valid intersection to final completion logic 128. If the filter sets do not indicate an intersection, an invalid intersection is reported to final completion logic 128. Final completion logic 128 in turn can cause the hardware transaction to abort when an intersection is found; otherwise the hardware transaction can commit (assuming no other conflicts are detected).

In some embodiments, if global lock 112 is set and the transaction is a software transaction, final completion logic 128 may commit the transaction and instruct tracking logic 116 to release global lock 112.
In various embodiments, if global lock 112 is not set, final completion logic 128 may commit a hardware transaction and instruct tracking logic 116 to clear retry counter 118, without interacting with intersection logic 124 to determine whether the filter sets indicate a potential conflict.

In some embodiments, transaction management module 106 can include abort handler logic 130. In various embodiments, abort handler logic 130 may include circuitry, other hardware, and/or instructions to handle the abort of a transaction as indicated by final completion logic 128. In some embodiments, abort handler logic 130 may determine whether the next attempted execution of the aborted transaction should occur as a hardware transaction or as a software transaction. In various embodiments, abort handler logic 130 may determine whether the transaction was aborted due to a conflict or potential conflict with another transaction, or for another reason. If the transaction was aborted for another reason, such as an illegal instruction, a cache associativity or capacity overflow, or an irregular memory access pattern, abort handler logic 130 may determine that execution logic 114 should enter the software phase directly. If the transaction was aborted due to a conflict or potential conflict with another transaction, abort handler logic 130 may determine whether the transaction should be retried in the current phase or the next phase, for example based on the number of retries.

In various embodiments, to determine whether the next attempted execution of an aborted transaction should be handled as a hardware transaction or as a software transaction, abort handler logic 130 may compare retry counter 118 to retry threshold 120. In some embodiments, if retry counter 118 is less than retry threshold 120, abort handler logic 130 can instruct execution logic 114 to retry the transaction as a hardware transaction.
Otherwise, abort handler logic 130 may instruct execution logic 114 to retry the transaction as a software transaction. In various embodiments, tracking logic 116 may adaptively determine the value of retry threshold 120 based on the number of successful and/or unsuccessful commits of attempted transactions. Although shown at this high level in the embodiment of FIG. 1, it is to be understood that the scope of the present invention is not limited in this respect, and that a hybrid transactional memory system can take many different forms with many variations.

Referring now to FIG. 2, shown is a high level flow diagram of the execution of a transaction in accordance with an embodiment. As seen in FIG. 2, in accordance with method 200, all transactions begin as hardware transactions executed by hardware (block 210). During execution, on each read or write (block 215), the transaction records the location read or written in the software Bloom filter of the corresponding thread. After the hardware transaction completes execution of its critical section (block 220), it attempts to commit by checking for a conflict with a software transaction, if any. The hardware transaction first checks whether the global lock is occupied (diamond 225). If the lock is free, the hardware transaction can commit successfully (assuming no abort has occurred, as determined at diamond 240). If the lock is occupied, the value of the lock indicates the index or identifier of the thread holding the lock, which is therefore executing a software transaction.

In this case, the hardware transaction proceeds to diamond 230 to access the Bloom filter of the thread executing the software transaction to determine whether any conflicts exist. More specifically, at diamond 230 an intersection operation can be performed between the two filters to determine whether any entries or fields of the two Bloom filters intersect (e.g., both hold a valid or logic one value).
If so, the hardware transaction is aborted and control passes to diamond 270 to determine whether the number of retries for a given hardware transaction has reached a configurable number N. Note that various steps can be performed at the time of aborting the transaction, including discarding any updated values in a buffer associated with the thread or other storage. If instead it is determined that there is no intersection between the Bloom filters, then control passes to diamond 240 to determine whether the transaction was aborted, for example, for another reason. If not, then control passes to block 250 where the transaction is committed. For the commit, the hardware transaction can update the memory with any updated values that were previously stored in a buffer visible only to the given thread during hardware transaction execution. It will also be understood that while the determination as to whether the transaction is aborted is illustrated by diamond 240 at a particular location of the FIG. 2 embodiment, conflict detection logic (which may detect other types of conflicts or other reasons for aborting during the transaction) may cause a hardware transaction to abort at any time during its execution. For ease of illustration, however, diamond 240 is shown at the indicated position in FIG. 2. Referring again to FIG. 2, if it is determined at diamond 270 that the number of retries has not reached the threshold number N, then control passes to block 280 where the number of retries is incremented, and then control passes back to block 210 to begin the hardware transaction again. Otherwise, if the number of retries has reached the retry threshold N, then control passes from diamond 270 to block 260 where execution can switch to software transaction mode.
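The retry-and-fall-back flow of method 200 might be sketched as follows. This is an illustrative sketch, not the patented implementation; all names (`footprint`, `try_commit`, `N`) and the 64-bit filter size are assumptions made for this example.

```python
# Sketch of the Figure 2 commit flow (assumed names and filter size).
N = 4  # configurable retry threshold (assumption)

def footprint(addrs):
    """Record accessed addresses in a 64-bit Bloom-filter-style bitmask."""
    bits = 0
    for a in addrs:
        bits |= 1 << (a % 64)
    return bits

def try_commit(lock_owner, hw_filter, sw_filters):
    """Commit is allowed if the global lock is free (diamond 225), or if
    the hardware and software filters do not intersect (diamond 230)."""
    if lock_owner is None:
        return True
    return (hw_filter & sw_filters[lock_owner]) == 0

def execute(accesses, sw_filters, lock_owner):
    retries = 0
    while retries < N:                      # diamond 270
        hw_filter = footprint(accesses)     # block 215: record accesses
        if try_commit(lock_owner, hw_filter, sw_filters):
            return "hw-commit"              # block 250
        retries += 1                        # block 280: abort, retry
    return "sw-mode"                        # block 260: fall back
```

A transaction whose footprint is disjoint from the software transaction's filter commits in hardware; one that genuinely conflicts exhausts its N retries and switches to software mode.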
More specifically, in this implementation that permits only a single software transaction, the transaction can thus be executed in software transaction mode to completion, allowing the transaction to be committed at block 250. Bloom filters ensure conflict detection between software transactions and hardware transactions. The hardware transactional memory system ensures conflict detection and resolution between hardware transactions. In an embodiment, a single global lock ensures that only one software transaction executes at any time, and thus no additional conflict detection mechanisms are provided for software transactions. Figure 3 illustrates possible timings between hardware transactions and software transactions in accordance with an embodiment. In case 310, the software transaction first updates the variable X, and the hardware transaction later updates the same variable X to a different value. When the hardware transaction attempts to commit, a Bloom filter intersection is performed that identifies the duplicate access, thus detecting a conflict and aborting the hardware transaction. A similar operation occurs in case 320. However, in cases 330 and 340, at the point of hardware transaction commit, the software thread has already committed and released the single global lock. Therefore, when the hardware thread checks this lock, it is found to be released, and thus the transaction can be successfully committed. In cases 330 and 340, when the hardware transaction attempts to commit, the lock is free, indicating that no software transaction is executing concurrently. Even if there was an overlapping software transaction, it has already committed by this point and is serialized before the hardware transaction. If a software transaction performs any conflicting operations that are serialized after a hardware transaction, the hardware transaction is aborted in the event of a conflict (due to the hardware conflict detection mechanism).
Therefore, hardware transactions can be committed with correct behavior when the lock is free. If the lock is occupied when the hardware transaction tries to commit (as in cases 310 and 320), then a concurrent software transaction is executing. The committing hardware transaction must be serialized before this software transaction because of possible future conflicting operations performed by the software transaction. However, the software transaction may have performed conflicting operations on one or more memory locations before the hardware transaction began to track those locations, in which case the hardware transaction cannot correctly be serialized before the software transaction. Thus, embodiments use a Bloom filter to detect this. Note that the software Bloom filter does not contain all the locations that the software transaction will access in the future, but only the locations the transaction has already accessed. However, future accesses will be properly serialized after the committed hardware transaction. Therefore, if the Bloom filters do not intersect, the hardware transaction can be properly serialized before the software transaction. If the Bloom filters identify a conflict, the conflicting operation occurred first in the software transaction and then in the hardware transaction (otherwise the hardware transaction would already have been aborted). In this case, the hardware transaction cannot be serialized before the software transaction and is aborted. Thus, embodiments correctly identify these conflicts and abort hardware transactions. Note that in an embodiment, it is possible for the Bloom filter to incorrectly report a conflict (an indistinguishable false positive), in which case the hardware transaction is also aborted. However, the Bloom filter does not produce false negatives and thus identifies and prevents all actual conflicts. In an embodiment, an efficient Bloom filter implementation allows insertion and intersection in O(1) time, thereby minimizing overhead.
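The O(1) insertion and intersection just mentioned can be illustrated with a filter that fits in a single machine word. This is a hedged sketch: the 64-bit size, the two salted multiplicative hashes, and all names are assumptions for the example, not the patent's parameters.

```python
class BloomFilter:
    """Word-sized Bloom filter: O(1) insert (a few bit ORs) and O(1)
    intersection test (a single bitwise AND)."""
    BITS = 64   # filter size in bits (assumption)
    K = 2       # number of hash functions (assumption)

    def __init__(self):
        self.bits = 0

    def _positions(self, addr):
        # Salted multiplicative hashing stands in for K hash functions.
        for salt in range(self.K):
            yield (addr * 2654435761 + salt * 40503) % self.BITS

    def insert(self, addr):
        for p in self._positions(addr):
            self.bits |= 1 << p

    def intersects(self, other):
        # May report a false positive (hash collision), but never a false
        # negative: a common inserted address always sets common bits.
        return (self.bits & other.bits) != 0
```

Because a shared address deterministically sets the same bits in both filters, every true conflict is detected; distinct addresses only rarely collide.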
In addition, hardware transactions only read the global lock and the software Bloom filters before committing, thereby reducing the window during which hardware transactions may be aborted due to software transactions modifying these locations. In an embodiment, reading the lock and Bloom filter may add only two additional cache lines to the read set of the transaction. In some embodiments, this can be optimized such that one position of the Bloom filter is used to indicate whether the lock is occupied and the remainder is used as the Bloom filter proper. In this implementation, the lock position serves two purposes, thereby reducing the read set growth of the hardware transaction to only one additional location. The transaction's own Bloom filter adds additional cache lines to the write set, but in an implementation this may be as low as only one cache line, depending on the Bloom filter size. Using an embodiment, many small hardware transactions and a concurrently executing large software transaction that access separate memory locations can all be committed. As one such example, consider an array representing an open addressing hash table. Threads can perform lookup(x) and insert(x) operations on this hash table. Once an occupancy threshold is reached, a thread decides to double the size of the hash table by allocating a new array and re-hashing the elements from the old array into the new array. Lookup and insert operations are short transactions and can successfully complete in hardware most of the time. Re-hashing can be performed as a software transaction (and the re-hashing thread acquires the single global lock). In this case, through the precise conflict detection between the software transaction and concurrent hardware transactions, a lookup operation performed as a hardware transaction can be committed using the data from the old array while the re-hashing into the new array occurs.
In addition, an insert operation performed as a hardware transaction at the end of the old array (i.e., in the portion that has not been re-hashed) can also be committed during the re-hash. Thus, embodiments improve throughput by allowing small hardware transactions to be committed concurrently with long-running software transactions. While the Bloom filter conflict detection technique described above improves parallelism, the use of a single global lock in the above described embodiments can still cause inefficiencies. In other embodiments, a transactional memory system can be provided that enables multiple hardware transactions and multiple software transactions to be executed and committed in parallel. In general, a cache-based hardware transactional memory system is used for the hardware component, and an invalidation-based software transactional memory system is used for the software component. These embodiments provide a hybrid transactional memory system that allows multiple hardware transactions to be executed concurrently with multiple software transactions while still ensuring forward progress. Referring now to Figure 4, shown is a block diagram of a hybrid transactional memory system in accordance with an embodiment of the present invention. As shown in FIG. 4, HTM system 400 provides multiple hardware transaction modes and multiple software transaction modes. In the implementation shown in FIG. 4, a transaction begins in a first hardware transaction mode 410 (referred to herein as a Light Hardware (LiteHW) transaction mode). If an overflow or unsupported instruction occurs, the transaction is immediately upgraded to another type of transaction mode. If the transaction is aborted for another reason (e.g., due to a conflict), the transaction is retried multiple times before being upgraded to a second hardware transaction mode 420 (referred to herein as Bloom Filter Hardware (BFHW) mode).
A similar retry occurs, and the transaction is upgraded to a first software transaction mode 430 (referred to herein as the speculative software (SpecSW) mode) when the transaction does not commit. Again in this mode, the transaction can be retried multiple times before being upgraded to a second software transaction mode 440 (referred to herein as the irrevocable software (IrrevocSW) mode). It is to be understood that although specific modes and interactions are shown in FIG. 4, embodiments are not limited in this respect. If most transactions are short, their accesses fit within the TM-capable cache memory, and they do not contain unsupported instructions, they can be successfully executed directly through the hardware without having to synchronize with software transactions. The lightest type of transaction is the first hardware transaction mode (LiteHW). This transaction type executes without any instrumentation of reads and writes, and can be successfully committed when no software transaction is executing when it tries to commit. This type of transaction is simple and fast, but it allows minimal concurrency with software transactions. The second hardware transaction mode, BFHW, uses a software Bloom filter to record the locations of hardware transaction reads and writes to enable detection of conflicts with concurrently executing software transactions. This type of transaction adds extra overhead compared to LiteHW transactions, but can be committed even in the presence of concurrently executing software transactions. Hardware transactions are faster, but can fail in a best-effort HTM due to unsupported instructions or overflow; the software modes thus provide a fallback. The first software transaction mode, SpecSW, in turn performs a speculative software transaction in which the transaction records the locations of reads and writes in Bloom filters for conflict detection with other software and hardware transactions, and stores all writes in a hash table, deferring the updates until the commit phase.
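The deferred-update scheme just described for SpecSW might be sketched as follows. This is a simplified illustration under stated assumptions: the class and field names are invented for the example, and per-read verification and the Bloom filters are elided (plain sets stand in for them).

```python
class SpecSWSketch:
    """Hypothetical sketch of SpecSW deferred updates: writes land in a
    private hash table and reach shared memory only at commit."""

    def __init__(self):
        self.write_buffer = {}   # addr -> value: the deferred-update table
        self.read_set = set()    # stands in for the read Bloom filter
        self.write_set = set()   # stands in for the write Bloom filter

    def read(self, memory, addr):
        self.read_set.add(addr)
        if addr in self.write_buffer:        # reads see the transaction's
            return self.write_buffer[addr]   # own pending writes
        return memory[addr]                  # verification elided here

    def write(self, addr, value):
        self.write_set.add(addr)
        self.write_buffer[addr] = value      # deferred: not yet visible

    def commit(self, memory):
        for addr, value in self.write_buffer.items():
            memory[addr] = value             # write-back during commit
```

Until `commit` runs, other threads observe only the old memory values, which is what lets a committing transaction's writes stay invisible while conflicts are resolved.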
Invalidation occurs after the commit to abort ongoing conflicting transactions, and a per-transaction lock is used to ensure opacity. In this first software transaction mode, each read is verified to prevent stale transactions (transactions that will be aborted) from reaching an inconsistent state. Finally, the second software transaction mode, IrrevocSW, performs all updates in place (directly to memory) and cannot be aborted. Due to this quality, only one IrrevocSW transaction can be executed at any given time. However, multiple SpecSW and BFHW transactions can be executed concurrently with the IrrevocSW transaction. Conflict detection between multiple software transactions is accomplished using Bloom filters, as described above. Conflict detection between software and hardware transactions also uses Bloom filters, but using a best-effort HTM with no escape actions generally results in the abort of hardware transactions during conflict detection. This behavior is due to the strong isolation of hardware transactions: when a software transaction performs a conflicting access to any memory location tracked by the hardware, a conflict arises, thereby aborting the hardware transaction. In addition, hardware updates do not become visible to other threads until the hardware transaction commits. Embodiments postpone conflict detection between hardware and software transactions until after a hardware transaction has committed. The hardware transaction performs a post-commit phase, in which it invalidates all ongoing conflicting software transactions. Because the hardware transaction has already committed, sharing its Bloom filter information with other threads does not cause it to abort. Whether software or hardware, the various transactions go through multiple phases. The behavior in each of these phases depends on the type of transaction. The first phase is the start phase, at the beginning of the transaction.
A hardware transaction calls a hardware transaction begin instruction, while a software transaction records information related to the starting address and notifies other threads of its existence via an indicator indicating the presence of at least one software transaction, such as a flag (e.g., the sw_exists flag). During the execution phase, reads and writes are performed, and the behavior is determined by the type of the executing transaction. All transaction types record the visited locations in the Bloom filter, except for the LiteHW transaction. During the abort phase, a hardware abort is handled automatically by the hardware. For software transactions, the software clears the information recorded during the execution of the transaction and restarts from the address stored during the start phase. During the commit phase, conflict detection is performed, and if the transaction can commit, the memory updates are performed. The implementation depends on the type of transaction. During the post-commit phase, conflicting transactions can be invalidated. Note that this phase occurs once the transaction has already committed its memory updates, and its write-set locations are used. This phase ensures that all ongoing software transactions that conflict with the just-committed transaction will be aborted. As outlined above, the first hardware mode, LiteHW, is the simplest and fastest type because it introduces negligible additional software overhead and is performed entirely by hardware. LiteHW transactions can only be committed if no software transaction exists. Figure 5 is a flow diagram of the execution of such a transaction, in accordance with an embodiment. Method 500 begins execution of a hardware transaction, such as via a user-level hardware transaction begin instruction (block 510). Subsequently, the transaction body is executed (block 520). This critical section is executed without recording any of the reads or writes.
When the transaction attempts to commit (at block 530), it checks whether any software transactions are currently executing, e.g., by checking the sw_exists flag (sw_exists != 0 if a software transaction is executing) (diamond 540). If there are software transactions executing concurrently, control passes to diamond 550 to determine whether the retry threshold (M) has been reached. If not, then control passes to block 555 and the retry count is incremented. Control then passes back to block 510 above. If the transaction has instead retried the threshold number of times, then control passes to block 558 where the transaction switches to the second hardware transaction mode, BFHW. If it is determined at diamond 540 that there is no software transaction executing (sw_exists = 0), then the transaction can be successfully committed, assuming that the transaction has not been aborted at diamond 560. Because LiteHW is a hardware transaction, its commit can be executed instantaneously (block 565). If the transaction is aborted at any point during execution by the hardware conflict detection mechanism, the abort handler checks the abort state set by the hardware to determine whether to retry the transaction in the same mode (up to M times), switch to SpecSW (block 570) (e.g., if the abort is caused by an overflow), or switch to IrrevocSW (block 580) (e.g., if the abort is caused by an unsupported instruction, such as an input/output instruction). FIG. 6 shows more details of the LiteHW transaction 501. During the start phase, the transaction executes a hardware transaction begin instruction (such as a txbegin instruction). During execution, the OnRead and OnWrite handlers, the access handlers that in other modes update the Bloom filter, are empty.
The OnAbort handler increments the number of retries and determines whether to retry the transaction as a LiteHW transaction or switch to a different mode, based on the number of retries and the reason for the abort. Finally, the sw_exists flag is checked during the commit phase executed by the hardware, and a hardware transaction end instruction (such as a txend instruction) is called. This transaction type has no post-commit phase. Figure 7 is a flow diagram of the execution of a BFHW transaction in accordance with an embodiment. Method 600 begins execution of a hardware transaction, such as via a user-level hardware transaction begin instruction (block 610). Subsequently, the transaction body is executed (block 620). During its execution, the transaction records the locations of memory being read or written in its read and write Bloom filters. When the transaction attempts to commit (diamond 630), it checks whether the commit lock is occupied (diamond 640). If the lock is free, and assuming no abort has occurred (as determined at diamond 660), the transaction acquires its own hardware transaction lock (at block 670) and commits (block 675). If the commit lock is occupied, a software transaction is currently committing. In one embodiment, the simplest thing to do in this case is to abort, because the hardware transaction may have memory updates that conflict with the committing software transaction. This situation is shown in more detail in Figure 8 below. However, if the hardware transaction does not have any conflict with the committing software transaction, the hardware transaction may commit while the software transaction is committing. This can be determined using a Bloom filter comparison. The optimized behavior of this hardware transaction mode is to check the Bloom filter of the committing software transaction when it finds that the commit lock is occupied.
If the Bloom filter indicates a conflict, the hardware transaction is aborted; otherwise it can commit (after it acquires its own transaction lock as described above). This situation is illustrated in Figure 9. Similar to LiteHW, the OnAbort handler determines whether to upgrade to one of the multiple software modes (e.g., at blocks 658 and 690) when the number of retries has reached the threshold number (at diamond 650). Otherwise, the number of retries is incremented at block 655 and the transaction begins again at block 610. Writes to transactional memory are committed entirely through hardware. First, the transaction determines whether it can commit by examining the commit lock and, if the commit lock is occupied, the software Bloom filter. If there is no conflict (the lock is free, or the Bloom filter of the hardware transaction does not intersect the Bloom filter of the software transaction), the hardware transaction acquires its own transaction lock (at block 670), shown as tlock in Figures 8 and 9. This lock is only acquired by the hardware transaction that owns it, so it is always free when that transaction tries to acquire it. However, it is used to prevent races with software transactions that begin their commit phase, as described in more detail below. Note that if the transaction is aborted, the lock of the transaction is automatically released because it is part of its speculative write set. In addition, the value written to the lock becomes visible to other threads only when the hardware transaction commits its changes to memory. If another thread checks this location after the lock is taken but before the changes are committed to memory, the hardware transaction is aborted, ensuring that races are not possible. Still referring to FIG.
7, after the commit of block 675, the post-commit phase of this second hardware transaction mode is executed by software and occurs after the hardware transaction commits its changes to memory. As seen, the post-commit operation includes invalidating conflicting software transactions (block 680). Note that at this point the hardware transaction has already committed, but it ensures that all software transactions that conflict with it will be aborted. This is achieved by intersecting the Bloom filter of the hardware transaction with the Bloom filters of ongoing software transactions. If a conflict is detected, the software transaction is aborted. After the invalidation process is completed, the hardware transaction resets its lock. FIG. 8 illustrates a basic Bloom filter based hardware transaction 601 including a start phase, an execution phase, a commit phase, and a post-commit phase. As seen, during execution, reads and writes are added to the corresponding read and write Bloom filters. However, it is to be understood that in other embodiments, a single Bloom filter can be used for both the read and write sets. It is then determined whether the commit lock is occupied, and if so, the transaction is aborted in this basic implementation. Otherwise, the transaction lock is taken and the transaction's writes are committed. Then, in the post-commit phase, a Bloom filter intersection is performed to abort all conflicting software transactions, and the transaction lock is then released. FIG. 9 illustrates an optimized Bloom filter based hardware transaction 602 that includes a start phase, an execution phase, a commit phase, and a post-commit phase. In this case, if the commit lock is occupied, a Bloom filter intersection is used to determine whether a conflict exists; if not, the hardware thread can commit, and if a conflict exists, the transaction aborts.
Otherwise, the operation occurs similarly to Figure 8. Figure 10 is a flow diagram of the execution of a speculative software transaction, in accordance with an embodiment. Method 700 begins execution of a software transaction (block 710). Subsequently, the transaction body is executed (block 720). As seen, during execution, the read and write locations are recorded in the Bloom filters. At commit time (block 730), the transaction acquires the commit lock (block 740) and consults the contention manager (at block 760) (which may be implemented by hardware, software, firmware, other logic, or a combination thereof) to determine whether it should commit or abort (so that conflicting software transactions can continue to execute). If the contention manager decides to abort the transaction, the transaction releases the commit lock and may retry as a SpecSW transaction, depending on the number of retries determined at diamond 790. If below this threshold, the retry counter is incremented at block 792 and the transaction is re-executed in the speculative software transaction mode (at block 710). If above the threshold, the transaction switches to the irrevocable software transaction mode at block 795. Otherwise, if the transaction can commit, it acquires the irrevocable lock (at block 765), commits its changes to memory (at block 770), invalidates ongoing conflicting software transactions (at block 775), and releases the locks (at block 780). Additional details of the execution of the SpecSW transaction are shown in FIG. 11. As seen, the speculative software transaction 701 performs all phases through software. In the embodiment of Figure 11, during the main execution, reads are verified and added to the read Bloom filter, while writes are added to the write Bloom filter. Note that deferred updates can be performed by writing any updated values to a hash table or other temporary storage.
During the commit phase, assuming that the transaction is allowed to commit, it acquires the irrevocable lock and updates the memory. Otherwise, it releases the lock and restarts the transaction. Then, in the post-commit phase, invalidation is performed before the lock is released, thereby invalidating any conflicting software transactions. Finally, note that SpecSW transactions provide correct execution even while a BFHW transaction is committing. If the SpecSW transaction has already begun the commit process when the BFHW transaction is ready to commit, the BFHW transaction will observe that the commit lock is occupied and will perform a Bloom filter check against the software transaction's Bloom filter. If there is no conflict, the hardware transaction can commit; otherwise the BFHW transaction is aborted. However, if the BFHW transaction checks the commit lock before the SpecSW transaction begins its commit phase, one of two conditions can occur: the commit lock is changed before the BFHW hardware transaction commits (which aborts the hardware transaction, thereby eliminating any potential conflict), or the commit lock is changed after the BFHW hardware transaction commits. The speculative software transaction does not check for conflicts with hardware transactions, and thus it may miss conflicts with the newly committed hardware transaction and may begin committing its changes to memory. To avoid this, all SpecSW transactions check the locks of all hardware transactions after acquiring the commit lock, and wait until they are free. If the SpecSW transaction is still valid when the transaction locks are free, it does not conflict with any committed hardware transaction. Referring now to Figure 12, shown is a flow diagram of the execution of the irrevocable software transaction IrrevocSW. As seen in Figure 12, method 800 begins at the start of a transaction (block 810). The transaction then acquires the irrevocable lock and the commit lock (block 820).
The primary transaction body can then be executed at block 830. Note that for irrevocable software transactions, all updates are executed in place (directly to memory), so the transaction acquires the irrevocable and commit locks immediately upon beginning execution to ensure serializability. The transaction then commits (block 840). Thereafter, conflicting software transactions are invalidated, for example, based on Bloom filter intersection (block 850). Finally, both locks are released (block 860). FIG. 13 illustrates additional details of an irrevocable software transaction 801 in accordance with an embodiment. Note that at the beginning of execution, the two locks are acquired and the software flag is set. In the body, although direct updates are used, reads and writes are added to the corresponding Bloom filters to enable subsequent invalidation of conflicting software transactions. In an embodiment, the irrevocable transaction cannot be aborted, and thus the commit phase is essentially a no-operation (NOP). The post-commit phase is similar to the post-commit phase of the speculative software transaction: the current transaction has committed, and thus it invalidates ongoing conflicting software transactions. In an embodiment, the contention manager is used by speculative software transactions to determine whether they are able to commit when they reach their commit phase. The contention manager considers all ongoing transactions that would be aborted if the committing transaction were allowed to commit, and determines which transaction or transactions are allowed to make progress based on various factors. In an embodiment, this determination may be based on priority, the read and write set sizes of the committing and conflicting transactions, and the transaction progress of each thread (e.g., the number of commits so far), among other factors. Performing the invalidation after the commit ensures correct behavior even for new transactions that begin during the commit phase and are missed by the invalidation process.
If a transaction is missed by the invalidation process (because it starts too late), it begins after the invalidation process of the committing transaction. Therefore, it begins after the committing transaction has committed its writes, and thus all reads of the newly started transaction are serialized after the committing transaction and are therefore consistent. Table 1 below is a pseudocode representation of the invalidation process in accordance with an embodiment.

Table 1

Verification is performed for each read that is not part of the write set of the transaction performing the read. If the read is part of the write set, the value is returned from the hash table that stores the transaction's updated values, and no verification needs to be performed. In an embodiment, the verification can be performed as follows. First, the thread inserts the new read location into its Bloom filter, and then it reads the location. This sequence ensures that potential conflicts will not be missed by the invalidation process of a committing transaction. After reading the value of the read location, it is still not safe to return it, because another transaction may be in the middle of its commit phase, updating the memory location. If the current read is of a location that was just updated, returning this read may produce incorrect program behavior, because all other reads of the current transaction are from before the committing transaction updated memory. To avoid this, verification code is executed for all reads of previously unwritten locations. This code checks whether the irrevocable lock is occupied, and if so, it reads the Bloom filter of the software transaction indicated by the identifier in the irrevocable lock to determine whether there is any conflict. If the lock changes in the meantime, the conflict can be ignored by the verification code. But at the end of the verification, the transaction checks whether it has been invalidated by other software transactions.
If the lock is released in the meantime, it means that the committing transaction must have completed. The read is safe if the verification passes and the transaction has not been invalidated by the committing transaction. Referring now to Table 2, there is shown pseudocode for the verification process in accordance with an embodiment.

Table 2

An irrevocable transaction acquires the commit lock and the irrevocable lock at the beginning of execution. A speculative transaction first acquires the commit lock and consults the contention manager about whether it can commit. If the contention manager allows the transaction to commit, it acquires the irrevocable lock before writing its updates to memory. A committing transaction could acquire the irrevocable lock at the start of its commit phase, seemingly making the commit lock unnecessary. However, the speculative transaction bases its verification code on the acquired irrevocable lock. If there were no commit lock and the irrevocable lock were acquired at the beginning of the commit phase, the following could occur before the contention manager is consulted. Consider a SpecSW transaction that performs a read and executes the verification code, noticing that its read conflicts with the committing software transaction. Therefore, it decides to start over. The committing transaction consults the contention manager but is not allowed to commit (for example, because of a high-priority in-progress transaction). Therefore, the committing transaction is also aborted, even though the other transaction has already aborted. In addition, a race may occur in which the contention manager bases its abort decision on a transaction that had just decided to abort during verification, so the two transactions abort each other without making progress. The commit lock can be used to avoid this situation. Therefore, the committing transaction acquires the commit lock, which is escalated to the irrevocable lock only after the transaction is given permission to commit.
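The two-lock commit protocol described above (commit lock first, irrevocable lock only after the contention manager grants permission) can be sketched as follows. This is a hedged illustration: the function names and the callable contention manager are assumptions, and the sketch only shows lock ordering, not the full verification logic.

```python
import threading

# Assumed global locks for the sketch (not the patent's actual objects).
commit_lock = threading.Lock()
irrevocable_lock = threading.Lock()

def specsw_commit(apply_writes, contention_manager_allows):
    """Commit a speculative transaction under the two-lock protocol.

    apply_writes: callable that writes back the deferred updates.
    contention_manager_allows: callable returning True if this
    transaction may commit.
    """
    with commit_lock:                     # taken before asking to commit
        if not contention_manager_allows():
            return False                  # abort; commit lock released
        with irrevocable_lock:            # escalate only after permission
            apply_writes()                # write back deferred updates
    return True
```

Because the irrevocable lock is only ever taken by a transaction that is certain to commit, the verification code can safely abort only on conflicts with definitely committing transactions.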
The validation code thus aborts a transaction only due to a conflict with another transaction that continues to hold the irrevocable lock, i.e., only due to a conflict with a transaction that will definitely commit.

Hardware is used to ensure correctness with respect to concurrent hardware transactions. Hardware transactions are strongly isolated, so changes they make to memory become visible to other threads atomically, and only when the transaction commits. In addition, conflict detection is implemented in hardware, so a transaction that conflicts with a committing transaction is aborted. Therefore, no additional software mechanism is needed to ensure proper interaction between multiple LiteHW transactions and multiple BFHW transactions.

Conflict detection between concurrent software transactions is ensured using the invalidation method. Every committing transaction checks for conflicts with other in-progress software transactions and aborts them in the event of a conflict. No software transaction can commit during the invalidation process because the committing transaction holds the commit lock. An irrevocable transaction acquires the commit lock as soon as it becomes active, so no other software transaction can become irrevocable or commit during its execution. When an irrevocable transaction commits, it also invalidates in-progress conflicting transactions, ensuring that serializable correctness is not violated.

Regarding hardware-software correctness, in one embodiment LiteHW transactions can execute concurrently with software transactions, but they cannot commit while a software transaction is currently executing. This is because a LiteHW transaction does not maintain a record of the memory locations it accesses, so conflict detection between LiteHW transactions and software transactions cannot be performed. In contrast, BFHW transactions track the memory locations they access, so conflict detection can be performed for them.
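The post-commit invalidation step described above can be sketched as follows. The data layout is an assumption made for illustration: filters are represented as plain bitmask integers, and each active transaction is a dictionary with a read filter and a status field.

```python
def post_commit_invalidate(committer_write_filter, active_transactions):
    """After committing, mark every in-progress software transaction whose
    read filter intersects the committer's write filter as INVALID."""
    for txn in active_transactions:
        if committer_write_filter & txn["read_filter"]:
            txn["status"] = "INVALID"

# Two in-flight software transactions; only the first overlaps the
# committer's write set (bit 1), so only it is invalidated.
txns = [
    {"read_filter": 0b0110, "status": "ACTIVE"},
    {"read_filter": 0b1000, "status": "ACTIVE"},
]
post_commit_invalidate(0b0010, txns)
```

Holding the commit lock for the duration of this loop is what guarantees, per the text above, that no other software transaction commits mid-invalidation.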
In addition, BFHW transactions can commit even while software transactions are executing. If a committing software transaction conflicts with a BFHW transaction, the latter is automatically aborted by the hardware. If a committing BFHW transaction conflicts with an in-progress software transaction, the software transaction is aborted during the BFHW post-commit (invalidation) phase. Furthermore, it is sufficient to compare the Bloom filter of the hardware transaction with the Bloom filters of the software transactions at the end of the hardware transaction. In this way, conflict detection on every individual read and write of a hardware transaction can be avoided.

Embodiments also achieve memory consistency. For concurrent hardware transactions, opacity is automatically maintained by the hardware, because updates do not become visible until the hardware transaction commits, ensuring consistency. A hardware transaction can enter an inconsistent state by reading a memory location while an irrevocable transaction is executing using direct updates, or while a speculative transaction is performing its write-back. However, faults and loops that occur in a hardware transaction as a result merely cause the transaction to abort and restart, without significant impact on other threads.

Between software transactions, the per-read validation code described above is used to ensure opacity, so that a software transaction cannot enter an inconsistent state due to updates made by other software transactions. A software transaction can still enter an inconsistent state by reading a memory location modified by a just-committed hardware transaction. In an embodiment, such a software transaction is not allowed to commit, and is invalidated by the post-commit phase of the hardware transaction.
However, due to reading inconsistent data, the software transaction may enter an illegal state before it notices that it has been invalidated. To prevent this, an embodiment may provide a software sandbox for SpecSW transactions. Alternatively, opacity for software transactions can be provided using a hardware post-commit counter. In this case, the counter counts the number of hardware transactions that have just committed in BFHW mode and are currently in their post-commit phase. A BFHW transaction increments this counter using a store operation before committing the hardware transaction; if atomicity is violated, the hardware transaction is aborted, leaving no trace of the change. After the post-commit phase is completed, the BFHW transaction decrements the post-commit counter, for example using an atomic read-modify-write instruction.

Using this counter, opacity can be achieved for SpecSW transactions in the presence of BFHW transactions. A SpecSW transaction can read this counter and wait until it reaches zero before adding a new value to its read set. This ensures that every new value read by the SpecSW transaction is read outside of the post-commit phase of any hardware transaction and is therefore consistent (otherwise the SpecSW transaction is marked as INVALID during the post-commit phase of the BFHW transaction). In addition, the BFHW transaction can use this post-commit counter to ensure consistency of SpecSW transactions and to ensure mutual exclusion with SpecSW transactions during the commit phase (in one embodiment, leaving per-transaction locks unused). Note that the commit lock serializes the commit phase and the post-commit phase of software transactions.
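The post-commit counter protocol described above can be sketched as follows. This is an illustrative sketch only: a `threading.Condition` stands in for the transactional store and atomic decrement the text describes, and all method names are assumptions.

```python
import threading

class PostCommitCounter:
    """Counts BFHW transactions currently in their post-commit phase."""
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def enter_post_commit(self):
        # BFHW: incremented (by a store inside the hardware transaction)
        # just before the hardware commit, so an abort leaves no trace.
        with self._cond:
            self._count += 1

    def leave_post_commit(self):
        # BFHW: decremented after invalidation is finished.
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._cond.notify_all()

    def wait_until_zero(self):
        # SpecSW: wait before adding a new value to the read set, so the
        # value cannot come from the middle of a post-commit phase.
        with self._cond:
            while self._count:
                self._cond.wait()
```

A SpecSW transaction calls `wait_until_zero()` before each new read-set insertion; any value it then reads is guaranteed to be outside every hardware post-commit phase, matching the opacity argument above.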
However, hardware transactions are not serialized with concurrently committing software and hardware transactions; they can therefore commit and perform invalidation concurrently, making the system more scalable and useful.

Referring now to FIG. 14, shown is a block diagram of an apparatus in accordance with another embodiment. As shown in FIG. 14, device 100' includes circuits, components, and logic similar to device 100 described above with respect to FIG. 1. In fact, in many system implementations, a hybrid transactional memory system can execute on the same hardware whether it is in accordance with an embodiment that implements a single global lock (as described above in FIG. 2) or one that implements multiple locks and multiple software and hardware transaction modes (for example, as described in FIG. 4).

For ease of discussion, the components, circuits, and logic of the embodiment of FIG. 14 that are the same as in FIG. 1 will not be discussed again. The discussion will instead focus on the differences in device 100', which implements execution of hybrid transactional memory transactions with multiple hardware transaction modes and multiple software transaction modes. As seen, instead of a single global lock, a commit lock 112 and an irrevocable lock 113 are provided, enabling different software transactions to acquire these locks at different times depending on the transaction mode (of course, in other embodiments additional or different locks may exist). Additionally, multiple retry counters 118 can be provided, where each retry counter maintains a retry count for a given transaction mode. Similarly, a plurality of retry thresholds 120 are also provided.

Still referring to FIG. 14, transaction management module 106 also includes invalidation logic 125 that is configured to perform post-commit invalidation as described above. In general, the remainder of device 100' and system 140' are the same as in FIG. 1.
Note that given the additional functionality and operations performed in, for example, the hybrid transactional memory system described in connection with FIGS. 4-13, there may be some differences in the implementation of the various logic components. Further, it is to be understood that although shown at this high level in FIG. 14, many variations and alternatives are possible.

The following examples pertain to further embodiments.

In Example 1, an apparatus includes: a processor; execution logic to implement concurrent execution, in a transactional memory system, of at least one first software transaction of a first software transaction mode, a second software transaction of a second software transaction mode, a first hardware transaction of a first hardware transaction mode, and at least one second hardware transaction of a second hardware transaction mode; tracking logic to maintain an active flag to indicate that at least one software transaction is undergoing execution in the first software transaction mode or the second software transaction mode; intersection logic to determine, at the end of the first hardware transaction of the second hardware transaction mode, whether a filter group of that first hardware transaction conflicts with a filter group of the at least one software transaction undergoing execution; and finalization logic to commit the first hardware transaction when there is no conflict and to abort the first hardware transaction in the event of a conflict. Note that in some implementations, one or more of the execution logic, tracking logic, intersection logic, and finalization logic can be implemented in the processor. It is also noted that the above-described processor can be implemented using various components. In an example, the processor comprises a system on a chip (SoC) incorporated in a touch-enabled user device.
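The decision made jointly by the intersection logic and finalization logic of Example 1 can be sketched as follows. This is an illustrative sketch only: filters are represented as bitmask integers and the function name and return values are assumptions, not the patent's terminology.

```python
def hw_commit_decision(software_active, hw_filter, sw_filter):
    """Commit rule at the end of a BFHW-style hardware transaction:
    with no active software transaction, commit immediately; otherwise
    commit only if the filter groups do not intersect."""
    if not software_active:
        return "commit"
    return "abort" if (hw_filter & sw_filter) else "commit"
```

This also covers Examples 9-11 below: the filter comparison is skipped entirely when no software transaction is active (the global/active flag is clear).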
In another example, a system includes a display and a memory, and includes the processor of one or more of the above examples.

In Example 2, in the second hardware transaction mode, the first hardware transaction optionally updates the filter group of the first hardware transaction for each memory access of the first hardware transaction.

In Example 3, in the first software transaction mode, the first software transaction optionally acquires a first lock and a second lock at the end of the first software transaction, and updates the transactional memory of the transactional memory system with write data stored in a hash table.

In Example 4, in the first software transaction mode, after the commit of the first software transaction, the first software transaction optionally invalidates other software transactions of the first software transaction mode.

In Example 5, in the second hardware transaction mode, the second hardware transaction optionally acquires a commit lock and a transaction lock before the commit of the second hardware transaction.

In Example 6, the first software transaction of Example 4 optionally invalidates the other software transactions where an intersection occurs between a filter group of the first software transaction and a filter group of the other software transactions.

In Example 7, in the first software transaction mode of Example 3, the first software transaction optionally validates read data during execution.

In Example 8, in the second software transaction mode of any of the above examples: at the beginning of the second software transaction, the second software transaction acquires the first lock and the second lock; and during execution of the second software transaction in the second software transaction mode, the second software transaction directly updates the transactional memory of the transactional memory system.

In Example 9, a method includes concurrently executing a software transaction of a first thread and a hardware transaction of a
second thread by a processor in a transactional memory system; activating a global lock to indicate execution of the software transaction; and at the end of the hardware transaction, determining the state of the global lock, and if the global lock is active, determining whether the filter group of the first thread intersects the filter group of the second thread, and if not, committing the hardware transaction.

In Example 10, the method of Example 9 optionally further comprises committing the software transaction and deactivating the global lock at the end of the software transaction.

In Example 11, the method of Example 9 or 10 optionally further includes committing the hardware transaction, when the global lock is inactive at the end of the hardware transaction, without determining whether the filter groups intersect.

In Example 12, the method of one of Examples 9-11 optionally further comprises: inserting, by the hardware transaction, an address of an access to the transactional memory of the transactional memory system into the filter group of the first thread; and updating one or more fields of the filter group of the first thread based on hashing the address of the access with one or more hash values.

In Example 13, the method of one of Examples 9-12 optionally further comprises storing the filter group of the first thread in a write set of the first thread, the filter group comprising a Bloom filter.

In Example 14, the method of Example 13 optionally further includes adding the global lock to the filter group of the first thread, and determining the state of the global lock based on the determining whether the filter groups intersect.

In Example 15, the method of any one of Examples 9-14 optionally further comprises: re-hashing a hash table from a first size to a second size concurrently in the software transaction; and accessing the hash table by the hardware transaction and enabling the hardware transaction to commit during the concurrent re-hashing. In another
example, a computer readable medium comprising instructions is to perform the method of any of the above examples. In yet another example, an apparatus includes means for performing the method of any of the above examples.

In Example 16, at least one computer readable medium includes instructions that, when executed, enable a system to: execute a second hardware transaction in a second hardware transaction mode of a transactional memory system; commit the second hardware transaction at the end of the second hardware transaction; and, after the commit of the second hardware transaction, invalidate at least one software transaction executed concurrently with the second hardware transaction if a conflict exists between the second hardware transaction and the at least one software transaction.

In Example 17, the at least one computer readable medium of Example 16 optionally further comprises instructions that, when executed, enable the system to determine whether a commit lock was acquired prior to the commit of the second hardware transaction, and if so, to determine whether a conflict exists between the second hardware transaction and a first software transaction that acquired the commit lock.

In Example 18, the at least one computer readable medium of Example 17 optionally further comprises instructions that, when executed, enable the system to abort the second hardware transaction when a conflict exists between the second hardware transaction and the first software transaction, wherein the conflict is determined to exist if the filter group of the second hardware transaction intersects the filter group of the first software transaction.

In Example 19, the at least one computer readable medium of Example 17 optionally further comprises instructions that, when executed, enable the system to determine whether, after the first software transaction acquires the commit lock, one or more hardware transactions have acquired one or more
transaction locks, and if so, to delay the commit of the first software transaction until the one or more transaction locks are released.

In Example 20, the at least one computer readable medium of Example 17 optionally further comprises instructions that, when executed, enable the system to: execute a first hardware transaction in a first hardware transaction mode of the transactional memory system; determine, at the end of the first hardware transaction, whether at least one software transaction is executing concurrently; and if so, abort the first hardware transaction, and otherwise commit the first hardware transaction.

In Example 21, the at least one computer readable medium of Example 17 optionally further comprises instructions that, when executed, enable the system to: validate, by the first software transaction during execution of the first software transaction, a read operation to transactional memory of the transactional memory system; and if the read operation is validated, add the location of the read operation to the filter group of the first software transaction.

In Example 22, the at least one computer readable medium of Example 17 optionally further comprises instructions that, when executed, enable the system to: execute a second software transaction in a second software transaction mode, including acquiring a first lock and a commit lock at the beginning of execution of the second software transaction, and directly updating one or more memory locations during execution of the second software transaction; and at the end of the second software transaction, commit the second software transaction to cause one or more concurrently executing software transactions of the first software transaction mode to be invalidated, and thereafter release the first lock and the commit lock.

In Example 23, a system includes a processor including hybrid transactional memory logic to concurrently execute at least one hardware transaction and at least one
software transaction. The hybrid transactional memory logic may execute a first transaction in a first hardware transaction mode until the first transaction commits or the first transaction retries a first threshold number of times in the first hardware transaction mode, and thereafter, if the first transaction has not committed, execute the first transaction in a first software transaction mode. The hybrid transactional memory logic can include intersection logic to determine whether a filter group associated with the first transaction executed in the first hardware transaction mode conflicts with a filter group associated with a second transaction executed in the first software transaction mode, and responsive to the conflict, the hybrid transactional memory logic prevents the first transaction in the first hardware transaction mode from committing. The system also includes a transactional memory coupled to the processor.

In Example 24, the hybrid transactional memory logic can optionally execute the first transaction in the first software transaction mode until the first transaction commits or the first transaction retries a second threshold number of times in the first software transaction mode, and after the second threshold number of times, execute the first transaction in a second software transaction mode, in which the first transaction directly updates the transactional memory.

In Example 25, the hybrid transactional memory logic can optionally execute the first transaction in a second hardware transaction mode prior to execution in the first hardware transaction mode, wherein the hybrid transactional memory logic executes the first transaction in the second hardware transaction mode a third threshold number of times prior to execution of the first transaction in the first hardware transaction mode.

In Example 26, the hybrid transactional memory logic can optionally cause the first transaction, during execution in the first software transaction mode, to validate read data, to update a filter group associated with the first transaction based on addresses associated with the read data, and to update a hash table with write data.

In Example 27, the hybrid transactional memory logic of Example 26 can optionally cause a second transaction in the second software transaction mode to acquire the first lock and the second lock at the beginning of the second transaction and thereafter directly update the transactional memory during execution of the second transaction; and cause the first transaction in the first software transaction mode to acquire the first lock and the second lock upon commit of the first transaction, thereafter update the transactional memory using the write data from the hash table, and invalidate at least one other software transaction concurrently executing in the first software transaction mode.

In Example 28, a system for executing a transactional memory transaction includes: means for executing a second hardware transaction in a second hardware transaction mode of a transactional memory system; means for committing the second hardware transaction at the end of the second hardware transaction; and means for invalidating, after the commit of the second hardware transaction, at least one software transaction executed concurrently with the second hardware transaction if a conflict exists between the second hardware transaction and the at least one software transaction.

In Example 29, the system of Example 28 optionally further includes means for determining whether a commit lock was acquired prior to the commit of the second hardware transaction, and if so, determining whether a conflict exists between the second hardware transaction and a first software transaction that acquired the commit lock.

In Example 30, the system of Example 28 optionally further includes means for aborting the second hardware
transaction when a conflict exists between the second hardware transaction and the first software transaction, wherein the conflict is determined to exist if the filter group of the second hardware transaction intersects the filter group of the first software transaction.

In Example 31, the system of Example 28 optionally further includes means for determining whether one or more transaction locks were acquired by one or more hardware transactions after the first software transaction acquired the commit lock, and if so, delaying the commit of the first software transaction until the one or more transaction locks are released.

In Example 32, the system of Example 28 optionally further comprises: means for executing a first hardware transaction in a first hardware transaction mode of the transactional memory system; means for determining, at the end of the first hardware transaction, whether at least one software transaction is executing concurrently; and means for, if so, aborting the first hardware transaction, and otherwise committing the first hardware transaction.

In Example 33, the system of Example 28 optionally further comprises: means for validating, by the first software transaction during execution of the first software transaction, a read operation to the transactional memory of the transactional memory system; and means for adding the location of the read operation to the filter group of the first software transaction when the read operation is validated.

In Example 34, the system of Example 28 optionally further comprises: means for executing a second software transaction in a second software transaction mode, including acquiring a first lock and a commit lock at the beginning of execution of the second software transaction and directly updating one or more memory locations during execution of the second software transaction; and means
for committing, at the end of the second software transaction, the second software transaction to cause one or more concurrently executing software transactions of the first software transaction mode to be invalidated, and thereafter releasing the first lock and the commit lock.

It is to be understood that various combinations of the above examples are possible.

Embodiments can be used in many different types of systems. For example, in one embodiment, a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.

Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.
Certain aspects of the present disclosure provide a semiconductor capacitor. The semiconductor capacitor generally includes an insulative layer, and a semiconductor region disposed adjacent to a first side of the insulative layer. The semiconductor capacitor also includes a first non-insulative region disposed adjacent to a second side of the insulative layer. In certain aspects, the semiconductor region may include a second non-insulative region, wherein the semiconductor region includes at least two regions having at least one of different doping concentrations or different doping types, and wherein one or more junctions between the at least two regions are disposed above or below the first non-insulative region. |
CLAIMS1. A semiconductor capacitor comprising:an insulative layer;a semiconductor region disposed adjacent to a first side of the insulative layer; anda first non-insulative region disposed adjacent to a second side of the insulative layer, wherein:the semiconductor region comprises a second non-insulative region and at least two regions having at least one of different doping concentrations or different doping types;one or more junctions between the at least two regions are disposed above or below the first non-insulative region; andthe semiconductor region comprises a third non-insulative region such that a capacitance between the first non-insulative region and the second non- insulative region is configured to be adjusted by varying a control voltage applied to the third non-insulative region with respect to the first non-insulative region or the second non-insulative region.2. The semiconductor capacitor of claim 1, wherein the at least two regions comprise an n-well region and a p-well region, wherein the third non-insulative region is disposed adjacent to the p-well region, and wherein the second non-insulative region is disposed adjacent to the n-well region.3. The semiconductor capacitor of claim 1, wherein the at least two regions comprise at least three regions including:a first n-well region, a second n-well region, and an intrinsic region disposed between the first n-well region and the second n-well region; ora first p-well region, a second p-well region, and an intrinsic region disposed between the first p-well region and the second p-well region.4. The semiconductor capacitor of claim 1, wherein the at least two regions comprise an n-well region and a p-well region.5. The semiconductor capacitor of claim 4, wherein the one or more junctions comprise a p-n junction between the n-well region and the p-well region and disposed above or below the first non-insulative region.6. 
The semiconductor capacitor of claim 4, wherein the at least two regions further comprise an intrinsic region.7. The semiconductor capacitor of claim 6, wherein the intrinsic region is disposed between the n-well region and the p-well region.8. The semiconductor capacitor of claim 6, wherein the p-well region is disposed between the n-well region and the intrinsic region.9. The semiconductor capacitor of claim 6, wherein the n-well region is disposed between the p-well region and the intrinsic region.10. The semiconductor capacitor of claim 1, wherein the at least two regions comprise an intrinsic region and one of an n-well or a p-well region.11. The semiconductor capacitor of claim 1, further comprising a buried oxide (BOX) region, wherein the insulative layer comprises a portion of the BOX region disposed between the semiconductor region and the first non-insulative region.12. The semiconductor capacitor of claim 11, further comprising a silicide-blocking layer, wherein the BOX region and the silicide-blocking layer are disposed adjacent to opposite sides of the semiconductor region.13. A semiconductor capacitor comprising:an insulative layer;a semiconductor region disposed adjacent to a first side of the insulative layer and comprising an intrinsic region; anda first non-insulative region disposed adjacent to a second side of the insulative layer, wherein the semiconductor region further comprises a second non-insulative region having a first doping type.14. The semiconductor capacitor of claim 13, wherein:the semiconductor region further comprises a third non-insulative region having a second doping type; andthe intrinsic region is disposed between the second non-insulative region and the third non-insulative region.15. 
The semiconductor capacitor of claim 13, wherein:the semiconductor region further comprises at least one region having a different doping concentration or doping type than the intrinsic region; andone or more junctions between the at least one region and the intrinsic region are disposed above or below the first non-insulative region.16. The semiconductor capacitor of claim 15, wherein the at least one region comprises at least two regions including an n-well region and a p-well region.17. The semiconductor capacitor of claim 16, wherein the intrinsic region is disposed between the n-well region and the p-well region.18. The semiconductor capacitor of claim 16, wherein the n-well region is disposed between the p-well region and the intrinsic region.19. The semiconductor capacitor of claim 13, further comprising a buried oxide (BOX) region, wherein the insulative layer comprises a portion of the BOX region disposed between the semiconductor region and the first non-insulative region.20. The semiconductor capacitor of claim 19, further comprising a silicide-blocking layer, wherein the BOX region and the silicide-blocking layer are disposed adjacent to opposite sides of the semiconductor region.21. 
A method for fabricating a semiconductor capacitor, comprising:forming a semiconductor region; forming an insulative layer, wherein the semiconductor region is formed adjacent to a first side of the insulative layer;forming a first non-insulative region adjacent to a second side of the insulative layer;forming a second non-insulative region in the semiconductor region, wherein the semiconductor region comprises at least two regions having at least one of different doping concentrations or different doping types, and wherein one or more junctions between the at least two regions are formed above or below the first non-insulative region; andforming a third non-insulative region in the semiconductor region such that a capacitance between the first non-insulative region and the second non-insulative region is configured to be adjusted by varying a control voltage applied to the third non- insulative region with respect to the first non-insulative region or the second non- insulative region.22. The method of claim 21 , wherein the at least two regions comprise an n-well region and a p-well region, wherein the third non-insulative region is formed adjacent to the p-well region, and wherein the second non-insulative region is formed adjacent to the n-well region.23. The method of claim 21, wherein the at least two regions comprise at least three regions including:a first n-well region, a second n-well region, and an intrinsic region formed between the first n-well region and the second n-well region; ora first p-well region, a second p-well region, and an intrinsic region formed between the first p-well region and the second p-well region.24. The method of claim 21 , wherein the at least two regions comprise an n-well region and a p-well region.25. The method of claim 24, wherein a p-n junction between the n-well region and the p-well region is formed above or below the first non-insulative region.26. 
The method of claim 24, wherein the at least two regions further comprise an intrinsic region.

27. The method of claim 26, wherein the intrinsic region is formed between the n-well region and the p-well region.

28. The method of claim 26, wherein the n-well region is formed between the p-well region and the intrinsic region.

29. A method for fabricating a semiconductor capacitor, comprising: forming a semiconductor region comprising an intrinsic region; forming an insulative layer, wherein the semiconductor region is formed adjacent to a first side of the insulative layer; forming a first non-insulative region adjacent to a second side of the insulative layer; and forming, in the semiconductor region, a second non-insulative region having a first doping type.

30. The method of claim 29, further comprising: forming, in the semiconductor region, a third non-insulative region having a second doping type, wherein the intrinsic region is formed between the second non-insulative region and the third non-insulative region.
TRANSCAP DEVICE ARCHITECTURE WITH REDUCED CONTROL VOLTAGE AND IMPROVED QUALITY FACTOR

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Application No. 15/706,352, filed September 15, 2017, which claims benefit of and priority to provisional application No. 62/524,171, filed June 23, 2017, both of which are expressly incorporated herein by reference in their entirety.

TECHNICAL FIELD

[0002] Certain aspects of the present disclosure generally relate to electronic circuits and, more particularly, to a variable semiconductor capacitor.

BACKGROUND

[0003] Semiconductor capacitors are fundamental components for integrated circuits. A variable capacitor is a capacitor whose capacitance may be intentionally and repeatedly changed under the influence of a bias voltage. A variable capacitor, which may be referred to as a varactor, is often used in inductor-capacitor (LC) circuits to set the resonance frequency of an oscillator, or as a variable reactance, e.g., for impedance matching in antenna tuners.

[0004] A voltage-controlled oscillator (VCO) is an example circuit that may use a varactor in which the thickness of a depletion region formed in a p-n junction diode is varied by changing a bias voltage to alter the junction capacitance. Any junction diode exhibits this effect (including p-n junctions in transistors), but devices used as variable capacitance diodes are designed with a large junction area and a doping profile specifically chosen to improve the device performance, such as quality factor and tuning range.

SUMMARY

[0005] Certain aspects of the present disclosure provide a semiconductor capacitor. The semiconductor capacitor generally includes an insulative layer and a semiconductor region disposed adjacent to a first side of the insulative layer. The semiconductor capacitor also includes a first non-insulative region disposed adjacent to a second side of the insulative layer.
In certain aspects, the semiconductor region includes a second non-insulative region and at least two regions having at least one of different doping concentrations or different doping types, wherein one or more junctions between the at least two regions are disposed above or below the first non-insulative region. In certain aspects, the semiconductor region also includes a third non-insulative region such that a capacitance between the first non-insulative region and the second non-insulative region is configured to be adjusted by varying a control voltage applied to the third non-insulative region with respect to the first non-insulative region or the second non-insulative region.

[0006] Certain aspects of the present disclosure provide a semiconductor capacitor. The semiconductor capacitor generally includes an insulative layer, a semiconductor region disposed adjacent to a first side of the insulative layer and comprising an intrinsic region, and a first non-insulative region disposed adjacent to a second side of the insulative layer. In certain aspects, the semiconductor region also includes a second non-insulative region having a first doping type.

[0007] Certain aspects of the present disclosure provide a method for fabricating a semiconductor capacitor.
The method generally includes forming a semiconductor region; forming an insulative layer, wherein the semiconductor region is formed adjacent to a first side of the insulative layer; forming a first non-insulative region adjacent to a second side of the insulative layer; forming a second non-insulative region in the semiconductor region, wherein the semiconductor region comprises at least two regions having at least one of different doping concentrations or different doping types, and wherein one or more junctions between the at least two regions are formed above or below the first non-insulative region; and forming a third non-insulative region in the semiconductor region such that a capacitance between the first non-insulative region and the second non-insulative region is configured to be adjusted by varying a control voltage applied to the third non-insulative region with respect to the first non-insulative region or the second non-insulative region.

[0008] Certain aspects of the present disclosure provide a method for fabricating a semiconductor capacitor. The method generally includes forming a semiconductor region comprising an intrinsic region; forming an insulative layer, wherein the semiconductor region is formed adjacent to a first side of the insulative layer; forming a first non-insulative region adjacent to a second side of the insulative layer; and forming, in the semiconductor region, a second non-insulative region having a first doping type.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings.
It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.

[0010] FIG. 1 illustrates an example semiconductor variable capacitor.

[0011] FIG. 2 illustrates an example differential semiconductor variable capacitor.

[0012] FIG. 3 illustrates a transcap device having a p-n junction formed underneath a plate oxide layer, in accordance with certain aspects of the present disclosure.

[0013] FIG. 4 is a graph illustrating the capacitance and quality factor (Q) of a transcap device as a function of the control voltage for different p-well region lengths, in accordance with certain aspects of the present disclosure.

[0014] FIGs. 5A-5C illustrate a transcap device implemented with different semiconductor region structures, in accordance with certain aspects of the present disclosure.

[0015] FIG. 6A illustrates a transcap device implemented with an intrinsic region, in accordance with certain aspects of the present disclosure.

[0016] FIGs. 6B and 6C illustrate a cross-section and a top-down view, respectively, of a transcap device implemented with an intrinsic region and an n-well region, in accordance with certain aspects of the present disclosure.

[0017] FIG. 7 is a graph illustrating the capacitance and Q of a transcap device as a function of control voltage, with different intrinsic region lengths, in accordance with certain aspects of the present disclosure.

[0018] FIGs. 8A and 8B illustrate semiconductor capacitors implemented with an intrinsic region underneath a non-insulative region for an anode, in accordance with certain aspects of the present disclosure.

[0019] FIG. 9 illustrates a transcap device implemented with a backside plate, in accordance with certain aspects of the present disclosure.

[0020] FIGs.
10A and 10B are graphs showing the capacitance-voltage (C-V) characteristics and the Q, respectively, of the transcap device of FIG. 9 for different intrinsic region lengths, in accordance with certain aspects of the present disclosure.

[0021] FIGs. 11 and 12 are flow diagrams of example operations for fabricating a semiconductor capacitor, in accordance with certain aspects of the present disclosure.

DETAILED DESCRIPTION

[0022] Aspects of the present disclosure are generally directed to a semiconductor capacitor structure having a semiconductor region implemented with one or more regions having different doping concentrations and/or different doping types. In certain aspects, junctions between the one or more regions may be formed above or below a plate terminal region of the semiconductor capacitor to improve the quality factor (Q) of the semiconductor capacitor.

[0023] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0024] As used herein, the term "connected with" in the various tenses of the verb "connect" may mean that element A is directly connected to element B or that other elements may be connected between elements A and B (i.e., that element A is indirectly connected with element B). In the case of electrical components, the term "connected with" may also be used herein to mean that a wire, trace, or other electrically conductive material is used to electrically connect elements A and B (and any components electrically connected therebetween).

[0025] FIG. 1 illustrates an example structure of a transcap device 100. The transcap device 100 includes a non-insulative region 112 coupled to a plate (P) terminal 101, a non-insulative region 106 coupled to a well (W) terminal 103, and a non-insulative region 108 coupled to a displacement (D) terminal 102.
Certain implementations of a transcap device use a plate oxide layer 110 disposed above a semiconductor region 114. The plate oxide layer 110 may isolate the W and P terminals, and thus, in effect, act as a dielectric for the transcap device 100. The non-insulative region 106 (e.g., a heavily n-doped region) and the non-insulative region 108 (e.g., a heavily p-doped region) may be formed in the semiconductor region 114 and on two sides of the transcap device 100 in order to create p-n junctions. As used herein, a non-insulative region generally refers to a region that may be conductive or semiconductive.

[0026] In certain aspects, a bias voltage may be applied between the D terminal 102 and the W terminal 103 in order to modulate the capacitance between the P and W terminals. For example, by applying a bias voltage to the D terminal 102, a depletion region 130 may be formed between the p-n junction of the non-insulative region 108 and the region 115 of the semiconductor region 114. Based on the bias voltage, this depletion region 130 may widen under the plate oxide layer 110, reducing the area of the equivalent electrode formed by the semiconductor region 114, and with it, the effective capacitance area and capacitance value of the transcap device 100. Furthermore, the bias of the W and P terminals may be set so as to avoid the formation of an inverted region underneath the oxide and to operate the transcap device 100 in deep depletion mode. By varying the voltage of the W terminal with respect to the P and D terminals, both vertical and horizontal depletion regions may be used to modulate the capacitance between the W and P terminals.

[0027] The work function of the non-insulative region 112 above the plate oxide layer 110 may be chosen to improve the device performance. For example, an n-doped poly-silicon material may be used (instead of p-doped), even if the semiconductor region 114 underneath the plate oxide layer 110 is doped with n-type impurities.
In some aspects, a metallic material (also doped, if desired) with an opportune work function may be used for the non-insulative region 112, or a multi-layer stack of different metallic materials may be used to obtain the desired work function. In certain aspects, the non-insulative region 112 may be divided into two sub-regions, one n-doped and one p-doped, or a different metallic material may be used for each sub-region.

[0028] In some cases, the semiconductor region 114 may be disposed above an insulator or semiconductor region 116. The type of material for the region 116 may be chosen in order to improve the transcap device 100 performance. For example, the region 116 may be an insulator, a semi-insulator, or an intrinsic/near-intrinsic semiconductor in order to decrease the parasitic capacitances associated with the transcap device 100. In some cases, the region 116 may be made of n-doped or p-doped semiconductor with an appropriate doping profile in order to increase the transcap device Q and/or the control on the depletion region 130 that may be formed between the non-insulative region 108 and the region 115 of the semiconductor region 114 when applying a bias voltage to the D terminal 102. The region 116 may also be formed by multiple semiconductor layers or regions doped in different ways (n, p, or intrinsic). Furthermore, the region 116 may include semiconductors, insulating layers, and/or substrates or may be formed above semiconductors, insulating layers, and/or substrates.

[0029] To better understand the working principle of the transcap device 100, it may be assumed that the D terminal 102 is biased with a negative voltage with respect to the W terminal 103. The width of the depletion region 130 in the semiconductor region 114 may be controlled by applying a control voltage to the D terminal 102 or to the W terminal 103.
The capacitance between the W and P terminals may depend on the width of the depletion region 130 in the semiconductor region 114, and thus, may be controlled by applying the control voltage to the D terminal 102. Furthermore, the variation of the bias voltage applied to the D terminal 102 may not alter the direct-current (DC) voltage between the W and P terminals, allowing for improved control of the device characteristics.

[0030] In some cases, it may be preferable to have the non-insulative region 106 and/or non-insulative region 108 a distance away from the plate oxide layer 110 in order to reduce the parasitic capacitance associated with the non-insulative region 108 and improve the isolation of the non-insulative region 106 for high control voltages. For example, the non-insulative region 106 may be partially overlapped with the plate oxide layer 110, or the non-insulative region 106 may be formed at a distance from the edge of the plate oxide layer 110 to increase the device tuning range and linearity. In the latter case, the voltage-withstanding capability of the device is improved since a portion of a radio-frequency (RF) signal, that may be applied to the P and W terminals, drops between the oxide edge and the non-insulative region 106 instead of being applied entirely across the plate oxide layer 110. The non-insulative region 108 may be partially overlapped with the plate oxide layer 110, or the non-insulative region 108 may be spaced apart so as to reduce the parasitic capacitance between the P terminal 101 and the D terminal 102.

[0031] In certain aspects, the semiconductor region 114 may be implemented with a p-well region to improve the breakdown voltage of the p-n junction between the non-insulative region 108 and the region 115 of the semiconductor region 114, decreasing, at the same time, the parasitic capacitance between the P terminal 101 and the D terminal 102, as described in more detail herein.
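The deep-depletion behavior described above can be approximated with a simple one-dimensional model in which the plate capacitance is the oxide capacitance in series with a bias-dependent depletion capacitance. The following sketch is purely illustrative and is not part of the disclosure; the oxide thickness and well doping values are assumptions chosen only to show the trend of capacitance falling as the control bias widens the depletion region.

```python
import math

# Physical constants (SI units); device parameters below are assumptions,
# not values from the disclosure.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_OX = 3.9 * EPS0       # SiO2 permittivity
EPS_SI = 11.7 * EPS0      # silicon permittivity
Q_E = 1.602e-19           # elementary charge, C

def depletion_width(v_bias, n_dope):
    """1-D depletion width (m) under the plate for reverse bias v_bias (V)."""
    return math.sqrt(2.0 * EPS_SI * v_bias / (Q_E * n_dope))

def plate_capacitance(v_bias, t_ox=5e-9, n_dope=1e23):
    """Oxide capacitance in series with the bias-dependent depletion
    capacitance, per unit area (F/m^2); n_dope in m^-3 (1e23 m^-3 = 1e17 cm^-3)."""
    c_ox = EPS_OX / t_ox
    if v_bias <= 0:
        return c_ox  # no depletion: oxide capacitance only
    c_dep = EPS_SI / depletion_width(v_bias, n_dope)
    return c_ox * c_dep / (c_ox + c_dep)

# Capacitance falls monotonically as the control bias widens the depletion region.
c0, c1, c2 = (plate_capacitance(v) for v in (0.0, 1.0, 3.0))
assert c0 > c1 > c2
```

This one-dimensional model captures only the vertical depletion effect; the horizontal depletion from the D terminal described in paragraph [0026] would further reduce the effective electrode area.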
Similarly, the semiconductor region 114 may be implemented with an n-doped region between the non-insulative region 106 and the region 115 of the semiconductor region 114 in order to regulate the doping concentration between the plate oxide layer 110 and the non-insulative region 106, as described in more detail herein. In certain aspects of the present disclosure, the semiconductor region 114 may be implemented with two or more regions having different doping concentrations and/or different doping types. A junction between the two or more regions may be disposed below the plate oxide layer 110 to improve the Q of the transcap device 100.

[0032] FIG. 2 illustrates an example differential transcap device 200. The differential transcap device 200 may be obtained by disposing two of the transcap devices 100 back-to-back. In this example, RF+ and RF- terminals (e.g., corresponding to the P terminal in FIG. 1) correspond to the positive and negative nodes of a differential RF port for a differential RF signal. The RF+ terminal may be coupled to a non-insulative region 218, and the RF- terminal may be coupled to a non-insulative region 220, each of the non-insulative regions 218 and 220 disposed above respective oxide layers 202 and 204. N-well regions 206 and 208 may be coupled to a W terminal via a non-insulative region 210 (e.g., n+), as illustrated. The differential transcap device 200 also includes D terminals 211 and 212 coupled to respective non-insulative regions 222 and 224. A bias voltage may be applied to the D terminals 211 and 212 (or to the W terminal with respect to the other terminals of the device) to adjust a depletion region of the n-well regions 206 and 208, respectively, thereby adjusting the capacitance between the respective RF+ and RF- terminals and the W terminal.
In some aspects, a buried oxide layer 214 may be positioned below the n-well regions 206 and 208 and above a semiconductor substrate or insulator 216, as illustrated.

[0033] The capacitance density achievable with the transcap technology may be increased at the expense of device performance. For example, with reference to FIG. 2, the capacitance density may be increased by reducing the distance between the non-insulative regions 218 and 220 for the RF+ and RF- terminals. However, reducing the distance between the non-insulative regions 218 and 220 may increase the parasitic capacitance associated with the structure, lowering the tuning range of the transcap device 200.

[0034] The capacitance-voltage (C-V) characteristic of the transcap device 100 determines its performance parameters, such as tuning range (Cmax/Cmin), maximum control voltage for achieving the full tuning range, Q, and linearity of the transcap device. However, these figures of merit may depend on several process parameters, such as well doping, oxide thickness, n+/p+ proximity to the Plate terminal, and Plate length. A tradeoff may exist between these performance parameters. For example, one may increase the tuning range of a transcap device either by increasing the plate length or by placing the n+ region far away from the plate terminal. However, in both cases, the device Q is degraded, and the tuning voltage used to improve tunability of the transcap device is increased. Similarly, the oxide thickness may be increased to improve the Q, but this choice may lead to a degradation of the tuning range. Likewise, higher well doping may provide better linearity and Q, but it may also degrade the device tuning range. Certain aspects of the present disclosure soften these tradeoffs. Moreover, certain aspects of the present disclosure allow for a sharper transition between capacitance levels of the transcap device, which may be beneficial for transcap usage in digital tuning.

[0035] FIG.
3 illustrates the transcap device 100 having a p-n junction formed underneath the plate oxide layer 110, in accordance with certain aspects of the present disclosure. For example, the region between the non-insulative regions 106 and 108 may be implemented with an n-well region 302 and a p-well region 304, having a p-n junction underneath the plate oxide layer 110, thereby modifying the electric field distribution inside the transcap device 100 during operation with respect to the transcap device 100 of FIG. 1. For example, the behavior of the capacitance and Q as a function of the control voltage depends on how the depletion region created by a horizontal electric field, from the non-insulative region 108, moves with respect to the depletion region created by a vertical electric field from the non-insulative region 112. As the p-n junction between the n-well and p-well regions 302 and 304 is formed closer to the n+ region, the depletion region caused by the horizontal electric field may show its effect at a lower control voltage in the C-V characteristic, allowing for the manipulation of the Q of the transcap device 100. This configuration can be useful, especially in modern silicon-on-insulator (SOI) technologies that may use a thin active silicon layer, where the device-Q-versus-control-voltage plot shows a dip due to the unbalanced electric field. In some cases, the structure of the transcap device 100 may be implemented by using separate n-well and p-well implantation masks during fabrication.

[0036] In certain aspects, the doping concentration of the p-well region 304 may be used to manipulate the electric field distribution inside the transcap device 100. For example, the p-well region 304 may be low-doped or may be replaced with an intrinsic (i) region so as to obtain a p-i-n junction between the non-insulative regions 106 and 108 and further sharpen the transition between high and low capacitance of the transcap device 100.
In this case, the length of the intrinsic region may be set to obtain the desired control voltage and C-V characteristic of the transcap device.

[0037] FIG. 4 is a graph 400 illustrating the capacitance and Q of a transcap device as a function of the control voltage for different p-well region lengths, in accordance with certain aspects of the present disclosure. As illustrated, the structure of the transcap device 100 may be adjusted to improve the device Q from around 120 up to almost 190 with little to no degradation of the tuning range. Alternatively, the control voltage may be reduced by up to 2 volts to obtain a sharp transition in the C-V characteristic, which may be especially beneficial in digital applications.

[0038] FIGs. 5A-5C illustrate the transcap device 100 implemented with different structures for the semiconductor region 114, in accordance with certain aspects of the present disclosure. The Q curve 402 of graph 400 corresponds to the configuration of the transcap device 100 in FIG. 5A implemented without a p-well region. The Q curve 404 of graph 400 corresponds to the configuration of the transcap device 100 in FIG. 5B implemented with a 125 nm p-well region length. The Q curve 406 of graph 400 corresponds to the configuration of the transcap device 100 in FIG. 5C implemented with a 50 nm p-well region length. As illustrated in FIG. 5B, when a positive bias voltage is applied at the P terminal, the p-well region 304 may be inverted at its interface with the plate oxide layer 110, and electrons are accumulated in the n-well region 302. This causes the maximum capacitance of the transcap device 100 of FIG. 5B to be the same as that of a transcap device realized without a p-well region.

[0039] FIG. 6A illustrates the transcap device 100 implemented with an intrinsic region 602, in accordance with certain aspects of the present disclosure.
In this case, the intrinsic region 602 spans the entire region between the non-insulative region 106 and the non-insulative region 108. As used herein, the term "intrinsic" refers to an intrinsic semiconductor or a near-intrinsic semiconductor (e.g., lightly-doped semiconductor).

[0040] FIGs. 6B and 6C illustrate a cross-section and a top-down view, respectively, of the transcap device 100 implemented with an intrinsic region 602, an n-well region 302, and a p-well region 304, in accordance with certain aspects of the present disclosure. As illustrated, the semiconductor region of the transcap device 100 of FIG. 6B is implemented with the intrinsic region 602 disposed between the n-well and p-well regions 302 and 304. In some cases, the intrinsic region 602 may be a lightly doped p-type (or n-type) region having a dopant concentration on the order of 1e12 cm-3.

[0041] When a positive bias voltage is applied at the P terminal, the intrinsic region 602 is inverted at its interface with the plate oxide layer (assuming that this region is a lightly doped p-type region), and electrons are accumulated in the n-well region 302. This causes the maximum capacitance of the transcap device to be the same as that of a transcap device realized without an intrinsic region. However, when the P or W terminals are biased such that the transcap device is operated in depletion, the intrinsic region may be depleted faster, causing a steeper reduction in the capacitance with respect to the control voltage when compared to a transcap device implemented without an intrinsic region.

[0042] FIG. 7 is a graph 700 illustrating the capacitance and Q of the transcap device 100 of FIGs. 6B and 6C as a function of the control voltage, with different intrinsic region lengths, in accordance with certain aspects of the present disclosure. The graph 700 shows that the transcap device 100 of FIGs.
6B and 6C may be configured to reduce the control voltage by up to 1 V and improve Q by up to 20% without degrading the tuning range. The observed improvement in the device Q may be due to the effect of the lateral depletion region caused by the p-n junction, which dominates over the vertical depletion region caused by the plate when the control bias is increased, reducing the maximum effective resistance between the P and W terminals. In certain aspects, an intrinsic region may be added to a two-terminal metal-oxide-semiconductor (MOS) varactor to improve the device performance, as described in more detail with respect to FIGs. 8A and 8B.

[0043] FIGs. 8A and 8B illustrate capacitors 800 and 801, respectively, implemented with an intrinsic region 802 underneath the non-insulative region 804 for the anode, in accordance with certain aspects of the present disclosure. In certain aspects, the capacitor may include a single cathode, as illustrated by capacitor 800 of FIG. 8A, or two cathodes, as illustrated by capacitor 801 of FIG. 8B. The intrinsic region 802 may be disposed between the non-insulative regions 806 and 808 coupled to the cathodes. In certain aspects, the cathodes of capacitor 801 may be shorted, providing a two-terminal capacitor. In certain aspects, the capacitors 800 and 801 may optionally include n-well regions 840 and/or 842. In some cases, the n-well regions 840 and/or 842 may be replaced with heavily doped regions.

[0044] In certain aspects of the present disclosure, the example transcap devices and capacitors described herein may be implemented using a back-gate configuration, as described in more detail with respect to FIG. 9.

[0045] FIG. 9 illustrates an example transcap device 900 implemented using a back-gate configuration, in accordance with certain aspects of the present disclosure.
For example, a non-insulative region 902 (e.g., a back-side plate terminal) may be formed below at least a portion of a buried oxide (BOX) region 904 of the transcap device 900. Therefore, the BOX region 904 may be used as the plate oxide, and a backside cavity contact may be used as a plate terminal, enabling the use of the transcap device 900 in high-voltage applications, for example.

[0046] While reducing the maximum control voltage is not a primary objective for this transcap device configuration, the tuning-range-versus-Q performance of the transcap device 900 may be improved by incorporating an intrinsic region 906. The configuration of the transcap device 900 allows for the fabrication of thick-oxide transcaps with oxide thicknesses in the range of 30-40 nm with operating voltages up to 15-20 V, for example. In certain aspects, a silicide-blocking layer 908 may be formed above at least a portion of the semiconductor region 114 to prevent the junctions between the different regions of the semiconductor region 114 from being shorted.

[0047] FIGs. 10A and 10B illustrate graphs 1000 and 1001 showing the C-V characteristics and the Q, respectively, of the transcap device 900 for different intrinsic region lengths, in accordance with certain aspects of the present disclosure. As illustrated in graph 1001, the Q of the transcap device 900 is improved as compared to a reference (ref) transcap device implemented without an intrinsic region. Moreover, there is little to no impact on the tuning range of the transcap device 900 from incorporating the intrinsic region 906, as illustrated in graph 1000. For example, the transcap device 900 (e.g., implemented with a 35 nm oxide, a poly length (Lpoly) of 0.48 μm, and a 0.25 μm intrinsic region length) provides a Q of about 122, as compared to a Q of 55 for the reference device, for a tuning range of 10x. As illustrated in FIG.
9, Lpoly is the length of the semiconductor region 114 from an edge of the non-insulative region 108 to an edge of the non-insulative region 902.

[0048] FIG. 11 is a flow diagram of example operations 1100 for fabricating a semiconductor capacitor, in accordance with certain aspects of the present disclosure. The operations 1100 may be performed, for example, by a semiconductor-processing chamber.

[0049] Operations 1100 may begin at block 1102 by forming a semiconductor region (e.g., semiconductor region 114), and at block 1104, forming an insulative layer (e.g., plate oxide layer 110), wherein the semiconductor region is formed adjacent to a first side of the insulative layer. At block 1106, a first non-insulative region (e.g., non-insulative region 112) is formed adjacent to a second side of the insulative layer, and at block 1108, a second non-insulative region (e.g., non-insulative region 106) is formed in the semiconductor region. In certain aspects, the semiconductor region includes at least two regions having at least one of different doping concentrations or different doping types, and one or more junctions between the at least two regions are formed above or below the first non-insulative region.

[0050] In certain aspects, the operations 1100 also include, at block 1110, forming a third non-insulative region (e.g., non-insulative region 108) in the semiconductor region such that a capacitance between the first non-insulative region and the second non-insulative region is configured to be adjusted by varying a control voltage applied to the third non-insulative region with respect to the first or second non-insulative region.
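The blocks of operations 1100 can be summarized as an ordered sequence. The data structure below is purely illustrative and not part of the disclosure; the block numbers come from the text, while the step descriptions are paraphrases.

```python
# Hypothetical summary of the FIG. 11 flow (operations 1100); illustrative only.
OPERATIONS_1100 = [
    (1102, "form semiconductor region (e.g., semiconductor region 114)"),
    (1104, "form insulative layer (e.g., plate oxide layer 110) with the "
           "semiconductor region adjacent to its first side"),
    (1106, "form first non-insulative region (e.g., 112) adjacent to the "
           "second side of the insulative layer"),
    (1108, "form second non-insulative region (e.g., 106) in the "
           "semiconductor region"),
    (1110, "form third non-insulative region (e.g., 108) so that the W-P "
           "capacitance is adjustable via a control voltage"),
]

# The blocks are carried out in ascending order.
block_ids = [block for block, _ in OPERATIONS_1100]
assert block_ids == [1102, 1104, 1106, 1108, 1110]
```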
In certain aspects, the at least two regions may include an n-well region (e.g., n-well region 302) and a p-well region (e.g., p-well region 304), wherein the third non-insulative region is formed adjacent to the p-well region and the second non-insulative region is formed adjacent to the n-well region.

[0051] In certain aspects, the at least two regions comprise at least three regions including a first n-well region, a second n-well region, and an intrinsic region formed between the first n-well region and the second n-well region. In other aspects, the at least three regions may include a first p-well region, a second p-well region, and an intrinsic region formed between the first p-well region and the second p-well region.

[0052] In certain aspects, the at least two regions may include an n-well region and a p-well region. In this case, a p-n junction between the n-well region and the p-well region is formed above or below the first non-insulative region. In some cases, the at least two regions further comprise an intrinsic region (e.g., intrinsic region 602). The intrinsic region may be formed between the n-well region and the p-well region. In some cases, the n-well region is formed between the p-well region and the intrinsic region.

[0053] FIG. 12 is a flow diagram of example operations 1200 for fabricating a semiconductor capacitor, in accordance with certain aspects of the present disclosure. The operations 1200 may be performed, for example, by a semiconductor-processing chamber.

[0054] Operations 1200 may begin at block 1202 by forming a semiconductor region comprising an intrinsic region, and at block 1204, forming an insulative layer (e.g., plate oxide layer 110), wherein the semiconductor region is formed adjacent to a first side of the insulative layer. At block 1206, a first non-insulative region (e.g., non-insulative region 112) may be formed adjacent to a second side of the insulative layer.
At block 1208, a second non-insulative region having a first doping type (e.g., acceptor doping type) may be formed in the semiconductor region. In certain aspects, a third non-insulative region having a second doping type (e.g., donor doping type) may be formed in the semiconductor region. In this case, the intrinsic region is formed between the second non-insulative region and the third non-insulative region.[0055] While several examples have been described herein with specific doping types to facilitate understanding, the examples provided herein may be implemented with different doping types and materials. For example, the p+ regions (e.g., non-insulative region 108) may be replaced with a Schottky contact and/or the n+ regions (e.g., non-insulative region 106) may be replaced with a metal ohmic contact. In the case where a Schottky contact is used in combination with a III-V process technology, an extra wide-bandgap layer may be interposed between the metal and the n-doped semiconductor in order to reduce the current leakage associated with the Schottky contact.[0056] Certain aspects described herein may be implemented using different technologies such as bulk complementary metal-oxide semiconductor (CMOS), bipolar CMOS and double-diffused metal-oxide semiconductor (DMOS) referred to as bipolar-CMOS-DMOS (BCD), bipolar CMOS (BiCMOS), bipolar, silicon on insulator (SOI) (including ultra-thin-body, fully depleted, partially depleted, high voltage and any other SOI technology), silicon on sapphire, thin-film, trench MOS, junction field-effect transistor (JFET), fin field-effect transistor (FinFET), multi-gate FET (including tri-gate FET and gate-all-around technology), vertical MOS, silicon carbide (SiC), germanium (Ge), silicon germanium (SiGe) (or any other IV-IV compound semiconductor material), III-V technology (e.g. 
gallium nitride (GaN), aluminum gallium nitride (AlGaN), aluminum nitride (AlN), indium nitride (InN), indium gallium nitride (InGaN), gallium arsenide (GaAs), aluminum gallium arsenide (AlGaAs), aluminum arsenide (AlAs), and any other polar and non-polar III-V compound semiconductor material including ternary and quaternary alloys) with or without heterojunctions, II-VI technology (polar and non-polar II-VI compound semiconductor material including ternary and quaternary alloys) with or without heterojunctions, or discrete device technologies (e.g., the ones used for discrete silicon or SiC MOS discrete power devices or for III-V discrete devices), including both organic and inorganic technologies. Different doping profiles can be used in order to improve the device performance. If desired, high-k dielectric materials can be used to form the capacitance dielectric so as to increase the capacitance density. The plate region can be formed with metallic or semiconductor (crystalline, poly-crystalline or amorphous) materials.[0057] Certain aspects described herein may be realized as integrated or discrete components. A dual version of the transcap devices described herein may be obtained by substituting the n-doped regions with p-type ones and vice-versa. In certain aspects, the n+ control regions may be replaced with Schottky contacts and/or the p+ well pickup regions may be replaced with metal ohmic contacts. Many other configurations may be obtained by combining different aspects discussed herein and their variants.[0058] Certain aspects of the present disclosure may be realized with a standard SOI or bulk CMOS process. The distance between the doping implants and the capacitance electrode P may be omitted by auto-aligning the implantations with the MOS structure, or may be obtained by adding two spacers to the structure during the fabrication process or by misaligning the n+ (or p+) implantation mask with respect to the MOS oxide edge. 
The latter allows the achievement of any desired distance between the highly doped regions and the oxide edge. In certain aspects, one or more extra process steps may also be used in order to form pillars/trenches in the semiconductor substrate (by means of semiconductor etching or deposition process steps) and/or to obtain the buried doped regions at the beginning of the manufacturing process.[0059] The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application-specific integrated circuit (ASIC), or a processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.[0060] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.[0061] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. 
As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).[0062] The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0063] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.[0064] The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. 
The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the physical (PHY) layer. In the case of a user terminal, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.[0065] The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may be implemented with an ASIC with the processor, the bus interface, the user interface in the case of an access terminal, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more FPGAs, PLDs, controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. 
Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.[0066] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims. |
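As a numerical aside on the high-k note in paragraph [0056] above, the benefit of a high-k capacitance dielectric follows from the ideal parallel-plate relation C/A = ε0·k/t. The sketch below is illustrative only; the dielectric constants are assumed textbook values (SiO2 ≈ 3.9, a high-k material such as HfO2 ≈ 25) and are not taken from the disclosure:

```python
# Illustrative parallel-plate capacitance-density comparison.
# Assumption: ideal plates, no quantum or depletion corrections.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_density(k, t_ox_m):
    """Ideal parallel-plate capacitance per unit area, C/A = eps0*k/t (F/m^2)."""
    return EPS0 * k / t_ox_m

sio2 = capacitance_density(3.9, 2e-9)    # 2 nm SiO2 (assumed k ~ 3.9)
high_k = capacitance_density(25.0, 2e-9)  # 2 nm high-k (assumed k ~ 25)
# At fixed thickness, capacitance density scales linearly with k.
assert math.isclose(high_k / sio2, 25.0 / 3.9)
```

This is why, as the text notes, high-k dielectrics can be used to increase capacitance density without thinning the oxide (which would increase leakage).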
A method used in forming a memory array comprises forming a conductive tier atop a substrate, with the conductive tier comprising openings therein. An insulator tier is formed atop the conductive tier and the insulator tier comprises insulator material that extends downwardly into the openings in the conductive tier. A stack comprising vertically-alternating insulative tiers and wordline tiers is formed above the insulator tier. Strings comprising channel material that extend through the insulative tiers and the wordline tiers are formed. The channel material of the strings is directly electrically coupled to conductive material in the conductive tier. Structure independent of method is disclosed. 
CLAIMS: 1. A method used in forming a memory array, comprising: forming a conductive tier atop a substrate, the conductive tier comprising openings therein; forming an insulator tier atop the conductive tier, the insulator tier comprising insulator material that extends downwardly into the openings in the conductive tier; forming a stack comprising vertically-alternating insulative tiers and wordline tiers above the insulator tier; and forming strings comprising channel material through the insulative tiers and the wordline tiers, the channel material of the strings being directly electrically coupled to conductive material in the conductive tier. 2. The method of claim 1 wherein forming the conductive tier and openings therein comprises: depositing the conductive material atop the substrate; and etching the openings into the deposited conductive material. 3. The method of claim 1 wherein the insulator material completely fills the openings in the conductive tier. 4. The method of claim 1 wherein the openings in the conductive tier and the insulator material therein comprise vertical sidewalls. 5. The method of claim 1 wherein the openings in the conductive tier and the insulator material therein are wider at their respective bottoms than at their respective tops. 6. The method of claim 1 wherein the openings in the conductive tier and the insulator material therein are wider at their respective tops than at their respective bottoms. 7. 
The method of claim 1 wherein, the strings comprise: the channel material extending elevationally through the insulative tiers and the wordline tiers; insulative charge-passage material in the wordline tiers laterally outward of the channel material; a storage region in the wordline tiers laterally outward of the insulative charge-passage material; and a charge-blocking region in the wordline tiers laterally outward of the storage region; and multiple of the openings in the conductive tier and the insulator material therein are laterally outward of individual of the strings. 8. The method of claim 7 wherein said multiple surround the individual strings and are at least 3 in number. 9. The method of claim 8 wherein relative to surround, said multiple are shared by immediately laterally adjacent of the individual strings. 10. The method of claim 8 wherein said multiple are equally spaced around the individual strings. 11. The method of claim 8 wherein said multiple are at least 4 in number. 12. The method of claim 11 wherein said multiple are at least 6 in number. 13. The method of claim 12 wherein said multiple are only 6 in number. 14. The method of claim 13 wherein, said multiple are equally spaced around the individual strings; and relative to surround, said multiple are shared by immediately laterally adjacent of the individual strings. 15. The method of claim 1 wherein the insulator material does not extend through the conductive tier. 16. The method of claim 15 wherein the conductive tier has a maximum thickness of greater than 600 Angstroms, the insulator material extending through no less than 600 Angstroms of the conductive tier. 17. The method of claim 15 wherein the insulator material extends through no more than 50% of maximum thickness of the conductive tier. 18. The method of claim 1 wherein the insulator material extends through the conductive tier. 19. 
The method of claim 1 wherein the strings are everywhere laterally spaced from the openings in the conductive tier and the insulator material therein. 20. The method of claim 1 comprising forming and removing a sacrificial plug that extends through the insulator tier, individual of the strings extending through void space that is left after said removing. 21. The method of claim 1 wherein the strings extend through the insulator tier. 22. The method of claim 21 wherein the strings do not extend into the conductive tier. 23. The method of claim 1 wherein the strings extend through the insulator tier and into the conductive tier. 24. The method of claim 1 wherein the channel material of the strings is directly against the conductive material in the conductive tier. 25. The method of claim 1 wherein the strings extend into the insulator tier. 26. A method used in forming a memory array, comprising: forming conductive material of a conductive tier atop a substrate; etching openings into the conductive material, the openings being grouped around individual string locations; forming an insulator tier atop the conductive tier, the insulator tier comprising insulator material that extends downwardly into the openings in the conductive material of the conductive tier; forming a stack comprising vertically-alternating insulative tiers and wordline tiers above the insulator tier; and forming strings comprising channel material through the insulative tiers and the wordline tiers and into the insulator tier in the string locations, the channel material of the strings being directly electrically coupled to the conductive material in the conductive tier. 27. The method of claim 26 wherein the grouped openings and insulator material therein are equally spaced around the individual string locations and around individual of the strings. 28. 
The method of claim 27 wherein, relative to being grouped, the openings and insulator material therein in individual of the groups are shared by immediately laterally adjacent of the individual groups. 29. A memory array comprising: a conductive tier comprising openings therein; an insulator tier atop the conductive tier, the insulator tier comprising insulator material that extends downwardly into the openings in the conductive tier; a stack comprising vertically-alternating insulative tiers and wordline tiers above the insulator tier; and strings comprising channel material extending through the insulative tiers and the wordline tiers, the channel material of the strings being directly electrically coupled to conductive material in the conductive tier. 30. The memory array of claim 29 wherein the insulator material completely fills the openings in the conductive tier. 31. The memory array of claim 29 wherein, the strings comprise: the channel material extending elevationally through the insulative tiers and the wordline tiers; insulative charge-passage material in the wordline tiers laterally outward of the channel material; a storage region in the wordline tiers laterally outward of the insulative charge-passage material; and a charge-blocking region in the wordline tiers laterally outward of the storage region; and multiple of the openings in the conductive tier and the insulator material therein are laterally outward of individual of the strings. 32. The memory array of claim 31 wherein said multiple surround the individual strings and are at least 3 in number. 33. The memory array of claim 32 wherein relative to surround, said multiple are shared by immediately laterally adjacent of the individual strings. 34. The memory array of claim 32 wherein said multiple are equally spaced around the individual strings. 35. The memory array of claim 29 wherein the strings are everywhere laterally spaced from the openings in the conductive tier and the insulator material therein. 36. 
A memory array comprising: a conductive tier comprising openings therein; an insulator tier atop the conductive tier, the insulator tier comprising insulator material that extends downwardly into and completely fills the openings in the conductive tier; a stack comprising vertically-alternating insulative tiers and wordline tiers above the insulator tier; strings comprising channel material extending through the insulative tiers and the wordline tiers and into the insulator tier, the channel material of the strings being directly electrically coupled to conductive material in the conductive tier, the strings being everywhere laterally spaced from the openings in the conductive tier and the insulator material therein, the strings comprising: the channel material extending elevationally through the insulative tiers and the wordline tiers; insulative charge-passage material in the wordline tiers laterally outward of the channel material; a storage region in the wordline tiers laterally outward of the insulative charge-passage material; and a charge-blocking region in the wordline tiers laterally outward of the storage region; and multiple of the openings in the conductive tier and the insulator material therein being laterally outward of individual of the strings; said multiple surrounding the individual strings and being at least 3 in number; and wherein relative to surrounding, said multiple are shared by immediately laterally adjacent of the individual strings. 
DESCRIPTION MEMORY ARRAYS AND METHODS USED IN FORMING A MEMORY ARRAY TECHNICAL FIELD Embodiments disclosed herein pertain to memory arrays and to methods used in forming a memory array. BACKGROUND Memory is one type of integrated circuitry and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bitlines, data lines, or sense lines) and access lines (which may also be referred to as wordlines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line. Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a “0” or a “1”. In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between. A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator. 
Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the
channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region as part of the gate construction between the gate insulator and the conductive gate. Flash memory is one type of memory and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features. NAND may be a basic architecture of integrated flash memory. A NAND cell unit comprises at least one selecting device coupled in series to a serial combination of memory cells (with the serial combination commonly being referred to as a NAND string). NAND architecture may be configured in a three-dimensional arrangement comprising vertically-stacked memory cells individually comprising a reversibly programmable vertical transistor. Control or other circuitry may be formed below the vertically-stacked memory cells. Other volatile or non-volatile memory array architectures may also comprise vertically-stacked memory cells that individually comprise a transistor. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a diagrammatic cross-sectional view of a portion of a substrate in process in accordance with an embodiment of the invention and is taken through line 1-1 in Fig. 2. Fig. 2 is a diagrammatic cross-sectional view taken through line 2-2 in Fig. 1. Figs. 3-19 are diagrammatic sequential sectional and/or enlarged views of the construction of Fig. 
1 in process in accordance with some embodiments of the invention.
Figs. 20 and 21 are diagrammatic cross-sectional views of a portion of substrates in process in accordance with embodiments of the invention. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Embodiments of the invention encompass methods used in forming a memory array, for example an array of NAND or other memory cells having peripheral control circuitry under the array (e.g., CMOS-under-array). Embodiments of the invention encompass so-called “gate-last” or “replacement-gate” processing, so-called “gate-first” processing, and other processing, whether existing or future-developed, independent of when transistor gates are formed. Embodiments of the invention also encompass a memory array (e.g., NAND architecture) independent of method of manufacture. First example method embodiments are described with reference to Figs. 1-19, which may be considered as a “gate-last” or “replacement-gate” process. Figs. 1 and 2 show a construction 10 comprising a base substrate 11 in a method of forming an array 12 of elevationally-extending strings of transistors and/or memory cells (not yet shown). Base substrate 11 has any one or more of conductive/conductor/conducting, semiconductive/semiconductor/semiconducting, or insulative/insulator/insulating (i.e., electrically herein) materials. Various materials have been formed elevationally over base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Figs. 1 and 2-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within an array (e.g., array 12) of elevationally-extending strings of memory cells may also be fabricated and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative to one another. 
In this document, a “sub-array” may also be considered as an array. Construction 10 comprises a conductive tier 16 that has been formed above substrate 11. Example conductive tier 16 is shown as comprising conductive material 17 (e.g., 2,000 Angstroms of conductively-doped
semiconductive material such as conductively-doped polysilicon) above metal material 19 (e.g., 900 Angstroms of WSix). Conductive tier 16 may comprise part of control circuitry (e.g., peripheral-under-array circuitry and/or a common source line or plate) used to control read and write access to the transistors and/or memory cells that will be formed within array 12. Referring to Figs. 3 and 4, openings 15 have been formed in conductive tier 16 (e.g., by etching). In one embodiment, array 12 may be considered as comprising string locations 27 (e.g., memory cell string locations) wherein multiple of openings 15 are laterally outward of individual string locations 27. By way of example only, string locations 27 are shown as being arranged in groups or columns of staggered rows of four locations 27 per row. Any alternating existing or future-developed arrangement and construction may be used. In one embodiment and as shown, multiple openings 15 surround individual string locations 27 and are at least three in number, in one such embodiment are at least four in number, in one such embodiment are at least six in number, and in one embodiment as shown are only six in number. In one embodiment relative to surround, such multiple openings 15 are shared by immediately-laterally-adjacent individual string locations 27 and in one embodiment are equally spaced around individual string locations 27. In some embodiments, openings 15 may be considered as being grouped around individual string locations 27 and may in such grouping have any of the arrangements described above and as shown. Referring to Fig. 5, an insulator tier 21 has been formed atop conductive tier 16 and comprises insulator material 13 that extends downwardly into openings 15 in conductive tier 16. Any insulative material may be used, with silicon dioxide being but one example. In one embodiment and as shown, insulator material 13 completely fills openings 15 in conductive tier 16. 
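The equally-spaced arrangement of openings 15 around a string location 27 can be illustrated geometrically. This sketch is a hypothetical model only; the helper name, coordinates, and the circle placement are assumptions for illustration, not the claimed layout:

```python
# Hypothetical geometric sketch: n openings equally spaced on a circle
# around a string location (e.g., n = 6 as in the "only 6 in number"
# embodiment above). Coordinates and radius are arbitrary assumptions.
import math

def opening_positions(cx, cy, radius, n=6):
    """Centers of n openings equally spaced around a string at (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n))
            for i in range(n)]

pts = opening_positions(0.0, 0.0, 1.0, n=6)
assert len(pts) == 6
# "Equally spaced" implies every opening sits the same distance from the
# string center.
assert all(math.isclose(math.hypot(x, y), 1.0) for x, y in pts)
```

In the staggered-row layout described above, neighboring string locations would share openings on such circles, which the model does not attempt to capture.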
In one embodiment and as shown, openings 15 and insulator material 13 therein do not extend through conductive tier 16. In one such embodiment, openings 15 and insulator material 13 therein extend through no more than 50% of maximum thickness of conductive tier 16. In one embodiment, conductive tier 16 has a maximum thickness of greater than 600 Angstroms and openings 15 and insulator material 13 therein extend through no less than 600 Angstroms of conductive tier 16. In one
embodiment, openings 15 and insulator material 13 therein extend through conductive tier 16 (not shown). Referring to Fig. 6, a stack 18 comprising vertically-alternating insulative tiers 20 and wordline tiers 22 has been formed above insulator tier 21. Example thickness for each of tiers 20 and 22 is 25 to 60 nanometers. Only a small number of tiers 20 and 22 is shown, though stack 18 more likely comprises dozens, a hundred or more, etc. of tiers 20 and 22. Other circuitry that may or may not be part of peripheral and/or control circuitry may be between conductive tier 16 and stack 18. For example, multiple vertically-alternating tiers of conductive material and insulative material of such circuitry may be below a lowest of the wordline tiers 22 and/or above an uppermost of the wordline tiers 22. Regardless, wordline tiers 22 may not comprise conductive material and insulative tiers 20 may not comprise insulative material or be insulative at this point in processing. Example wordline tiers 22 comprise first material 26 (e.g., silicon nitride) which may be wholly or partially sacrificial. Example insulative tiers 20 comprise second material 24 (e.g., silicon dioxide) that is of different composition from that of first material 26 and which may be wholly or partially sacrificial. Only one stack 18 is shown, although one or more additional stacks (not shown) may be formed above or below the depicted stack 18 relative to substrate 11. Referring to Figs. 7 and 8, channel openings 25 have been etched through insulative tiers 20 and wordline tiers 22 in string locations 27. In one embodiment, channel openings 25 have been etched into, and in one such embodiment through, insulator tier 21. Channel openings 25 (and strings subsequently formed therein as described below) may or may not extend into conductive tier 16. In one embodiment, a sacrificial plug (e.g., elemental tungsten, not shown) may be formed initially to extend through conductive tier 16 in string locations 27. 
Channel openings 25 may be formed thereto and ideally to stop on or within such sacrificial plugs. Thereafter, such sacrificial plugs may be removed (e.g., by selective etching relative to other exposed materials), thereby leaving a void space (not shown) after such act of removing. Channel openings 25, in such embodiment, may thereby effectively be extended through such individual void spaces to stop on or at least proximate to an uppermost surface of conductive tier 16. 
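The vertically-alternating stack 18 described above (insulative second material 24 alternating with sacrificial first material 26) and the later "replacement-gate" swap can be modeled with a toy sketch. The material labels, function names, and the conductor choice are illustrative assumptions, not the patented process:

```python
# Toy model of stack 18: alternating insulative tiers 20 (second material
# 24, e.g. silicon dioxide) and wordline tiers 22 (first material 26,
# e.g. sacrificial silicon nitride). Illustrative assumptions throughout.
def build_stack(pairs, insulative="SiO2", sacrificial="Si3N4"):
    """Return tiers bottom-up, alternating insulative and sacrificial."""
    tiers = []
    for _ in range(pairs):
        tiers.append(insulative)   # insulative tier 20
        tiers.append(sacrificial)  # wordline tier 22 (pre-replacement)
    return tiers

def replace_gate(tiers, conductor="W/TiN"):
    """'Replacement-gate' step: swap each sacrificial nitride tier for
    conducting material (e.g., tungsten over a TiN liner)."""
    return [conductor if t == "Si3N4" else t for t in tiers]

stack = build_stack(pairs=3)
assert stack == ["SiO2", "Si3N4"] * 3
assert replace_gate(stack) == ["SiO2", "W/TiN"] * 3
```

This mirrors the flow later in the description, where material 26 is etched out through trenches and conducting material 48 is deposited into the vacated wordline tiers.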
Transistor channel material is formed in the individual channel openings elevationally along the insulative tiers and the wordline tiers and is directly electrically coupled with conductive material in the conductive tier. Individual memory cells of the array being formed may comprise a gate region (e.g., a control-gate region) and a memory structure laterally between the gate region and the channel material. In one such embodiment, the memory structure is formed to comprise a charge-blocking region, storage material (e.g., charge-storage material), and insulative charge-passage material. The storage material (e.g., floating gate material such as doped or undoped silicon, or charge-trapping material such as silicon nitride, metal dots, etc.) of the individual memory cells is elevationally along individual of the charge-blocking regions. The insulative charge-passage material (e.g., a bandgap-engineered structure having nitrogen-containing material [e.g., silicon nitride] sandwiched between two insulator oxides [e.g., silicon dioxide]) is laterally between the channel material and the storage material. Figs. 9 and 10 show one embodiment wherein charge-blocking material 30, storage material 32, and charge-passage material 34 have been formed in individual channel openings 25 elevationally along insulative tiers 20 and wordline tiers 22. Transistor materials 30, 32 and 34 (e.g., memory cell materials) may be formed by, for example, deposition of respective thin layers thereof over stack 18 and within individual channel openings 25, followed by planarizing such back at least to an uppermost surface of stack 18. Channel material 36 has been formed in channel openings 25 elevationally along insulative tiers 20 and wordline tiers 22. Example channel materials 36 include appropriately-doped crystalline semiconductor material, such as one or more of silicon, germanium, and so-called III/V semiconductor materials (e.g., GaAs, InP, GaP, and GaN). 
Example thickness for each of materials 30, 32, 34, and 36 is 25 to 100 Angstroms. Punch etching may be conducted as shown to remove materials 30, 32, and 34 from the bases of channel openings 25 to expose conductive tier 16 such that channel material 36 is directly against conductive material 19 of conductive tier 16. Alternately, and by way of example only, no punch etching may be conducted and channel material 36 may be directly electrically coupled to material 19 by a separate conductive interconnect (not shown). Channel openings 25 are shown as comprising a radially-central solid dielectric material 38 (e.g., spin-on dielectric, silicon dioxide, and/or silicon nitride). Alternately, and by way of example only, the radially-central portion within channel openings 25 may include void space(s) (not shown) and/or be devoid of solid material (not shown). Referring to Figs. 11 and 12, horizontally-elongated trenches 40 have been formed (e.g., by anisotropic etching) through stack 18 to conductive tier 16. Referring to Fig. 13, material 26 (not shown) of wordline tiers 22 has been etched selectively relative to materials 24, 30, 32, 34, 36, and 38 (e.g., using liquid or vapor H3PO4 as a primary etchant where material 26 is silicon nitride and material 24 is silicon dioxide). Conducting material is ultimately formed into wordline tiers 22 and will comprise conducting material of the individual wordlines to be formed. Fig. 14 shows such an example embodiment wherein conducting material 48 has been formed into wordline tiers 22 through trenches 40. Any suitable conducting material 48 may be used, for example one or both of metal material and/or conductively-doped semiconductive material. In but one example embodiment, conducting material 48 comprises a first-deposited conformal titanium nitride liner (not shown) followed by deposition of another-composition metal material (e.g., elemental tungsten). Referring to Figs. 15-17, conducting material 48 has been removed from individual trenches 40. Such has resulted in formation of wordlines 29 and elevationally-extending strings 49 of individual transistors and/or memory cells 56. Approximate locations of transistors and/or memory cells 56 are indicated with a bracket in Fig. 17 and some with dashed outlines in Figs. 15 and 16, with transistors and/or memory cells 56 being essentially ring-like or annular in the depicted example. Conducting material 48 may be considered as having terminal ends 50 (Fig.
17) corresponding to control-gate regions 52 of individual transistors and/or memory cells 56. Control-gate regions 52 in the depicted embodiment comprise individual portions of individual wordlines 29. Materials 30, 32, and 34 may be considered as a memory structure 65 that is laterally between control-gate region 52 and channel material 36. A charge-blocking region (e.g., charge-blocking material 30) is between storage material 32 and individual control-gate regions 52. A
charge block may have the following functions in a memory cell: in a program mode, the charge block may prevent charge carriers from passing out of the storage material (e.g., floating-gate material, charge-trapping material, etc.) toward the control gate, and in an erase mode the charge block may prevent charge carriers from flowing into the storage material from the control gate. Accordingly, a charge block may function to block charge migration between the control-gate region and the storage material of individual memory cells. An example charge-blocking region as shown comprises insulator material 30. By way of further example, a charge-blocking region may comprise a laterally (e.g., radially) outer portion of the storage material (e.g., material 32) where such storage material is insulative (e.g., in the absence of any different-composition material between an insulative storage material 32 and conducting material 48). Regardless, as an additional example, an interface of a storage material and conductive material of a control gate may be sufficient to function as a charge-blocking region in the absence of any separate-composition insulator material 30. Further, an interface of conducting material 48 with material 30 (when present), in combination with insulator material 30, may together function as a charge-blocking region, as may, alternately or additionally, a laterally-outer region of an insulative storage material (e.g., a silicon nitride material 32). An example material 30 is one or more of silicon hafnium oxide and silicon dioxide. Referring to Figs.
18 and 19, a material 57 (dielectric and/or silicon-containing, such as undoped polysilicon) has been formed in individual trenches 40. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used with respect to the above-described embodiments. Openings 15 having material 13 therein may provide an anchoring function to restrict or preclude any tendency of stack 18 and/or tier 21 to delaminate from conductive tier 16. In some predecessor constructions, strings 49 were used to provide such an anchoring function, typically by being formed into conductive tier 16. In some instances, it is desirable that strings 49 extend very little, or not at all, into conductive tier 16. In such instances, openings 15 and material 13 therein may provide sufficient
anchoring function whereby strings 49 need not provide any such anchoring function. The above example embodiment shows openings 15 in conductive tier 16 with insulator material 13 therein as comprising vertical sidewalls (e.g., in one embodiment that are continuously vertical from top to bottom of individual openings 15). Alternately, vertical or otherwise-oriented sidewalls may be used which have a step (not shown) somewhere between an uppermost surface and a lowermost surface of conductive tier 16. Additionally, openings 15 in conductive tier 16 with insulator material 13 therein may be wider at their respective bottoms than at their respective tops, or wider at their respective tops than at their respective bottoms. Example such embodiment constructions 10a and 10b, respectively, having openings 15a and 15b, respectively, are shown in Figs. 20 and 21, respectively. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “a” and “b”, respectively. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Embodiments of the invention encompass memory arrays independent of method of manufacture. Nevertheless, such memory arrays may have any of the attributes as described herein in method embodiments. Likewise, the above-described method embodiments may incorporate and form any of the attributes described with respect to device embodiments. In one embodiment, a memory array (e.g., 12) comprises a conductive tier (e.g., 16) comprising openings (e.g., 15) therein. An insulator tier (e.g., 21) is atop the conductive tier and comprises insulator material (e.g., 13) that extends downwardly into the openings in the conductive tier. A stack (e.g., 18) comprising vertically-alternating insulative tiers (e.g., 20) and wordline tiers (e.g., 22) is above the insulator tier.
Strings (e.g., 49) comprising channel material (e.g., 36) extend through the insulative tiers and the wordline tiers. The channel material of the strings is directly electrically coupled to conductive material (e.g., 17/19) in the conductive tier. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. The above processing(s) or construction(s) may be considered as being relative to an array of components formed as or within a single stack
or single deck of such components above or as part of an underlying base substrate (albeit, the single stack/deck may have multiple tiers). Control and/or other peripheral circuitry for operating or accessing such components within an array may also be formed anywhere as part of the finished construction, and in some embodiments may be under the array (e.g., CMOS under-array). Regardless, one or more additional such stack(s)/deck(s) may be provided or fabricated above and/or below that shown in the figures or described above. Further, the array(s) of components may be the same or different relative one another in different stacks/decks. Intervening structure may be provided between immediately-vertically-adjacent stacks/decks (e.g., additional circuitry and/or dielectric layers). Also, different stacks/decks may be electrically coupled relative one another. The multiple stacks/decks may be fabricated separately and sequentially (e.g., one atop another), or two or more stacks/decks may be fabricated at essentially the same time. The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc. In this document, unless otherwise indicated, “elevational”, “higher”, “upper”, “lower”, “top”, “atop”, “bottom”, “above”, “below”, “under”, “beneath”, “up”, and “down” are generally with reference to the vertical direction.
“Horizontal” refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally orthogonal thereto. Reference to “exactly horizontal” is the direction along the primary substrate surface (i.e., no degrees therefrom) and may be relative to which the substrate is processed during fabrication. Further, “vertical” and “horizontal” as used herein are generally perpendicular directions relative one another and independent of orientation of the
substrate in three-dimensional space. Additionally, “elevationally-extending” and “extend(ing) elevationally” refer to a direction that is angled away by at least 45° from exactly horizontal. Further, “extend(ing) elevationally”, “elevationally-extending”, “extend(ing) horizontally”, “horizontally-extending”, and the like with respect to a field effect transistor are with reference to orientation of the transistor’s channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, “extend(ing) elevationally”, “elevationally-extending”, “extend(ing) horizontally”, “horizontally-extending”, and the like are with reference to orientation of the base length along which current flows in operation between the emitter and collector. In some embodiments, any component, feature, and/or region that extends elevationally extends vertically or within 10° of vertical. Further, “directly above”, “directly below”, and “directly under” require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of “above” not preceded by “directly” only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of “below” and “under” not preceded by “directly” only requires that some portion of the stated region/material/component that is below/under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie.
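The angular definitions above (“horizontal” within 10 degrees of the primary substrate surface, “elevationally-extending” angled away by at least 45° from exactly horizontal, and “vertical” within 10° of vertical in some embodiments) can be captured in a small classifier. This is a minimal sketch, assuming the z axis is vertical (orthogonal to the primary substrate surface) and the x-y plane is exactly horizontal:

```python
import math

def classify_direction(dx, dy, dz):
    """Classify a direction vector per the document's angular definitions.

    Assumed convention for illustration: z is the vertical axis and the
    x-y plane is exactly horizontal.
    """
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Angle away from the exactly-horizontal (x-y) plane, in degrees.
    angle_from_horizontal = math.degrees(math.asin(abs(dz) / norm))
    return {
        # "Horizontal": within 10 degrees of the primary substrate surface.
        "horizontal": angle_from_horizontal <= 10.0,
        # "Elevationally-extending": at least 45 degrees from exactly horizontal.
        "elevationally_extending": angle_from_horizontal >= 45.0,
        # "Vertical" (per some embodiments): within 10 degrees of vertical.
        "vertical": angle_from_horizontal >= 80.0,
    }
```

For example, a direction such as (2, 0, 3) is elevationally-extending but neither horizontal nor vertical under these definitions.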
Where one or more example composition(s) is/are provided for any material, that material may comprise, consist essentially of, or consist of such one or more composition(s). Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.
Additionally, “thickness” by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, “different composition” only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, “different composition” only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is “directly against” another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another.
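The variable-thickness case of this definition (thickness is the average, and a variable-thickness region has some minimum and some maximum) could be summarized as a minimal sketch; the perpendicular distance samples are hypothetical measurement data, not values from the disclosure:

```python
from statistics import mean

def thickness_stats(samples):
    """Summarize perpendicular thickness measurements of a material or region.

    `samples` are straight-line distances measured perpendicularly from the
    closest surface of an immediately-adjacent different-composition material
    at several points along the region (hypothetical data, e.g. in Angstroms).
    """
    return {
        # For a variable-thickness region, "thickness" means the average.
        "thickness": mean(samples),
        # A variable-thickness region has some minimum and maximum thickness.
        "min_thickness": min(samples),
        "max_thickness": max(samples),
        # Substantially constant thickness: every sample agrees.
        "constant": max(samples) == min(samples),
    }
```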
In contrast, “over”, “on”, “adjacent”, “along”, and “against” not preceded by “directly” encompass “directly against” as well as constructions where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another. Herein, regions-materials-components are “electrically coupled” relative one another if in normal operation electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being “directly electrically coupled”, no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials-components. The composition of any of the conductive/conductor/conducting materials herein may be metal material and/or conductively-doped
semiconductive/semiconductor/semiconducting material. “Metal material” is any one or combination of an elemental metal, any mixture or alloy of two or more elemental metals, and any one or more conductive metal compound(s). Herein, “selective” as to etch, etching, removing, removal, depositing, forming, and/or formation is such an act of one stated material relative to another stated material(s) so acted upon at a rate of at least 2:1 by volume. Further, selectively depositing, selectively growing, or selectively forming is depositing, growing, or forming one material relative to another stated material or materials at a rate of at least 2:1 by volume for at least the first 75 Angstroms of depositing, growing, or forming. Unless otherwise indicated, use of “or” herein encompasses either and both. CONCLUSION In some embodiments, a method used in forming a memory array comprises forming a conductive tier atop a substrate, with the conductive tier comprising openings therein. An insulator tier is formed atop the conductive tier and the insulator tier comprises insulator material that extends downwardly into the openings in the conductive tier. A stack comprising vertically-alternating insulative tiers and wordline tiers is formed above the insulator tier. Strings comprising channel material that extend through the insulative tiers and the wordline tiers are formed. The channel material of the strings is directly electrically coupled to conductive material in the conductive tier. In some embodiments, a method used in forming a memory array comprises forming conductive material of a conductive tier atop a substrate. Openings are etched into the conductive material and the openings are grouped around individual string locations. An insulator tier is formed atop the conductive tier and comprises insulator material that extends downwardly into the openings in the conductive material of the conductive tier.
A stack comprising vertically-alternating insulative tiers and wordline tiers is formed above the insulator tier. Strings comprising channel material extend through the insulative tiers and the wordline tiers and into the insulator tier in the string locations. The channel material of the strings is
directly electrically coupled to the conductive material in the conductive tier. In some embodiments, a memory array comprises a conductive tier comprising openings therein. An insulator tier is atop the conductive tier and the insulator tier comprises insulator material that extends downwardly into the openings in the conductive tier. A stack comprising vertically-alternating insulative tiers and wordline tiers is above the insulator tier. Strings comprising channel material extend through the insulative tiers and the wordline tiers. The channel material of the strings is directly electrically coupled to conductive material in the conductive tier. In some embodiments, a memory array comprises a conductive tier comprising openings therein. An insulator tier is atop the conductive tier and comprises insulator material that extends downwardly into and completely fills the openings in the conductive tier. A stack comprising vertically-alternating insulative tiers and wordline tiers is above the insulator tier. Strings comprising channel material extend through the insulative tiers and the wordline tiers and into the insulator tier. The channel material of the strings is directly electrically coupled to conductive material in the conductive tier. The strings are everywhere laterally spaced from the openings in the conductive tier and the insulator material therein. The strings comprising the channel material extend elevationally through the insulative tiers and the wordline tiers. Insulative charge-passage material is in the wordline tiers laterally outward of the channel material. A storage region is in the wordline tiers laterally outward of the insulative charge-passage material. A charge-blocking region is in the wordline tiers laterally outward of the storage region.
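The definition of “selective” given earlier in this document (an act upon one stated material relative to another at a rate of at least 2:1 by volume) can be sketched as a simple predicate; the rate values below are hypothetical illustration numbers:

```python
def is_selective(rate_target, rate_other, min_ratio=2.0):
    """True when the target material is acted upon (etched, removed,
    deposited, etc.) at a rate of at least 2:1 by volume relative to the
    other stated material. Rates may be in any consistent
    volume-per-time unit."""
    return rate_other > 0 and (rate_target / rate_other) >= min_ratio
```

For instance, an etch removing silicon nitride at 100 units/min while removing silicon dioxide at 2 units/min (illustrative numbers only) is selective to the nitride, whereas a 3:2 rate ratio is not selective under this definition.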
Multiple of the openings in the conductive tier and the insulator material therein are laterally outward of individual of the strings and said multiple surround the individual strings and are at least 3 in number. Said multiple are shared by immediately laterally adjacent of the individual strings. |
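The claimed plan-view geometry, in which at least three openings are laterally outward of and surround each individual string and openings are shared by immediately laterally adjacent strings, can be sketched as a proximity check. The coordinates and radius below are hypothetical illustration values, not dimensions from the disclosure:

```python
from math import dist

def openings_near(string_xy, opening_xys, radius):
    """Return the openings within `radius` of a string location (a simple
    proximity stand-in for 'laterally outward of and surrounding')."""
    return [xy for xy in opening_xys if dist(string_xy, xy) <= radius]

def check_layout(string_xys, opening_xys, radius):
    """Check the claimed layout: at least 3 openings surround each individual
    string, and some openings are shared by laterally adjacent strings."""
    per_string = [openings_near(s, opening_xys, radius) for s in string_xys]
    at_least_three = all(len(group) >= 3 for group in per_string)
    # "Shared": the same opening appears in more than one string's group.
    seen = [tuple(xy) for group in per_string for xy in group]
    shared = len(seen) > len(set(seen))
    return at_least_three, shared
```

For two string locations with a common opening between them plus two flanking openings each, the check confirms both the at-least-three and the sharing conditions.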
The present invention is directed to a method of forming conductive interconnections in an integrated circuit device to optimize or at least maintain the speed at which signals propagate throughout the integrated circuit device. In one embodiment, the method comprises determining any variation in the size of a contact, as compared to its design size, and varying the size of a conductive line to be coupled to the contact based upon the variation in the size of the contact. |
What is claimed: 1. A system, comprising: a metrology tool for determining a size of a contact; and a process layer manufacturing tool adapted to vary, based upon the determined size of said contact, a thickness of a process layer used in the manufacturing of a conductive line to be coupled to said contact. 2. The system of claim 1, further comprising a controller that allows communications between said metrology tool and said process layer manufacturing tool. 3. The system of claim 1, wherein said process layer manufacturing tool is adapted to form a layer of metal. 4. The system of claim 1, wherein said process layer manufacturing tool is adapted to form a layer of dielectric material. 5. The system of claim 1, wherein said metrology tool is an ellipsometer. 6. The system of claim 1, wherein said metrology tool is a scanning electron microscope. 7. The system of claim 1, wherein said process layer manufacturing tool is a deposition tool. 8. The system of claim 1, wherein said controller is a stand-alone controller. 9. The system of claim 1, wherein said controller is resident on said metrology tool. 10. The system of claim 1, wherein said controller is resident on said process layer manufacturing tool. 11. A system, comprising: a metrology tool for determining a size of a via formed in a layer of dielectric material in which a contact will be formed; and a process layer manufacturing tool adapted to vary, based upon the determined size of said via, a thickness of a process layer used in the manufacturing of a conductive line to be coupled to said contact. 12. The system of claim 11, further comprising a controller that allows communications between said metrology tool and said process layer manufacturing tool. 13. The system of claim 11, wherein said process layer manufacturing tool is adapted to form a layer of metal. 14. The system of claim 11, wherein said process layer manufacturing tool is adapted to form a layer of dielectric material. 15. 
The system of claim 11, wherein said metrology tool is an ellipsometer. 16. The system of claim 11, wherein said metrology tool is a scanning electron microscope. 17. The system of claim 11, wherein said process layer manufacturing tool is a deposition tool. 18. The system of claim 11, wherein said controller is a stand-alone controller. 19. The system of claim 11, wherein said controller is resident on said metrology tool. 20. The system of claim 11, wherein said controller is resident on said process layer manufacturing tool. 21. A system, comprising: a metrology tool for determining a variation in a size of a contact as compared to a design size of said contact; and a process layer manufacturing tool adapted to vary, based upon said determined size variation, a thickness of a process layer used in the manufacturing of a conductive line to be coupled to said contact. 22. The system of claim 21, further comprising a controller that allows communications between said metrology tool and said process layer manufacturing tool. 23. The system of claim 21, wherein said process layer manufacturing tool is adapted to form a layer of metal. 24. The system of claim 21, wherein said process layer manufacturing tool is adapted to form a layer of dielectric material. 25. The system of claim 21, wherein said metrology tool is an ellipsometer. 26. The system of claim 21, wherein said metrology tool is a scanning electron microscope. 27. The system of claim 21, wherein said process layer manufacturing tool is a deposition tool. 28. The system of claim 21, wherein said controller is a stand-alone controller. 29. The system of claim 21, wherein said controller is resident on said metrology tool. 30. The system of claim 21, wherein said controller is resident on said process layer manufacturing tool. 31. 
A system, comprising: a metrology tool for determining a variation in a size of a via in which a contact will be formed as compared to a design size for said via; and a process layer manufacturing tool adapted to vary, based upon said determined size variation of said via, a thickness of a process layer used in the manufacturing of a conductive line to be coupled to said contact. 32. The system of claim 31, further comprising a controller that allows communications between said metrology tool and said process layer manufacturing tool. 33. The system of claim 31, wherein said process layer manufacturing tool is adapted to form a layer of metal. 34. The system of claim 31, wherein said process layer manufacturing tool is adapted to form a layer of dielectric material. 35. The system of claim 31, wherein said metrology tool is an ellipsometer. 36. The system of claim 31, wherein said metrology tool is a scanning electron microscope. 37. The system of claim 31, wherein said process layer manufacturing tool is a deposition tool. 38. The system of claim 31, wherein said controller is a stand-alone controller. 39. The system of claim 31, wherein said controller is resident on said metrology tool. 40. The system of claim 31, wherein said controller is resident on said process layer manufacturing tool. 41. A system, comprising: a metrology tool for determining a variation in a physical dimension of at least one of a via and a contact to be formed in said via; and a process layer manufacturing tool adapted to vary, based upon said determined physical dimension of said at least one of a via and a contact, a thickness of a process layer used in the manufacturing of a conductive line to be coupled to said contact. 42. The system of claim 41, further comprising a controller that allows communications between said metrology tool and said process layer manufacturing tool. 43. The system of claim 41, wherein said process layer manufacturing tool is adapted to form a layer of metal. 
44. The system of claim 41, wherein said process layer manufacturing tool is adapted to form a layer of dielectric material. 45. The system of claim 41, wherein said metrology tool is an ellipsometer. 46. The system of claim 41, wherein said metrology tool is a scanning electron microscope. 47. The system of claim 41, wherein said process layer manufacturing tool is a deposition tool. 48. The system of claim 41, wherein said controller is a stand-alone controller. 49. The system of claim 41, wherein said controller is resident on said metrology tool. 50. The system of claim 41, wherein said controller is resident on said process layer manufacturing tool. |
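The system of the claims above, comprising a metrology tool (e.g., an ellipsometer or scanning electron microscope), a process layer manufacturing tool (e.g., a deposition tool), and a controller allowing communications between them, might be modeled as the following sketch. The linear compensation rule in the controller is an illustrative assumption, not a formula stated in the claims:

```python
class MetrologyTool:
    """Stand-in for an ellipsometer or SEM reporting a contact/via size."""
    def __init__(self, measured_size):
        self.measured_size = measured_size

    def determine_size(self):
        return self.measured_size

class ProcessLayerTool:
    """Stand-in for a deposition tool forming the metal or dielectric
    process layer used in manufacturing the conductive line."""
    def __init__(self):
        self.layer_thickness = None

    def set_thickness(self, thickness):
        self.layer_thickness = thickness

class Controller:
    """Allows communications between the metrology tool and the process
    layer manufacturing tool; may be stand-alone or resident on either."""
    def __init__(self, design_size, design_thickness):
        self.design_size = design_size
        self.design_thickness = design_thickness

    def run(self, metrology, process_tool):
        measured = metrology.determine_size()
        # Illustrative rule (an assumption): scale the layer thickness by
        # the ratio of design to measured contact size, so an undersized
        # contact is compensated by a thicker line layer.
        process_tool.set_thickness(
            self.design_thickness * self.design_size / measured)
```

A contact measured at 80% of its design size would thus cause the process layer thickness to be scaled up by a factor of 1.25 under this illustrative rule.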
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention is directed to the field of semiconductor processing, and, more particularly, to a method of forming metal interconnections on an integrated circuit device. 2. Description of the Related Art There is a constant drive to reduce the channel length of transistors to increase the overall speed of the transistor, as well as integrated circuit devices incorporating such transistors. A conventional integrated circuit device, such as a microprocessor, is typically comprised of many thousands of semiconductor devices, e.g., transistors, formed above the surface of a semiconducting substrate. For the integrated circuit device to function, the transistors must be electrically connected to one another through conductive interconnections. Many modern integrated circuit devices are very densely packed, i.e., there is very little space between the transistors formed above the substrate. Thus, these conductive interconnections must be made in multiple layers to conserve plot space on the semiconducting substrate. This is typically accomplished through the formation of a plurality of conductive lines and conductive plugs formed in alternating layers of dielectric materials formed on the device. As is readily apparent to those skilled in the art, the conductive plugs are means by which various layers of conductive lines, and/or semiconductor devices, may be electrically coupled to one another. The conductive lines and plugs may be made of a variety of conductive materials, such as copper, aluminum, aluminum alloys, titanium, tantalum, titanium nitride, tantalum nitride, tungsten, etc. There is a constant drive within the semiconductor industry to increase the operating speed of integrated circuit devices, e.g., microprocessors, memory devices, etc. This drive is fueled by consumer demands for computers and electronic devices that operate at increasingly greater speeds.
One factor that affects the speed at which integrated circuit products operate is the speed at which electrical signals propagate through the device. Electrical signals travel within the device along the interconnected conductive lines and contacts. The greater the resistance of these lines and contacts, the slower the signals will propagate through the integrated circuit device, and the slower it will operate. A great level of effort goes into sizing and routing this vast collection of interconnections in an effort to minimize the resistance of the contacts and lines in the device such that device performance, i.e., speed, is optimized or at least suitable for the design parameters of the particular product under consideration. However, as with most products that have to be fabricated, variations in the physical dimensions or size of the contact, as compared to those contemplated by the particular design, may occur due to a variety of factors inherent in manufacturing operations. For example, contacts, as actually manufactured, may vary from their design size due to under- or over-etching, or the dielectric layer in which they will be formed may be manufactured thinner or thicker than that anticipated by the design process. Whatever the source, variations in the physical size of a contact can have a negative impact on device performance. For example, as contact size decreases, the resistance of the circuit coupled to that contact increases, since the resistance of the contact is inversely proportional to the size of the contact. Left unchecked, errors such as those described above can reduce the overall performance and operating speed of the integrated circuit product. The present invention is directed to a method of manufacturing a semiconductor device that minimizes or reduces some or all of the aforementioned problems. SUMMARY OF THE INVENTION The present invention is directed to a method of forming conductive interconnections on a semiconductor device.
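The inverse relationship noted above between contact size and resistance follows from R = ρL/A. A minimal sketch for an idealized cylindrical contact plug (the geometry and units are hypothetical, chosen only to illustrate the proportionality):

```python
import math

def contact_resistance(resistivity, height, diameter):
    """R = rho * L / A for an ideal cylindrical contact plug.

    Resistance is inversely proportional to the contact's cross-sectional
    area, so a contact manufactured smaller than its design size has a
    higher resistance. Units are up to the caller (e.g. ohm-cm and cm).
    """
    area = math.pi * (diameter / 2.0) ** 2
    return resistivity * height / area
```

For example, a contact etched to 80% of its design diameter carries (1/0.8)² = 1.5625 times its designed resistance, which is the kind of unchecked variation the text says degrades operating speed.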
In one illustrative embodiment, the method comprises determining a variation in the size of a conductive contact as compared to the design size for the contact, and determining if a size of a conductive line to be coupled to the contact needs to be varied based upon the determined size variation of the contact. The invention may also include the act of varying the size of the conductive line based upon the determined size variation of the contact. BRIEF DESCRIPTION OF THE DRAWINGS The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which: FIG. 1 is a cross-sectional view of an illustrative prior art transistor; FIG. 2A is a cross-sectional view of an illustrative conductive line and contact of the size contemplated by the design process; FIG. 2B is a cross-sectional view of an illustrative conductive line and contact in which compensatory changes have been made to the size of the conductive line due to variations in the size of the contact; FIG. 3 is a flowchart depicting an illustrative embodiment of the present invention; FIG. 4 is another flowchart depicting another illustrative embodiment of the present invention; FIG. 5 is yet another flowchart depicting yet another illustrative embodiment of the present invention; and FIG. 6 is an illustrative embodiment of a system that may be used with the present invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail.
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. DETAILED DESCRIPTION OF THE INVENTION Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. The present invention will now be described with reference to FIGS. 2-6. Although the various regions and structures of a semiconductor device are depicted in the drawings as having very precise, sharp configurations and profiles, those skilled in the art recognize that, in reality, these regions and structures are not as precise as indicated in the drawings. Additionally, the relative sizes of the various features depicted in the drawings may be exaggerated or reduced as compared to the size of those features on fabricated devices. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention. In general, the present invention is directed to adjusting the size of conductive lines based upon the size of contacts to which the lines are to be coupled.
As will be readily apparent to those skilled in the art upon a complete reading of the present application, the present method is applicable to a variety of technologies, e.g., NMOS, PMOS, CMOS, etc., and is readily applicable to a variety of devices, including, but not limited to, logic devices, memory devices, etc. As stated previously, an integrated circuit device is comprised of many thousands of transistors. An illustrative transistor 10 that may be included in such an integrated circuit device is shown in FIG. 1. The transistor 10 is generally comprised of a gate dielectric 14, a gate conductor 16, and a plurality of source/drain regions 18 formed in a semiconducting substrate 12. The gate dielectric 14 may be formed from a variety of dielectric materials, such as silicon dioxide. The gate conductor 16 may also be formed from a variety of materials, such as polysilicon. The source and drain regions 18 may be formed by one or more ion implantation processes in which a dopant material is implanted into the substrate 12. Next, a first dielectric layer 26 is formed above the transistor 10, and a plurality of vias or openings 24 are formed in the first dielectric layer 26. Thereafter, the vias 24 are filled with a conductive material, such as a metal, to form contacts 22. The contacts 22 are electrically coupled to the source and drain regions 18 of the transistor 10. Thereafter, a second dielectric layer 32 may be formed above the first dielectric layer 26. Multiple openings 30 may be formed in the second dielectric layer 32 and the openings 30 may thereafter be filled with a conductive material to form conductive lines 28. Although only one level of contacts and one level of conductive lines are depicted in FIG. 1, there may be multiple levels of contacts and lines interleaved with one another. This interconnected network of contacts and lines allows electrical signals to propagate throughout the integrated circuit device.
The techniques used for forming the various components depicted in FIG. 1 are known to those skilled in the art and will not be repeated here in any detail. The present invention will now be further described with reference to FIGS. 2A and 2B. As shown in FIG. 2A, a contact 42A is formed in a dielectric layer 44. The contact 42A may be of any size or configuration, it may be formed by any of a variety of techniques, and it may be comprised of any of a variety of conductive materials. Traditionally, the contact 42A has a circular cross-section, i.e., the contact is essentially a cylinder of material. However, the contact 42A can be made into any of a variety of shapes, e.g., square, rectangular, etc. Further, the dielectric layer 44 may be comprised of any dielectric material, such as silicon dioxide or a low-k dielectric. Typically, the dielectric layer 44 is formed by depositing the layer and thereafter subjecting it to a planarization operation, such as a chemical mechanical polishing ("CMP") operation, so as to produce an essentially planar surface 47. Next, a plurality of vias 43 are formed in the dielectric layer 44 by traditional photolithography and one or more traditional etching processes, e.g., an anisotropic plasma etching process. Thereafter, a layer (not shown) of the appropriate conductive material, e.g., a metal, may be blanket-deposited over the transistor, thereby filling the vias 43 formed in the first dielectric layer 44. The metal layer (not shown) may thereafter be subjected to a CMP process to remove the excess material, thereby leaving the contact 42A in the via 43. As stated previously, the contact 42A may be comprised of any of a variety of materials, such as tungsten, aluminum, copper, titanium, etc. Next, a dielectric layer 48 is formed above the dielectric layer 44, and a plurality of openings 45 may be defined in the dielectric layer 48 through use of traditional photolithography and etching processes.
Thereafter, a conductive line 46A is formed in the opening 45 in the dielectric layer 48. As with the contact 42A, the conductive line 46A may be formed in any of a variety of shapes, using any of a variety of known techniques for forming such lines, and may be comprised of a variety of materials. For example, the conductive line 46A may be comprised of aluminum that is the result of patterning a layer of aluminum and thereafter forming the dielectric layer 48 between the various lines 46A. That is, the present invention may be employed in situations in which the conductive lines are formed using a dual or single damascene process or in processes where a conductive material is patterned and a dielectric material is thereafter positioned between the patterned layer of conductive material. The contact 42A and line 46A depicted in FIG. 2A represent the physical dimensions of the contact 42A and line 46A contemplated by the design process. In the illustrative example where the contact 42A is circular in cross-section, the diameter of the contact 42A is shown in FIG. 2A. With respect to the conductive line 46A, in the illustrative situation where the line 46A has a substantially rectangular cross-section, it has a width dimension, as indicated by the arrow "X," and a height or thickness dimension, as indicated by the arrow "Y." Of course, as stated previously, the configuration of the contact 42A and the conductive line 46A may be of any desired shape. Of course, the actual design size of any contact or conductive line may depend on the particular application under consideration or may simply be a matter of design choice. FIG. 2B depicts a situation in which, due to a variety of factors, the size of a contact 42B is different from the design size of the contact, and depicts one technique for compensating for such variations in the size of the contact. As shown in FIG. 
2B, a contact 42B has a smaller diameter than the design diameter of the contact, as indicated by dashed lines 41 shown in FIG. 2B. Compare the size of the contact 42A with the size of the contact 42B. In short, the contact 42B depicted in FIG. 2B is smaller than its design size. As stated previously, the undersized contact 42B results in an increased resistance for the overall circuit, thereby delaying signal propagation through the circuit and, thus, degrading device performance. FIG. 2B depicts one illustrative technique for compensating for the variation in size of the contact 42B as compared to the design size of the contact. For example, the thickness or height of the conductive line 46B in FIG. 2B, i.e., the dimension in the direction indicated by "Y," is increased to compensate for the undersized contact 42B. That is, the thickness of the conductive line 46B is increased beyond its design thickness, as indicated by dashed line 50, to compensate for the reduced size of the contact 42B. Although the particular example depicted in FIGS. 2A-2B has been discussed in terms of the diameter of a circular contact being smaller than the designed diameter of the contact, the present invention is not limited to merely detecting variations in the diameter of a particular contact. For example, the height of a contact may be greater or less than anticipated by design. Should the dielectric layer 44 in which the contact 42B will be formed be thinner than anticipated by design, the resulting contact formed in that layer will also be smaller than anticipated by design. The change in size of the contact may be determined by a variety of techniques and at a variety of points in the design process. For example, the thickness of the dielectric layer 44 may be determined using a metrology tool capable of performing such a measurement, e.g., an ellipsometer.
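One way to picture the compensation of FIG. 2B is as a resistance budget: the thickness Y of the conductive line is chosen so that the series resistance of the as-manufactured contact plus the line still matches the design total. The following sketch assumes a simple series-resistance model; the function and parameter names are illustrative assumptions, not taken from the patent:

```python
def compensated_line_thickness(r_contact_design: float, r_contact_actual: float,
                               rho_line: float, length: float, width: float,
                               y_design: float) -> float:
    """Return a line thickness Y such that contact + line resistance
    equals the design total, using R_line = rho * length / (width * Y)."""
    r_line_design = rho_line * length / (width * y_design)
    # Resistance left for the line after the as-manufactured contact.
    r_budget = r_contact_design + r_line_design - r_contact_actual
    if r_budget <= 0:
        raise ValueError("contact resistance exceeds the design budget")
    return rho_line * length / (width * r_budget)
```

Under this model, an on-design contact returns the design thickness unchanged, while an undersized (higher-resistance) contact yields a thicker line, mirroring the increase beyond dashed line 50 in FIG. 2B.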
Alternatively, after a via 43 is formed in the dielectric layer 44, the diameter of a via 43 may be determined by any metrology tool capable of performing this type of critical dimension measurement, e.g., an in-line scanning electron microscope (SEM), a KLA tool, etc. FIG. 3 depicts one illustrative embodiment of the present invention. As shown therein, the method comprises determining a variation in the size of a contact as compared to its design size, as indicated at block 60, and determining if the size of a conductive line to be coupled to the contact needs to be varied based upon the determined size variation of the contact, as indicated at block 61. As described above, the step of determining a variation in the size of a contact may be accomplished by a variety of techniques using a variety of metrology tools, e.g., an in-line SEM or an ellipsometer. The step performed at block 60 may be performed only on a representative number of samples, and the resulting information extrapolated to reflect conditions on a die or wafer basis. In general, the step of determining a variation in a size of a contact may be performed by determining a variation in any physical dimension of the contact, e.g., height, diameter, length, or width. Moreover, the step of determining a size of the conductive line to be coupled to the contact, as indicated in block 61, may be performed by determining if any physical dimension of the conductive line, or its size, needs to be varied based upon the change in a physical dimension of the contact. The method further comprises varying the size of the conductive line to be coupled to the contact based upon the determined size variation of the contact, as indicated at block 62. The step described at block 62 may be performed by varying a physical dimension of a conductive line, or determining a size of the conductive line, based upon a variation of a physical dimension of the contact, as may be determined in block 60. 
The step of determining if the size of a conductive line needs to be varied based upon the determined size variation of the contact may be performed by a variety of techniques. For example, a computer database which correlates a given contact size (absolute or differential from the design size) to a desired size of the conductive line that will be coupled to the contact could be created. Alternatively, any variation to the size of the conductive line may be based upon a calculation of the resistance for the contact and the yet to be formed conductive line. Moreover, this calculation could be on an individual basis or on a system-wide basis for the entire integrated circuit. Another illustrative embodiment of the present invention is depicted in FIG. 4. As shown therein, the method comprises forming a via for a contact in a dielectric layer, as indicated in block 64, and determining a variation in the size of the via as compared to the design size of the via, as indicated at block 66. The method further comprises determining if the size of a conductive line to be coupled to a contact to be formed in said via needs to be varied based upon the determined size variation of the via, as indicated at block 68, and varying the size of the conductive line to be coupled to the contact based upon the determined size variation of the via, as indicated at block 70. Yet another illustrative embodiment of the present invention is further depicted in FIG. 5. The method comprises forming a dielectric layer in which a contact is to be formed, as indicated at block 72, and determining any variation in the thickness of the dielectric layer as compared to the design thickness of the dielectric layer, as indicated at block 74.
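The database approach described above can be sketched as a simple lookup that maps a measured deviation in contact diameter to a target line thickness. All table values and names here are hypothetical, chosen only to illustrate the idea of a contact-size-to-line-size correspondence:

```python
# Hypothetical correspondence table: measured contact-diameter deviation
# from design (nm) -> desired conductive-line thickness (nm).
LINE_THICKNESS_TABLE = {
    -20: 360,  # contact 20 nm under design: thicken the line
    -10: 330,
    0: 300,    # on-design contact: design line thickness
    10: 280,   # oversized contact: the line may be thinned
}

def target_line_thickness(diameter_delta_nm: float) -> int:
    """Return the table entry whose keyed deviation is closest to the
    measured deviation (a nearest-neighbor lookup)."""
    key = min(LINE_THICKNESS_TABLE, key=lambda k: abs(k - diameter_delta_nm))
    return LINE_THICKNESS_TABLE[key]
```

As the text notes, the same decision could instead be driven by an explicit resistance calculation, either per contact or on a system-wide basis.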
The method further comprises determining if the size of a conductive line to be coupled to a contact to be formed in the dielectric layer needs to be varied based upon the determined thickness variation of the dielectric layer, as indicated at block 76, and varying the size of the conductive line to be coupled to the contact based upon the thickness variation of said dielectric layer, as indicated at block 78. The present invention may also be embodied in a machine or computer readable format, e.g., as a software program, written in any of a variety of programming languages, for an appropriately programmed computer. The software program would be written to carry out various functional operations of the present invention, such as those indicated in FIGS. 3-5, and elsewhere in the specification. Moreover, a machine or computer readable format of the present invention may be embodied in a variety of program storage devices, such as a diskette, a hard disk, a CD, a DVD, a nonvolatile electronic memory, or the like. The software program may be run on a variety of devices, e.g., a processor. The present invention is also directed to a processing system, e.g., a processing tool or combination of processing tools, for accomplishing the present invention. As shown in FIG. 6, an illustrative system 80 is comprised of a process layer manufacturing tool 81, a metrology tool 83, and a controller 84. In one illustrative process flow, a first dielectric layer, e.g., dielectric layer 44 in FIG. 2B, is formed, then a surface of the dielectric layer 44 is planarized with, for example, a CMP planarization tool. Thereafter, measurement of the thickness of the first dielectric layer 44 after polishing operations may be taken by a metrology tool 83, such as an ellipsometer. The results obtained by the metrology tool 83 are sent to the controller 84 via input line 85.
In turn, the controller 84 may send commands to the process layer manufacturing tool 81 to adjust or vary the manufactured thickness of a process layer used in manufacturing a conductive line to be coupled to a contact to be formed in the first dielectric layer 44. For example, in the situation where the conductive line will be comprised of aluminum, the controller 84 may send commands to a process layer manufacturing tool 81 adapted to blanket-deposit a layer of metal on the device to form a layer of aluminum of a given thickness to compensate for changes in the thickness of the dielectric layer 44 (representing the height of the contact) as compared to the design thickness of the dielectric layer. The command from the controller 84 may be based upon calculations made in the controller, or based upon a database containing information correlating desired sizes of conductive lines to actual sizes of the contact to which they will be coupled. Alternatively, the particular process used may be such that a second dielectric layer, e.g., layer 48 in FIG. 2B, may be formed and patterned and, thereafter, a conductive material, such as copper, may be formed in the openings defined in the second dielectric layer 48. In this situation, the process layer manufacturing tool 81 would be a tool adapted to form such a dielectric layer. The controller 84 may be any type of device that includes logic circuitry for executing instructions. Moreover, the controller 84 depicted in FIG. 6 may be a stand-alone controller or it may be one or more of the controllers already resident on either or both of the process layer manufacturing tool 81 or the metrology tool 83. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order.
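The feedback loop of FIG. 6 can be sketched in a few lines: a metrology result flows to the controller, which commands the process layer manufacturing tool to adjust its deposition target. The classes, the one-for-one adjustment policy, and all names below are illustrative assumptions, not the patented control law:

```python
class DepositionTool:
    """Stand-in for process layer manufacturing tool 81."""
    def __init__(self) -> None:
        self.target_nm = None

    def set_target(self, nm: float) -> None:
        self.target_nm = nm


class Controller:
    """Stand-in for controller 84: turns measurements into tool commands."""
    def __init__(self, tool: DepositionTool,
                 dielectric_design_nm: float, line_design_nm: float) -> None:
        self.tool = tool
        self.dielectric_design_nm = dielectric_design_nm
        self.line_design_nm = line_design_nm

    def on_measurement(self, dielectric_measured_nm: float) -> None:
        # Illustrative policy: every nanometer the dielectric (and hence the
        # contact height) falls below design adds a nanometer to the
        # conductive-line deposition target, and vice versa.
        delta = self.dielectric_design_nm - dielectric_measured_nm
        self.tool.set_target(self.line_design_nm + delta)
```

A database-driven or resistance-calculation policy, as the text describes, would replace the simple proportional rule inside `on_measurement`.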
Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
A processor includes a front end including circuitry to decode an instruction from an instruction stream and a core including circuitry to process the instruction. The core includes an execution pipeline, a dynamic core frequency logic unit, and a counter compensation logic unit. The execution pipeline includes circuitry to execute the instruction. The dynamic core frequency logic unit includes circuitry to squash a clock of the core to reduce a core frequency. The clock may not be visible to software. The counter compensation logic unit includes circuitry to adjust a performance counter increment associated with a performance counter based on at least the dynamic core frequency logic unit circuitry to squash a clock of the core to reduce a core frequency. |
CLAIMS

What is claimed is:

1. A processor, comprising: a front end including circuitry to decode an instruction from an instruction stream; a core including circuitry to process the instruction, the core comprising: an execution pipeline including circuitry to execute the instruction; a dynamic core frequency logic unit including circuitry to squash a clock of the core to reduce a core frequency, wherein the clock is invisible to software; and a counter compensation logic unit including circuitry to adjust a performance counter increment associated with a performance counter based on at least the dynamic core frequency logic unit circuitry to squash a clock of the core to reduce a core frequency.

2. The processor in claim 1, wherein the counter compensation logic unit further includes circuitry to determine whether the performance counter monitors an event measured in cycles, and adjustment of the performance counter increment is further based on a determination that the performance counter monitors an event measured in cycles.

3. The processor in claim 1, wherein the counter compensation logic unit further includes circuitry to determine whether a mode of measurement for the performance counter is set to measure in cycles, and adjustment of a performance counter increment is further based on a determination that the mode of measurement for the performance counter is set to measure in cycles.

4. The processor in claim 1, wherein the circuitry to adjust the performance counter further includes circuitry to: select a dynamic core frequency ratio; and generate an adjusted performance counter increment based on the performance counter increment and the selected dynamic core frequency ratio, wherein the dynamic core frequency ratio represents a number of unsquashed clocks to squashed clocks.

5.
The processor in claim 4, wherein the circuitry to select the dynamic core frequency ratio is based on a latency associated with circuitry to report the performance counter increment, wherein the selected dynamic core frequency ratio corresponds to a dynamic core frequency ratio before the performance counter increment is reported.

6. The processor in claim 4, wherein the circuitry to generate an adjusted performance counter increment further includes circuitry to multiply the performance counter increment by the selected dynamic core frequency ratio.

7. The processor in claim 1, further comprising a power control unit including circuitry to increment a performance counter based on the performance counter increment.

8. A method, comprising: processing an instruction for a cycle of operation; squashing a clock to reduce a frequency, wherein the clock is invisible to software; and adjusting a performance counter increment associated with a performance counter based on at least squashing the clock to reduce the frequency.

9. The method of claim 8, further comprising determining whether the performance counter monitors an event measured in cycles and the step of adjusting the performance counter increment is further based on a determination that the performance counter monitors an event measured in cycles.

10. The method of claim 8, further comprising determining whether a mode of measurement for the performance counter is set to measure in cycles and the step of adjusting the performance counter increment is further based on a determination that the performance counter is set to measure in cycles.

11. The method of claim 8, wherein the step of adjusting the performance counter further comprises: selecting a dynamic frequency ratio; and generating an adjusted performance counter increment based on the performance counter increment and the selected dynamic frequency ratio, wherein the dynamic frequency ratio represents a number of unsquashed clocks to squashed clocks.

12.
The method of claim 11, wherein the step of selecting the dynamic frequency ratio is based on a latency associated with reporting the performance counter increment, wherein the selected dynamic frequency ratio corresponds to a dynamic frequency ratio before the performance counter increment is reported.

13. The method of claim 11, wherein generating an adjusted performance counter further comprises multiplying the performance counter increment by the selected dynamic frequency ratio.

14. The method of claim 8, further comprising sending the adjusted performance counter increment to a power control unit and incrementing the performance counter, at the power control unit, based on the performance counter increment.

15. A counter compensation logic unit, comprising circuitry to: determine whether a dynamic frequency logic unit squashed a clock to reduce a frequency, wherein the clock is invisible to software; and adjust a performance counter increment associated with a performance counter based on the determination that the dynamic frequency logic unit squashed the clock to reduce the frequency.

16. The counter compensation logic unit of claim 15, further comprising circuitry to determine whether the performance counter monitors an event measured in cycles and adjustment of the performance counter increment is further based on a determination that the performance counter monitors an event measured in cycles.

17. The counter compensation logic unit of claim 15, further comprising circuitry to determine whether a mode of measurement for the performance counter is set to measure in cycles and adjustment of the performance counter increment is further based on a determination that the mode of measurement for the performance counter is set to measure cycles.

18.
The counter compensation logic unit of claim 15, wherein the circuitry to adjust the performance counter further includes circuitry to: select a dynamic frequency ratio; and generate an adjusted performance counter increment based on the performance counter increment and the selected dynamic frequency ratio, wherein the dynamic frequency ratio represents a number of unsquashed clocks to squashed clocks.

19. The counter compensation logic unit of claim 18, wherein the circuitry to select the dynamic frequency ratio is based on a latency associated with circuitry to report the performance counter increment, wherein the selected dynamic frequency ratio corresponds to a dynamic frequency ratio before the performance counter increment is reported.

20. The counter compensation logic unit of claim 18, wherein the circuitry to generate an adjusted performance counter increment further includes circuitry to multiply the performance counter increment by the selected dynamic frequency ratio.

21. The counter compensation logic unit of claim 15, further comprising circuitry to send the adjusted performance counter increment to a power control unit to increment the performance counter based on the performance counter increment.

22. An apparatus comprising means for performing any of the methods of Claims 8 to 14.
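As a rough sketch of the behavior recited in claims 4 through 6 — selecting a dynamic frequency ratio (accounting for reporting latency) and multiplying a cycle-based increment by it — consider the following software model. The class, the ratio history, and all names are illustrative assumptions, not the claimed hardware:

```python
from collections import deque

class CounterCompensation:
    """Toy model: keep a short history of dynamic-frequency ratios so the
    ratio in effect before an increment is reported can be selected."""

    def __init__(self, history_len: int = 8) -> None:
        # A ratio of 1.0 stands in for an interval with no clock squashing.
        self.ratios = deque([1.0] * history_len, maxlen=history_len)

    def record_ratio(self, unsquashed_clocks: int, squashed_clocks: int) -> None:
        # The claims define the ratio as unsquashed clocks to squashed clocks;
        # this sketch assumes squashed_clocks is nonzero when recorded.
        self.ratios.append(unsquashed_clocks / squashed_clocks)

    def adjust(self, increment: int, counts_cycles: bool, latency: int = 0) -> int:
        """Scale a cycle-based increment; event-based counters pass through."""
        if not counts_cycles:
            return increment
        ratio = self.ratios[-1 - latency]  # ratio 'latency' intervals back
        return round(increment * ratio)
```

The `latency` parameter mirrors the claimed selection of a ratio from before the increment is reported; in hardware this would be fixed by the reporting pipeline rather than passed as an argument.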
METHOD AND LOGIC FOR MAINTAINING PERFORMANCE COUNTERS WITH DYNAMIC FREQUENCIES FIELD OF THE INVENTION [0001] The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations. DESCRIPTION OF RELATED ART [0002] Multiprocessor systems are becoming more and more common. Applications of multiprocessor systems include dynamic domain partitioning all the way down to desktop computing. In order to take advantage of multiprocessor systems, code to be executed may be separated into multiple threads for execution by various processing entities. Each thread may be executed in parallel with one another. The performance of microprocessor systems may be determined by the frequency of various events. Such events may be monitored by counting the number of occurrences within a defined period of time. Thus, a performance counter may be incremented during execution of an instruction on a microprocessor system. DESCRIPTION OF THE FIGURES [0003] Embodiments are illustrated by way of example and not limitation in the Figures of the accompanying drawings: [0004] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure; [0005] FIGURE 1B illustrates a data processing system, in accordance with embodiments of the present disclosure; [0006] FIGURE 1C illustrates other embodiments of a data processing system for performing text string comparison operations; [0007] FIGURE 2 is a block diagram of the micro-architecture for a processor that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure; [0008] FIGURE 3A illustrates various packed data type representations in multimedia registers, in accordance with embodiments of the
present disclosure;[0009] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure;[0010] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure;[0011] FIGURE 3D illustrates an embodiment of an operation encoding format;[0012] FIGURE 3E illustrates another possible operation encoding format having forty or more bits, in accordance with embodiments of the present disclosure;[0013] FIGURE 3F illustrates yet another possible operation encoding format, in accordance with embodiments of the present disclosure;[0014] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure;[0015] FIGURE 4B is a block diagram illustrating an in-order architecture core and a register renaming logic, out-of-order issue/execution logic to be included in a processor, in accordance with embodiments of the present disclosure;[0016] FIGURE 5A is a block diagram of a processor, in accordance with embodiments of the present disclosure;[0017] FIGURE 5B is a block diagram of an example implementation of a core, in accordance with embodiments of the present disclosure;[0018] FIGURE 6 is a block diagram of a system, in accordance with embodiments of the present disclosure;[0019] FIGURE 7 is a block diagram of a second system, in accordance with embodiments of the present disclosure; [0020] FIGURE 8 is a block diagram of a third system in accordance with embodiments of the present disclosure;[0021] FIGURE 9 is a block diagram of a system-on-a-chip, in accordance with embodiments of the present disclosure;[0022] FIGURE 10 illustrates a processor containing a central processing unit and a graphics processing unit which may perform at least one instruction, in accordance with embodiments of the present 
disclosure;[0023] FIGURE 11 is a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure;[0024] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure;[0025] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure;[0026] FIGURE 14 is a block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;[0027] FIGURE 15 is a more detailed block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;[0028] FIGURE 16 is a block diagram of an execution pipeline for an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;[0029] FIGURE 17 is a block diagram of an electronic device for utilizing a processor, in accordance with embodiments of the present disclosure;[0030] FIGURE 18 is a block diagram of a system with a counter compensation logic unit, in accordance with embodiments of the present disclosure;[0031] FIGURE 19 is a logical representation of elements of a counter compensation logic unit, in accordance with embodiments of the present disclosure;[0032] FIGURE 20 is a block diagram of a system with performance counters, in accordance with embodiments of the present disclosure; and [0033] FIGURE 21 is a diagram of operation of a method for maintaining performance counters with dynamic frequencies, in accordance with embodiments of the present disclosure.DETAILED DESCRIPTION[0034] The following description describes a method and logic for maintaining performance counters with dynamic frequencies. 
The instruction and processing logic may be implemented on an out-of-order processor. In the following description, numerous specific details such as processing logic, processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of embodiments of the present disclosure. It will be appreciated, however, by one skilled in the art that the embodiments may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present disclosure.[0035] Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present disclosure may be applied to other types of circuits or semiconductor devices that may benefit from maintaining performance counters with dynamic frequencies. The teachings of embodiments of the present disclosure are applicable to any processor or machine that stores data to memory. However, the embodiments are not limited to processors or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations and may be applied to any processor and machine in which manipulation or management of data may be performed. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. 
However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present disclosure rather than to provide an exhaustive list of all possible implementations of embodiments of the present disclosure.[0036] Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present disclosure may be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the disclosure. In one embodiment, functions associated with embodiments of the present disclosure are embodied in machine-executable instructions. The instructions may be used to cause a general-purpose or special-purpose processor that may be programmed with the instructions to perform the steps of the present disclosure. Embodiments of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present disclosure. Furthermore, steps of embodiments of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.[0037] Instructions used to program logic to perform embodiments of the present disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions may be distributed via a network or by way of other computer-readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium may include any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).[0038] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as may be useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, designs, at some stage, may reach a level of data representing the physical placement of various devices in the hardware model. In cases wherein some semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium.
A memory or a magnetic or optical storage such as a disc may be the machine-readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or retransmission of the electrical signal is performed, a new copy may be made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.[0039] In modern processors, a number of different execution units may be used to process and execute a variety of code and instructions. Some instructions may be quicker to complete while others may take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there may be certain instructions that have greater complexity and require more in terms of execution time and processor resources, such as floating point instructions, load/store operations, data moves, etc.[0040] As more computer systems are used in internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). [0041] In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which may include processor logic and circuits used to implement one or more instruction sets. 
Accordingly, processors with different microarchitectures may share at least a portion of a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs. For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers, one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB), and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.[0042] An instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operands on which that operation will be performed. In a further embodiment, some instruction formats may be further defined by instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently.
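As a rough illustration of the notion of an instruction format with fields at fixed bit positions, the sketch below uses a made-up 16-bit encoding (opcode, destination register, source register, immediate); it is not any real ISA, and all field widths and names here are assumptions chosen for illustration:

```python
# Illustrative only: a hypothetical 16-bit instruction format, not a real ISA.
# Layout: [15:12] opcode | [11:8] dest reg | [7:4] src reg | [3:0] immediate
def decode(word):
    """Split a 16-bit instruction word into its fields."""
    return {
        "opcode": (word >> 12) & 0xF,
        "dest":   (word >> 8) & 0xF,
        "src":    (word >> 4) & 0xF,
        "imm":    word & 0xF,
    }

def encode(opcode, dest, src, imm):
    """Pack fields back into a 16-bit instruction word."""
    return ((opcode & 0xF) << 12) | ((dest & 0xF) << 8) | ((src & 0xF) << 4) | (imm & 0xF)

fields = decode(encode(0x3, 0x2, 0x7, 0x5))
# fields == {"opcode": 3, "dest": 2, "src": 7, "imm": 5}
```

Because the field positions are fixed by the format, a decoder can extract the operation and its operands with shifts and masks alone; a sub-format (template) would simply reinterpret one of these fields.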
In one embodiment, an instruction may be expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.[0043] Scientific, financial, auto-vectorized general purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements. SIMD technology may be used in processors that may logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as 'packed' data type or 'vector' data type, and operands of this data type may be referred to as packed data operands or vector operands. In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or 'packed data instruction' or a 'vector instruction').
In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.[0044] SIMD technology, such as that employed by the Intel®Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors, such as the ARM Cortex®family of processors having an instruction set including the Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors, such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled a significant improvement in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).[0045] In one embodiment, destination and source registers/data may be generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be a first and second source storage register or other storage area, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). 
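The packed semantics described above can be sketched in software: the pure-Python emulation below treats a 64-bit value as four independent 16-bit lanes and performs a lane-wise add, each lane wrapping at 16 bits, in the manner of a packed-add instruction operating on two source vector operands. This is an illustrative model only, not actual SIMD hardware or any particular instruction's definition:

```python
# Illustrative pure-Python emulation of packed (SIMD) semantics:
# a 64-bit value logically divided into four 16-bit data elements.
LANES, WIDTH, MASK = 4, 16, 0xFFFF

def unpack(value):
    """Split a 64-bit value into four 16-bit elements (lane 0 = low bits)."""
    return [(value >> (i * WIDTH)) & MASK for i in range(LANES)]

def pack(lanes):
    """Combine four 16-bit elements back into one 64-bit value."""
    result = 0
    for i, lane in enumerate(lanes):
        result |= (lane & MASK) << (i * WIDTH)
    return result

def packed_add(a, b):
    """Lane-wise add; each lane wraps independently at 16 bits."""
    return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])

src1 = pack([1, 2, 3, 0xFFFF])
src2 = pack([10, 20, 30, 1])
assert unpack(packed_add(src1, src2)) == [11, 22, 33, 0]  # last lane wraps
```

Note that a carry out of one lane never propagates into its neighbor, which is what distinguishes a packed add from an ordinary 64-bit add on the same register contents.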
In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination register.[0046] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure. System 100 may include a component, such as a processor 102 to employ execution units including circuits with logic to perform algorithms to process data, in accordance with the present disclosure, such as in the embodiment described herein. System 100 may be representative of processing systems based on the PENTIUM® III, PENTIUM® 4, Xeon™, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry, programmable circuitry, and software.[0047] Embodiments are not limited to computer systems. Embodiments of the present disclosure may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs.
Embedded applications may include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that may perform one or more instructions in accordance with at least one embodiment. [0048] Computer system 100 may include a processor 102 that may include one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present disclosure. One embodiment may be described in the context of a single processor desktop or server system, but other embodiments may be included in a multiprocessor system. System 100 may be an example of a 'hub' system architecture. System 100 may include a processor 102 for processing data signals. Processor 102 may include a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In one embodiment, processor 102 may be coupled to a processor bus 110 that may transmit data signals between processor 102 and other components in system 100. The elements of system 100 may perform conventional functions that are well known to those familiar with the art.[0049] In one embodiment, processor 102 may include a Level 1 (LI) internal cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal cache. In another embodiment, the cache memory may reside external to processor 102. Other embodiments may also include a combination of both internal and external caches depending on the particular implementation and needs. 
Register file 106 may store different types of data in various registers including integer registers, floating point registers, status registers, and instruction pointer register.[0050] Execution unit 108, including circuits with logic to perform integer and floating point operations, also resides in processor 102. Processor 102 may also include a microcode (ucode) ROM that stores microcode for certain macroinstructions. In one embodiment, execution unit 108 may include circuits with logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications may be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This may eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.[0051] Embodiments of an execution unit 108 may also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 may include a memory 120. Memory 120 may be implemented as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 120 may store instructions and/or data represented by data signals that may be executed by processor 102.[0052] A system logic chip 116 may be coupled to processor bus 110 and memory 120. System logic chip 116 may include a memory controller hub (MCH). Processor 102 may communicate with MCH 116 via a processor bus 110. 
MCH 116 may provide a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. MCH 116 may direct data signals between processor 102, memory 120, and other components in system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 may provide a graphics port for coupling to a graphics controller 112. MCH 116 may be coupled to memory 120 through a memory interface 118. Graphics card 112 may be coupled to MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114.[0053] System 100 may use a proprietary hub interface bus 122 to couple MCH 116 to I/O controller hub (ICH) 130. In one embodiment, ICH 130 may provide direct connections to some I/O devices via a local I/O bus. The local I/O bus may include a high-speed I/O bus for connecting peripherals to memory 120, chipset, and processor 102. Examples may include the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. Data storage device 124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[0054] For another embodiment of a system, an instruction in accordance with one embodiment may be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system may include a flash memory. The flash memory may be located on the same die as the processor and other system components.
Additionally, other logic blocks such as a memory controller or graphics controller may also be located on a system on a chip.[0055] FIGURE 1B illustrates a data processing system 140 which implements the principles of embodiments of the present disclosure. It will be readily appreciated by one of skill in the art that the embodiments described herein may operate with alternative processing systems without departure from the scope of embodiments of the disclosure.[0056] Computer system 140 comprises a processing core 159 for performing at least one instruction in accordance with one embodiment. In one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate said manufacture.[0057] Processing core 159 comprises an execution unit 142, a set of register files 145, and a decoder 144. Processing core 159 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure. Execution unit 142 may execute instructions received by processing core 159. In addition to performing typical processor instructions, execution unit 142 may perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 may include instructions for performing embodiments of the disclosure and other packed instructions. Execution unit 142 may be coupled to register file 145 by an internal bus. Register file 145 may represent a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data might not be critical. Execution unit 142 may be coupled to decoder 144.
Decoder 144 may decode instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder may interpret the opcode of the instruction, which will indicate what operation should be performed on the corresponding data indicated within the instruction.[0058] Processing core 159 may be coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158.[0059] One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 that may perform SIMD operations including a text string comparison operation. 
Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).[0060] FIGURE 1C illustrates other embodiments of a data processing system that performs SIMD text string comparison operations. In one embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. Input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 may perform operations including instructions in accordance with one embodiment. In one embodiment, processing core 170 may be suitable for manufacture in one or more process technologies and by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170.[0061] In one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register files 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment for execution by execution unit 162. In other embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165 to decode instructions of instruction set 163.
Processing core 170 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure.[0062] In operation, main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with cache memory 167, and input/output system 168. Embedded within the stream of data processing instructions may be SIMD coprocessor instructions. Decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 166. From coprocessor bus 166, these instructions may be received by any attached SIMD coprocessors. In this case, SIMD coprocessor 161 may accept and execute any received SIMD coprocessor instructions intended for it.[0063] Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. In one embodiment of processing core 170, main processor 166 and a SIMD coprocessor 161 may be integrated into a single processing core 170 comprising an execution unit 162, a set of register files 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.
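The recognize-and-route behavior described for decoder 165 can be sketched abstractly: the main processor classifies each instruction in its stream and places coprocessor-typed instructions on the coprocessor bus while keeping the rest for itself. The function and predicate names below are hypothetical, and the string-prefix test merely stands in for whatever opcode check a real decoder would perform:

```python
# Illustrative sketch (hypothetical names): a main processor recognizes
# coprocessor-typed instructions in its stream and issues them on a
# coprocessor bus; an attached coprocessor accepts those intended for it.
def route(stream, is_coprocessor_op):
    """Split an instruction stream into main-core work and coprocessor-bus work."""
    main_work, coprocessor_bus = [], []
    for insn in stream:
        (coprocessor_bus if is_coprocessor_op(insn) else main_work).append(insn)
    return main_work, coprocessor_bus

stream = ["add", "simd_mul", "load", "simd_add"]
main_work, bus = route(stream, lambda insn: insn.startswith("simd_"))
assert main_work == ["add", "load"]
assert bus == ["simd_mul", "simd_add"]
```

In hardware the split happens instruction by instruction as the stream is decoded, rather than over a complete list, but the classification step is the same.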
[0064] FIGURE 2 is a block diagram of the micro-architecture for a processor 200 that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure. In some embodiments, an instruction in accordance with one embodiment may be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, in-order front end 201 may implement a part of processor 200 that may fetch instructions to be executed and prepare the instructions to be used later in the processor pipeline. Front end 201 may include several units. In one embodiment, instruction prefetcher 226 fetches instructions from memory and feeds the instructions to an instruction decoder 228 which in turn decodes or interprets the instructions. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro op or uops) that the machine may execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that may be used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, trace cache 230 may assemble decoded uops into program ordered sequences or traces in uop queue 234 for execution. When trace cache 230 encounters a complex instruction, microcode ROM 232 provides the uops needed to complete the operation.[0065] Some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, decoder 228 may access microcode ROM 232 to perform the instruction. In one embodiment, an instruction may be decoded into a small number of micro ops for processing at instruction decoder 228.
In another embodiment, an instruction may be stored within microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. Trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from micro-code ROM 232. After microcode ROM 232 finishes sequencing micro-ops for an instruction, front end 201 of the machine may resume fetching micro-ops from trace cache 230. [0066] Out-of-order execution engine 203 may prepare instructions for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. Uop schedulers 202, 204, 206, determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. Fast scheduler 202 of one embodiment may schedule on each half of the main clock cycle while the other schedulers may only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.[0067] Register files 208, 210 may be arranged between schedulers 202, 204, 206, and execution units 212, 214, 216, 218, 220, 222, 224 in execution block 211. 
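The readiness rule in paragraph [0066], by which a uop may be dispatched once its dependent input register operands are ready, can be modeled with a small sketch. The uop tuple shape, register names, and greedy single-dispatch loop below are all illustrative assumptions; real schedulers such as 202, 204, and 206 work in parallel against dispatch ports rather than one uop at a time:

```python
# Illustrative sketch of dependence-driven dispatch: a uop becomes
# eligible once every source register it reads has been produced.
# Hypothetical uop form: (name, list of source regs, destination reg).
def schedule(uops, initially_ready):
    """Dispatch uops one at a time as their sources become ready; return the order."""
    ready_regs = set(initially_ready)
    pending = list(uops)
    order = []
    while pending:
        for uop in pending:
            name, sources, dest = uop
            if all(src in ready_regs for src in sources):
                order.append(name)        # dispatch this uop
                ready_regs.add(dest)      # its result wakes up dependents
                pending.remove(uop)
                break
        else:
            raise RuntimeError("deadlock: unsatisfiable dependences")
    return order

# u2 reads t1, which u1 produces; u3 reads t2, which u2 produces.
uops = [("u2", ["t1"], "t2"), ("u1", ["r1"], "t1"), ("u3", ["t2", "r2"], "t3")]
assert schedule(uops, ["r1", "r2"]) == ["u1", "u2", "u3"]
```

Even though u2 appears first in program order, it waits until u1 has produced t1, which is the essence of the out-of-order re-ordering the buffers in engine 203 enable.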
Register files 208, 210 serve integer and floating point operations, respectively. Each register file 208, 210, may include a bypass network that may bypass or forward just completed results that have not yet been written into the register file to new dependent uops. Integer register file 208 and floating point register file 210 may communicate data with each other. In one embodiment, integer register file 208 may be split into two separate register files, one register file for low-order thirty-two bits of data and a second register file for high order thirty-two bits of data. Floating point register file 210 may include 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.[0068] Execution block 211 may contain execution units 212, 214, 216, 218, 220, 222, 224. Execution units 212, 214, 216, 218, 220, 222, 224 may execute the instructions. Execution block 211 may include register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. In one embodiment, processor 200 may comprise a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, floating point move unit 224. In another embodiment, floating point execution blocks 222, 224, may execute floating point, MMX, SIMD, and SSE, or other operations. In yet another embodiment, floating point ALU 222 may include a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. In various embodiments, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, ALU operations may be passed to high-speed ALU execution units 216, 218. High-speed ALUs 216, 218 may execute fast operations with an effective latency of half a clock cycle.
In one embodiment, most complex integer operations go to slow ALU 220 as slow ALU 220 may include integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations may be executed by AGUs 212, 214. In one embodiment, integer ALUs 216, 218, 220 may perform integer operations on 64-bit data operands. In other embodiments, ALUs 216, 218, 220 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. Similarly, floating point units 222, 224 may be implemented to support a range of operands having bits of various widths. In one embodiment, floating point units 222, 224, may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.[0069] In one embodiment, uops schedulers 202, 204, 206, dispatch dependent operations before the parent load has finished executing. As uops may be speculatively scheduled and executed in processor 200, processor 200 may also include circuits with logic to handle memory misses. If a data load misses in the data cache, there may be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations might need to be replayed and the independent ones may be allowed to complete. The schedulers and replay mechanism of one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations.[0070] The term "registers" may refer to the on-board processor storage locations that may be used as part of instructions to identify operands. In other words, registers may be those that may be usable from the outside of the processor (from a programmer's perspective). However, in some embodiments registers might not be limited to a particular type of circuit.
Rather, a register may store data, provide data, and perform the functions described herein. The registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store 32-bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers may be understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology may hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point may be contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.[0071] In the examples of the following figures, a number of data operands may be described. FIGURE 3A illustrates various packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. FIGURE 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128-bit wide operands. 
Packed byte format 310 of this example may be 128 bits long and contains sixteen packed byte data elements. A byte may be defined, for example, as eight bits of data. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 127 through bit 120 for byte 15. Thus, all available bits may be used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in parallel. [0072] Generally, a data element may include an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register may be 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register may be 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in FIGURE 3A may be 128 bits long, embodiments of the present disclosure may also operate with 64-bit wide or other sized operands. Packed word format 320 of this example may be 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. Packed doubleword format 330 of FIGURE 3A may be 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty-two bits of information. A packed quadword may be 128 bits long and contain two packed quadword data elements.[0073] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure. Each packed data may include more than one independent data element.
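The element-count arithmetic above (register width divided by element width) is easy to check directly; the helper name below is illustrative.

```python
def elements_per_register(register_bits, element_bits):
    """Number of packed data elements that fit in one register."""
    return register_bits // element_bits

# 128-bit XMM registers (SSEx packed formats)
assert elements_per_register(128, 8) == 16   # packed byte 310
assert elements_per_register(128, 16) == 8   # packed word 320
assert elements_per_register(128, 32) == 4   # packed doubleword 330
assert elements_per_register(128, 64) == 2   # packed quadword
# 64-bit MMX registers
assert elements_per_register(64, 8) == 8
print("all element counts match")
```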
Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contains fixed-point data elements. For another embodiment, one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One embodiment of packed half 341 may be 128 bits long containing eight 16-bit data elements. One embodiment of packed single 342 may be 128 bits long and contains four 32-bit data elements. One embodiment of packed double 343 may be 128 bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits, 512-bits or more.[0074] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 127 through bit 120 for byte 15. Thus, all available bits may be used in the register. This storage arrangement may increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element may be the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero may be stored in a SIMD register. Signed packed word representation 347 may be similar to the unsigned packed word in-register representation 346.
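The packed representations above slice a register into fixed-width lanes, with the sign indicator in each lane's most significant bit. A minimal sketch of reading those sign bits, assuming Python integers stand in for register contents:

```python
def sign_bits(packed, element_bits, register_bits=128):
    """Return the sign bit of each packed element, element 0 first."""
    mask = (1 << element_bits) - 1
    return [((packed >> shift) & mask) >> (element_bits - 1)
            for shift in range(0, register_bits, element_bits)]

# Two signed bytes: 0x80 (negative) in element 0, 0x7F (positive) in element 1.
value = 0x80 | (0x7F << 8)
print(sign_bits(value, 8)[:2])  # [1, 0]
```

The same call with `element_bits=16` or `element_bits=32` reads the sign indicators of packed words and packed doublewords, matching the bit-7/bit-15/bit-31 positions described for the signed representations.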
Note that the sixteenth bit of each word data element may be the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 may be similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit may be the thirty-second bit of each doubleword data element.[0075] FIGURE 3D illustrates an embodiment of an operation encoding (opcode) format 360. Format 360 may include register/memory operand addressing modes corresponding with a type of opcode format described in the "IA-32 Intel Architecture Software Developer's Manual Volume 2: Instruction Set Reference," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/design/litcentr. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. In one embodiment, destination operand identifier 366 may be the same as source operand identifier 364, whereas in other embodiments they may be different. In another embodiment, destination operand identifier 366 may be the same as source operand identifier 365, whereas in other embodiments they may be different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 may be overwritten by the results of the text string comparison operations, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. In one embodiment, operand identifiers 364 and 365 may identify 32-bit or 64-bit source and destination operands.[0076] FIGURE 3E illustrates another possible operation encoding (opcode) format 370, having forty or more bits, in accordance with embodiments of the present disclosure.
Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. In one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. In one embodiment, destination operand identifier 376 may be the same as source operand identifier 374, whereas in other embodiments they may be different. For another embodiment, destination operand identifier 376 may be the same as source operand identifier 375, whereas in other embodiments they may be different. In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375, and one or more operands identified by operand identifiers 374 and 375 may be overwritten by the results of the instruction, whereas in other embodiments, operands identified by identifiers 374 and 375 may be written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.[0077] FIGURE 3F illustrates yet another possible operation encoding (opcode) format, in accordance with embodiments of the present disclosure. 64-bit single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. For another embodiment, the type of CDP instruction operations may be encoded by one or more of fields 383, 384, 387, and 388.
Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor may operate on eight, sixteen, thirty-two, and 64-bit values. In one embodiment, an instruction may be performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383. In some embodiments, zero (Z), negative (N), carry (C), and overflow (V) detection may be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.[0078] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure. FIGURE 4B is a block diagram illustrating an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor, in accordance with embodiments of the present disclosure. The solid lined boxes in FIGURE 4A illustrate the in-order pipeline, while the dashed lined boxes illustrate the register renaming, out-of-order issue/execution pipeline.
Similarly, the solid lined boxes in FIGURE 4B illustrate the in-order architecture logic, while the dashed lined boxes illustrate the register renaming logic and out-of-order issue/execution logic.[0079] In FIGURE 4A, a processor pipeline 400 may include a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write-back/memory-write stage 418, an exception handling stage 422, and a commit stage 424.[0080] In FIGURE 4B, arrows denote a coupling between two or more units and the direction of the arrow indicates a direction of data flow between those units. FIGURE 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both may be coupled to a memory unit 470.[0081] Core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. In one embodiment, core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like.[0082] Front end unit 430 may include a branch prediction unit 432 coupled to an instruction cache unit 434. Instruction cache unit 434 may be coupled to an instruction translation lookaside buffer (TLB) 436. TLB 436 may be coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. Decode unit 440 may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which may be decoded from, or which otherwise reflect, or may be derived from, the original instructions. The decoder may be implemented using various different mechanisms.
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, instruction cache unit 434 may be further coupled to a level 2 (L2) cache unit 476 in memory unit 470. Decode unit 440 may be coupled to a rename/allocator unit 452 in execution engine unit 450.[0083] Execution engine unit 450 may include rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler units 456. Scheduler units 456 represent any number of different schedulers, including reservation stations, central instruction window, etc. Scheduler units 456 may be coupled to physical register file units 458. Each of physical register file units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. Physical register file units 458 may be overlapped by retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using one or more reorder buffers and one or more retirement register files; using one or more future files, one or more history buffers, and one or more retirement register files; using register maps and a pool of registers; etc.). Generally, the architectural registers may be visible from the outside of the processor or from a programmer's perspective. The registers might not be limited to any known particular type of circuit. Various different types of registers may be suitable as long as they store and provide data as described herein.
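The "register maps and a pool of registers" option mentioned above can be sketched in a few lines. This is an illustrative model, not the disclosed hardware; the class and method names, and the simple free-list policy, are assumptions of the sketch.

```python
# Minimal register-renaming sketch: architectural registers are mapped to
# physical registers drawn from a free pool, so successive writes to the
# same architectural register no longer serialize on one storage location.
class RenameMap:
    def __init__(self, arch_regs, phys_count):
        self.free = list(range(phys_count))
        self.map = {r: self.free.pop(0) for r in arch_regs}

    def rename_dest(self, arch_reg):
        """Allocate a fresh physical register for a new definition."""
        old = self.map[arch_reg]
        self.map[arch_reg] = self.free.pop(0)
        return self.map[arch_reg], old  # new mapping, prior mapping to retire

    def source(self, arch_reg):
        """Look up the current physical register for a source operand."""
        return self.map[arch_reg]

rm = RenameMap(["eax", "ebx"], phys_count=8)
p1, _ = rm.rename_dest("eax")   # first write to eax
p2, _ = rm.rename_dest("eax")   # second write gets a different physical reg
print(p1 != p2)  # True
```

A real design would recycle the prior mapping back into the free list only at retirement (e.g., via a reorder buffer or retirement register file, as the paragraph above notes); that bookkeeping is omitted here.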
Examples of suitable registers include, but might not be limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. Retirement unit 454 and physical register file units 458 may be coupled to execution clusters 460. Execution clusters 460 may include a set of one or more execution units 462 and a set of one or more memory access units 464. Execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. Scheduler units 456, physical register file units 458, and execution clusters 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments may be implemented in which only the execution cluster of this pipeline has memory access units 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[0084] The set of memory access units 464 may be coupled to memory unit 470, which may include a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476.
In one exemplary embodiment, memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which may be coupled to data TLB unit 472 in memory unit 470. L2 cache unit 476 may be coupled to one or more other levels of cache and eventually to a main memory.[0085] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement pipeline 400 as follows: 1) instruction fetch 438 may perform fetch and length decoding stages 402 and 404; 2) decode unit 440 may perform decode stage 406; 3) rename/allocator unit 452 may perform allocation stage 408 and renaming stage 410; 4) scheduler units 456 may perform schedule stage 412; 5) physical register file units 458 and memory unit 470 may perform register read/memory read stage 414, and execution cluster 460 may perform execute stage 416; 6) memory unit 470 and physical register file units 458 may perform write-back/memory-write stage 418; 7) various units may be involved in the performance of exception handling stage 422; and 8) retirement unit 454 and physical register file units 458 may perform commit stage 424.[0086] Core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).[0087] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) in a variety of manners. Multithreading support may be provided by, for example, time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof.
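The stage-to-unit assignment enumerated in [0085] can be restated as a simple lookup table. This is a descriptive sketch of the mapping above, not an implementation; the dictionary layout is an assumption made for readability.

```python
# Which unit of core 490 performs each stage of pipeline 400 (per [0085]).
STAGE_UNITS = {
    "fetch 402": "instruction fetch 438",
    "length decode 404": "instruction fetch 438",
    "decode 406": "decode unit 440",
    "allocation 408": "rename/allocator unit 452",
    "renaming 410": "rename/allocator unit 452",
    "schedule 412": "scheduler units 456",
    "register read/memory read 414": "physical register file units 458 + memory unit 470",
    "execute 416": "execution cluster 460",
    "write-back/memory-write 418": "memory unit 470 + physical register file units 458",
    "commit 424": "retirement unit 454 + physical register file units 458",
}

for stage, unit in STAGE_UNITS.items():
    print(f"{stage}: {unit}")
```

Exception handling stage 422 is omitted from the table because, as the text notes, various units may be involved in its performance.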
Such a combination may include, for example, time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology.[0088] While register renaming may be described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units 434/474 and a shared L2 cache unit 476, other embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that may be external to the core and/or the processor. In other embodiments, all of the cache may be external to the core and/or the processor.[0089] FIGURE 5A is a block diagram of a processor 500, in accordance with embodiments of the present disclosure. In one embodiment, processor 500 may include a multicore processor. Processor 500 may include a system agent 510 communicatively coupled to one or more cores 502. Furthermore, cores 502 and system agent 510 may be communicatively coupled to one or more caches 506. Cores 502, system agent 510, and caches 506 may be communicatively coupled via one or more memory control units 552. Furthermore, cores 502, system agent 510, and caches 506 may be communicatively coupled to a graphics module 560 via memory control units 552.[0090] Processor 500 may include any suitable mechanism for interconnecting cores 502, system agent 510, caches 506, and graphics module 560. In one embodiment, processor 500 may include a ring-based interconnect unit 508 to interconnect cores 502, system agent 510, caches 506, and graphics module 560. In other embodiments, processor 500 may include any number of well-known techniques for interconnecting such units.
Ring-based interconnect unit 508 may utilize memory control units 552 to facilitate interconnections.[0091] Processor 500 may include a memory hierarchy comprising one or more levels of caches within the cores, one or more shared cache units such as caches 506, or external memory (not shown) coupled to the set of integrated memory controller units 552. Caches 506 may include any suitable cache. In one embodiment, caches 506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.[0092] In various embodiments, one or more of cores 502 may perform multithreading. System agent 510 may include components for coordinating and operating cores 502. System agent unit 510 may include, for example, a power control unit (PCU). The PCU may be or include logic and components needed for regulating the power state of cores 502. System agent 510 may include a display engine 512 for driving one or more externally connected displays or graphics module 560. System agent 510 may include an interface for communications busses for graphics. In one embodiment, the interface may be implemented by PCI Express (PCIe). In a further embodiment, the interface may be implemented by PCI Express Graphics (PEG) 514. System agent 510 may include a direct media interface (DMI) 516. DMI 516 may provide links between different bridges on a motherboard or other portion of a computer system. System agent 510 may include a PCIe bridge 518 for providing PCIe links to other elements of a computing system. PCIe bridge 518 may be implemented using a memory controller 520 and coherence logic 522. [0093] Cores 502 may be implemented in any suitable manner. Cores 502 may be homogenous or heterogeneous in terms of architecture and/or instruction set. In one embodiment, some of cores 502 may be in-order while others may be out-of-order.
In another embodiment, two or more of cores 502 may execute the same instruction set, while others may execute only a subset of that instruction set or a different instruction set.[0094] Processor 500 may include a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which may be available from Intel Corporation, of Santa Clara, Calif. Processor 500 may be provided from another company, such as ARM Holdings, Ltd, MIPS, etc. Processor 500 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. Processor 500 may be implemented on one or more chips. Processor 500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[0095] In one embodiment, a given one of caches 506 may be shared by multiple ones of cores 502. In another embodiment, a given one of caches 506 may be dedicated to one of cores 502. The assignment of caches 506 to cores 502 may be handled by a cache controller or other suitable mechanism. A given one of caches 506 may be shared by two or more cores 502 by implementing time-slices of a given cache 506.[0096] Graphics module 560 may implement an integrated graphics processing subsystem. In one embodiment, graphics module 560 may include a graphics processor. Furthermore, graphics module 560 may include a media engine 565. Media engine 565 may provide media encoding and video decoding.[0097] FIGURE 5B is a block diagram of an example implementation of a core 502, in accordance with embodiments of the present disclosure. Core 502 may include a front end 570 communicatively coupled to an out-of-order engine 580.
Core 502 may be communicatively coupled to other portions of processor 500 through cache hierarchy 503.[0098] Front end 570 may be implemented in any suitable manner, such as fully or in part by front end 201 as described above. In one embodiment, front end 570 may communicate with other portions of processor 500 through cache hierarchy 503. In a further embodiment, front end 570 may fetch instructions from portions of processor 500 and prepare the instructions to be used later in the processor pipeline as they are passed to out-of-order execution engine 580.[0099] Out-of-order execution engine 580 may be implemented in any suitable manner, such as fully or in part by out-of-order execution engine 203 as described above. Out-of-order execution engine 580 may prepare instructions received from front end 570 for execution. Out-of-order execution engine 580 may include an allocate module 582. In one embodiment, allocate module 582 may allocate resources of processor 500 or other resources, such as registers or buffers, to execute a given instruction. Allocate module 582 may make allocations in schedulers, such as a memory scheduler, fast scheduler, or floating point scheduler. Such schedulers may be represented in FIGURE 5B by resource schedulers 584. Allocate module 582 may be implemented fully or in part by the allocation logic described in conjunction with FIGURE 2. Resource schedulers 584 may determine when an instruction is ready to execute based on the readiness of a given resource's sources and the availability of execution resources needed to execute an instruction. Resource schedulers 584 may be implemented by, for example, schedulers 202, 204, 206 as discussed above. Resource schedulers 584 may schedule the execution of instructions upon one or more resources. In one embodiment, such resources may be internal to core 502, and may be illustrated, for example, as resources 586.
In another embodiment, such resources may be external to core 502 and may be accessible by, for example, cache hierarchy 503. Resources may include, for example, memory, caches, register files, or registers. Resources internal to core 502 may be represented by resources 586 in FIGURE 5B. As necessary, values written to or read from resources 586 may be coordinated with other portions of processor 500 through, for example, cache hierarchy 503. As instructions are assigned resources, they may be placed into a reorder buffer 588. Reorder buffer 588 may track instructions as they are executed and may selectively reorder their execution based upon any suitable criteria of processor 500. In one embodiment, reorder buffer 588 may identify instructions or a series of instructions that may be executed independently. Such instructions or a series of instructions may be executed in parallel with other such instructions. Parallel execution in core 502 may be performed by any suitable number of separate execution blocks or virtual processors. In one embodiment, shared resources, such as memory, registers, and caches, may be accessible to multiple virtual processors within a given core 502. In other embodiments, shared resources may be accessible to multiple processing entities within processor 500.[0100] Cache hierarchy 503 may be implemented in any suitable manner. For example, cache hierarchy 503 may include one or more lower or mid-level caches, such as caches 572, 574. In one embodiment, cache hierarchy 503 may include an LLC 595 communicatively coupled to caches 572, 574. In another embodiment, LLC 595 may be implemented in a module 590 accessible to all processing entities of processor 500. In a further embodiment, module 590 may be implemented in an uncore module of processors from Intel, Inc. Module 590 may include portions or subsystems of processor 500 necessary for the execution of core 502 but might not be implemented within core 502.
Besides LLC 595, module 590 may include, for example, hardware interfaces, memory coherency coordinators, interprocessor interconnects, instruction pipelines, or memory controllers. Access to RAM 599 available to processor 500 may be made through module 590 and, more specifically, LLC 595. Furthermore, other instances of core 502 may similarly access module 590. Coordination of the instances of core 502 may be facilitated in part through module 590.[0101] FIGURES 6-8 may illustrate exemplary systems suitable for including processor 500, while FIGURE 9 may illustrate an exemplary system on a chip (SoC) that may include one or more of cores 502. Other system designs and implementations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices may also be suitable. In general, a huge variety of systems or electronic devices that incorporate a processor and/or other execution logic as disclosed herein may be generally suitable.[0102] FIGURE 6 illustrates a block diagram of a system 600, in accordance with embodiments of the present disclosure. System 600 may include one or more processors 610, 615, which may be coupled to graphics memory controller hub (GMCH) 620. The optional nature of additional processors 615 is denoted in FIGURE 6 with broken lines.[0103] Each processor 610, 615 may be some version of processor 500. However, it should be noted that integrated graphics logic and integrated memory control units might not exist in processors 610, 615. FIGURE 6 illustrates that GMCH 620 may be coupled to a memory 640 that may be, for example, a dynamic random access memory (DRAM).
The DRAM may, for at least one embodiment, be associated with a non-volatile cache.[0104] GMCH 620 may be a chipset, or a portion of a chipset. GMCH 620 may communicate with processors 610, 615 and control interaction between processors 610, 615 and memory 640. GMCH 620 may also act as an accelerated bus interface between the processors 610, 615 and other elements of system 600. In one embodiment, GMCH 620 communicates with processors 610, 615 via a multi-drop bus, such as a frontside bus (FSB) 695.[0105] Furthermore, GMCH 620 may be coupled to a display 645 (such as a flat panel display). In one embodiment, GMCH 620 may include an integrated graphics accelerator. GMCH 620 may be further coupled to an input/output (I/O) controller hub (ICH) 650, which may be used to couple various peripheral devices to system 600. External graphics device 660 may be a discrete graphics device coupled to ICH 650 along with another peripheral device 670.[0106] In other embodiments, additional or different processors may also be present in system 600. For example, additional processors 610, 615 may include additional processors that may be the same as processor 610, additional processors that may be heterogeneous or asymmetric to processor 610, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There may be a variety of differences between the physical resources 610, 615 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst processors 610, 615. For at least one embodiment, various processors 610, 615 may reside in the same die package. [0107] FIGURE 7 illustrates a block diagram of a second system 700, in accordance with embodiments of the present disclosure.
As shown in FIGURE 7, multiprocessor system 700 may include a point-to-point interconnect system, and may include a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. Each of processors 770 and 780 may be some version of processor 500, as may be one or more of processors 610, 615.[0108] While FIGURE 7 may illustrate two processors 770, 780, it is to be understood that the scope of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given system.[0109] Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 may also include as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 may include P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in FIGURE 7, IMCs 772 and 782 may couple the processors to respective memories, namely a memory 732 and a memory 734, which in one embodiment may be portions of main memory locally attached to the respective processors.[0110] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798. In one embodiment, chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739.[0111] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[0112] Chipset 790 may be coupled to a first bus 716 via an interface 796.
In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited. [0113] As shown in FIGURE 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727, and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures may be possible. For example, instead of the point-to-point architecture of FIGURE 7, a system may implement a multi-drop bus or other such architecture.[0114] FIGURE 8 illustrates a block diagram of a third system 800 in accordance with embodiments of the present disclosure. Like elements in FIGURES 7 and 8 bear like reference numerals, and certain aspects of FIGURE 7 have been omitted from FIGURE 8 in order to avoid obscuring other aspects of FIGURE 8.[0115] FIGURE 8 illustrates that processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. For at least one embodiment, CL 872, 882 may include integrated memory controller units such as that described above in connection with FIGURES 5 and 7. In addition, CL 872, 882 may also include I/O control logic. FIGURE 8 illustrates that not only may memories 832, 834 be coupled to CL 872, 882, but I/O devices 814 may also be coupled to control logic 872, 882. Legacy I/O devices 815 may be coupled to chipset 890.[0116] FIGURE 9 illustrates a block diagram of a SoC 900, in accordance with embodiments of the present disclosure.
Similar elements in FIGURE 5 bear like reference numerals. Also, dashed-line boxes may represent optional features on more advanced SoCs. Interconnect units 902 may be coupled to: an application processor 910, which may include a set of one or more cores 902A-N and shared cache units 906; a system agent unit 910; bus controller units 916; integrated memory controller units 914; a set of one or more media processors 920, which may include integrated graphics logic 908, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays.[0117] FIGURE 10 illustrates a processor containing a central processing unit (CPU) and a graphics processing unit (GPU), which may perform at least one instruction, in accordance with embodiments of the present disclosure. In one embodiment, an instruction to perform operations according to at least one embodiment could be performed by the CPU. In another embodiment, the instruction could be performed by the GPU. In still another embodiment, the instruction may be performed through a combination of operations performed by the GPU and the CPU. For example, in one embodiment, an instruction in accordance with one embodiment may be received and decoded for execution on the GPU. However, one or more operations within the decoded instruction may be performed by a CPU and the result returned to the GPU for final retirement of the instruction.
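As a purely illustrative sketch (not the disclosed hardware mechanism), the split execution described above, in which one unit decodes and retires an instruction while offloading part of its work to the other, might be modeled as follows; the instruction names and the offload rule are invented for illustration:

```python
# Toy model of split CPU/GPU execution: the "GPU" decodes and retires each
# instruction, but offloads one class of operation to the "CPU" helper.
# All opcodes and the offload policy are hypothetical.

def cpu_execute(op, a, b):
    # Operation performed by the CPU on behalf of the GPU.
    return a * b

def gpu_execute(program):
    results = []
    for op, a, b in program:          # "decode" each instruction on the GPU
        if op == "MUL":               # offload this operation to the CPU
            value = cpu_execute(op, a, b)
        elif op == "ADD":             # executed natively on the GPU
            value = a + b
        else:
            raise ValueError("unknown opcode: %s" % op)
        results.append(value)         # "retire" the instruction on the GPU
    return results

print(gpu_execute([("ADD", 2, 3), ("MUL", 4, 5)]))  # [5, 20]
```

The point of the sketch is only the division of labor: decode and retirement stay on one unit while selected operations execute on the other, as the paragraph above describes.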
Conversely, in some embodiments, the CPU may act as the primary processor and the GPU as the co-processor.[0118] In some embodiments, instructions that benefit from highly parallel, throughput processors may be performed by the GPU, while instructions that benefit from deeply pipelined architectures may be performed by the CPU. For example, graphics, scientific applications, financial applications, and other parallel workloads may benefit from the performance of the GPU and be executed accordingly, whereas more sequential applications, such as operating system kernel or application code, may be better suited for the CPU.[0119] In FIGURE 10, processor 1000 includes a CPU 1005, GPU 1010, image processor 1015, video processor 1020, USB controller 1025, UART controller 1030, SPI/SDIO controller 1035, display device 1040, memory interface controller 1045, MIPI controller 1050, flash memory controller 1055, double data rate (DDR) controller 1060, security engine 1065, and I2S/I2C controller 1070. Other logic and circuits may be included in the processor of FIGURE 10, including more CPUs or GPUs and other peripheral interface controllers.[0120] One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as the Cortex™ family of processors developed by ARM Holdings, Ltd.
and Loongson IP cores developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, may be licensed or sold to various customers or licensees, such as Texas Instruments, Qualcomm, Apple, or Samsung, and implemented in processors produced by these customers or licensees.[0121] FIGURE 11 illustrates a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure. Storage 1130 may include simulation software 1120 and/or hardware or software model 1110. In one embodiment, the data representing the IP core design may be provided to storage 1130 via memory 1140 (e.g., hard disk), wired connection (e.g., internet) 1150, or wireless connection 1160. The IP core information generated by the simulation tool and model may then be transmitted to a fabrication facility where it may be fabricated by a third party to perform at least one instruction in accordance with at least one embodiment.[0122] In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and be translated or emulated on a processor of a different type or architecture (e.g., ARM). An instruction, according to one embodiment, may therefore be performed on any processor or processor type, including ARM, x86, MIPS, a GPU, or other processor type or architecture.[0123] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure. In FIGURE 12, program 1205 contains some instructions that may perform the same or substantially the same function as an instruction according to one embodiment. However, the instructions of program 1205 may be of a type and/or format that is different from or incompatible with processor 1215, meaning the instructions of the type in program 1205 may not be executable natively by processor 1215.
However, with the help of emulation logic 1210, the instructions of program 1205 may be translated into instructions that may be natively executed by processor 1215. In one embodiment, the emulation logic may be embodied in hardware. In another embodiment, the emulation logic may be embodied in a tangible, machine-readable medium containing software to translate instructions of the type in program 1205 into the type natively executable by processor 1215. In other embodiments, emulation logic may be a combination of fixed-function or programmable hardware and a program stored on a tangible, machine-readable medium. In one embodiment, the processor contains the emulation logic, whereas in other embodiments, the emulation logic exists outside of the processor and may be provided by a third party. In one embodiment, the processor may load the emulation logic embodied in a tangible, machine-readable medium containing software by executing microcode or firmware contained in or associated with the processor.[0124] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure. In the illustrated embodiment, the instruction converter may be a software instruction converter, although the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIGURE 13 shows that a program in a high level language 1302 may be compiled using an x86 compiler 1304 to generate x86 binary code 1306 that may be natively executed by a processor with at least one x86 instruction set core 1316.
The processor with at least one x86 instruction set core 1316 represents any processor that may perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. x86 compiler 1304 represents a compiler that may be operable to generate x86 binary code 1306 (e.g., object code) that may, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1316. Similarly, FIGURE 13 shows that the program in high level language 1302 may be compiled using an alternative instruction set compiler 1308 to generate alternative instruction set binary code 1310 that may be natively executed by a processor without at least one x86 instruction set core 1314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). Instruction converter 1312 may be used to convert x86 binary code 1306 into code that may be natively executed by the processor without an x86 instruction set core 1314. This converted code might not be the same as alternative instruction set binary code 1310; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set.
Thus, instruction converter 1312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute x86 binary code 1306.[0125] FIGURE 14 is a block diagram of an instruction set architecture 1400 of a processor, in accordance with embodiments of the present disclosure. Instruction set architecture 1400 may include any suitable number or kind of components.[0126] For example, instruction set architecture 1400 may include processing entities such as one or more cores 1406, 1407 and a graphics processing unit 1415. Cores 1406, 1407 may be communicatively coupled to the rest of instruction set architecture 1400 through any suitable mechanism, such as through a bus or cache. In one embodiment, cores 1406, 1407 may be communicatively coupled through an L2 cache control 1408, which may include a bus interface unit 1409 and an L2 cache 1410. Cores 1406, 1407 and graphics processing unit 1415 may be communicatively coupled to each other and to the remainder of instruction set architecture 1400 through interconnect 1410. In one embodiment, graphics processing unit 1415 may use a video codec 1420 defining the manner in which particular video signals will be encoded and decoded for output.[0127] Instruction set architecture 1400 may also include any number or kind of interfaces, controllers, or other mechanisms for interfacing or communicating with other portions of an electronic device or system. Such mechanisms may facilitate interaction with, for example, peripherals, communications devices, other processors, or memory.
In the example of FIGURE 14, instruction set architecture 1400 may include a liquid crystal display (LCD) video interface 1425, a subscriber interface module (SIM) interface 1430, a boot ROM interface 1435, a synchronous dynamic random access memory (SDRAM) controller 1440, a flash controller 1445, and a serial peripheral interface (SPI) master unit 1450. LCD video interface 1425 may provide output of video signals from, for example, GPU 1415 and through, for example, a mobile industry processor interface (MIPI) 1490 or a high-definition multimedia interface (HDMI) 1495 to a display. Such a display may include, for example, an LCD. SIM interface 1430 may provide access to or from a SIM card or device. SDRAM controller 1440 may provide access to or from memory such as an SDRAM chip or module. Flash controller 1445 may provide access to or from memory such as flash memory or other instances of RAM. SPI master unit 1450 may provide access to or from communications modules, such as a Bluetooth module 1470, high-speed 3G modem 1475, global positioning system module 1480, or wireless module 1485 implementing a communications standard such as 802.11.[0128] FIGURE 15 is a more detailed block diagram of an instruction set architecture 1500 of a processor, in accordance with embodiments of the present disclosure. Instruction architecture 1500 may implement one or more aspects of instruction set architecture 1400. Furthermore, instruction set architecture 1500 may illustrate modules and mechanisms for the execution of instructions within a processor.[0129] Instruction architecture 1500 may include a memory system 1540 communicatively coupled to one or more execution entities 1565. Furthermore, instruction architecture 1500 may include a caching and bus interface unit such as unit 1510 communicatively coupled to execution entities 1565 and memory system 1540. 
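As context for the stage-by-stage walk-through that follows, the overall fetch/decode/rename/issue/execute/writeback flow of FIGURE 15 can be caricatured in a few lines of Python; the toy instruction format, register file, and renaming scheme are invented for illustration and do not correspond to the disclosed hardware:

```python
# Toy in-order pipeline illustrating the stage flow of FIGURE 15:
# prefetch -> decode -> rename -> issue -> execute -> writeback.
# The "ISA" (add/mul on immediates) and the rename map are invented.

regs = {}          # architectural state written by the writeback stage
phys = {}          # rename map: virtual register name -> physical register

def rename(dst):
    # Allocate a fresh physical register for the destination.
    phys[dst] = "p%d" % len(phys)
    return phys[dst]

def run(program):
    for text in program:                     # prefetch: next instruction
        op, dst, a, b = text.split()         # decode into fields
        rename(dst)                          # rename destination register
        operands = (int(a), int(b))          # issue: operands are ready
        if op == "add":                      # execute on a toy "ALU"
            value = operands[0] + operands[1]
        elif op == "mul":
            value = operands[0] * operands[1]
        else:
            raise ValueError("unknown opcode: %s" % op)
        regs[dst] = value                    # writeback: commit the result
    return regs

print(run(["add r1 2 3", "mul r2 4 5"]))   # {'r1': 5, 'r2': 20}
```

A real pipeline would, of course, run these stages concurrently on different instructions; the sketch only shows the order in which a single instruction passes through them.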
In one embodiment, loading of instructions into execution entities 1565 may be performed by one or more stages of execution. Such stages may include, for example, instruction prefetch stage 1530, dual instruction decode stage 1550, register rename stage 1555, issue stage 1560, and writeback stage 1570.[0130] In one embodiment, memory system 1540 may include an executed instruction pointer 1580. Executed instruction pointer 1580 may store a value identifying the oldest, undispatched instruction within a batch of instructions. The oldest instruction may correspond to the lowest Program Order (PO) value. A PO may include a unique number of an instruction. Such an instruction may be a single instruction within a thread represented by multiple strands. A PO may be used in ordering instructions to ensure correct execution semantics of code. A PO may be reconstructed by mechanisms such as evaluating increments to PO encoded in the instruction rather than an absolute value. Such a reconstructed PO may be known as an "RPO." Although a PO may be referenced herein, such a PO may be used interchangeably with an RPO. A strand may include a sequence of instructions that are data dependent upon each other. The strand may be arranged by a binary translator at compilation time. Hardware executing a strand may execute the instructions of a given strand in order according to the PO of the various instructions. A thread may include multiple strands such that instructions of different strands may depend upon each other. A PO of a given strand may be the PO of the oldest instruction in the strand which has not yet been dispatched to execution from an issue stage. Accordingly, given a thread of multiple strands, each strand including instructions ordered by PO, executed instruction pointer 1580 may store the oldest (that is, the lowest-numbered) PO in the thread.[0131] In another embodiment, memory system 1540 may include a retirement pointer 1582.
Retirement pointer 1582 may store a value identifying the PO of the last retired instruction. Retirement pointer 1582 may be set by, for example, retirement unit 454. If no instructions have yet been retired, retirement pointer 1582 may include a null value.[0132] Execution entities 1565 may include any suitable number and kind of mechanisms by which a processor may execute instructions. In the example of FIGURE 15, execution entities 1565 may include ALU/multiplication units (MUL) 1566, ALUs 1567, and floating point units (FPU) 1568. In one embodiment, such entities may make use of information contained within a given address 1569. Execution entities 1565 in combination with stages 1530, 1550, 1555, 1560, 1570 may collectively form an execution unit.[0133] Unit 1510 may be implemented in any suitable manner. In one embodiment, unit 1510 may perform cache control. In such an embodiment, unit 1510 may thus include a cache 1525. Cache 1525 may be implemented, in a further embodiment, as an L2 unified cache with any suitable size, such as zero, 128k, 256k, 512k, 1M, or 2M bytes of memory. In another, further embodiment, cache 1525 may be implemented in error-correcting code memory. In another embodiment, unit 1510 may perform bus interfacing to other portions of a processor or electronic device. In such an embodiment, unit 1510 may thus include a bus interface unit 1520 for communicating over an interconnect, intraprocessor bus, interprocessor bus, or other communication bus, port, or line. 
Bus interface unit 1520 may provide interfacing in order to perform, for example, generation of the memory and input/output addresses for the transfer of data between execution entities 1565 and the portions of a system external to instruction architecture 1500.[0134] To further facilitate its functions, bus interface unit 1520 may include an interrupt control and distribution unit 1511 for generating interrupts and other communications to other portions of a processor or electronic device. In one embodiment, bus interface unit 1520 may include a snoop control unit 1512 that handles cache access and coherency for multiple processing cores. In a further embodiment, to provide such functionality, snoop control unit 1512 may include a cache-to-cache transfer unit that handles information exchanges between different caches. In another, further embodiment, snoop control unit 1512 may include one or more snoop filters 1514 that monitor the coherency of other caches (not shown) so that a cache controller, such as unit 1510, does not have to perform such monitoring directly. Unit 1510 may include any suitable number of timers 1515 for synchronizing the actions of instruction architecture 1500. Also, unit 1510 may include an AC port 1516.[0135] Memory system 1540 may include any suitable number and kind of mechanisms for storing information for the processing needs of instruction architecture 1500. In one embodiment, memory system 1540 may include a load store unit 1530 for storing information such as buffers written to or read back from memory or registers. In another embodiment, memory system 1540 may include a translation lookaside buffer (TLB) 1545 that provides look-up of address values between physical and virtual addresses. In yet another embodiment, bus interface unit 1520 may include a memory management unit (MMU) 1544 for facilitating access to virtual memory.
In still yet another embodiment, memory system 1540 may include a prefetcher 1543 for requesting instructions from memory before such instructions are actually needed to be executed, in order to reduce latency. [0136] The operation of instruction architecture 1500 to execute an instruction may be performed through different stages. For example, using unit 1510, instruction prefetch stage 1530 may access an instruction through prefetcher 1543. Instructions retrieved may be stored in instruction cache 1532. Prefetch stage 1530 may enable an option 1531 for fast-loop mode, wherein a series of instructions forming a loop that is small enough to fit within a given cache is executed. In one embodiment, such an execution may be performed without needing to access additional instructions from, for example, instruction cache 1532. Determination of what instructions to prefetch may be made by, for example, branch prediction unit 1535, which may access indications of execution in global history 1536, indications of target addresses 1537, or contents of a return stack 1538 to determine which of branches 1557 of code will be executed next. Such branches may be prefetched as a result. Branches 1557 may be produced through other stages of operation as described below. Instruction prefetch stage 1530 may provide instructions as well as any predictions about future instructions to dual instruction decode stage 1550.[0137] Dual instruction decode stage 1550 may translate a received instruction into microcode-based instructions that may be executed. Dual instruction decode stage 1550 may simultaneously decode two instructions per clock cycle. Furthermore, dual instruction decode stage 1550 may pass its results to register rename stage 1555. In addition, dual instruction decode stage 1550 may determine any resulting branches from its decoding and eventual execution of the microcode.
Such results may be input into branches 1557.[0138] Register rename stage 1555 may translate references to virtual registers or other resources into references to physical registers or resources. Register rename stage 1555 may include indications of such mapping in a register pool 1556. Register rename stage 1555 may alter the instructions as received and send the result to issue stage 1560.[0139] Issue stage 1560 may issue or dispatch commands to execution entities 1565. Such issuance may be performed in an out-of-order fashion. In one embodiment, multiple instructions may be held at issue stage 1560 before being executed. Issue stage 1560 may include an instruction queue 1561 for holding such multiple commands. Instructions may be issued by issue stage 1560 to a particular execution entity 1565 based upon any acceptable criteria, such as availability or suitability of resources for execution of a given instruction. In one embodiment, issue stage 1560 may reorder the instructions within instruction queue 1561 such that the first instructions received might not be the first instructions executed. Based upon the ordering of instruction queue 1561, additional branching information may be provided to branches 1557. Issue stage 1560 may pass instructions to execution entities 1565 for execution.[0140] Upon execution, writeback stage 1570 may write data into registers, queues, or other structures of instruction set architecture 1500 to communicate the completion of a given command. Depending upon the order of instructions arranged in issue stage 1560, the operation of writeback stage 1570 may enable additional instructions to be executed. Performance of instruction set architecture 1500 may be monitored or debugged by trace unit 1575.[0141] FIGURE 16 is a block diagram of an execution pipeline 1600 for an instruction set architecture of a processor, in accordance with embodiments of the present disclosure.
Execution pipeline 1600 may illustrate operation of, for example, instruction architecture 1500 of FIGURE 15.[0142] Execution pipeline 1600 may include any suitable combination of steps or operations. In 1605, predictions of the branch that is to be executed next may be made. In one embodiment, such predictions may be based upon previous executions of instructions and the results thereof. In 1610, instructions corresponding to the predicted branch of execution may be loaded into an instruction cache. In 1615, one or more such instructions in the instruction cache may be fetched for execution. In 1620, the instructions that have been fetched may be decoded into microcode or more specific machine language. In one embodiment, multiple instructions may be simultaneously decoded. In 1625, references to registers or other resources within the decoded instructions may be reassigned. For example, references to virtual registers may be replaced with references to corresponding physical registers. In 1630, the instructions may be dispatched to queues for execution. In 1640, the instructions may be executed. Such execution may be performed in any suitable manner. In 1650, the instructions may be issued to a suitable execution entity. The manner in which the instruction is executed may depend upon the specific entity executing the instruction. For example, at 1655, an ALU may perform arithmetic functions. The ALU may utilize a single clock cycle for its operation, and may also include two shifters. In one embodiment, two ALUs may be employed, and thus two instructions may be executed at 1655. At 1660, a determination of a resulting branch may be made. A program counter may be used to designate the destination to which the branch will be made. 1660 may be executed within a single clock cycle. At 1665, floating point arithmetic may be performed by one or more FPUs. The floating point operation may require multiple clock cycles to execute, such as two to ten cycles.
At 1670, multiplication and division operations may be performed. Such operations may be performed in four clock cycles. At 1675, loading and storing operations to registers or other portions of pipeline 1600 may be performed. The operations may include loading and storing addresses. Such operations may be performed in four clock cycles. At 1680, write-back operations may be performed as required by the resulting operations of 1655-1675.[0143] FIGURE 17 is a block diagram of an electronic device 1700 for utilizing a processor 1710, in accordance with embodiments of the present disclosure. Electronic device 1700 may include, for example, a notebook, an ultrabook, a computer, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.[0144] Electronic device 1700 may include processor 1710 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. 
Such coupling may be accomplished by any suitable kind of bus or interface, such as an I2C bus, system management bus (SMBus), low pin count (LPC) bus, SPI, high definition audio (HDA) bus, Serial Advanced Technology Attachment (SATA) bus, USB bus (versions 1, 2, 3), or Universal Asynchronous Receiver/Transmitter (UART) bus.[0145] Such components may include, for example, a display 1724, a touch screen 1725, a touch pad 1730, a near field communications (NFC) unit 1745, a sensor hub 1740, a thermal sensor 1746, an express chipset (EC) 1735, a trusted platform module (TPM) 1738, BIOS/firmware/flash memory 1722, a digital signal processor 1760, a drive 1720 such as a solid state disk (SSD) or a hard disk drive (HDD), a wireless local area network (WLAN) unit 1750, a Bluetooth unit 1752, a wireless wide area network (WWAN) unit 1756, a global positioning system (GPS) unit, a camera 1754 such as a USB 3.0 camera, or a low power double data rate (LPDDR) memory unit 1715 implemented in, for example, the LPDDR3 standard. These components may each be implemented in any suitable manner.[0146] Furthermore, in various embodiments other components may be communicatively coupled to processor 1710 through the components discussed above. For example, an accelerometer 1741, ambient light sensor (ALS) 1742, compass 1743, and gyroscope 1744 may be communicatively coupled to sensor hub 1740. A thermal sensor 1739, fan 1737, keyboard 1746, and touch pad 1730 may be communicatively coupled to EC 1735. Speaker 1763, headphones 1764, and a microphone 1765 may be communicatively coupled to an audio unit 1764, which may in turn be communicatively coupled to DSP 1760. Audio unit 1764 may include, for example, an audio codec and a class D amplifier. A SIM card 1757 may be communicatively coupled to WWAN unit 1756.
Components such as WLAN unit 1750 and Bluetooth unit 1752, as well as WWAN unit 1756, may be implemented in a next generation form factor (NGFF).[0147] Embodiments of the present disclosure involve maintaining performance counters with dynamic frequencies. Dynamic frequencies may include frequency changes that are invisible to software, meaning that software cannot detect, observe, and/or measure the frequency changes. Invisibility may be due to a variety of factors, including but not limited to the rate at which the frequency changes or the frequency changes being controlled by a source other than software, such as a microprocessor system or FPGA. Dynamic frequencies may also include frequency changes not controlled by software. Software may request frequency changes using pre-defined performance states (P-states) and/or processor states (C-states). Dynamic frequencies may operate in parallel with such states or may replace such states, in part or in whole. Maintaining performance counters may enable software reliant on performance counters to operate correctly in situations in which the dynamic frequency is not software-visible. FIGURE 18 is an illustration of a system 1800 with a counter compensation logic unit, in accordance with embodiments of the present disclosure. [0148] System 1800 may include any suitable number and kind of elements to perform the operations described herein, including a processor, SoC, integrated circuit, or other mechanism suitable for maintaining performance counters with dynamic frequencies. Furthermore, although specific elements of system 1800 may be described herein as performing a specific function, any suitable portion of system 1800 may perform the functionality described herein. For example, system 1800 may include processor 1802. Although processor 1802 is shown and described as an example in FIGURE 18, any suitable mechanism may be used. System 1800 may include any suitable mechanism for maintaining performance counters with dynamic frequencies.
In one embodiment, such mechanisms may be implemented in hardware. In another embodiment, such mechanisms may include a memory-mapped address for the configuration. In a further embodiment, such mechanisms may include an instruction for a programmer, compiler, or firmware to configure processor 1802 to enable maintaining performance counters with dynamic frequencies.[0149] Processor 1802 may be implemented fully or in part by the elements described in FIGURES 1-17. Instructions may be received from instruction stream 1804, which may reside within a memory subsystem of system 1800. Instruction stream 1804 may be included in any suitable portion of processor 1802 or system 1800. In one embodiment, instruction stream 1804A may be included in an SoC, system, or other mechanism. In another embodiment, instruction stream 1804B may be included in a processor, integrated circuit, or other mechanism. Processor 1802 may include a front end 1806 to receive or retrieve instructions from any suitable location, including a cache or memory. Instructions may include instruction stream 1804. Front end 1806 may include a fetcher 1808 to fill the pipeline efficiently with possible instructions to execute. Front end 1806 may include an instruction decoder 1810 to decode an instruction into opcodes for execution, which may determine the meaning, side effects, data required, data consumed, and data to be produced for the instruction. A binary translator 1812 may be used to optimize or improve the efficiency of code.[0150] The decoded instructions may be passed to out-of-order or in-order execution in an execution pipeline 1816. Execution pipeline 1816 may include a rename and allocate unit 1818 for renaming instructions for out-of-order execution, and a reorder buffer (ROB) coextensive with a retirement unit 1824 so that instructions may appear to be retired in the order that they were received.
Rename and allocate unit 1818 may further rename or allocate resources for execution of instructions in parallel. Scheduler 1820 may schedule or allocate instructions to execute on execution units 1822 when inputs are available. Outputs of execution units 1822 may queue in the ROB 1824. Front end 1806 may attempt to anticipate any behaviors that will prevent instructions from executing in a sequential stream and may fetch streams of instructions that might execute. When there is, for example, a mis-prediction of a branch, the ROB may inform the front end and a different set of instructions might be executed instead. Front end 1806 may store data such as metadata for branch prediction. The instructions may be retired as if they were executed in order. Various portions of such execution pipelining may be performed by one or more cores 1814. Each core 1814 may include one or more threads or logical cores for execution.[0151] Each core 1814 may include a dynamic core frequency logic unit (DCF) 1826, also known as a dynamic frequency logic unit. Although the dynamic core frequency logic unit is described as a portion of a core, the logic unit may reside in any suitable portion of processor 1802, including but not limited to an uncore module. DCF 1826 may include circuitry to provide the ability to change the frequency of the clock for the core. Such changes may occur independent of processor 1802 or system 1800. Such changes may not be controlled by software operating on or with access to processor 1802 or system 1800. DCF 1826 may adjust the clock for any suitable purpose, including but not limited to cases in which the core is stalled, such as when the core is waiting due to a cache miss served by external memory, and cases in which saving power may be desired, such as when the core is not being used. DCF 1826 may adjust the frequency by squashing the clock. A squashed clock may represent a clock changed by a DCF ratio.
An unsquashed clock may represent a clock unchanged by a DCF ratio. A DCF ratio may be the ratio between unsquashed and squashed clocks. For example, a DCF ratio of 16:1 may represent 16 unsquashed clocks to one squashed clock. In one embodiment, DCF 1826 may operate in a global mode in which the actual frequency change is emulated. Global mode may enable the frequency change to be visible to software operating on system 1800 or with access to system 1800. In another embodiment, DCF 1826 may operate in a local mode in which the power may be saved. Software operating on system 1800 or with access to system 1800 may not have visibility of the frequency change in local mode. Although a DCF ratio of 16:1 is described, any DCF ratio may be used, including but not limited to ratios that are a power of 2, such as 8:1, 4:1, and 2:1. DCF 1826 may modify the DCF ratio and the core clock several times in a millisecond.[0152] DCF 1826 may impact the operation of one or more performance counters. A performance counter may measure an event. A performance counter logic unit may be associated with the performance event. In one embodiment, the event may be a cycle-based event, which may measure the duration of a machine state. For example, a cycle-based event may measure the number of recovery cycles or stall cycles. Although recovery cycles and stall cycles are described, any cycle-based event may be measured by a performance counter. A cycle-based event may be measured by a counter representing the number of clock cycles completed during or within the machine state. In another embodiment, the event may be an occurrence-based event, also known as a pre-defined event, which may measure the number of occurrences of an event. For example, an occurrence-based event may measure the number of cache misses or the number of micro-operations retired.
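The ratio arithmetic described above can be sketched as follows (an illustrative model only, not part of the disclosed circuitry; the function name and the use of integer division are assumptions):

```python
def squashed_clocks(unsquashed_clocks: int, dcf_ratio: int) -> int:
    """Number of squashed clocks delivered for a given DCF ratio.

    A DCF ratio of 16:1 means 16 unsquashed clocks yield one squashed
    clock, so the delivered clock count is reduced by the ratio.
    """
    return unsquashed_clocks // dcf_ratio

# With a 16:1 ratio, 64 reference clocks deliver only 4 clocks to the core.
assert squashed_clocks(64, 16) == 4
# Power-of-2 ratios such as 8:1, 4:1, and 2:1 behave the same way.
assert squashed_clocks(64, 8) == 8
assert squashed_clocks(64, 2) == 32
```

A performance counter that naively counts squashed clock edges would therefore under-report elapsed cycles by the DCF ratio, which is the error the counter compensation described below addresses.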
Although the number of cache misses and the number of micro-operations retired are described, any occurrence-based event may be measured by a performance counter. An occurrence-based event may be measured by a counter representing the number of occurrences.[0153] Cycle-based events and occurrence-based events may be represented in the cycle domain. A cycle-based event may be represented in the cycle domain by measuring the number of cycles associated with an event. An occurrence-based event may be represented in the cycle domain when in cycle domain counting mode. Cycle domain counting mode may represent an occurrence-based event as a counter accumulating the number of cycles associated with an event. For example, cycle domain counting mode may be set to count the clock cycles in which less than five micro-operations are retired, which may correspond to most clock cycles, or the clock cycles in which less than one micro-operation is retired, which may correspond to cycles in which no retirement occurred. Although multiple events in a clock cycle are described, any number of events in a clock cycle may be set for cycle domain counting mode.[0154] Each core 1814 may include a counter compensation logic unit (CCLU) 1828. CCLU 1828 may adjust one or more performance counter increments to compensate for a DCF ratio. The DCF ratio may be sent from DCF 1826 to CCLU 1828. CCLU 1828 may reduce the amount of error associated with performance counters in operation during a squashed clock. CCLU 1828 may account for the DCF ratio at the time the performance counter was determined or at the time the performance counter information was transmitted to an accumulation unit. The accumulation unit of each core 1814 may centralize counting of the performance counters. In one embodiment, the accumulation unit may be a power control unit, in part or whole.
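Cycle domain counting mode, as described above, can be sketched as a small model (illustrative only; the retirement trace, threshold parameter, and function name are hypothetical):

```python
def cycle_domain_count(uops_retired_per_cycle, threshold):
    """Count clock cycles in which fewer than `threshold` micro-ops retired.

    This transforms an occurrence-based event (retirement) into the cycle
    domain: the counter accumulates qualifying cycles, not occurrences.
    """
    return sum(1 for uops in uops_retired_per_cycle if uops < threshold)

trace = [0, 3, 0, 1, 4, 0]                 # hypothetical per-cycle retirement
assert cycle_domain_count(trace, 1) == 3   # cycles with no retirement
assert cycle_domain_count(trace, 5) == 6   # every cycle retired < 5 uops
```

Because the result is expressed in cycles, such a counter is affected by clock squashing just like a cycle-based event, which is why both paths receive compensation in method 2100 below.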
Each remote performance counter may utilize a data bus to determine whether to transmit an increment for the performance counter to the accumulation unit. The data bus may be shared among several remote performance events or counters.[0155] Performance counters may be categorized into one or more performance monitor domains based on the transmission of the performance counter information. For example, a transmission requiring two clock cycles may be logically placed into one performance monitor domain, and a transmission requiring four clock cycles may be logically placed into another performance monitor domain. CCLU 1828 may compensate for the DCF ratio for each performance monitor domain by right staging or delaying the DCF ratio. Thus, the DCF ratio from several clock cycles may be available for compensation. CCLU 1828 may compensate for various performance monitor domains to ensure proper compensation for a performance counter increment, which may be measured by a performance counter logic unit.[0156] Although various operations are described in this disclosure as performed by specific components of processor 1802, the functionality may be performed by any suitable portion of processor 1802.[0157] FIGURE 19 illustrates a logical representation of elements of a counter compensation logic unit, in accordance with embodiments of the present disclosure. Counter compensation logic unit (CCLU) 1900 may receive an increment 1902. Increment 1902 may be driven on a data bus shared among performance counter logic units. CCLU 1900 may receive a DCF ratio 1904 from DCF 1826. DCF ratio 1904 may be input into multiplexer 1916 after one or more delays 1914, which may compensate for the reporting latency of one or more performance counter domains. CCLU 1900 may receive a selection, such as event select 1906 and/or sub event select 1908. The selection may be determined by a decoder for a performance counter model specific register (MSR).
The decoder may determine or may be used to determine the performance counter domain, the type of event measured by the performance counter, and/or the mode of measurement set for the performance counter. The selection may be used to drive bus latency selection 1910. Bus latency selection 1910 may be used to select between one or more DCF ratios. Each DCF ratio may correspond to a performance counter domain. Although multiplexer 1916 may receive six inputs, any number of inputs may be used for selecting between DCF ratios.[0158] Increment 1902 and the selected DCF ratio may be inputs into AND gate 1918 and multiplier 1920. Increment 1902 may be the first input into multiplier 1920, and the logical conjunction of the DCF ratio and increment 1902 may be the other input into multiplier 1920. Event classification selection 1912 may select between the two inputs for counter 1922. In one embodiment, counter 1922 may be in counter compensation logic unit 1828. In another embodiment, counter 1922 may be in an accumulation unit. In one case, increment 1902 may indicate no increment, in which case the conjunction of the DCF ratio and increment 1902 may result in no increment. In another case, increment 1902 may indicate that the counter should be incremented. In one embodiment, the other case may indicate that the conjunction of the DCF ratio and increment 1902 may include the multiplication of the increment by the DCF ratio. In another embodiment, the increment may be zero or one, and an increment of one may indicate that the conjunction of the DCF ratio and increment 1902 may be the DCF ratio itself.[0159] FIGURE 20 illustrates a system 2000 with performance counters, in accordance with embodiments of the present disclosure. System 2000 may include processor 2002, which may include one or more performance counter clusters 2004. Processor 2002 may implement processor 1802, in part or whole. Clusters 2004 may operate independently from the DCF ratio.
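The datapath of FIGURE 19 — a delay line of DCF ratios, a latency-based selection, and a conditional scaling of the increment — might be modeled behaviorally as follows (a sketch under assumptions; the class, its method names, and the default ratio of 1 are illustrative, not the disclosed circuitry):

```python
from collections import deque

class CounterCompensation:
    """Behavioral sketch: delay line + ratio select + conditional multiply."""

    def __init__(self, max_delay: int):
        # Delay line holding the DCF ratio of recent cycles (1 = unsquashed).
        self.ratio_history = deque([1] * (max_delay + 1), maxlen=max_delay + 1)

    def tick(self, dcf_ratio: int) -> None:
        """Record the DCF ratio for the current clock cycle."""
        self.ratio_history.appendleft(dcf_ratio)

    def adjust(self, increment: int, bus_latency: int, cycle_based: bool) -> int:
        """Adjust an increment using the ratio from `bus_latency` cycles ago.

        Cycle-based (or cycle-counting) events are scaled by the delayed
        DCF ratio; occurrence-based events pass through unadjusted.
        """
        ratio = self.ratio_history[bus_latency]
        return increment * ratio if cycle_based else increment

cclu = CounterCompensation(max_delay=6)
for r in [1, 1, 16, 16, 1]:        # DCF ratio over five cycles, newest last
    cclu.tick(r)
# An increment reported with a 2-cycle bus latency sees the ratio 16.
assert cclu.adjust(1, bus_latency=2, cycle_based=True) == 16
assert cclu.adjust(1, bus_latency=2, cycle_based=False) == 1
```

Selecting the ratio from the past, rather than the ratio at arrival time, is the point of the delay line: the increment is scaled by the clock conditions in effect when the event was measured at the remote end point.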
Each performance counter cluster 2004 may include one or more end points 2008. Although six clusters and eleven end points are shown, any suitable number of clusters with any suitable number of end points may be used. In one embodiment, an end point may be a performance counter logic unit. End point 2008 may measure an event at a remote location within a cluster 2004. End point 2008 may interface with a shared data bus 2010 to transmit increments to performance counters. In one embodiment, the shared data bus is also known as a crossbar. The increments may be transmitted to an accumulation unit, which may be within a cluster 2004, such as cluster 2004F. In one embodiment, each core in processor 2002 may have an accumulation unit. In another embodiment, processor 2002 may have one accumulation unit, and one or more counter compensation logic units 2006. End point 2008 may be categorized or classified according to a performance counter domain. A performance counter domain may represent the number of cycles required for a performance counter increment to propagate from an end point 2008 to a counter compensation logic unit 2006. Counter compensation logic unit 2006 may adjust the increment before it is added to the counter to account for a DCF ratio associated with dynamic frequencies. Counter compensation logic unit 2006 may be implemented, in part or in whole, by counter compensation logic unit 1900.[0160] As an example, cluster 2004A may include two end points 2008E and 2008F. End point 2008E may be part of a performance counter domain with five delays, corresponding to shared data buses 2010D, 2010H, 2010J, 2010K, and 2010L. End point 2008F may be part of a performance counter domain with six delays, corresponding to shared data buses 2010E, 2010F, 2010G, 2010J, 2010K, and 2010L. A delay may represent any number of clock cycles, including but not limited to one clock cycle. Each delay may represent different numbers of clock cycles.
A performance counter domain with two delays, for instance, may not have twice the delay of a performance counter domain with one delay.[0161] As another example, cluster 2004C may include one end point 2008A, which may be part of a performance counter domain with three delays, corresponding to shared data buses 2010A, 2010K, and 2010L. Counter compensation logic unit 2006 may select the DCF ratio from three delays in the past to ensure the correct DCF ratio is selected for events monitored at end point 2008A, and may select the DCF ratio from six delays in the past to ensure the correct DCF ratio is selected for events monitored at end point 2008F.[0162] As a further example, cluster 2004B may include three end points 2008B, 2008C, and 2008D. End points 2008B and 2008C may be in the same performance counter domain with five delays, corresponding to shared data buses 2010B/C, 2010H, 2010J, 2010K, and 2010L. Accordingly, counter compensation logic unit 2006 may select the same DCF ratio for end points 2008B and 2008C.[0163] Each cluster 2004 and/or end point 2008 may be in any suitable portion of processor 2002. Although shared data buses are shown between end points, each end point may be communicatively coupled to the shared data bus, which may provide a continuous shared data bus between end points. Such a bus may receive incremented performance counters from more than one remote location.[0164] FIGURE 21 is a diagram of operation of a method for maintaining performance counters with dynamic frequencies, in accordance with embodiments of the present disclosure. Method 2100 may be implemented by any of the elements shown in FIGURES 1-20. Method 2100 may be initiated by any suitable criteria and may initiate operation at any suitable point. In one embodiment, method 2100 may initiate operation at 2105. Method 2100 may include greater or fewer steps than those illustrated.
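The delay counts in the FIGURE 20 examples above can be tabulated in a short sketch (end point and bus identifiers follow the examples; the dictionary representation itself is illustrative, not part of the disclosure):

```python
# Each end point's delay is the number of shared-bus segments its increment
# crosses on the way to the counter compensation logic unit.
bus_path = {
    "2008A": ["2010A", "2010K", "2010L"],
    "2008B": ["2010B/C", "2010H", "2010J", "2010K", "2010L"],
    "2008C": ["2010B/C", "2010H", "2010J", "2010K", "2010L"],
    "2008E": ["2010D", "2010H", "2010J", "2010K", "2010L"],
    "2008F": ["2010E", "2010F", "2010G", "2010J", "2010K", "2010L"],
}
domain_delay = {end_point: len(path) for end_point, path in bus_path.items()}

assert domain_delay["2008A"] == 3   # three delays, per paragraph [0161]
assert domain_delay["2008F"] == 6   # six delays, per paragraph [0160]
# End points 2008B and 2008C share a domain, so the counter compensation
# logic unit selects the same delayed DCF ratio for both.
assert domain_delay["2008B"] == domain_delay["2008C"] == 5
```

Grouping end points by delay in this way is what lets one delay line of DCF ratios serve many end points: the domain's delay is simply the index into that history.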
Moreover, method 2100 may execute its steps in an order different than those illustrated below. Method 2100 may terminate at any suitable step. Furthermore, method 2100 may repeat operation at any suitable step. Method 2100 may perform any of its steps in parallel with other steps of method 2100, or in other methods. Method 2100 may perform any of its steps on any element of data in parallel with other elements of data, such that method 2100 operates in a vectorized way.[0165] At 2105, in one embodiment the type of event to monitor and/or the mode of monitoring may be configured. The configuration may occur in a performance monitor model specific register (MSR). In one embodiment, the type of event may be a cycle-based event, which may count the number of cycles in which the system is in a particular machine state. In another embodiment, the type of event may be an occurrence-based event, which may count the number of occurrences of the event on the system. At 2110, in one embodiment a performance counter may be reset. A reset may serve to clear the performance counter accumulation or to store the starting accumulation of a performance counter, which may be used to calculate an accumulation for a performance counter. At 2115, in one embodiment the processor may execute for at least one cycle. During execution, the event configured for monitoring may be measured and relayed as an increment for the performance counter. The measurement of the event may occur at an end point or a performance counter logic unit, which may be remotely located within a processor within a cluster of performance monitors. The relay of the increment may occur over a shared data bus.[0166] At 2120, in one embodiment, it may be determined whether the power control unit (PCU) squashed the clock. In the alternative, it may be determined whether the dynamic core frequency (DCF) unit squashed the clock. Squashing the clock may involve reducing the frequency of the clock.
A DCF ratio may indicate to a counter compensation logic unit whether the clock was squashed, or the DCF unit, also known as a dynamic frequency unit, may determine whether the clock was squashed over a duration of time. In one embodiment, the DCF unit may operate in a global mode in which a frequency decrease, which may be caused by the squashing of the clock, is visible to software. In another embodiment, the DCF unit may operate in a local mode in which a frequency decrease, which may be caused by the squashing of the clock, is invisible to software. At 2125, if the clock was not squashed, method 2100 may proceed to method step 2160. Otherwise, the clock was squashed and method 2100 may proceed to method step 2130. At 2130, in one embodiment, it may be determined whether a cycle-based event was configured. The configuration may correspond to one or more settings in a performance counter model specific register (MSR). A cycle-based event may refer to an event measured by the duration of a machine state. The duration may be represented by the number of clock cycles in the machine state. A performance counter associated with a cycle-based event may require compensation for squashed clocks. At 2135, if the event is cycle based, method 2100 may proceed to method step 2150. Otherwise, the event is not cycle based, and method 2100 may proceed to method step 2140.[0167] At 2140, in one embodiment, it may be determined whether a cycle-counting mode was configured. The configuration may correspond to one or more settings in a performance counter model specific register (MSR). An event in cycle-counting mode may refer to a measurement defined by when to count a cycle. For example, any cycle in which no micro-operations were retired may increment the performance counter. Cycle-counting mode may be used to transform an occurrence-based event into the cycle domain, measured by a number of cycles.
A performance counter associated with an event in cycle-counting mode may require compensation for squashed clocks. At 2145, if the configuration is not in cycle-counting mode, method 2100 may proceed to method step 2160. Otherwise, the configuration is in cycle-counting mode and method 2100 may proceed to method step 2150.[0168] At 2150, in one embodiment, the DCF ratio may be selected based on the performance counter domain. The performance counter domain may have a common reporting latency to transmit or send a performance counter increment from an end point or performance counter logic unit to a central location, which may be a counter compensation logic unit. The transmission may occur over a shared data bus. The DCF ratio selected may be different than the DCF ratio at the time the central location receives the performance counter increment. At 2155, in one embodiment, the increment to the performance counter may be adjusted using the selected DCF ratio. The adjustment may be based on the selected DCF ratio and the performance counter increment. In one embodiment, the DCF ratio may be multiplied by the performance counter increment. In another embodiment, the DCF ratio may adjust or compensate the performance counter increment without a multiply operation, including but not limited to a shift operation, a divide operation, and/or an operation to support DCF ratios that are not powers of 2. At 2160, the performance counter may be updated with the adjusted or unadjusted increment. The performance counter may be in the counter compensation logic unit, an accumulation unit, or in a power control unit. Method 2100 may optionally repeat or terminate.[0169] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
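The decision flow of steps 2120 through 2160 can be summarized in a sketch (illustrative only; the function and parameter names are assumptions, and the multiply stands in for any of the compensation operations described above):

```python
def adjust_increment(increment, clock_squashed, cycle_based,
                     cycle_counting_mode, select_dcf_ratio):
    """Compensate an increment only when the clock was squashed and the
    event is measured in the cycle domain (steps 2120-2160 of method 2100)."""
    if not clock_squashed:                      # step 2125: no squashing
        return increment                        # step 2160: unadjusted
    if cycle_based or cycle_counting_mode:      # steps 2130/2140
        dcf_ratio = select_dcf_ratio()          # step 2150: domain-based select
        return increment * dcf_ratio            # step 2155: adjust
    return increment                            # occurrence event: unadjusted

# With a 4:1 squashed clock, a cycle-based increment of 1 becomes 4.
assert adjust_increment(1, True, True, False, lambda: 4) == 4
# An occurrence-based event not in cycle-counting mode is left alone.
assert adjust_increment(1, True, False, False, lambda: 4) == 1
# With an unsquashed clock, no adjustment is made for any event type.
assert adjust_increment(1, False, True, False, lambda: 4) == 1
```

Passing the ratio as a callable mirrors step 2150: the ratio is looked up per domain at adjustment time rather than fixed in advance.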
Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[0170] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system may include any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[0171] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[0172] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[0173] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0174] Accordingly, embodiments of the disclosure may also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.[0175] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part-on and part-off processor.[0176] Thus, techniques for performing one or more instructions according to at least one embodiment are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on other embodiments, and that such embodiments are not to be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.[0177] In some embodiments of the present disclosure, a processor may include a front end and a core. The front end may include circuitry to decode an instruction from an instruction stream. The core may include circuitry to process the instruction, an execution pipeline, a dynamic core frequency logic unit, and a counter compensation logic unit. The execution pipeline may include circuitry to execute the instruction. The dynamic core frequency logic unit may include circuitry to squash a clock of the core to reduce a core frequency. The clock and/or core frequency may be invisible to software and may be controlled by the dynamic core frequency logic unit rather than by software. The counter compensation logic unit may include circuitry to adjust a performance counter increment. The performance counter increment may be associated with a performance counter.
The adjustment may be based on at least the dynamic core frequency logic unit circuitry to squash the clock of the core to reduce the core frequency.[0178] In combination with any of the above embodiments, in an embodiment the counter compensation logic unit may include circuitry to determine whether the performance counter monitors an event measured in cycles. The circuitry to adjust the performance counter may be based on the determination that the performance counter monitors an event measured in cycles. In combination with any of the above embodiments, in an embodiment the counter compensation logic unit may include circuitry to determine whether a mode of measurement for the performance counter may be set to measure in cycles. The circuitry to adjust the performance counter may be based on the determination that the mode of measurement for the performance counter is set to measure in cycles. In combination with any of the above embodiments, in an embodiment the circuitry to adjust the performance counter may include circuitry to select a dynamic core frequency ratio and to generate an adjusted performance counter increment based on the performance counter increment and the selected dynamic core frequency ratio. The dynamic core frequency ratio may represent a number of unsquashed clocks to squashed clocks. In combination with any of the above embodiments, in an embodiment the circuitry to select the dynamic core frequency ratio may be based on a latency associated with circuitry to report the performance counter increment. The selected dynamic core frequency ratio may correspond to a dynamic core frequency ratio before the performance counter increment is reported. In combination with any of the above embodiments, in an embodiment the circuitry to generate an adjusted performance counter increment may include circuitry to multiply the performance counter increment by the selected dynamic core frequency ratio. 
In combination with any of the above embodiments, in an embodiment the processor may include a power control unit. The power control unit may include circuitry to increment a performance counter based on the performance counter increment.[0179] In some of the present embodiments, a method may include processing an instruction for a cycle of operation, squashing a clock to reduce a frequency, and adjusting a performance counter increment associated with a performance counter. The adjustment may be based on squashing the clock to reduce the frequency. The clock and/or frequency may be invisible to software and may be controlled by squashing rather than by software. In combination with any of the above embodiments, in an embodiment the method may include determining whether the performance counter monitors an event measured in cycles and adjusting the performance counter increment may be based on the determination that the performance counter monitors an event measured in cycles. In combination with any of the above embodiments, in an embodiment the method may include determining whether a mode of measurement for the performance counter may be set to measure in cycles and adjusting the performance counter increment may be based on the determination that the performance counter is set to measure in cycles. In combination with any of the above embodiments, in an embodiment the method may include adjusting the performance counter, which may include selecting a dynamic frequency ratio and generating an adjusted performance counter increment. The adjusted performance counter increment may be based on the performance counter increment and the selected dynamic frequency ratio. The dynamic frequency ratio may represent a number of unsquashed clocks to squashed clocks. In combination with any of the above embodiments, in an embodiment the method may include selecting the dynamic frequency ratio, which may be based on a latency.
The latency may be associated with reporting the performance counter increment. The selected dynamic frequency ratio may correspond to a dynamic frequency ratio before the performance counter increment is reported. In combination with any of the above embodiments, in an embodiment the method may include generating the adjusted performance counter increment by multiplying the performance counter increment by the selected dynamic frequency ratio.[0180] In some embodiments of the present disclosure, a system may include a front end and a core. The front end may include circuitry to decode an instruction from an instruction stream. The core may include circuitry to process the instruction, an execution pipeline, a dynamic core frequency logic unit, and a counter compensation logic unit. The execution pipeline may include circuitry to execute the instruction. The dynamic core frequency logic unit may include circuitry to squash a clock of the core to reduce a core frequency. The clock and/or core frequency may be invisible to software and may be controlled by the dynamic core frequency logic unit rather than by software. The counter compensation logic unit may include circuitry to adjust a performance counter increment. The performance counter increment may be associated with a performance counter. The adjustment may be based on at least the dynamic core frequency logic unit circuitry to squash the clock of the core to reduce the core frequency.[0181] In combination with any of the above embodiments, in an embodiment the counter compensation logic unit may include circuitry to determine whether the performance counter monitors an event measured in cycles. The circuitry to adjust the performance counter may be based on the determination that the performance counter monitors an event measured in cycles. 
In combination with any of the above embodiments, in an embodiment the counter compensation logic unit may include circuitry to determine whether a mode of measurement for the performance counter may be set to measure in cycles. The circuitry to adjust the performance counter may be based on the determination that the mode of measurement for the performance counter is set to measure in cycles. In combination with any of the above embodiments, in an embodiment the circuitry to adjust the performance counter may include circuitry to select a dynamic core frequency ratio and to generate an adjusted performance counter increment based on the performance counter increment and the selected dynamic core frequency ratio. The dynamic core frequency ratio may represent a number of unsquashed clocks to squashed clocks. In combination with any of the above embodiments, in an embodiment the circuitry to select the dynamic core frequency ratio may be based on a latency associated with circuitry to report the performance counter increment. The selected dynamic core frequency ratio may correspond to a dynamic core frequency ratio before the performance counter increment is reported. In combination with any of the above embodiments, in an embodiment the circuitry to generate an adjusted performance counter increment may include circuitry to multiply the performance counter increment by the selected dynamic core frequency ratio. In combination with any of the above embodiments, in an embodiment the system may include a power control unit. The power control unit may include circuitry to increment a performance counter based on the performance counter increment. [0182] In some of the present embodiments, an apparatus may include a means for processing an instruction for a cycle of operation, a means for squashing a clock to reduce a frequency, and a means for adjusting a performance counter increment associated with a performance counter.
The clock and/or frequency may be invisible to software and may be controlled by the means for squashing the clock rather than by software. The means for adjusting may be based on the means for squashing the clock to reduce the frequency. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for determining whether the performance counter monitors an event measured in cycles and the means for adjusting the performance counter increment may be based on the means for determining that the performance counter monitors an event measured in cycles. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for determining whether a mode of measurement for the performance counter may be set to measure in cycles and the means for adjusting the performance counter increment may be based on the means for determining that the performance counter is set to measure in cycles. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for adjusting the performance counter, which may include a means for selecting a dynamic frequency ratio and a means for generating an adjusted performance counter increment. The adjusted performance counter increment may be based on the performance counter increment and the selected dynamic frequency ratio. The dynamic frequency ratio may represent a number of unsquashed clocks to squashed clocks. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for selecting the dynamic frequency ratio, which may be based on a latency. The latency may be associated with means for reporting the performance counter increment. The selected dynamic frequency ratio may correspond to a dynamic frequency ratio before the performance counter increment is reported. 
In combination with any of the above embodiments, in an embodiment the apparatus may include the means for generating the adjusted performance counter increment using a means for multiplying the performance counter increment by the selected dynamic frequency ratio. |
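The adjustment described in these embodiments (multiplying a cycle-based performance counter increment by the dynamic frequency ratio of unsquashed to squashed clocks) can be sketched as follows. The function name and interface are illustrative assumptions, not taken from the disclosure:

```python
from fractions import Fraction

def adjusted_increment(raw_increment, unsquashed_clocks, squashed_clocks,
                       counts_cycles=True):
    # Illustrative sketch (names are assumptions): scale a cycle-based
    # performance counter increment by the dynamic frequency ratio, i.e.
    # the number of unsquashed clocks to squashed clocks, so that clock
    # squashing stays invisible to software reading the counter.
    if not counts_cycles:
        # Counters whose mode of measurement is not cycles are unadjusted.
        return raw_increment
    dynamic_frequency_ratio = Fraction(unsquashed_clocks, squashed_clocks)
    return raw_increment * dynamic_frequency_ratio
```

For example, if the clock was squashed to half rate (4 unsquashed clocks per 2 squashed clocks), a raw increment of 10 cycles would be reported as 20.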
The present application discloses a Baum-Welch accelerator. A processor package includes at least one Baum-Welch (BW) core. The BW core includes a likelihood value (LV) generator, an emission probability (EP) generator, and a transition probability (TP) generator. The LV generator generates a forward value (FV) and a backward value (BV) for an observation set. The EP generator generates an EP for the observation set. The TP generator generates a TP for the observation set. Further, the BW core includes a lookup table that stores preconfigured transition*emission (T*E) values for use by the LV generator in generating the FV and BV. Other embodiments are described and claimed.
1. A processor package comprising: at least one Baum-Welch (BW) core; a likelihood value (LV) generator in the BW core, the LV generator to generate a forward value (FV) and a backward value (BV) for an observation set; an emission probability (EP) generator in the BW core, the EP generator to generate an EP for the observation set; a transition probability (TP) generator in the BW core to generate a TP for the set of observations; and a transition*emission lookup table (TELUT) store in the BW core to store preconfigured transition*emission (T*E) values for use by the LV generator in generating the FV and BV.

2. The processor package of claim 1, wherein the TELUT store enables the TP generator to complete an iteration of the Baum-Welch algorithm without computing T*E values for at least some of the observations in the set of observations.

3. The processor package of claim 1, further comprising: at least a first likelihood and transition probability (LVTP) engine and a second LVTP engine in the BW core, wherein: the first LVTP engine includes a first LV generator and a first TELUT store, the first LV generator to generate FVs for a first subset of observations from the set of observations; the second LVTP engine includes a second LV generator and a second TELUT store, the second LV generator to generate FVs for a second subset of observations from the set of observations; the first LVTP engine and the second LVTP engine are configured to engage in generating the FVs in parallel; and the first LV generator and the second LV generator are to use T*E values from the first TELUT store and from the second TELUT store, respectively, in generating the FVs and BVs.

4. The processor package of claim 1, further comprising: a control portion in the BW core to compare the FVs to a threshold and to discard FVs with values below the threshold.

5. The processor package of claim 4, wherein the control portion is further configured to: sort the FVs during a first time step; compare the FVs to a threshold probability value; and discard, during a second time step, FVs with values below the threshold.

6. The processor package of claim 4, wherein the control portion is further configured to: sort the FVs during a first time step; determine a threshold probability value for classifying a threshold amount of FVs to be retained; and discard, during a second time step, FVs with values below the threshold probability value.

7. The processor package of claim 1, further comprising: a global event controller in communication with the BW core to configure the TELUT store with predetermined T*E values before the LV generator begins to generate the FVs and BVs.

8. The processor package of claim 7, wherein the TELUT store is to store at least one TELUT comprising 36 entries.

9. The processor package of claim 1, wherein the BW core is configured to generate at least two types of probability values in parallel from the group consisting of FV, BV, EP, and TP.

10. The processor package of claim 9, wherein the EP generator is to generate at least one EP for the observation set before the LV generator has finished generating the BVs.

11. A data processing system comprising: a host processor; random access memory (RAM) in communication with the host processor; at least one Baum-Welch (BW) core in communication with the host processor; a likelihood value (LV) generator in the BW core, the LV generator to generate a forward value (FV) and a backward value (BV) for an observation set; an emission probability (EP) generator in the BW core, the EP generator to generate an EP for the observation set; a transition probability (TP) generator in the BW core to generate a TP for the set of observations; and a transition*emission lookup table (TELUT) store in the BW core to store a TELUT including preconfigured transition*emission (T*E) values for use by the LV generator in generating the FV and BV.

12. The data processing system of claim 11, wherein the TELUT store enables the TP generator to complete an iteration of the Baum-Welch algorithm without computing T*E values for at least some of the observations in the set of observations.

13. The data processing system of claim 11, further comprising: at least a first likelihood and transition probability (LVTP) engine and a second LVTP engine in the BW core, wherein: the first LVTP engine includes a first LV generator and a first TELUT store, the first LV generator to generate FVs for a first subset of observations from the set of observations; the second LVTP engine includes a second LV generator and a second TELUT store, the second LV generator to generate FVs for a second subset of observations from the set of observations; the first LVTP engine and the second LVTP engine are configured to engage in generating the FVs in parallel; and the first LV generator and the second LV generator are to use T*E values from the first TELUT store and from the second TELUT store, respectively, in generating the FVs and BVs.

14. The data processing system of claim 11, further comprising: a control portion in the BW core to compare the FVs to a threshold and to discard FVs with values below the threshold.

15. The data processing system of claim 14, wherein the control portion is further configured to: sort the FVs during a first time step; compare the FVs to a threshold probability value; and discard, during a second time step, FVs with values below the threshold.

16. The data processing system of claim 14, wherein the control portion is further configured to: sort the FVs during a first time step; determine a threshold probability value for classifying a threshold amount of FVs to be retained; and discard, during a second time step, FVs with values below the threshold probability value.

17. The data processing system of claim 11, further comprising: a global event controller in communication with the BW core, the global event controller configured to configure the TELUT store with predetermined T*E values before the LV generator begins to generate the FVs and BVs.

18. The data processing system of claim 11, wherein the BW core is configured to generate at least two types of probability values in parallel from the group consisting of FV, BV, EP, and TP.

19. A method for generating emission probabilities and transition probabilities for a set of observations, the method comprising: executing instructions by a host core in a data processing system that includes a Baum-Welch (BW) subsystem, the BW subsystem including at least one BW core that includes a likelihood value (LV) generator, an emission probability (EP) generator, a transition probability (TP) generator, and a transition*emission lookup table (TELUT) store, the instructions to cause the BW subsystem to: obtain preconfigured transition*emission (T*E) values from the TELUT store; generate forward values (FVs) and backward values (BVs) for an observation set using the LV generator and the preconfigured T*E values from the TELUT store; generate an EP for the set of observations using the EP generator; and generate a TP for the set of observations using the TP generator.

20. The method of claim 19, further comprising: completing, by the TP generator, an iteration of the BW algorithm without computing T*E values for at least some of the observations in the set of observations.

21. The method of claim 19, further comprising: generating FVs for a first subset of observations from the set of observations by a first likelihood and transition probability (LVTP) generator in a first LVTP engine in the BW core, wherein the first LVTP engine includes a first TELUT store; and generating FVs for a second subset of observations from the set of observations by a second LVTP generator in a second LVTP engine in the BW core, wherein the second LVTP engine includes a second TELUT store; wherein the first LVTP engine and the second LVTP engine are configured to engage in generating the FVs in parallel; and wherein the first LV generator and the second LV generator are to use T*E values from the first TELUT store and from the second TELUT store, respectively, in generating the FVs and BVs.

22. The method of claim 19, further comprising: comparing, by a control portion in the BW core, the FVs to a threshold and discarding FVs with values below the threshold.

23. The method of claim 22, further comprising: sorting the FVs, by the control portion, during a first time step; determining a threshold probability value for classifying a threshold amount of FVs to be retained; and discarding, during a second time step, FVs with values below the threshold probability value.

24. The method of claim 19, further comprising: configuring the TELUT store, by a global event controller in communication with the BW core, with predetermined T*E values before the LV generator starts generating the FVs and BVs.

25. A machine-readable medium comprising instructions that, when executed by a machine, cause the machine to perform the method of any of claims 19-24.
Baum-Welch Accelerator

Technical Field

The present disclosure relates generally to data processing systems, and in particular to processing accelerators for facilitating execution of the Baum-Welch algorithm.

Background

The Baum-Welch algorithm is a method for estimating the values of unknown parameters of a Hidden Markov Model (HMM). The Baum-Welch algorithm is commonly used for a wide range of applications, including speech recognition, cryptanalysis, database search engines, and more. It is also used to solve learning problems associated with HMMs. Software in a data processing system may use a general-purpose processing core in a processing unit to execute the Baum-Welch algorithm. For example, a data processing system may use processing cores in a central processing unit (CPU) or graphics processing unit (GPU), such as a general-purpose GPU (GPGPU), to execute the various stages of the Baum-Welch algorithm. However, the Baum-Welch algorithm requires multiple iterations of computationally expensive dynamic programming algorithms, including the so-called "forward" algorithm and the so-called "backward" algorithm. Therefore, the Baum-Welch algorithm can have a high execution time and can impose considerable performance overhead on applications that use it. In a typical case, the forward and backward stages of the Baum-Welch algorithm require a large number of multiply-accumulate (MAC) operations. Furthermore, the stages of the algorithm for updating the emission probabilities (EP) and transition probabilities (TP) may require up to twice as many multiplication operations as the backward stage. The EP and TP stages may also require a large number of division operations. Each stage of the Baum-Welch algorithm (except the forward stage) depends on the output from the previous stage. Therefore, it is very difficult to execute these stages in parallel.
Also, the growth in the amount of incoming data leads to an increase in storage and bandwidth requirements. Additionally, the performance overhead is exacerbated by multiple iterations over the same input for training. Therefore, even though applications using the Baum-Welch algorithm have the advantage of accuracy, they also have the disadvantage of high execution time. The time required to execute the Baum-Welch algorithm itself is often a major factor in the high execution time of such applications.

Brief Description of the Drawings

Features and advantages of the present invention will become apparent from the appended claims, the following detailed description of one or more example embodiments, and the accompanying drawings, wherein:

FIG. 1 is a block diagram of an example embodiment of a data processing system including a Baum-Welch accelerator.
FIG. 2 is a block diagram illustrating an example EP matrix, example forward and backward value vectors, and example output data for an example scenario.
FIG. 3 is a block diagram illustrating an example TP matrix for this example scenario.
FIG. 4 is a block diagram illustrating an example slice and associated data structures.
FIG. 5 presents a flowchart of an example embodiment of a process for generating maximization parameters for an HMM using the Baum-Welch accelerator of FIG. 1.
FIG. 6 is a block diagram with further details regarding communications within the data processing system of FIG. 1.
FIG. 7 presents a flowchart illustrating parallel and other operations within the data processing system of FIG. 1.
FIG. 8 is a block diagram with further details regarding the Baum-Welch core from FIG. 1.
FIG. 9 is a block diagram with further details regarding the computational portion of the Baum-Welch core from FIG. 8.
FIG. 10 is a block diagram with further details regarding the likelihood and transition probability engine from FIG. 9.
FIG. 11 is a block diagram with further details regarding the EP generator from FIG.
8.
FIG. 12 is a block diagram with further details regarding the emission divide pipeline from FIG. 11.
FIG. 13 is a block diagram with details regarding the likelihood and transition probability engine from an alternative embodiment.
FIG. 14 is a block diagram of a system in accordance with one or more embodiments.
FIG. 15 is a block diagram of a first more specific exemplary system in accordance with one or more embodiments.
FIG. 16 is a block diagram of a second more specific exemplary system in accordance with one or more embodiments.
FIG. 17 is a block diagram of a system on a chip in accordance with one or more embodiments.

Detailed Description

The present disclosure describes a processing accelerator for executing the Baum-Welch algorithm. Such accelerators may be referred to as "Baum-Welch (BW) accelerators". As described in more detail below, the BW accelerator may include features that enable the accelerator to achieve parallelism across stages of the algorithm (e.g., the forward and backward stages). The BW accelerator may also include the following features: (a) features for reducing or minimizing memory bandwidth and storage requirements, and (b) features for reducing computational overhead relative to the use of general-purpose processing cores of a general-purpose data processing system for executing the Baum-Welch algorithm. Nonetheless, the BW accelerator can maintain high accuracy. In one embodiment, the BW accelerator uses novel hardware optimizations to reduce and speed up overall computational operations, including 1) features for enabling parallelization of aspects of the Baum-Welch algorithm (such as forward and backward value computation), and 2) features for caching state that may be reused.
This disclosure also describes one or more new instructions for driving the BW accelerator. According to one embodiment, a data processing system may use the BW accelerator to perform the Baum-Welch algorithm in a wide range of domains (e.g., speech recognition, cryptanalysis, database searches, mitigation of learning problems associated with HMMs, etc.). For example, data processing systems can use BW accelerators to alleviate learning problems for HMMs used in conjunction with deep neural networks (DNNs).

Baum-Welch algorithm:

The Baum-Welch algorithm is a type of expectation maximization (EM) algorithm. Thus, the Baum-Welch algorithm is a method for solving the expectation maximization problem. Specifically, the Baum-Welch algorithm is an iterative method for estimating parameters in a statistical model involving unobserved variables (which may also be referred to as "latent variables" or "hidden variables"). More specifically, the Baum-Welch algorithm is a method for finding the most likely correct parameter values ("maximum likelihood parameters") based on observed data ("observations"). In other words, the Baum-Welch algorithm produces maximum likelihood estimates for unknown parameters. The Baum-Welch algorithm does this by maximizing the marginal likelihood of the observed data. Each iteration of the Baum-Welch algorithm involves an expectation phase followed by a maximization phase. In the expectation phase, the algorithm calculates likelihood values based on observations. Specifically, in the expectation phase, the algorithm performs a forward computation phase to compute forward probability values and a backward computation phase to compute backward probability values, as described in more detail below.
For the purposes of this disclosure, the forward and backward probability values from the expectation phase may be referred to as forward values and backward values, respectively, and collectively as "forward-backward (F-B) values" or "likelihood values" (LVs). In the maximization phase, the algorithm uses those likelihood values to update the parameters of the model so as to maximize the likelihood of the observations under these posterior (i.e., updated) parameters. The parameters that are updated in the maximization phase may be referred to as "maximization parameters" and include the transition probabilities (TP) and the emission probabilities (EP). Specifically, TP may be stored in TP vectors in a TP matrix, and EP may be stored in EP vectors in an EP matrix. In other words, the Baum-Welch algorithm takes a set "S" of input values and uses those input values as observations to update the maximization parameters (i.e., TP and EP) of a statistical model "G(V,A)", where V is a set of vertices or nodes, and A is a set of directed edges or transitions. Specifically, the algorithm performs expectation maximization based on S in the following three stages: 1) forward computation, 2) backward computation, and 3) maximization parameter update.

Forward calculation:

In the forward computation phase, the algorithm processes the observations (or "elements") in S in order from the first element "S[1]" to the last element "S[ns]", where "ns" is the length of S (i.e., the number of elements in S). For each step "t" in the process, the algorithm uses the maximization parameters to compute the set of forward values "Ft(i)" for the event that element "S[t]" is emitted in state "vi", given that all previous inputs S[1] to S[t-1] were processed by following an unknown path leading to state vi.
Thus, Ft(i) represents the likelihood of such an event occurring for a given element S[t] and state vi. For example, for the first step in the process, the algorithm uses the value of the element numbered 1 and the maximization parameters to compute F2(i), where F2(i) is a set indicating, for each state i in V, how high the probability is that the element numbered 2 will have that state. Specifically, in one embodiment, the forward calculation stage calculates Ft(i) according to Equation 1 below.

Backward calculation:

The backward computation stage uses the maximization parameters to process from the last element of the input S (i.e., S[ns]) to the first element of the input S (i.e., S[1]). The goal of the backward computation is similar to that of the forward computation, except that it processes states and inputs in reverse to find the backward values. The set of backward values "Bt(i)" denotes the likelihood that the element S[t] is in state vi, given that all further inputs S[t+1] to S[ns] are processed by following an unknown path backward leading to state vi (i.e., taking reverse transitions). In one embodiment, the backward calculation stage calculates Bt(i) according to Equation 2 below.

Maximization parameter update:

In the maximization parameter update stage, the algorithm uses the likelihood values (i.e., the forward and backward values) computed in the first two stages as expectations to update the EP and TP in G(V,A), such that the posterior probabilities will maximize the likelihood when the observation is S. Thus, the Baum-Welch algorithm uses the likelihood values from the expectation phase as statistics to update the maximization parameters. Specifically, in one embodiment, the algorithm updates TP according to Equation 3 below and updates EP using Equation 4 below. In Equation 4, [S[t]=X] is a condition variable that returns 1 if the condition is satisfied (i.e., S[t]=X, where X is an element in Σ) and 0 otherwise. In addition, the stages are usually iterated.
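Equations 1-4 referenced above do not survive in this text. Assuming they take the standard Baum-Welch forms, one full expectation-maximization iteration (forward values, backward values, and the TP/EP updates, with the [S[t]=X] condition variable realized as a boolean mask) can be sketched in NumPy. This is an unscaled software sketch for clarity, only numerically safe for short observation sets, and is not the accelerator's hardware implementation:

```python
import numpy as np

def baum_welch_iteration(S, T, E, init):
    """One Baum-Welch iteration, assuming the standard forms of Equations 1-4.

    S    : observation indices, length ns
    T    : (n, n) transition probabilities, T[i, j] = P(state j | state i)
    E    : (n, m) emission probabilities,   E[i, x] = P(symbol x | state i)
    init : (n,) initial state distribution
    Returns the posterior (T, E) maximization parameters.
    """
    S = np.asarray(S)
    ns, n = len(S), T.shape[0]

    # Forward values (Equation 1): F[t, i] = likelihood of S[1..t] ending in state i.
    F = np.zeros((ns, n))
    F[0] = init * E[:, S[0]]
    for t in range(1, ns):
        F[t] = (F[t - 1] @ T) * E[:, S[t]]

    # Backward values (Equation 2): B[t, i] = likelihood of S[t+1..ns] given state i.
    B = np.zeros((ns, n))
    B[-1] = 1.0
    for t in range(ns - 2, -1, -1):
        B[t] = T @ (E[:, S[t + 1]] * B[t + 1])

    # Expected state occupancies and expected transition counts.
    gamma = F * B
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((n, n))
    for t in range(ns - 1):
        num = F[t, :, None] * T * (E[:, S[t + 1]] * B[t + 1])[None, :]
        xi += num / num.sum()

    # Maximization updates (Equations 3 and 4); [S[t]=X] becomes a boolean mask.
    T_new = xi / gamma[:-1].sum(axis=0)[:, None]
    E_new = np.stack([gamma[S == x].sum(axis=0) for x in range(E.shape[1])], axis=1)
    E_new /= gamma.sum(axis=0)[:, None]
    return T_new, E_new
```

Each returned row remains a probability distribution, which is a convenient sanity check when iterating the updates to convergence.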
Specifically, the Baum-Welch algorithm typically involves using the posterior maximization parameters (i.e., the posterior TP and EP) from one iteration in the next iteration, where these posterior maximization parameters are used to perform a new set of forward and backward computations to generate a new set of likelihood values, and where those new likelihood values are used to generate a new set of maximization parameters. Iterations can be performed on the same input until the probabilities converge to the point where the updates to the maximization parameters become negligible.

BW Accelerator:

As indicated above, the present disclosure describes a processing accelerator for executing the Baum-Welch algorithm, and such an accelerator may be referred to as a "BW accelerator." As described in more detail below, a BW accelerator may include one or more processing cores designed to efficiently and effectively perform some or all aspects of the Baum-Welch algorithm. Such processing cores may be referred to as "BW cores" and may be implemented as hardware circuitry. FIG. 1 is a block diagram of an example embodiment of a data processing system 10 that includes a BW accelerator 41. Specifically, for illustrative purposes, FIG. 1 depicts a hypothetical data processing system 10 including a processor package 12 that includes a host core 20, a system agent 30, and a BW accelerator 41, with BW cores 40A-40B residing in the BW accelerator 41, and with a global event controller 52 and other components residing in the system agent 30. Each of those components (i.e., host core 20, system agent 30, BW accelerator 41, BW cores 40A-40B, global event controller 52, and the other components residing in system agent 30) may be implemented as a corresponding hardware circuit. For example, host core 20 may be a general-purpose processing core. Accordingly, host core 20 may also be referred to as a "host processor". However, in other embodiments, the number and arrangement of components may vary.
For example, in other embodiments, the processor package may include a single BW core or more than two BW cores. Additionally or alternatively, a processor package may include more than one host core. Additionally or alternatively, a data processing system may include multiple processor packages, with one or more host cores on one (or more) of the processor packages, and with one or more BW accelerators, each with one or more BW cores, on one or more different processor packages. In another embodiment, some or all of the components of the BW accelerator may reside inside the system agent. Possible embodiments include data processing systems with 4, 8, 16, 32, or more BW cores. Data processing system 10 also includes random access memory (RAM) 14 and non-volatile storage (NVS) 18 coupled to or in communication with processor package 12. For example, RAM 14 acts as main memory or system memory, and it may be implemented as one or more modules of dynamic random access memory (DRAM). NVS 18 may include software such as an operating system (OS) 60 and applications 62. The application 62 may be a speech recognition application, a cryptanalysis application, a database search application, or any other type of application that uses the Baum-Welch algorithm to estimate the parameters of an HMM. Data processing system 10 may copy software from NVS 18 into RAM 14 for execution. NVS 18 may also include input data for application 62, and data processing system 10 may copy the input data into RAM 14 for processing. In another embodiment or scenario, application 62 obtains BW input data from another source. For the purposes of this disclosure, the raw input data for the Baum-Welch algorithm may be referred to as "BW input data" 64. Specifically, BW input data 64 includes observation sequences.
For example, in one embodiment or scenario, those observations represent nucleotides (e.g., adenine, cytosine, guanine, and uracil (or thymine)) detected in deoxyribonucleic acid (DNA) from a patient, and each observation has one of four values, such as A, C, G, and U (or such as 0-3, where each number corresponds to A, C, G, or U). In other embodiments or scenarios, the observations relate to financial transactions or to any other subject suitable for analysis using the Baum-Welch algorithm. As described in more detail below, application 62 uses components such as global event controller 52 and BW accelerator 41 to process BW input data 64. For purposes of this disclosure, global event controller 52, BW accelerator 41, and the components external to host core 20 that enable global event controller 52 and BW accelerator 41 to cooperate may be collectively referred to as "BW subsystem" 50. Application 62 includes instructions that, when executed by host core 20, cause BW accelerator 41 to estimate maximization parameters for an HMM based on the observations in BW input data 64 using the Baum-Welch algorithm. Thus, the application 62 is designed to utilize the BW accelerator to generate maximization parameters for the HMM based on the BW input data. Those generated parameters may be referred to as the "processed output" or as "BW output data" 66. As illustrated in FIG. 1, some components of the BW subsystem 50 reside in the system agent 30. Those components include the second-level cache (L2C) 32. Also, each BW core includes a first-level cache (L1C), such as L1C 46A in BW core 40A and L1C 46B in BW core 40B. The L1C may also be referred to as "local memory" or a "local buffer". The components of the BW subsystem 50 in the system agent 30 also include an L1 direct memory access (DMA) engine 36 that manages the transfer of data from a source (such as L2C 32) to an L1 cache (such as L1C 46A). DMA transfers into the L1 cache may be referred to as "L1 DMA transfers".
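As a concrete illustration of the nucleotide scenario above, a DNA read could be encoded into the 0-3 observation values before being handed to the BW subsystem. The particular mapping order below is an assumption, not specified by the disclosure:

```python
# Illustrative encoding of a DNA read into the 0-3 observation values
# described above. The mapping order is an assumption; "T" (thymine)
# is treated as an alias for "U" (uracil), per the text.
NUCLEOTIDE_CODES = {"A": 0, "C": 1, "G": 2, "U": 3, "T": 3}

def encode_read(read):
    """Map a nucleotide string to the integer observations of BW input data."""
    return [NUCLEOTIDE_CODES[base] for base in read.upper()]
```

For example, `encode_read("GATTACA")` yields the observation sequence that a forward computation would consume element by element.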
In the embodiment of FIG. 1, DMA engine 36 is implemented in hardware. In alternative embodiments, the data processing system may use one or more DMA engines implemented in software or in a combination of hardware and software. In general, a DMA engine initiates and supervises data transfers from a source to a destination. The components of the BW subsystem 50 in the system agent 30 also include an L2 DMA engine 34 that manages the transfer of data from a source (such as RAM 14) into the L2C 32. The components of the BW subsystem 50 in the system agent 30 also include a global event controller 52 that keeps track of the scheduling and execution of various events at different stages and components of the system. For example, global event controller 52 may act as a synchronization proxy or master across the other components of BW subsystem 50. Correspondingly, the global event controller may also be referred to as the "main event controller". As indicated above, BW accelerator 41 includes BW cores 40A-40B. The BW accelerator 41 may also include a TP DMA engine 38 that initiates and supervises the transfer of TP data from the RAM 14 to the BW cores. Specifically, as described in more detail below, each BW core includes a TP cache, and the global event controller 52 uses the TP DMA engine 38 to write to or read from the TP cache. For example, as described in more detail below with reference to FIG. 10, BW core 40A includes TP cache 179, and when global event controller 52 decides to copy vectors from the initial TP matrix from RAM 14 to TP cache 179 in BW core 40A, the global event controller 52 uses the TP DMA engine 38 to initiate and supervise those transfers. The L2 DMA engine 34 uses an L2 DMA table to load observation data and vectors from the initial EP matrix into the L2C 32, and the L1 DMA engine 36 uses an L1 DMA table to write such data into the L1C within each core.
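The caching idea behind the TELUT store in the claims can be sketched in NumPy: precompute the transition*emission (T*E) products once, so that each forward step reduces to a multiply-accumulate against the table instead of recomputing the products. This is an illustrative software model, not the hardware design:

```python
import numpy as np

def build_telut(T, E):
    """Precompute the transition*emission (T*E) products a TELUT stores.

    TELUT[x, i, j] = T[i, j] * E[j, x]. With n states and m symbols this is
    n*n*m entries (e.g., 3 states x 4 symbols = 36, matching the 36-entry
    table of claim 8 -- an illustrative reading, not stated in this text).
    """
    n, m = E.shape
    return np.stack([T * E[:, x][None, :] for x in range(m)])

def forward_step(F_prev, telut, x):
    """One forward step via the TELUT: F_t(j) = sum_i F_{t-1}(i) * TELUT[x, i, j]."""
    return F_prev @ telut[x]
```

Because the table is fixed for a whole expectation phase, it can be filled once (e.g., by a global event controller) before the forward and backward passes begin.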
As illustrated, the processor package 12 also includes various interconnects that couple the various components to enable them to communicate with each other. As indicated above, each BW core includes an L1C of a certain size and a TP cache of a certain size. As described in more detail below, each BW core also includes numerous other components. In one embodiment, due to constraints imposed by one or more of those other components, the L1C is able to hold significantly more data (for example, observations and EP vectors) than the BW core can process in one iteration of the Baum-Welch algorithm. For the purposes of this disclosure, the amount of data that a BW core can process in an iteration of the Baum-Welch algorithm may be referred to as the "L1 block size." Also, the data value specifying the size of the L2C in the BW subsystem may be referred to as the "L2 block size." BW subsystem 50 (e.g., global event controller 52) uses certain types of instructions to cause certain BW cores to generate maximization parameters for the HMM or for a portion of the HMM. For the purposes of this disclosure, such instructions may be referred to as "BW accelerated instructions" or "BWAXF instructions." Additionally, before any BWAXF instructions are executed, application 62 may configure BW subsystem 50 with data that causes BW subsystem 50 to apply a particular statistical model, such as an HMM. This configuration may be referred to as the "network" for that particular statistical model. The statistical model itself may also be referred to as a "network." In particular, application 62 may configure BW subsystem 50 to process input data according to a Bayesian network involving directed acyclic graphs, wherein the network has predetermined characteristics or attributes related to the type of input data being processed. The data used by application 62 to configure BW subsystem 50 may be referred to as "BW configuration data."
For example, BW configuration data may include or specify properties such as:

• "read size" (i.e., the total number "N" of observations/elements in BW input data 64);
• "read type", the data type of the observations/elements, which may also indicate the "observation size" or "element size" (i.e., the amount of storage space required to hold an observation/element);
• the number of possible states/values for the observations;
• the possible states/values of the observations;
• the initial EP matrix;
• the initial TP matrix; and
• the convergence threshold used by the BW cores to determine whether enough iterations of the Baum-Welch algorithm have been completed.

Additionally, the BW configuration data may specify properties of BW subsystem 50, such as the number of BW cores, the size of the L1C in each core, and the size of the L2C in BW subsystem 50. Alternatively, global event controller 52 may be preconfigured with those kinds of properties, and/or global event controller 52 may detect those kinds of properties. Thus, global event controller 52 will "know" the attributes specified by the application, as well as the attributes preconfigured into, or discovered by, global event controller 52. Global event controller 52 may also derive further characteristics of BW subsystem 50 from other known properties. For example, global event controller 52 may calculate the L1 block size based at least in part on the size of the L1C. After loading the desired network into BW subsystem 50, if the BW input data 64 is relatively small (e.g., containing no more than 100 or 200 or 500 or 1000 or 2000 observations, depending on factors such as the observation size or the storage capacity of each BW core), BW subsystem 50 can process the entire observation set using a single BWAXF instruction and a single BW core.
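The configuration properties listed above can be collected into a simple data structure. The following is a hypothetical Python sketch; the class and field names are illustrative assumptions, not structure names from this disclosure, and the values come from the hypothetical DNA scenario:

```python
from dataclasses import dataclass
from typing import Sequence

# Hypothetical sketch of BW configuration data; names are assumptions.
@dataclass
class BWConfigData:
    read_size: int                  # total number "N" of observations
    read_type: str                  # data type of each observation/element
    num_states: int                 # number of possible observation states
    states: Sequence[str]           # possible states, e.g. A, C, G, U
    initial_ep: Sequence[float]     # initial EP matrix, flattened
    initial_tp: Sequence[float]     # initial TP matrix, flattened
    convergence_threshold: float    # stop when updates fall below this

# Values from the hypothetical DNA scenario (3000 observations, 4 states):
cfg = BWConfigData(read_size=3000, read_type="uint8", num_states=4,
                   states=("A", "C", "G", "U"), initial_ep=(),
                   initial_tp=(), convergence_threshold=1e-6)
print(cfg.read_size, cfg.num_states)  # -> 3000 4
```

The application would pass such a structure (or a pointer to it) to the global event controller at configuration time.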
However, if the BW input data 64 is relatively large (e.g., contains more than 100 or 200 or 500 or 1000 or 2000 observations), BW subsystem 50 may divide the input data into multiple observation sub-vectors or observation subsets, and BW subsystem 50 can cause multiple BW cores (or a single BW core) to apply the Baum-Welch algorithm to those subsets. For the purposes of this disclosure, the term "input slice" refers to a sequence of elements in the BW input data that a BW core processes as a set by applying one or more iterations of the Baum-Welch algorithm to it. Thus, BW subsystem 50 may divide BW input data 64 into two (or more) input slices. BW subsystem 50 may then use multiple BW cores (or a single BW core) to process those input slices. In one embodiment, global event controller 52 defines each input slice to have a size such that the input slice (a) can be loaded into the L1C of a BW core and (b) can be processed by the BW core without the BW core making any accesses to RAM. Also, a BW core uses the EP and TP vectors corresponding to the elements in an input slice when applying the Baum-Welch algorithm to those elements. For the purposes of this disclosure, the term "filter" refers to the maximization parameters related to a particular observation. Specifically, in one embodiment or scenario, each filter contains (a) one EP vector and (b) some number of TP vectors (e.g., one TP vector for each possible observation state). Additionally, global event controller 52 may collect the multiple filters involving an input slice into a collection referred to as a "filter block." For example, in conjunction with generating an input slice, global event controller 52 may generate a filter block containing all of the filters related to the observations in that input slice.
For the purposes of this disclosure, the EP vectors in a filter block may be collectively referred to as an "EP slice," and the TP vectors in the filter block may be collectively referred to as a "TP slice." In other words, an EP slice includes the vectors from the EP matrix that relate to the observations in the input slice, and a TP slice includes the vectors from the TP matrix that relate to those observations. Also, an input slice and its corresponding filter block may be collectively referred to as a "BW input unit." In one embodiment, global event controller 52 creates BW input units according to the L1 block size. In other words, each BW input unit is designed to be processed by a BW core as a set. Thus, each BW input unit is less than or equal to the L1 block size. Global event controller 52 may also create a data structure, referred to as a "slice," that includes one or more BW input units. In one embodiment, global event controller 52 defines each slice to have a size that can be loaded into the L1C of a BW core. Global event controller 52 may then cause each BW core to process one or more slices. For example, global event controller 52 may divide the input data into slices (each slice containing at least one input slice), and global event controller 52 may cause a different BW core to apply the Baum-Welch algorithm to each of those slices. Thus, BW subsystem 50 may execute at least some portions of the Baum-Welch algorithm in parallel. Also, as indicated above, BW subsystem 50 may assign multiple consecutive slices to a BW core. And when a BW core is processing a slice, the BW core can apply the Baum-Welch algorithm to one input slice at a time. For example, global event controller 52 may create a first slice and a second slice, and global event controller 52 may then cause BW core 40A to process the first slice using a first BWAXF instruction and cause BW core 40B to process the second slice using a second BWAXF instruction.
Likewise, in a data processing system with 4 BW cores and BW input data containing 10,000 observations, the global event controller can divide the input data into 16 input slices, each containing 625 observations; the global event controller can create 4 slices, each containing 4 input slices; and the global event controller can use four BWAXF instructions to cause each BW core to process one of those slices. And to process the same BW input data in a data processing system with only two BW cores, the global event controller can use four BWAXF instructions (two BWAXF instructions for each BW core) to have each BW core process two of those slices. Thus, each BWAXF instruction is directed to a specific BW core, and that BW core then executes the BWAXF instruction. Data structures such as input slices, filters, filter blocks, and slices may be defined or specified using any suitable technique. For example, an input slice may include the related observations, or the input slice may include data for specifying (or for identifying or locating) the related observations. In either case, the input slice may be said to "contain" or "include" those observations. Sets of maximization parameters may also be referenced in this way. For example, a filter discussed as containing or including certain TP vectors may include the associated TP vector elements, or it may include data specifying the associated TP vector elements. Other data structures (e.g., filter blocks and slices) may also be referenced in this manner. For purposes of illustration, this disclosure discusses a hypothetical scenario in which the BW input data 64 contains 3000 observations reflecting DNA read sequences involving four possible observation states: A, C, G, and U.
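The partitioning arithmetic in the examples above can be sketched as follows. This is an illustrative Python helper; the function name and parameters are assumptions, and the number of input slices per slice is treated as a given (in practice it would be derived from the L1C capacity):

```python
import math

def plan_bw_work(num_observations, obs_per_input_slice,
                 input_slices_per_slice, num_cores):
    """Sketch of the partitioning described above: divide observations
    into input slices, group input slices into slices, and issue one
    BWAXF instruction per slice, distributed across the BW cores."""
    num_input_slices = math.ceil(num_observations / obs_per_input_slice)
    num_slices = math.ceil(num_input_slices / input_slices_per_slice)
    bwaxf_per_core = math.ceil(num_slices / num_cores)
    return num_input_slices, num_slices, bwaxf_per_core

# 10,000 observations, 625 per input slice, 4 input slices per slice:
print(plan_bw_work(10_000, 625, 4, num_cores=4))  # -> (16, 4, 1)
print(plan_bw_work(10_000, 625, 4, num_cores=2))  # -> (16, 4, 2)
# Hypothetical DNA scenario: 3000 observations, 500 per input slice,
# 3 input slices per slice, 2 cores:
print(plan_bw_work(3000, 500, 3, num_cores=2))    # -> (6, 2, 1)
```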
Also, as described in more detail below, in such a scenario, global event controller 52 divides the BW input data 64 into 6 input slices, each input slice containing 500 observations, and global event controller 52 creates two slices, each slice containing 3 of those input slices. Also, all of the observations in a slice may be collectively referred to as an "observation slice," all of the TP vectors in a slice may be collectively referred to as a "TP slice," and so on. FIG. 2 is a block diagram illustrating an example EP matrix 210, example forward and backward value vectors (230 and 240, respectively), and example BW output data 66 for such a scenario. Each of those data structures or files may reside in RAM 14 at one time or another. In the example scenario, the EP matrix 210 gives the probability of observing each of the possible states (A, C, G, and U) for each position in the input sequence. In other words, for each position "t" within the BW input data 64 (i.e., for t1 to t3000) and for each possible state "V" (i.e., for states A, C, G, and U), the EP matrix 210 includes a probability value indicating the likelihood that the observation at that position is in that state. For example, the first element in EP matrix 210 reflects the probability that the first observation is in state A, the second element in EP matrix 210 reflects the probability that the first observation is in state C, and so on, where the last element in EP matrix 210 reflects the probability that the last observation (at position t3000) is in state U. FIG. 3 is a block diagram illustrating an example TP matrix 220 for such a scenario. In the example scenario, the TP matrix 220 gives, for each position in the input sequence, the probability of transitioning from each possible state to each possible state.
In other words, for each current position "t" within the BW input data 64 except the last position (i.e., for t1 to t2999) and for each possible state "V" at that current position, the TP matrix 220 includes a probability value indicating the likelihood that, when the observation at the current position is in that state, the observation at the next position (i.e., at position t+1) is in a particular one of the possible states. For example, the first element in TP matrix 220 reflects the probability that the second observation is in state A when the first observation is A, the second element in TP matrix 220 reflects the probability that the second observation is in state C when the first observation is A, and so on, where the last element in TP matrix 220 reflects the probability that the last observation is in state U when the penultimate observation is in state U. The network loaded into BW subsystem 50 by application 62 may include initial probability values for the EP matrix 210 and the TP matrix 220. Accordingly, EP matrix 210 and TP matrix 220 may be referred to as initial EP matrix 210 and initial TP matrix 220, respectively, to indicate that those matrices include initial probability values. Accordingly, the initial EP matrix 210 and the initial TP matrix 220 may reflect predetermined expectations for the behavior of the network. Also, although a matrix is illustrated herein as having a structure that includes rows and columns, the elements of the matrix may simply be stored as a sequence of probability values.
For example, the elements of the initial TP matrix 220 in RAM 14 may look like the following sequence of probability values (where each probability value is a numerical value between 0 and 1): probA1A, probA1C, probA1G, probA1U, probC1A, probC1C, … probU2999U. As shown in FIG. 2, by processing BW input data 64, data processing system 10 may generate BW output data 66 that includes a posterior EP matrix 212 and a posterior TP matrix 222, where those matrices include probability values that BW subsystem 50 has updated or changed, relative to the initial EP matrix 210 and the initial TP matrix 220, by applying the Baum-Welch algorithm to the BW input data 64. Additionally, as indicated above, BW subsystem 50 may complete multiple iterations of the Baum-Welch algorithm before determining that the updates to the maximization parameters have become negligible. For purposes of this disclosure, the EPs and TPs used by BW subsystem 50 to generate new/updated EPs and TPs may be referred to as the "current" EPs and TPs, and the generated EPs and TPs may be referred to as the "posterior" EPs and TPs. As indicated above, for the first iteration, BW subsystem 50 may use the initial EP matrix 210 and the initial TP matrix 220 as the current EP and TP matrices. And in response to determining that the updates to the maximization parameters have become negligible, BW subsystem 50 may take the last set of posterior matrices as the final EP matrix 212 and the final TP matrix 222. Additionally, as indicated above, in the expectation phase of the Baum-Welch algorithm, the BW cores generate forward and backward values.
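Under the flattened layout described above (probA1A, probA1C, … probU2999U), the position of any transition probability in the stored sequence can be computed directly. The following Python sketch assumes the row-major order implied by that sequence, with the current state varying before the next state; the function name is an illustrative assumption:

```python
STATES = ("A", "C", "G", "U")

def tp_index(t, cur, nxt, num_states=4):
    """Flat index of the transition probability for position t (1-based),
    current state cur, and next state nxt, in the stored sequence
    probA1A, probA1C, ..., probU2999U described above."""
    return ((t - 1) * num_states + STATES.index(cur)) * num_states \
           + STATES.index(nxt)

print(tp_index(1, "A", "A"))     # probA1A, first element -> 0
print(tp_index(1, "A", "C"))     # probA1C -> 1
print(tp_index(2999, "U", "U"))  # probU2999U, last element -> 47983
```

With 2999 positions and 16 transition probabilities per position, the sequence holds 47,984 values, so the last index is 47,983.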
For the purposes of this disclosure, the complete set of forward values for the BW input data 64 may be referred to as the forward value (FV) matrix 230, and the complete set of backward values for the BW input data 64 may be referred to as the backward value (BV) matrix 240. Also, in the example scenario involving six input slices, each BW core may generate a portion of the FV matrix 230 and a portion of the BV matrix 240. For the purposes of this disclosure, the portion of the FV matrix that is generated based on an input slice may be referred to as an "FV slice," and the portion of the BV matrix that is generated based on an input slice may be referred to as a "BV slice." When BW core 40A processes the input slice numbered 1, BW core 40A generates the corresponding FV slice numbered 1. As shown in FIG. 2, the FV slice numbered 1 contains a sequence of 500 FVs (from FV1 to FV500). And when BW core 40A processes the input slice numbered 2, BW core 40A generates the corresponding FV slice numbered 2. And so on, where BW core 40A generates three FV slices and three BV slices, and where BW core 40B generates three other FV slices and three other BV slices. Similarly, in the maximization phase, the BW cores generate posterior EP and TP values. Also, in the example scenario involving six input slices, each BW core may generate part of the posterior EP matrix 212 and part of the posterior TP matrix 222. For the purposes of this disclosure, the portion of the EP matrix generated by applying the Baum-Welch algorithm to an input slice may be referred to as a "posterior EP slice," and the portion of the TP matrix generated by applying the Baum-Welch algorithm to an input slice may be referred to as a "posterior TP slice." When BW core 40A processes the input slices numbered 1 to 3, BW core 40A generates three corresponding posterior EP slices and three corresponding posterior TP slices.
And when BW core 40B processes the input slices numbered 4 to 6, BW core 40B generates three additional posterior EP slices and three additional posterior TP slices. Thus, the posterior EP matrix 212 and the posterior TP matrix 222 may each include 6 slices. As indicated above, in the example scenario, the BW input data 64 includes 3000 observations, and BW subsystem 50 divides the BW input data 64 into 6 input slices, each input slice containing 500 observations. BW subsystem 50 will then use BW core 40A to process three of those input slices, and BW core 40B to process the other three input slices. FIG. 4 is a block diagram illustrating slices and associated data structures. In FIG. 4, the first three input slices from BW input data 64 are shown as the input slices numbered 1 through 3, and the last three input slices are shown as the input slices numbered 4 through 6. Specifically, in the hypothetical scenario, global event controller 52 has decided to dispatch the input slices numbered 1 to 3 to BW core 40A, and the input slices numbered 4 to 6 to BW core 40B. Thus, global event controller 52 has copied the input slices numbered 1 through 3 into a first slice 63A in RAM 14, and global event controller 52 has copied the input slices numbered 4 through 6 into a second slice 63B in RAM 14. Global event controller 52 will then use BWAXF instructions to cause core 40A to process slice 63A and to cause core 40B to process slice 63B. As indicated above, global event controller 52 may also include filter blocks in each slice. For example, as shown in FIG. 4, in the hypothetical scenario, global event controller 52 has stored the input slices numbered 1 to 3 and the corresponding filter blocks (FBs) numbered 1 to 3 in slice 63A, and global event controller 52 has stored the input slices numbered 4 to 6 and the corresponding FBs numbered 4 to 6 in slice 63B.
(For illustration purposes, "FB" in slices 63A and 63B is shown in solid lines to indicate that the filter blocks reside in those slices, while "filter block numbered 1," "filter block numbered 2," etc., are shown in dashed lines below slices 63A and 63B to indicate that they are expanded views of the corresponding FBs in slices 63A and 63B.) Global event controller 52 may have created the filter block numbered 1 by adding the first vector from the current EP matrix and the first four vectors from the current TP matrix to the first filter within the filter block numbered 1 ("filter numbered 1"), and so on, until the filter block numbered 1 contains the filters numbered 1 through 500. Thus, global event controller 52 may create a filter for each observation in an input slice, where those filters reside in the filter block. In FIG. 4, in the filter block numbered 1, the box labeled "vector numbered 1 from the current EP matrix" represents the first row or first vector of the current EP matrix. As shown in FIG. 2, that first row or vector includes the emission probability that the position numbered 1 (i.e., the observation numbered 1) is in each of the four possible observation states. Similarly, the box labeled "vectors numbered 1 to 4 from the current TP matrix" represents the first four rows or first four vectors of the current TP matrix. Those rows include the transition probability values for each of the four possible states at the position numbered 1. For example, as shown in FIG. 3, the first of those rows reflects the probability that the position numbered 2 is in each of the four possible observation states when the position numbered 1 has state A. Thus, in the hypothetical scenario, the filter block numbered 1 includes the EP and TP vectors related to the observations in the input slice numbered 1, and so on.
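The filter-block construction described above can be sketched in a few lines of Python. This is an illustrative assumption about representation (matrices as lists of row vectors, filters as small dictionaries), not the hardware's actual data layout:

```python
def build_filter_block(ep_matrix, tp_matrix, first_obs, num_filters,
                       num_states=4):
    """Sketch of filter-block creation: the filter for observation k
    pairs one EP row vector with the num_states TP row vectors for that
    position, as described above. first_obs is 0-based."""
    block = []
    for k in range(first_obs, first_obs + num_filters):
        block.append({
            "ep": ep_matrix[k],                              # 1 EP vector
            "tp": tp_matrix[k * num_states:(k + 1) * num_states],  # 4 TP vectors
        })
    return block

# Tiny example: 2 observations, 4 states, uniform transition rows.
ep = [[0.7, 0.1, 0.1, 0.1], [0.2, 0.2, 0.3, 0.3]]
tp = [[0.25] * 4 for _ in range(8)]   # 4 TP rows per position
fb = build_filter_block(ep, tp, first_obs=0, num_filters=2)
print(len(fb), len(fb[0]["tp"]))      # -> 2 4
```

In the hypothetical scenario, calling such a helper with first_obs=0 and num_filters=500 would yield the filter block numbered 1.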
Alternatively, as indicated above, a filter may include pointers to TP vectors instead of the actual vectors. As illustrated in FIG. 4, global event controller 52 creates six filter blocks, where each filter block contains 500 filters, and where each filter contains the EP vector and some number of TP vectors for the corresponding observation in the input slice. As further illustrated in FIG. 4, the first three input slices and the three corresponding filter blocks reside in slice 63A, and the other three input slices and the filter blocks for those input slices reside in slice 63B. After creating slices 63A and 63B, global event controller 52 may cause BW core 40A to process the input slices in slice 63A using a first BWAXF instruction, and cause BW core 40B to process the input slices in slice 63B using a second BWAXF instruction. Thus, BW subsystem 50 can process at least some portions of the Baum-Welch algorithm in parallel. In the scenario of FIG. 4, slice 63A includes three BW input units, and slice 63B includes three BW input units. Global event controller 52 has configured each slice to fit within the L1C of a BW core, and has configured each input slice to be processable by a BW core as a set. For example, if the L1C of each BW core has a capacity of 1 megabyte (MB), global event controller 52 may use a slice size less than or equal to 1 MB, and global event controller 52 may divide the BW input data 64 into input slices, each of which is small enough, when combined with the corresponding filter block, to be processed by a BW core as a set. Before sending a BWAXF instruction to a BW core, global event controller 52 loads the slice to be processed into the L1C of the BW core. Global event controller 52 thus allows the BW core to avoid consuming execution cycles and data transfer bandwidth during processing of the slice.
Also, as indicated above, global event controller 52 may provide each BW core with a slice containing one or more input slices and one or more corresponding filter blocks, and global event controller 52 may send BWAXF instructions with different parameter values to those BW cores to cause them to process that data. In one embodiment, BW accelerated instructions (or "BWAXF instructions") use a format with the following instruction ID and parameters: BWAXF dest, src1, src2, src3 (BWAXF destination, source 1, source 2, source 3). According to this format, the last two characters or bytes of the instruction ID (i.e., X and F) identify, respectively, the number of input slices from a particular slice to be processed, and the number of filters to be used with each input slice. For example, according to the hypothetical scenario discussed above, "BWA3500" indicates that three input slices from the current slice are to be processed, and that 500 filters are to be used with each input slice. Also, the source 3 parameter has three parts, which may be referred to as "source 3-F," "source 3-Y," and "source 3-Z," respectively. Regarding the value (or set of values) provided for source 3, the last byte is used for source 3-Z, the penultimate byte is used for source 3-Y, and the remaining bytes are used for source 3-F.
The following list reiterates the meaning of the "X" and "F" bytes from the instruction ID, and explains the other parameters of the BWAXF instruction:

• X: specifies the number of input slices from the current slice to be processed.
• F: specifies the number of filters to use with each input slice.
• destination: specifies the base address for saving the posterior EP values for the current slice.
• source 1: specifies the base address of the input slices for the current slice.
• source 2: specifies the base address of the filter blocks for the current slice.
• source 3-F: specifies the base address for the forward values for the current slice.
• source 3-Y: specifies the number of observations/elements per input slice.
• source 3-Z: specifies the total number of probabilities/elements in each filter.

For example, referring to FIG. 4, in the scenario discussed above, global event controller 52 sends a first BWAXF instruction to BW core 40A to cause BW core 40A to process slice 63A, and global event controller 52 sends a second BWAXF instruction to BW core 40B to cause BW core 40B to process slice 63B.
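The byte layout of the source 3 operand described above (last byte = source 3-Z, penultimate byte = source 3-Y, remaining bytes = source 3-F) can be sketched with simple bit manipulation. This is an illustrative Python sketch under that stated layout; the real hardware's field widths may differ (for instance, a source 3-Y value of 500 would need a field wider than one byte):

```python
def pack_source3(f_base, y, z):
    """Pack the three parts of the BWAXF 'source 3' operand: the last
    byte holds source 3-Z, the next byte holds source 3-Y, and the
    remaining high-order bytes hold source 3-F."""
    assert 0 <= y < 256 and 0 <= z < 256, "byte-wide fields in this sketch"
    return (f_base << 16) | (y << 8) | z

def unpack_source3(value):
    """Recover (source 3-F, source 3-Y, source 3-Z) from a packed value."""
    return value >> 16, (value >> 8) & 0xFF, value & 0xFF

packed = pack_source3(0xABCD, y=100, z=20)
print(unpack_source3(packed))  # f_base, y, z round-trip unchanged
```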
Specifically, for this first BWAXF instruction, in order for BW core 40A to process slice 63A as the "current" slice, global event controller 52 may set the parameters to the following values:

• X=3: specifies the number of input slices from the current slice to be processed.
• F=500: specifies the number of filters to use with each input slice.
• destination = base address for the posterior EP matrix: specifies the base address for the posterior EP values for the current slice (see FIG. 2).
• source 1 = base address for the input slice numbered 1: specifies the base address of the input slices for the current slice.
• source 2 = base address for the FB numbered 1: specifies the base address of the filter blocks for the current slice.
• source 3-F = base address for the FV slice numbered 1: specifies the base address for the forward values for the current slice (see FIG. 2).
• source 3-Y=500: specifies the number of elements per input slice.
• source 3-Z=20: specifies the total number of elements in each filter.

In the hypothetical scenario, the global event controller sets source 3-Z to 20, since there are 20 elements in each filter: 4 elements from the EP vector, and 16 elements from the TP vectors (4 TP vectors with 4 elements each). Also, as shown in FIG. 2, for slice 63A, global event controller 52 sets "source 3-F" to the base address of the FV slice that BW core 40A will generate based on the input slice numbered 1 (i.e., the FV slice numbered 1). Similarly, global event controller 52 sets the "destination" to the base address for writing back the processed data (i.e., the updated TPs and EPs) for the current slice. Specifically, as shown in FIG. 2, global event controller 52 sets the "destination" for slice 63A to the base address of the EP slice that BW core 40A will generate based on the input slice numbered 1 (i.e., the EP slice numbered 1). In contrast, the above-mentioned second BWAXF instruction, which is ultimately directed to BW core 40B, includes a "destination" parameter that points to the start of the EP slice numbered 4 (see FIG. 2), a "source 1" parameter that points to the input slice numbered 4 (see FIG. 4), a "source 2" parameter that points to the filter block numbered 4 (see FIG. 4), and a "source 3-F" parameter that points to the start of the FV slice numbered 4 (see FIG. 2). A BW core can treat parameters such as "destination" as pointers, and the BW core can update those pointers as necessary. For example, when the BW core finishes one input slice and starts the next input slice from the current slice, the BW core can automatically adjust the relevant pointers accordingly. For example, when BW core 40A finishes processing the input slice numbered 1, BW core 40A may automatically update the "destination" pointer to point to the start of the EP slice numbered 2 (see FIG. 2), update the "source 1" pointer to point to the input slice numbered 2 (see FIG. 4), update the "source 2" pointer to point to the filter block numbered 2 (see FIG. 4), and update the "source 3-F" pointer to point to the start of the FV slice numbered 2 (see FIG. 2). Also, the BW core can automatically calculate the base addresses for other data structures based on the above parameters. Those other data structures may include BV slices and TP slices. For example, BW core 40A may automatically calculate the base address for holding the backward values for slice 63A by adding the size of the FV matrix to the "source 3-F" pointer. Similarly, BW core 40A can automatically calculate the base address for holding the TPs by adding the size of the EP matrix to the "destination" pointer. And the BW core may calculate values such as the size of the FV matrix and the size of the EP matrix based on the total number "N" of observations in the BW input data 64.
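The base-address derivations described above (the BV matrix stored immediately after the FV matrix, and the posterior TP values immediately after the posterior EP matrix) can be sketched as follows. The addresses and element size below are hypothetical illustration values, not figures from this disclosure:

```python
def derived_bases(dest, src3_f, n_observations, elem_size, num_states=4):
    """Sketch of the pointer arithmetic described above: the BV base is
    the 'source 3-F' pointer plus the FV matrix size (N elements), and
    the TP base is the 'destination' pointer plus the EP matrix size
    (N x num_states elements)."""
    fv_matrix_bytes = n_observations * elem_size
    ep_matrix_bytes = n_observations * num_states * elem_size
    bv_base = src3_f + fv_matrix_bytes
    tp_base = dest + ep_matrix_bytes
    return bv_base, tp_base

# Hypothetical: N=3000 observations, 4-byte elements.
print(derived_bases(dest=0x1000, src3_f=0x8000,
                    n_observations=3000, elem_size=4))  # -> (44768, 52096)
```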
Similarly, BW core 40A may calculate the total size of FV matrix 230 based on the size of its elements and the total number of elements in FV matrix 230 (which is equal to "N"). Accordingly, BW core 40A may be configured to store BV matrix 240 immediately after FV matrix 230. Additionally, global event controller 52 supports an instruction for loading TP data into the TP cache of a BW core using direct memory access (DMA). For the purposes of this disclosure, such instructions may be referred to as "Baum-Welch transition probability load instructions," "BW TP load instructions," or "BWTPL instructions." In one embodiment, the BW TP load instruction uses a format with the following instruction ID and parameters: BWTPL src1, src2 (BWTPL source 1, source 2). The "source 1" parameter points to the base address of the TP data in RAM, and the "source 2" parameter identifies the number of TP vectors to be loaded into the BW core. Further details regarding the BWTPL instruction are provided below with reference to the flowchart of FIG. 5. FIG. 5 presents a flowchart of an example embodiment of a process for using BW accelerator 41 to generate maximization parameters for an HMM. This process is discussed below in the context of the hypothetical scenario discussed above involving slices 63A and 63B. The process of FIG. 5 begins at block 310, with application 62 configuring global event controller 52 with BW configuration data for the relevant statistical model. For example, as indicated above, the BW configuration data may specify the possible values or states of the observations, the number of possible states, the read size, the convergence threshold, the initial EP matrix 210, and the initial TP matrix 220, and application 62 may send the BW configuration data (or a pointer to that data) to global event controller 52. Application 62 may then launch BW subsystem 50, as shown at block 312.
For example, application 62 may send a start signal to global event controller 52. In response, as shown at block 314, global event controller 52 may determine appropriate slice attributes for processing the BW input data 64, based on the BW configuration data and on known properties of BW subsystem 50, such as the number of BW cores and the size of the L1C in each BW core. Those slice attributes may include the number of slices to use, the number of input slices included in each slice, the number of BW input units included in each slice, the filter size to be used for each filter in a filter block, and the number of TP vectors included in each filter. For example, global event controller 52 may determine that each filter should include an EP vector with 4 elements and 4 TP vectors with 4 elements each, for a total of 20 elements, each of which has a predetermined size (such as 4 bits), resulting in a filter size of 80 bits or 10 bytes. Global event controller 52 can then determine how many observations, along with the same number of filters, can fit in a BW input unit. Global event controller 52 may then divide the read size (i.e., the total number of observations) by the number of observations in a BW input unit to determine the number of input slices to use. Global event controller 52 may then determine how many slices to use. For example, if there are more input slices than cores, global event controller 52 may decide to use at least one slice for each BW core. Global event controller 52 may then determine how many input slices to include in each slice, based on the L1C size and the size of a BW input unit. In the hypothetical scenario, global event controller 52 decides to use two slices and assigns three BW input units to each slice (where each BW input unit includes an input slice and a filter block). Accordingly, as shown at block 316, global event controller 52 then creates those slices.
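The sizing arithmetic at block 314 can be sketched as follows. The per-element size of 4 bits is an assumption consistent with the 80-bit (10-byte) filter in the example (20 elements), and the byte counts passed to the second helper are hypothetical illustration values:

```python
def filter_size_bytes(num_states=4, element_bits=4):
    """Filter size per the example above: one EP vector (num_states
    elements) plus num_states TP vectors of num_states elements each,
    at element_bits of storage per element."""
    num_elements = num_states + num_states * num_states   # 4 + 16 = 20
    bits = num_elements * element_bits                    # 20 * 4 = 80
    return bits // 8                                      # -> 10 bytes

def observations_per_input_unit(unit_bytes, obs_bytes, filt_bytes):
    """How many observations, each paired with one filter, fit in a
    BW input unit of the given size. A hypothetical sizing helper."""
    return unit_bytes // (obs_bytes + filt_bytes)

print(filter_size_bytes())                       # -> 10
# Hypothetical 5500-byte input unit, 1-byte observations:
print(observations_per_input_unit(5500, 1, 10))  # -> 500
```

Dividing the read size (3000) by 500 observations per BW input unit then yields the 6 input slices of the hypothetical scenario.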
For example, global event controller 52 may copy the input slices and filter blocks for inputs numbered 1 through 3 into slice 63A, and global event controller 52 may copy the input slice and filter block data for inputs numbered 4 through 6 into slice 63B. Alternatively, global event controller 52 may create slices 63A and 63B by creating one or more tables that indicate where certain components of each slice reside. Correspondingly, a table (or a collection of tables) that indicates where the components of a slice reside may also be referred to as a "slice." For example, as indicated above, a slice may comprise an input slice and a filter block, where the filter block contains an EP vector and pointers to TP vectors.

As shown at block 318, the global event controller 52 may then configure the other components of the BW subsystem 50 accordingly. For example, the global event controller 52 may configure the BW cores 40A and 40B with configuration data that specifies attributes of the data to be loaded into the L1C, such as the read type, the convergence threshold, the number and values of the possible states/values to observe, the number of TP vectors in a filter, the number of observations in an input slice, and the relative positions of different items within a BW input unit. For example, the configuration data may indicate that each BW input unit in the L1C is to be organized into observations at the start of that BW input unit, followed by the associated EP vectors, followed by the associated TP vectors (or pointers to those TP vectors). Also, global event controller 52 may send configuration data to L1 DMA engine 36, L2 DMA engine 34, and TP DMA engine 38, and that configuration data may specify properties such as the L1 block size and the L2 block size, as described in more detail below with reference to FIG. 6.

As shown at block 320, the global event controller 52 may then populate each core with the data to be processed by that core.
For example, the global event controller 52 may load a different slice into the L1C of each BW core. Specifically, the global event controller 52 may copy the slice's observations, EP vectors, and TP vector pointers to the L1C of the BW core, and the global event controller 52 may use the BWTPL instruction to load the slice's actual TP vectors into the BW core's TP cache. When the global event controller 52 executes the BWTPL instruction, the instruction causes the global event controller 52 to use the TP DMA engine 38 to load the associated TP vectors into the associated BW core's TP cache.

For example, when global event controller 52 prepares BW core 40A for processing slice 63A, global event controller 52 may load the TP vectors for the slice into TP cache 179 using a BWTPL instruction. For instance, in the hypothetical scenario, the global event controller 52 may calculate the number of TP vectors to be loaded as 6000, based on (a) the number of filters in a slice (1500 in the hypothetical scenario: 500 per filter block) and (b) the number of TP vectors per filter (4 in the hypothetical scenario). Thus, the global event controller 52 can execute a BWTPL instruction where "source 1" points to the base address of filter block number 1 (see FIG. 4) and where "source 2" is set to 6000. Alternatively, if the filter includes pointers to TP vectors, the global event controller 52 may set the "source 1" parameter to point to the address reflected in the first TP vector pointer in filter block number 1. Also, in one embodiment, the destination (TP cache 179) is fixed, and therefore no destination address is required. Thus, global event controller 52 may load slice 63A into BW core 40A and slice 63B into BW core 40B.

In one embodiment, BW subsystem 50 includes various communication paths to support various types of communication between components during configuration and during execution of the Baum-Welch algorithm. FIG. 6 is a block diagram with further details regarding communications within data processing system 10. The arrows in FIG. 6 identify different types of communication between different components. Also, different letters in the reference numbers for those arrows indicate different endpoints for the corresponding communications. For example, arrow 70A1 indicates that host core 20 provides global event controller 52 with parameters such as the read type and read size for the observations in BW input data 64 (i.e., for the stream of observations to be used as input to the Baum-Welch algorithm), and arrow 70A2 indicates that host core 20 loads items such as BW input data 64, the initial EP matrix 210, and the initial TP matrix 220 into RAM 14.

Arrow 70B1 indicates that global event controller 52 provides the L1 block size to each of BW cores 40A and 40B. Similarly, arrows 70B2 and 70B3 indicate that global event controller 52 provides L1 block sizes to L1 DMA engine 36 and TP DMA engine 38, respectively. Arrow 70B2 also indicates that global event controller 52 sends requests to, and receives responses from, TP DMA engine 38. Arrow 70B4 indicates that global event controller 52 provides the L2 block size to L2 DMA engine 34.

Arrow 70C1 indicates that L2 DMA engine 34 obtains the L2 DMA table from RAM 14. Arrow 70C2 indicates that L2 DMA engine 34 obtains BW input data from RAM 14, and arrow 70C3 indicates that L2 DMA engine 34 sends that data to L2C 32 in conjunction with loading the data into the BW cores. In conjunction with copying BW input data from RAM 14 to L2C 32, L2 DMA engine 34 may use the L2 DMA table to perform address translation.

Arrow 70D1 indicates that L1 DMA engine 36 sends responses from some L1 DMA operations (e.g., to indicate events such as completion of a command or operation) to global event controller 52. Arrow 70D2 indicates that L1 DMA engine 36 sends responses from some L1 DMA operations (e.g., to indicate events such as completion of a command or operation) to L2C 32.
Arrow 70D3 indicates that the L1 DMA engine 36 sends data to the L1C in each BW core via the shared bus.

Arrow 70E1 indicates that TP DMA engine 38 obtains the TP DMA tables from RAM 14. Arrows 70E2 and 70E3 indicate that the TP DMA engine 38 uses the TP DMA tables to load TP data from the RAM 14 into the BW cores, where the TP DMA engine 38 uses the shared bus to access the BW cores. In one embodiment, RAM 14 includes one TP DMA table for each BW core.

Additionally, arrow 70A2 also indicates that the host core 20 obtains the final EP matrix and the final TP matrix from RAM after the BW subsystem 50 has completed the Baum-Welch algorithm.

Additionally, the global event controller 52 may send synchronization signals (e.g., start and stop signals) to various components to coordinate or synchronize activities. For example, global event controller 52 may send a synchronization signal (e.g., a start acknowledgment or "ack") to host core 20 to indicate that global event controller 52 has taken over system execution, in response to host core 20 transferring control to global event controller 52, as shown at block 312 in FIG. 5. And the global event controller 52 may send various synchronization signals (e.g., start pulses) to components such as the BW cores and the DMA engines to cause those components to begin their operations within the system. For example, global event controller 52 may send a start signal to TP DMA engine 38 after sending a BWTPL command to TP DMA engine 38, and global event controller 52 may send a start signal to a BW core after sending a BWAXF command to that BW core.
Similarly, global event controller 52 may send a start signal to L1 DMA engine 36 after sending the parameters for loading a block of data (e.g., an input slice) into a BW core, and in response to that signal, L1 DMA engine 36 may read the L1 DMA table from RAM 14, and L1 DMA engine 36 may then load the specified data into the specified BW core. Also, the transactions may be sequential, with the L1 DMA engine 36 loading the data for one BW core into that BW core's L1C, and then loading the data for another BW core into that other BW core's L1C. The L1 DMA engine 36 may also update the L1 DMA table accordingly. Likewise, TP DMA engine 38 may load TP vectors into the cores sequentially, filling one BW core's TP cache with the data for that core and subsequently filling another BW core's TP cache with the data for that other core.

Referring again to FIG. 5, as shown at block 322, after the global event controller 52 configures the BW cores and the other components of the BW subsystem 50, the global event controller 52 may then trigger initiation of the Baum-Welch algorithm by sending a BWAXF instruction to each BW core that should be involved. Specifically, in the hypothetical scenario, global event controller 52 sends a first BWAXF instruction to BW core 40A to cause BW core 40A to process slice 63A, and a second BWAXF instruction to BW core 40B to cause BW core 40B to process slice 63B. Global event controller 52 may also send corresponding enable signals to BW cores 40A and 40B.

As shown at block 324, each BW core may then process its current input slice, as described in more detail below. As shown at block 330, after the BW core has generated the posterior EP slice and the posterior TP slice, the BW core may determine whether the convergence threshold has been met.
If the convergence threshold has not been met, the BW core may save the posterior EP slice and posterior TP slice from that iteration, to be used as the current EP slice and the current TP slice for the next iteration, as shown at block 332, and the process may return to block 324, where the BW core performs another iteration of the Baum-Welch algorithm.

As shown at block 334, once the convergence threshold has been met, the BW core may save the posterior EP slice and the posterior TP slice to the L1C, according to the specified "destination" parameters. (Eventually, once all slices for the BW core have been processed, global event controller 52 copies the final EP slice and final TP slice from the L1C to RAM 14.)

As shown at block 340, the BW core may then determine whether the slice includes more input slices to process. If not all input slices have been processed, then as shown at block 342, the BW core may update all relevant pointers to address the next input slice, and the process may return to block 324 for the BW core to process that next input slice.

Once all input slices have been processed, the BW core may send a done signal to the global event controller 52, and as shown at block 350, the global event controller 52 may determine whether all slices for the BW cores have been processed. If any slices remain to be processed, the process may return to block 320, where the global event controller 52 loads the new slices into the BW cores for processing, as described above. Once all slices have been processed, global event controller 52 may save the posterior EP matrix and posterior TP matrix to RAM 14 to form BW output data 66. Thus, as shown in FIG. 2, the BW output data 66 will contain the final EP matrix 212 and the final TP matrix 222.

FIG. 7 presents a flowchart illustrating parallel and other operations within data processing system 10, according to the hypothetical scenario discussed above. In other words, FIG. 7 depicts the execution flow.
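For purposes of illustration only, the per-core control flow of blocks 324 through 350 may be modeled in software as follows. The callables `bw_iteration` and `converged` are hypothetical stand-ins for one pass of the Baum-Welch algorithm and the convergence test; none of the names correspond to claimed structures.

```python
def run_bw_core(slices, bw_iteration, converged, max_iters=100):
    # Software stand-in for the per-core loop of blocks 324-350.
    results = []
    for sl in slices:                               # block 320: next slice loaded
        for input_slice in sl:                      # blocks 340/342: next input slice
            ep, tp = input_slice["ep"], input_slice["tp"]
            for _ in range(max_iters):
                new_ep, new_tp = bw_iteration(input_slice["obs"], ep, tp)  # block 324
                if converged(ep, new_ep):           # block 330: convergence check
                    ep, tp = new_ep, new_tp
                    break
                ep, tp = new_ep, new_tp             # block 332: posterior -> current
            results.append((ep, tp))                # block 334: save posterior slices
    return results                                  # block 350: all slices processed

# Toy usage: the "EP" halves each iteration until the change is below 0.01.
step = lambda obs, ep, tp: (ep / 2, tp)
done = lambda old, new: abs(old - new) < 0.01
out = run_bw_core([[{"obs": [], "ep": 1.0, "tp": 0.5}]], step, done)
```

The toy convergence test merely illustrates the loop structure; in the actual subsystem the convergence threshold is part of the BW configuration data described above.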
Specifically, in FIG. 7, the horizontal axis reflects the passage of time, the vertical axis provides different rows for different components of the data processing system, and the items or operations in those rows are aligned to reflect when those items or operations become active and inactive (or start and stop). Also, TP cache 179 is shown as TP cache A, and the TP cache in BW core 40B is shown as TP cache B. Also, FIG. 7 focuses primarily on the hypothetical scenario discussed above, where each "Baum-Welch Execution" box reflects the processing of one slice per BW core. However, FIG. 7 also includes a "Baum-Welch Execution" box with a dashed outline, to reflect an alternative scenario involving execution of a second slice by BW core 40A.

The operations in FIG. 7 begin with host core 20 loading BW input data 64 into RAM 14. Host core 20 then transfers control to BW subsystem 50. The global event controller 52 then issues DMA requests across the various memory hierarchies, including (a) requests to load data into L2C 32 as necessary, (b) requests to copy data from L2C 32 to L1C 46A or L1C 46B, and (c) requests to copy data into TP cache A and TP cache B. As illustrated, BW subsystem 50 may sequentially process the transfers into the different L1Cs, and BW subsystem 50 may sequentially process the transfers into the different TP caches. Also, the global event controller 52 may start each BW core asynchronously, once the relevant data has been copied into the L1C and TP cache for that BW core. Also, as illustrated, multiple BW cores may execute the Baum-Welch algorithm in parallel.

Also, when a BW core processes the input slices within a slice, the BW core can fetch data from the L1C slice by slice. And if a BW core is used to execute multiple slices, then as indicated by the dashed line labeled "Prefetch for BW Core 40A," BW subsystem 50 may use prefetching to start loading the data for the next slice into the L1C of the BW core before the BW core completes the current slice. For example, in one embodiment or scenario, once the BW core is halfway through executing the last input slice of the current slice, the BW core may set a flag (e.g., a "ready for L1 prefetch" flag) to indicate that the current slice is about to be completed. The global event controller 52 can then automatically detect that the flag has been set, and in response, the global event controller 52 can trigger the DMA engine to fetch the data for the next slice from the L2C 32 into the L1C of the BW core. However, if there may be multiple iterations of the BW algorithm, the BW core may delay setting the ready-for-prefetch flag until the maximization parameters have converged, as required by the predetermined convergence threshold, as described above.

Also, if a BW core is used to execute multiple slices but not all of those slices fit in the L2C 32, then as indicated by the dashed line labeled "ready for L2 prefetch," the global event controller 52 may prefetch data from RAM 14 to L2C 32. In one embodiment, the BW subsystem 50 uses substantially the same approach as is used for prefetching to the L1C, except that the global event controller 52 initiates such a prefetch in response to the BW core (or some other component of the BW subsystem 50) setting another hardware flag (e.g., a "ready for L2 prefetch" flag) to indicate that the BW core has started processing the last slice currently residing in L2C 32. Since all of that slice's data from L2C 32 has already been copied to the L1C, global event controller 52 responds to this flag by copying one or more additional slices from RAM 14 to L2C 32.
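For purposes of illustration only, the flag-driven prefetch handshake described above may be sketched as follows. The flag names mirror the "ready for L1 prefetch" and "ready for L2 prefetch" hardware flags discussed above, but the class, functions, and queue structure are illustrative assumptions, not claimed structures.

```python
class PrefetchFlags:
    def __init__(self):
        self.ready_for_l1_prefetch = False
        self.ready_for_l2_prefetch = False

def core_raises_flag(flags, on_last_input_slice, converged):
    # A BW core raises the L1 flag partway through its last input slice,
    # but only after the maximization parameters have converged.
    if on_last_input_slice and converged:
        flags.ready_for_l1_prefetch = True

def controller_polls(flags, l2c_slices, l1c_queue, ram_slices):
    # The global event controller detects the flags and triggers the DMA engines.
    if flags.ready_for_l1_prefetch and l2c_slices:
        l1c_queue.append(l2c_slices.pop(0))      # prefetch: L2C -> L1C
        flags.ready_for_l1_prefetch = False
    if flags.ready_for_l2_prefetch and ram_slices:
        l2c_slices.append(ram_slices.pop(0))     # prefetch: RAM -> L2C
        flags.ready_for_l2_prefetch = False

flags = PrefetchFlags()
core_raises_flag(flags, on_last_input_slice=True, converged=True)
l2c, l1q, ram = ["slice 2"], [], ["slice 3"]
controller_polls(flags, l2c, l1q, ram)   # moves "slice 2" toward the L1C
```

In the sketch, polling stands in for the automatic hardware flag detection described above, and the queues stand in for the DMA transfers between RAM 14, L2C 32, and the L1Cs.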
As indicated by the dashed line labeled "L2 Data Complete," this prefetching may be completed before the core is ready for a new slice, thereby enhancing the efficiency of the BW subsystem 50.

Once all BW cores have completed processing all of their corresponding slices, the global event controller 52 sends a complete signal to the host core 20 and releases control to the host core 20.

Microarchitecture overview

FIGS. 8-12 present microarchitectural details of BW core 40A. BW core 40B has the same kinds of details.

FIG. 8 is a block diagram with further details of BW core 40A. In particular, FIG. 8 shows some of the operational units of BW core 40A from a relatively high-level perspective. As illustrated, those operational units can be organized into two main parts: a control portion 50 and a computation portion 90. The control portion 50 and the computation portion 90 may be implemented using corresponding hardware circuits. The control portion 50 is primarily responsible for issuing both memory requests and appropriate commands to the computation portion 90, to configure the computation portion 90 for the appropriate operation (e.g., forward computation).

The computation portion 90 is responsible for performing the basic computational operations, based on the configuration set by the control portion 50. For example, the computation portion 90 reads the appropriate data passed by the control portion 50 and operates on that data. Specifically, the computation portion 90 generates likelihood values (LVs) and TPs.

The computation portion 90 includes various hardware circuits or blocks. One of those main blocks or circuits is the EP generator 74. Another major block or circuit is the likelihood and transition probability (LVTP) generator 80. Computation portion 90 also includes circuitry for index generator 72. The LVTP generator includes circuitry for generating LVs (i.e., forward and backward values) and TPs. In the embodiment of FIG. 8, that circuitry includes two LVTP complexes 82A-82B.
However, in other embodiments, the LVTP generator may include more than two LVTP complexes. For example, an LVTP generator may include 128 or more LVTP complexes, and some or all of those LVTP complexes may work in parallel to compute FVs, BVs, EPs, and TPs.

Each LVTP complex (e.g., LVTP complex 82A) includes a set of registers 86 and a plurality of LVTP engines 84. Specifically, in one embodiment, as shown in FIG. 9, each LVTP complex includes four LVTP engines (LVTP engines 84A-84D). When the BW core 40A is invoked to process an input slice, the BW core 40A can automatically divide the input slice into parts and use one LVTP complex to process each of those parts. For example, if the input slice contains 400 observations and there are 40 LVTP complexes, BW core 40A can automatically divide the slice into 40 parts (e.g., each part having 10 observations), and BW core 40A can use one LVTP complex to process each part. Also, the four LVTP engines in an LVTP complex can handle consecutive observations. For example, LVTP engine 84A may process a first observation, LVTP engine 84B may process a second observation, and so on. And the LVTP engine 84A may then process the fifth observation, and so on.

Also, as indicated below with reference to FIG. 9, each LVTP complex may include a set of registers 86. And each LVTP engine may generate an LV for each observation processed by that LVTP engine. Registers 86 can be used to combine the four LVs from the four LVTP engines into a row. In one embodiment, LVTP complex 82A uses registers 86 to combine four 32-bit values into a 128-bit row.

Further details regarding the LVTP engine are provided below with reference to FIG. 9. Also, as shown in FIG. 10, each LVTP engine may include an LV generator and a TP generator.

Furthermore, the LVTP generator 80 includes a forward write selector 88 that receives the output from each LVTP complex, saves that output to the L1C 46A, and (based on whether the LVTP complex is in FV generation mode or BV generation mode) decides whether to forward that output to the TP generator for further processing.

In one embodiment, the control portion 50 includes hardware circuitry or logic blocks for monitoring the execution of six main steps. In step 1, the input read controller 52 issues a read request to the L1C 46A to obtain the input data for the current slice from the L1C 46A. The input data may include, for example, the observations for the current input slice. In step 2, maximization parameter read controller 54 issues a read request to L1C 46A to obtain, from the initial/previous iteration of the Baum-Welch algorithm, the maximization parameters for the current iteration of the Baum-Welch algorithm. In other words, BW core 40A may use the posterior EP vectors and posterior TP vectors from the previous iteration as the current EP vectors and current TP vectors for the current iteration. Such iterations of the Baum-Welch algorithm may be referred to as "timestamps." Thus, the BW core can use the posterior vectors from one timestamp as the current vectors in the next timestamp. In step 3, the EP update controller 56 cooperates with the EP generator 74 to control the phase of updating the EPs.

In step 4, ordering histogram creator 58 collects write requests from various components within BW core 40A and avoids any duplicate requests to L1C 46A. For the purposes of this disclosure, a component of a BW core that issues a write request to that BW core's L1C may be referred to as a "write client," and a component that issues a read request may be referred to as a "read client." In step 5, the read/write arbiter 60 arbitrates between the read clients and the write clients.
For example, EP generator 74 and LVTP generator 80 may issue read requests or write requests at substantially the same time, and read/write arbiter 60 arbitrates among the requests from those clients. In step 6, read/write arbiter 60 pipelines the read and write requests to L1C 46A.

FIG. 9 is a block diagram with further details regarding the computation portion 90 of the BW core 40A. Specifically, FIG. 9 focuses primarily on LVTP complex 82A, showing that LVTP engines 84 include four LVTP engines 84A-84D, where each LVTP engine receives LVs 91 from L1C 46A and an index value from index generator 72. For example, in one embodiment, components such as L1C 46A and TP cache 179 have a line size of 128 bits (or 16 bytes). Also, items such as the LVs in L1C 46A and the numerators and denominators in TP cache 179 have a size of 4 bytes. Therefore, a single read returns four consecutive values. Accordingly, when LVTP complex 82A reads LVs from L1C 46A, it receives four consecutive LVs. Additionally, the LVs that have been read may be provided to each LVTP engine in each LVTP complex. Each LVTP engine may then use some or all of those LVs in the process of generating the current LV for that engine.

Also, the index generator 72 generates four consecutive i's and/or four consecutive j's, where one of those i's and/or one of those j's is sent to each LVTP engine in the LVTP complex 82A. Based on the LVs and the indexes from index generator 72, each of those LVTP engines then generates and saves output data to registers 86, to L1C 46A, and/or to TP cache 179, as described in more detail below with reference to FIG. 10. Index generator 72 also generates indexes for the LVTP engines in the other LVTP complexes, and each of those LVTP engines works like LVTP engine 84A to generate output for its assigned observations.

FIG. 10 is a block diagram with further details regarding the LVTP engine 84A. As indicated above, in one embodiment, each LVTP complex includes four LVTP engines.
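For purposes of illustration only, the division of an input slice among LVTP complexes, and the round-robin assignment of consecutive observations to the four engines within each complex, may be sketched as follows (the function and variable names are illustrative assumptions, not claimed structures):

```python
def partition_input_slice(num_obs, num_complexes, engines_per_complex=4):
    # Divide the input slice into equal parts, one part per LVTP complex,
    # then assign consecutive observations to the engines round-robin:
    # engine 0 -> observations 0, 4, 8, ...; engine 1 -> 1, 5, 9, ...; etc.
    per_complex = num_obs // num_complexes            # e.g., 400 / 40 = 10
    assignment = {}                                   # (complex, engine) -> observations
    for c in range(num_complexes):
        base = c * per_complex
        for e in range(engines_per_complex):
            assignment[(c, e)] = list(range(base + e, base + per_complex,
                                            engines_per_complex))
    return assignment

a = partition_input_slice(400, 40)
# In the 400-observation example, engine 0 of complex 0 handles
# observations 0, 4, and 8 of the slice; engine 1 handles 1, 5, and 9.
```

This sketch assumes the slice divides evenly, as in the 400-observation example above; the actual hardware partitioning policy may differ.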
LVTP engine 84A is described in more detail below. LVTP engines 84B-84D may include the same or similar details. The main components of LVTP engine 84A are LV generator 150 and TP generator 170. The TP generator may also be referred to as a transition pipeline. Also, registers 86, L1C 46A, and forward write selector 88 are shown with dashed lines to indicate that those components reside external to LVTP engine 84A.

LV generator 150 operates to generate FVs and BVs according to the Baum-Welch algorithm (e.g., according to Equation 1 and Equation 2, respectively). When the LVTP complex 82A is in FV generation mode, the LV generator 150 computes the FVs in order, such that each FV is linked to a state that can be addressed with an i-index and a j-index. For example, the FV for reaching the "jth" state from the "ith" state can be addressed with the index "i,j."

In one embodiment, to start the Baum-Welch algorithm, the control portion 50 sets all of the LVTP complexes to FV generation mode. The LVTP engines then operate in parallel, as described below, to generate the FV matrix 230. Subsequently, the control portion 50 sets all of the LVTP complexes to BV generation mode. The LVTP engines then operate in parallel to generate the BV matrix 240, the final EP matrix 212, and the final TP matrix 222, as described in more detail below.

As shown, when generating an LV, the LVTP engine 84A obtains LVs and so-called "transition*emission probabilities" as input, and the LVTP engine 84A generates a calculated LV as output. (For purposes of this disclosure, a transition*emission probability may also be referred to as a "(T*E) probability" or simply as "T*E.") Depending on the mode of operation of the LVTP engine 84A (i.e., FV generation mode or BV generation mode), those LVs are either FVs or BVs. LV generator 150 may send the calculated LV to forward write selector 88.
The forward write selector 88 may then save the LV to the L1C 46A, according to the source 3-F parameter and according to the current i and j indexes from the index generator 72. Additionally, if the LV is a BV, the forward write selector 88 may send the BV directly to the TP generator 170 for immediate consumption in generating the TPs.

As far as the input LVs are concerned, in one embodiment, LV generator 150 reads the LVs that have already been calculated from L1C 46A, based on the i-index value and the j-index value from index generator 72. The LV generator 150 also obtains the corresponding T*E values from the TP generator 170. Specifically, as described in more detail below, TP generator 170 may save a set of T*E values to registers 86, and LV generator 150 may read that set of T*E values from registers 86. The circuitry within LV generator 150 then generates a new "calculated LV" based on this input data (e.g., according to equation (1) or (2), depending on whether LVTP engine 84A is in FV generation mode or BV generation mode). Specifically, the circuitry can continue to "spin," processing as many different T*Es and LVs as necessary to generate the new LV for the targeted observation (i.e., for the observation dispatched to that LVTP engine).

In one embodiment or scenario, the circuitry in LV generator 150 for generating LVs includes dot product tree 152, accumulator 154, and reduction tree 156. Also, when the LV generator 150 obtains the LVs, it reads a row of LVs from the L1C 46A, based on the i-index value and the j-index value from the index generator 72. The row contains four consecutive 32-bit LVs. And when the LV generator 150 obtains the T*Es corresponding to those LVs, it obtains them from the registers 86, as indicated above. The LV generator 150 then treats each LV and the corresponding T*E as a pair, using dot product tree 152 to multiply the values in each pair.
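For purposes of illustration only, the multiply-accumulate path formed by the dot product tree, the accumulator channels, and the reduction tree may be modeled as follows (a pure-software stand-in; the function names and the two-channel accumulator are illustrative assumptions, not claimed structures):

```python
def dot4(lvs, tes):
    # Dot product tree 152: one row of four LVs times four matching T*E values.
    return sum(lv * te for lv, te in zip(lvs, tes))

def compute_lv(lv_rows, te_rows, num_channels=2):
    channels = [0.0] * num_channels                  # accumulator 154 channels
    for n, (lvs, tes) in enumerate(zip(lv_rows, te_rows)):
        channels[n % num_channels] += dot4(lvs, tes)
    return sum(channels)                             # reduction tree 156 output

lv = compute_lv([[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]],
                [[0.5, 0.5, 0.5, 0.5], [0.25, 0.25, 0.25, 0.25]])
# (1+2+3+4)*0.5 + (5+6+7+8)*0.25 = 5.0 + 6.5 = 11.5
```

Each pass of the loop stands in for one four-wide dot product over a row of LVs and the corresponding T*Es, and the final sum stands in for the reduction to a single scalar "calculated LV."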
In one embodiment, dot product tree 152 performs a dot product operation on four pairs of single-precision floating-point ("FP32") values to produce a scalar value (e.g., an FP32 value) as output. (BW core 40A may handle FP32 variables in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards, such as the IEEE 754 standard for floating-point arithmetic, IEEE 754-2019, published July 22, 2019.) Accumulator 154 receives and accumulates the output from dot product tree 152. Also, the accumulator may use multiple channels to accumulate the output. Reduction tree 156 receives and reduces the output from all of those channels to generate a single scalar output. This output will be the LV (an FV or a BV). Accordingly, it is illustrated in FIG. 10 as the "calculated LV."

Specifically, the LV generator 150 sends the calculated LV to the forward write selector 88. If the LV is an FV, the forward write selector 88 saves the LV to the L1C 46A, according to the source 3-F parameter and according to the current i-index and j-index from the index generator 72. However, if the LV is a BV, the forward write selector 88 sends the BV directly to the TP generator 170 for immediate consumption in generating the TPs.

The TP generator 170 operates to update the TPs according to the Baum-Welch algorithm (e.g., according to Equation 3). As indicated above, TP generator 170 includes TP cache 179, which is a local memory for storing TPs. Specifically, each TP is stored as a TP numerator and a TP denominator. (Similarly, each EP is stored as an EP numerator and an EP denominator.)

The TP generator 170 also calculates the T*E values for use by the LV generator 150. To start, TP generator 170 reads the current EP from L1C 46A, based on the i and j indexes from index generator 72. The TP generator 170 also generates the appropriate address for the current transition numerator, based on those indexes.
The TP generator 170 then uses this address to read the current transition numerator from the TP cache 179. Multiplier 172 takes the current transition numerator and the current EP and calculates T*E from those values. TP generator 170 then stores the result in registers 86 for use by LV generator 150 in determining the LV.

Also, the TP generator 170 includes a multiplier 178, an adder 176, and a multiplexer 177. Multiplier 178 helps calculate the numerator of the transition probability (e.g., according to equation (3)). Specifically, multiplier 178 reads the current FV from L1C 46A, and multiplier 178 receives the calculated BV directly from LV generator 150 (via forward write selector 88). Multiplier 178 multiplies those two values and then forwards the result to adder 176. The adder 176 also obtains the previous numerator value from the TP cache 179. Adder 176 then adds that new product to the current transition numerator (e.g., according to the summation portion of equation (3)) to generate a posterior transition numerator, which adder 176 sends to multiplexer 177. The multiplexer 177 then saves the posterior transition numerator to the TP cache 179.

Also, the numerator and denominator formulas in equation (3) are nearly identical, and the TP generator 170 handles the difference by using hardware flags to indicate when the denominator or numerator is complete. Accordingly, the components in the TP generator 170 continue to operate on the outstanding value until it is completed.

TP generator 170 also includes a floating-point (FP) divide pipeline 174 capable of handling four single-precision floating-point divisions. For example, FP divide pipeline 174 may include four divide pipelines operating in parallel. Also, when the FP divide pipeline 174 reads from the TP cache 179, the read operation returns 128-bit data containing four FP32 values.
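For purposes of illustration only, the transition-numerator update path (multiplier 178, adder 176, multiplexer 177) and the final division performed by FP divide pipeline 174 may be modeled as follows (a software stand-in; the names and example values are illustrative assumptions, not claimed structures):

```python
def update_tp_numerator(cached_numerator, fv, bv):
    product = fv * bv                   # multiplier 178: FV times calculated BV
    return cached_numerator + product   # adder 176 -> multiplexer 177 -> TP cache

def finalize_tps(numerators, denominators):
    # FP divide pipeline 174: four FP32 divisions in parallel (a loop here).
    return [n / d for n, d in zip(numerators, denominators)]

num = 0.0
for fv, bv in [(0.25, 0.5), (0.5, 0.5)]:    # accumulate over observations
    num = update_tp_numerator(num, fv, bv)   # summation portion of equation (3)
# num is now 0.25*0.5 + 0.5*0.5 = 0.375
```

The loop stands in for the repeated write-back of the posterior transition numerator to the TP cache, and `finalize_tps` stands in for the division that produces the posterior TPs once both the numerator and the denominator are complete.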
Accordingly, a divisor/denominator read returns four divisors, and a dividend/numerator read returns four dividends. The FP divide pipeline 174 may then perform FP32 division on all four dividend-divisor pairs in parallel.

Additionally, the multiplexer 177 selects either the output of the FP divide pipeline 174 or the output of the adder 176 to be stored in the TP cache 179 as the previous value to be subsequently used when performing the summation portion of equation (3). However, unless both the numerator and denominator values are ready in the TP cache 179, the FP divide pipeline 174 will not be enabled.

FIG. 11 is a block diagram with further details regarding the EP generator 74. As illustrated, the EP generator 74 includes three smaller blocks: an emission numerator pipeline 110, an emission denominator pipeline 120, and an emission division pipeline 130. EP generator 74 may use those blocks to generate EPs (e.g., according to Equation 4).

As shown, emission numerator pipeline 110 includes memory interface 116, forward address generator 112, backward address generator 118, output address generator 114, multiplier 113, and adder (or incrementer) 115. To generate the posterior EP numerator for the observation at target location "X" in the current input slice, emission numerator pipeline 110 needs to sum the products of the current FV numerators and current BV numerators for all positions from position 1 to target location "X". Therefore, the forward address generator 112 obtains from the EP update controller 56 the base address (in L1C 46A) for the first FV numerator in the current slice. Likewise, the backward address generator 118 also obtains from the EP update controller 56 the base address for the first BV numerator in the current slice.
Forward address generator 112 and backward address generator 118 then use those base addresses to generate the appropriate addresses for reading the first forward numerator ("FVNUM") and the first backward numerator ("BVNUM") from L1C 46A via memory interface 116. Multiplier 113 then multiplies those two values and sends the resulting "working numerator" to adder 115. The emission numerator pipeline 110 also saves the working numerator to the L1C, to reside in the location that will ultimately hold the final result. The emission numerator pipeline 110 may use the output address generator 114 and the memory interface 116 to determine that address and perform the write.

Forward address generator 112 and backward address generator 118 may then increment the read addresses and read the next forward numerator and the next backward numerator from the L1C. Multiplier 113 then multiplies those two values and sends the resulting "new numerator" to adder 115. The adder 115 then reads the working numerator ("WorkingNUM") from the L1C 46A and adds the new numerator to it. The emission numerator pipeline 110 then saves the new working numerator to the L1C, to reside in the location that will ultimately hold the final result. This process may continue until the emission numerator pipeline 110 has finished processing the numerator at target location X and has written the resulting working numerator to the L1C 46A. That value will then be the posterior EP numerator.

Emission denominator pipeline 120 may have the same or a similar design as emission numerator pipeline 110. However, in addition to generating the emission denominator, the emission denominator pipeline 120 also generates the transition denominator. To generate the TP denominator, emission denominator pipeline 120 uses a process similar to the one described above.
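For purposes of illustration only, the running sum performed by the emission numerator pipeline may be modeled as follows (a software stand-in; the function name and example values are illustrative assumptions, and the hardware additionally writes the working numerator back to the L1C after each position):

```python
def ep_numerator(fv_nums, bv_nums, x):
    # Sum the products of the forward and backward numerators for
    # positions 1..X (multiplier 113 and adder 115).
    working = 0.0
    for pos in range(x):
        working += fv_nums[pos] * bv_nums[pos]
        # In hardware, `working` is saved to the L1C after each position,
        # in the location that will ultimately hold the final result.
    return working                       # the posterior EP numerator for X

val = ep_numerator([1.0, 2.0, 4.0], [0.5, 0.5, 0.25], 3)
# 1.0*0.5 + 2.0*0.5 + 4.0*0.25 = 2.5
```

The same accumulate-and-write-back pattern also sketches the emission denominator pipeline 120, which has the same or a similar design.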
But when the emission denominator pipeline 120 has finished calculating the denominator for position X-1, the emission denominator pipeline 120 saves that denominator to the TP cache 179 as the posterior TP denominator. The emission denominator pipeline 120 then runs the process for position X and saves the resulting denominator to the L1C as the posterior EP denominator. Also, the EP generator 74 computes the numerator and the denominator in parallel, independently of each other. FIG. 12 is a block diagram with further details regarding the emission division pipeline 130. As shown, the emission division pipeline 130 includes a memory interface 136, a numerator address generator 132, a denominator address generator 138, an output address generator 134, and an FP divide pipeline 139. To generate the EP for target position "X", the emission division pipeline 130 uses the numerator address generator 132 and the denominator address generator 138 to determine the addresses for the numerator and the denominator for position X, and the emission division pipeline 130 uses the memory interface 136 to read those values from L1C 46A. The FP division pipeline 139 then divides the numerator by the denominator and sends the resulting EP to the output address generator 134, which determines the appropriate location in the L1C to hold the EP. The memory interface 136 then writes the EP to that location to serve as the posterior EP. Furthermore, when components such as the emission division pipeline 130, the emission numerator pipeline 110, the emission denominator pipeline 120, and the TP generator 170 generate values such as TPs, EPs, EP denominators, and so forth, each of those values may actually comprise a set or vector of values, with a different value for each possible observation state.
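A minimal software sketch of the division step, extended to the per-state vectors just described (the names are illustrative assumptions, not the hardware's interfaces):

```python
# Hypothetical sketch of the emission division pipeline: divide the
# posterior EP numerator by the posterior EP denominator, element-wise,
# with one entry per possible observation state.

def emission_probability(ep_num, ep_den):
    """Per-state division: each entry of ep_num / ep_den is one posterior EP."""
    return [n / d for n, d in zip(ep_num, ep_den)]

# Four possible observation states -> vectors of four values each.
ep_num = [0.3, 0.1, 0.4, 0.2]
ep_den = [1.0, 0.5, 0.8, 0.4]
print(emission_probability(ep_num, ep_den))
```

In the hardware, the quotient is routed to the output address generator 134, which picks the L1C location that will serve as the posterior EP.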
For example, if there are four possible observation states, the BW subsystem 50 maintains a set or vector of four probabilities for each of these terms, such as the posterior EP, the posterior TP, and so on. Additionally, the BW subsystem described in this disclosure is very flexible, in that it allows applications to specify many different parameters, including the number of possible observation states. The disclosed BW subsystem is also efficient in terms of execution time, since the BV phase, the EP update phase, and the TP update phase all work in parallel after the FV phase is complete. Furthermore, the BV data from the LV generator is captured and used directly in the EP update phase and the TP update phase. Also, the FVs are stored in the L1C before the other phases execute, and the EP update and the TP update read the FVs from the L1C, thereby avoiding costly accesses to RAM. One advantage of making the BV data readily available within the BW subsystem is that doing so enables the BW subsystem to avoid redundant computations (such as multiplications). Additionally, making the BV data readily available within the BW subsystem reduces the required communication bandwidth between the BW core and other components, relative to other approaches. For example, the current approach makes the BV data available to the TP update phase without the need to recompute the BV data or retrieve the BV data from RAM during the TP update phase. And the current teachings further enhance bandwidth efficiency by providing a TP cache in the BW core to save the TPs. Therefore, no external bandwidth is required to read those values. The current teachings also enable the BW subsystem to start updating the TPs and EPs while the BV phase is still in progress, by using data generated during the BV phase as that data becomes available. This approach may be referred to as a "partial computation approach".
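The partial computation approach can be illustrated with a small software sketch, in which the consumer of the backward values starts its work as soon as each BV is produced, rather than after the whole backward pass finishes. The names and the placeholder arithmetic are assumptions for illustration only:

```python
# Illustrative sketch (not the hardware design) of the "partial
# computation approach": the TP/EP update consumes each backward value
# as soon as the BV phase yields it.

def bv_phase(num_positions):
    """Yield backward values one at a time (placeholder recurrence)."""
    bv = 1.0
    for t in range(num_positions - 1, -1, -1):
        bv *= 0.5                 # stand-in for the real backward recurrence
        yield t, bv

def run_partial(num_positions):
    tp_accum = 0.0
    for t, bv in bv_phase(num_positions):
        # TP/EP updates begin here, while the BV phase is still running.
        tp_accum += bv
    return tp_accum

print(run_partial(4))             # 0.5 + 0.25 + 0.125 + 0.0625
```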
In contrast, other approaches can only start updating the EPs and TPs after the FV and BV phases are completed. Relative to other approaches, the partial computation approach can reduce overall memory access and storage requirements, and can improve parallelism and reduce the execution time of the Baum-Welch algorithm without increasing the utilization of computing resources outside the BW subsystem.

T*E lookup table

FIG. 13 is a block diagram with details regarding an alternative embodiment of a BW accelerator. In particular, FIG. 13 primarily focuses on an LVTP engine 910 with features for efficiently processing observations according to a profile hidden Markov model (PHMM). For example, observations in the biological field can be amenable to processing according to a PHMM. In one embodiment or scenario, the LVTP engine 910 resides in a BW accelerator in a data processing system that includes the same kinds of components as data processing system 10, except that certain aspects of the LVTP engine and certain aspects of the control portion of the BW core are changed. In the embodiment of FIG. 13, the LVTP engine 910 includes features that enable the BW accelerator 41 to efficiently analyze observations according to a PHMM. Specifically, the LVTP engine 910 may be configured with a PHMM using a maximum of 36 T*E values. For the purposes of this disclosure, such a PHMM may be referred to as a "generic PHMM." Accordingly, BW subsystems, BW accelerators, and BW cores having features like those in FIG. 13 may be referred to as "generic BW subsystems," "generic BW accelerators," and "generic BW cores," respectively. As shown, an LV generator 920 and a TP generator 930 are included in the LVTP engine 910. Like the LV generator 150, the LV generator 920 includes a dot product tree 922, an accumulator 924, and a reduction tree 926. However, unlike the LV generator 150, the LV generator 920 also uses TELUTs stored in a T*E look-up table (TELUT) store 980.
The LV generator 920 may operate like the LV generator 150, except that when the TELUT is enabled, the dot product tree 922 obtains T*E from the TELUT in the TELUT store 980, rather than from the TP generator via registers. Specifically, when the TELUT is enabled, the LV generator 920 uses the i-index and the j-index from the index generator to determine one or more TELUT cells to read from the TELUT store 980, to obtain one or more current T*E values. In the dot product tree 922, one TELUT is utilized for each multiplier. Thus, the TELUT store 980 in FIG. 13 contains four TELUTs, with one value from each of those tables fed into the dot product tree 922. Also, in one embodiment or example scenario, each of those four TELUTs is organized as a 1x36 table, where each i,j combination is mapped to an entry or cell. Before the LVTP engine 910 begins processing observations, the global event controller may load the TELUTs in the TELUT store 980 with the appropriate values. In one embodiment or scenario, the TELUTs in the TELUT store 980 capture all possible combinations of the preset (i.e., initial) transition probabilities and emission probabilities. Since these computations are redundant across many timestamps, and the product is a common parameter of both the forward and backward computations of the Baum-Welch algorithm, the product of each combination of initial transition probability and emission probability is stored in the TELUTs. Like the TP generator 170, the TP generator 930 includes a multiplier 932, an adder 934, a multiplexer 936, a TP cache 940, an FP divide pipeline 942, and a multiplier 944. However, when the TELUT is enabled, the multiplier 944 is disabled or not used. Therefore, the multiplier 944 is shown filled with dots. Since each TELUT can hold up to 36 LUT entries, the LVTP engine 910 can be used effectively and efficiently, for different combinations of transition probabilities and emission probabilities, with any application that can fit within the 36 entries.
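A hedged software sketch of building such a lookup table follows. It assumes that the 36 entries correspond to the i,j combinations of a transition matrix and an emission table; the function and parameter names are illustrative, not taken from the disclosure:

```python
# Hypothetical sketch of building a T*E lookup table: precompute the
# product of each initial transition probability T[i][j] and the
# emission probability E[j][obs_state], so the dot-product stage can
# read T*E instead of re-multiplying on every timestamp.

def build_telut(T, E, obs_state):
    """Map each (i, j) combination to T[i][j] * E[j][obs_state]."""
    n = len(T)
    return {(i, j): T[i][j] * E[j][obs_state]
            for i in range(n) for j in range(n)}

T = [[0.9, 0.1], [0.4, 0.6]]      # toy 2-state transition matrix
E = [[0.7, 0.3], [0.2, 0.8]]      # emission probs per state and symbol
telut = build_telut(T, E, obs_state=0)
print(telut[(0, 1)])              # T[0][1] * E[1][0]
```

Because the initial probabilities are fixed across timestamps, the table is filled once (by the global event controller, in the disclosed design) before observation processing begins.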
However, for different types of applications, the TELUT (or the TELUT store 980) may be disabled, the LV generator 920 may obtain calculated T*E values from the TP generator 930, transition probabilities may be read from the TP cache to perform that multiplication, and the TP generator 930 may read the emission probabilities from the L1C. The TELUT store 980 and the TELUTs therein enable the TP generator 930 to avoid redundant multiplications of transition probabilities and emission probabilities. In one embodiment or scenario, by using the TELUTs in the TELUT store 980, the LVTP engine 910 achieves up to a 66% reduction in processing bandwidth for each LVTP engine. For example, when the read length is about 650 bases, the TP generator 930 can avoid 100 million multiplications in the forward stage and 61 million multiplications in the backward stage. In FIG. 13, the registers 950, the L1C 970, and the forward write selector 960 are shown in dashed lines to indicate that those components reside external to the LVTP engine 910. As indicated above, those components may be the same kinds of components discussed with reference to data processing system 10.

Sort and filter LVs

Furthermore, the control portion of the BW core may include a sorting histogram manager 990 that uses a histogram-based sorting mechanism to decide whether the forward or backward values of states need to be calculated. In some applications, sorting may greatly reduce (i.e., filter) the number of states that need to be computed at each timestamp, without reducing the accuracy of the Baum-Welch algorithm. In general, when sorting and filtering are enabled, the sorting histogram manager 990 sorts the FVs and BVs, and then the sorting histogram manager 990 discards values below a certain threshold to reduce computational requirements, because those values will not contribute significantly to the result.
Specifically, if sorting is enabled, the sorting histogram manager 990 compares each written value (e.g., forward value Ft(i)) to sixteen predefined thresholds, to count the number of written values in each range. In one embodiment or scenario, the sorting histogram manager 990 divides the entire range of single-precision floating point numbers into sixteen equal parts (e.g., each group spans a range of 4.25E+37), and the sorting histogram manager 990 determines which range or threshold probability value will filter out a threshold number of writes. In other words, the sorting histogram manager 990 uses a probability value threshold and a write count threshold to filter out writes. For example, if the FV matrix includes 10,000 FVs, the sorting histogram manager 990 sorts those FVs (sorting the 10,000 values in descending order), and then determines which probability value threshold can be used to reduce the number of values to not more than the write count threshold. For example, if the write count threshold is 1000 writes, or 10% of the writes, the sorting histogram manager 990 determines which probability value threshold can be used to reduce the number of writes to be processed to 1000. Then, during the next timestamp, the sorting histogram manager 990 filters out all writes that fall below the calculated probability value threshold. Therefore, 1000 FVs, instead of 10,000 FVs, will be used to perform the overall calculation. For some applications (e.g., for genome refinement), this kind of approximation will not affect the overall accuracy of the Baum-Welch algorithm. Additionally, the sorting histogram manager 990 may be configured to change the threshold for each group as required.
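The threshold-selection step described above can be sketched as follows. This is a simplified software model, not the hardware: the bin layout, names, and parameters are illustrative assumptions, and the sketch bins over the observed value range rather than the full single-precision range:

```python
# Illustrative sketch of the histogram-based sort-and-filter: count
# survivors for each of sixteen equally spaced thresholds during one
# timestamp, then pick the threshold that keeps no more than
# `write_budget` values for the next timestamp.

def pick_threshold(values, write_budget, num_bins=16):
    """Return the lowest bin boundary that retains <= write_budget values."""
    lo, hi = 0.0, max(values)
    step = (hi - lo) / num_bins or 1.0
    thresholds = [lo + k * step for k in range(num_bins)]
    # Walk thresholds from lowest to highest; return the first (least
    # aggressive) one whose survivor count fits the budget.
    for thr in thresholds:
        if sum(1 for v in values if v >= thr) <= write_budget:
            return thr
    return thresholds[-1]

vals = [0.01, 0.02, 0.05, 0.4, 0.6, 0.9]
thr = pick_threshold(vals, write_budget=3)
survivors = [v for v in vals if v >= thr]
print(thr, survivors)
```

During the next timestamp, writes falling below the selected threshold would simply be discarded, which is what reduces the 10,000-FV example to 1000 FVs.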
The count of states exceeding a certain threshold enables the sorting histogram manager 990 to pick a threshold for efficiently discarding, in the next timestamp, the states that fall below the selected threshold.

Convert Standard PHMM to Generic PHMM

Converting any standard PHMM to a generic PHMM is theoretically possible, but it may not always be feasible for accuracy or computational resource purposes. For purposes of discussion, a "standard PHMM" is denoted by "G1(V1,A1)" and a "generic PHMM" is denoted by "G(V,A)". Also, assume that both graphs are constructed to represent a single sequence "SG" of length "N=nSG", and have a source state "v0" and a sink state "vN+1", which are located at the beginning and the end of the graph, respectively. It is also assumed that there is an input sequence "S" for the training or inference step. Matching states in both the standard PHMM and the generic PHMM perform match and replace events in exactly the same way. Based on these assumptions, it can be shown that a generic PHMM can both (a) insert as many characters as a standard PHMM can insert, and (b) delete as many characters as a standard PHMM can delete, thereby demonstrating that, compared to the standard PHMM, the generic PHMM does not have any theoretical limitation on the combinations of modifications that can be made. First, it is asserted that the maximum number of characters that a standard PHMM can insert between two characters SG[t] and SG[t+1] is nS. This assertion is proved by the following lemma: the number of visits to insertion state vtI,1 never reaches nS+1. This is known because accessing the insertion state vtI,1 consumes a character (i.e., an emission) from the input sequence S. Thus, it is only possible to access vtI,1 not more than nS times.
Thus, it has been shown that a generic PHMM can insert as many characters as a standard PHMM can, if the maximum insertion state parameter is set to l=nS. Second, it is asserted that the maximum number of characters that a standard PHMM can remove from SG is nSG. This assertion is proved by the following lemma: there are no more than nSG deletion states in a standard PHMM, since (a) there are only as many deletion states as there are matching states, and (b) for every character in SG, only a single matching state exists. Thus, there cannot be more than nSG deletion states. Therefore, it is not possible to delete more than nSG characters, and a generic PHMM can delete as many characters as a standard PHMM can. However, there are practical limitations when implementing a generic PHMM with a generic BW accelerator.

Additional Embodiments

FIG. 14 is a block diagram of a system 1200 in accordance with one or more embodiments. System 1200 may include one or more processors 1210, 1215 coupled to a controller hub 1220. In one embodiment, the controller hub 1220 includes a graphics memory controller hub (GMCH) 1290 and an input/output hub (IOH) 1250 (which may be on separate chips); the GMCH 1290 includes memory and graphics controllers, to which memory 1240 and a coprocessor 1245 are coupled, for controlling operations within the coupled memory; and the IOH 1250 couples input/output (I/O) devices 1260 to the GMCH 1290. Alternatively, one or both of the memory and graphics controllers are integrated within the processor, the memory 1240 and the coprocessor 1245 are coupled directly to the processor 1210, and the controller hub 1220 and the IOH 1250 are in a single chip. The optional nature of the additional processor 1215 is denoted in FIG. 14 with dashed lines.
Each processor 1210, 1215 may include one or more processing cores and may be some version of the processor 12. The memory 1240 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1220 communicates with the processor(s) 1210, 1215 via a multidrop bus such as a front side bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1295. In one embodiment, the coprocessor 1245 is a special purpose processor, such as, for example, a high throughput MIC processor, a network or communications processor, a compression engine, a graphics processing unit (GPU), a general purpose GPU (GPGPU), an embedded processor, a BW accelerator, or the like. In one embodiment, the controller hub 1220 may include an integrated graphics accelerator. Various differences may exist between the physical resources 1210, 1215 in a range of quality metrics, including architecture, microarchitecture, thermal and power consumption characteristics, and the like. In one embodiment, the processor 1210 executes instructions that control data processing operations of a general type. Embedded within these instructions may be coprocessor instructions. The processor 1210 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1245. Accordingly, the processor 1210 issues these coprocessor instructions (or control signals representing coprocessor instructions) to the coprocessor 1245 over a coprocessor bus or other interconnect. The coprocessor(s) 1245 accept and execute the received coprocessor instructions. FIG. 15 is a block diagram of a first more specific exemplary system 1300 in accordance with one or more embodiments. As shown in FIG. 15, the multiprocessor system 1300 is a point-to-point interconnect system and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350.
Each of the processors 1370 and 1380 may be some version of the processor 12. In one embodiment, the processors 1370 and 1380 are the processors 1210 and 1215, respectively, and the coprocessor 1338 is the coprocessor 1245. In another embodiment, the processors 1370 and 1380 are the processor 1210 and the coprocessor 1245, respectively. Alternatively, the processor 1380 may be a BW accelerator. The processors 1370 and 1380 are shown including integrated memory controller (IMC) units 1372 and 1382, respectively. The processor 1370 also includes point-to-point (P-P) interfaces 1376 and 1378 as part of its bus controller unit; similarly, the second processor 1380 includes P-P interfaces 1386 and 1388. The processors 1370, 1380 may exchange information via a P-P interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 15, the IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors. The processors 1370, 1380 may each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. The chipset 1390 may optionally exchange information with the coprocessor 1338 via a high performance interface 1339. In one embodiment, the coprocessor 1338 is a special purpose processor, such as, for example, a high throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. A shared cache (not shown) may be included in either processor, or external to both processors but connected with the processors via a P-P interconnect, such that local cache information for either or both processors may be stored in the shared cache if a processor is placed into a low power mode. The chipset 1390 may be coupled to a first bus 1316 via an interface 1396.
In one embodiment, the first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the invention is not so limited. As shown in FIG. 15, various I/O devices 1314 may be coupled to the first bus 1316, along with a bus bridge 1318 that couples the first bus 1316 to a second bus 1320. In one embodiment, one or more additional processors 1315, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays (FPGAs), or any other processor, are coupled to the first bus 1316. In one embodiment, the second bus 1320 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 1320, including, for example, a keyboard and/or mouse 1322, communication devices 1327, and a storage unit 1328, such as a disk drive or other mass storage device which may include instructions/code and data 1330. Further, an audio I/O 1324 may be coupled to the second bus 1320. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multidrop bus or other such architecture. FIG. 16 is a block diagram of a second more specific exemplary system 1400 in accordance with one or more embodiments. Certain aspects of FIG. 15 have been omitted from FIG. 16 in order to avoid obscuring other aspects of FIG. 16. FIG. 16 illustrates that the processors 1370, 1380 may include integrated memory and I/O control logic ("CL") 1372 and 1382, respectively. Thus, the CLs 1372, 1382 include integrated memory controller units and include I/O control logic. FIG. 16 illustrates that not only are the memories 1332, 1334 coupled to the CLs 1372, 1382, but the I/O devices 1414 are also coupled to the control logic 1372, 1382.
Legacy I/O devices 1415 are coupled to the chipset 1390. FIG. 17 is a block diagram of a system on a chip (SoC) 1500 in accordance with one or more embodiments. Dashed boxes denote optional features on more advanced SoCs. In FIG. 17, interconnect unit(s) 1502 are coupled to: an application processor 1510 that includes a set of one or more cores 1102A-N (including constituent cache units 1104A-N) and shared cache unit(s) 1106; a system agent unit 1110; bus controller unit(s) 1116; integrated memory controller unit(s) 1114; one or more coprocessors 1520, which may include integrated graphics logic, an image processor, an audio processor, a video processor, and/or a BW accelerator; a static random access memory (SRAM) unit 1530; a direct memory access (DMA) unit 1532; and a display unit 1540 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1520 include a special purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high throughput MIC processor, an embedded processor, a security processor, or the like.

Embodiments also include the following examples:

Example A1 is a processor package comprising at least one BW core and an LV generator in the BW core. The LV generator is used to generate FVs and BVs for a set of observations. The BW core also includes a TP generator for generating TPs for the set of observations. The BW core also includes an EP generator for generating EPs for the set of observations. The BW core is used to generate, in parallel, at least two types of probability values from the group consisting of FV, BV, EP, and TP.

Example A2 is the processor package of Example A1, further comprising an LVTP engine in the BW core. The LVTP engine includes the LV generator and the TP generator. The LV generator is used to make a completed BV available to the TP generator in response to generating the completed BV.
The TP generator is used to generate at least one of the TPs, using the completed BV, before the LV generator has finished generating the BVs.

Example A3 is the processor package of Example A2, wherein the EP generator is to generate at least one EP for the set of observations before the LV generator has finished generating the BVs.

Example A4 is the processor package of Example A1, further comprising at least a first LVTP engine and a second LVTP engine in the BW core. The first LVTP engine includes a first LV generator for generating FVs for a first subset of observations from the set of observations. The second LVTP engine includes a second LV generator for generating FVs for a second subset of observations from the set of observations. The first LVTP engine and the second LVTP engine are used to work in parallel to generate the FVs. Example A4 may also include the features of any one or more of Examples A2-A3.

Example A5 is the processor package of Example A4, further comprising at least a first LVTP complex and a second LVTP complex in the BW core. The first LVTP complex includes the first LVTP engine and the second LVTP engine, and the second LVTP complex includes a third LVTP engine and a fourth LVTP engine. Also, the first LVTP engine, the second LVTP engine, the third LVTP engine, and the fourth LVTP engine are used to work in parallel to generate the FVs.

Example A6 is the processor package of Example A1, further comprising a global event controller in communication with the BW core. The global event controller is used to enable an application to specify parameters for applying the Baum-Welch algorithm to a set of observations, wherein those parameters include a numerical parameter that specifies how many possible states are available for observation.
Example A6 may also include the features of any one or more of Examples A2-A5.

Example A7 is the processor package of Example A1, wherein the BW core supports a BW acceleration instruction that includes a first parameter specifying a number of observation slices to be processed, and a second parameter specifying, for each observation slice, a number of observations to be processed. Example A7 may also include the features of any one or more of Examples A2-A6.

Example A8 is the processor package of Example A1, further comprising a TP cache in the BW core, wherein the TP generator is to save generated TPs to the TP cache and read TP data from the TP cache. Example A8 may also include the features of any one or more of Examples A2-A7.

Example A9 is the processor package of Example A8, further comprising a global event controller in communication with the BW core. The global event controller is used to copy the TPs of an initial TP matrix into the TP cache.

Example A10 is the processor package of Example A1, further comprising an L1C in the BW core. Also, the EP generator is used to save generated EPs to the L1C. Example A10 may also include the features of any one or more of Examples A2-A9.

Example B1 is a data processing system that includes a host processor, a RAM in communication with the host processor, and at least one BW core in communication with the host processor. The BW core includes: an LV generator for generating FVs and BVs for a set of observations; a TP generator for generating TPs for the set of observations; and an EP generator for generating EPs for the set of observations. Also, the BW core is used to generate, in parallel, at least two types of probability values from the group consisting of FV, BV, EP, and TP.

Example B2 is the data processing system of Example B1, further comprising an LVTP engine in the BW core. The LVTP engine includes the LV generator and the TP generator.
Also, the LV generator is used to make a completed BV available to the TP generator in response to generating the completed BV, and the TP generator is used to generate at least one of the TPs, using the completed BV, before the LV generator has finished generating the BVs.

Example B3 is the data processing system of Example B1, further comprising a processor package including the host processor, the BW core, and a global event controller. The global event controller is used to enable an application to specify parameters for applying the Baum-Welch algorithm to the set of observations, wherein the parameters include a first parameter for specifying how many possible states are available for observation. Example B3 may also include the features of Example B2.

Example B4 is the data processing system of Example B1, wherein the at least one BW core includes a first BW core and a second BW core. Also, the data processing system further includes a global event controller. The global event controller is used to (a) automatically divide a set of raw observations from an application into a first subset and a second subset, (b) cause the first BW core to generate TPs for the first subset, and (c) cause the second BW core to generate TPs for the second subset. Example B4 may also include the features of any one or more of Examples B2-B3.

Example B5 is the data processing system of Example B4, further comprising a first L1C in the first BW core and a second L1C in the second BW core.
Furthermore, the global event controller is configured to (a) automatically generate a first slice comprising the first subset of observations and a first set of filters, (b) automatically generate a second slice comprising the second subset of observations and a second set of filters, (c) load the first slice into the first L1C, and (d) load the second slice into the second L1C.

Example B6 is the data processing system of Example B1, further comprising at least a first LVTP engine and a second LVTP engine in the BW core. The first LVTP engine includes a first LV generator for generating FVs for a first subset of observations from the set of observations. The second LVTP engine includes a second LV generator for generating FVs for a second subset of observations from the set of observations. The first LVTP engine and the second LVTP engine are used to work in parallel to generate the FVs. Example B6 may also include the features of any one or more of Examples B2-B5.

Example B7 is the data processing system of Example B6, wherein the at least one BW core includes a first BW core and a second BW core; the first BW core includes a plurality of LVTP complexes, each LVTP complex including a plurality of LVTP engines; the second BW core includes a plurality of LVTP complexes, each LVTP complex including a plurality of LVTP engines; and the LVTP engines from all of the LVTP complexes in all of the BW cores are used to work in parallel to generate the FVs.

Example C1 is an apparatus including a computer-readable medium and instructions in the computer-readable medium which, when executed by a host core in a data processing system that includes a BW subsystem with at least one BW core, cause the BW subsystem to: (a) generate FVs and BVs for a set of observations; (b) generate TPs for the set of observations; and (c) generate EPs for the set of observations; wherein the instructions, when executed, cause the BW subsystem to generate, in parallel, at least two types of probability values from the group consisting of FV, BV, EP, and TP.

Example C2 is the apparatus of Example C1, wherein the instructions, when executed, cause a global event controller in the BW subsystem to configure the BW subsystem based on parameters provided by an application, wherein the parameters include a numerical parameter for specifying how many possible states are available for observation.

Example C3 is the apparatus of Example C2, wherein the instructions, when executed, further cause the global event controller to (a) automatically divide a set of raw observations from the application into a first subset and a second subset, (b) use a first BW core in the BW subsystem to generate FVs for the first subset, and (c) use a second BW core in the BW subsystem to generate FVs for the second subset.

Example D1 is a processor package comprising: at least one BW core; an LV generator in the BW core, the LV generator to generate FVs and BVs for a set of observations; an EP generator in the BW core, the EP generator to generate EPs for the set of observations; a TP generator in the BW core to generate TPs for the set of observations; and a TELUT store in the BW core for storing preconfigured T*E values for use by the LV generator in generating the FVs and BVs.

Example D2 is the processor package of Example D1, wherein the TELUT store enables the TP generator to complete an iteration of the Baum-Welch algorithm without computing T*E values for at least some of the observations in the set of observations.

Example D3 is the processor package of Example D1, further comprising at least a first LVTP engine and a second LVTP engine in the BW core. The first LVTP engine includes a first LV generator for generating FVs for a first subset of observations from the set of observations, and a first TELUT store.
The second LVTP engine includes a second LV generator for generating FVs for a second subset of observations from the set of observations, and a second TELUT store. The first LVTP engine and the second LVTP engine are used to work in parallel to generate the FVs. The first LV generator and the second LV generator are used, in generating the FVs and BVs, to use T*E values stored in the first TELUT store and T*E values stored in the second TELUT store, respectively. Example D3 may also include the features of Example D2.

Example D4 is the processor package of Example D1, further comprising a control portion in the BW core for comparing the FVs to a threshold and for discarding FVs having values below the threshold. Example D4 may also include the features of any one or more of Examples D2-D3.

Example D5 is the processor package of Example D4, wherein the control portion is further for: sorting the FVs during a first timestamp; comparing the FVs to a threshold probability value; and discarding, during a second timestamp, FVs with values below that threshold.

Example D6 is the processor package of Example D4, wherein the control portion is further for: sorting the FVs during a first timestamp; determining a threshold probability value for selecting a threshold amount of FVs to be retained; and discarding, during a second timestamp, FVs with values below the threshold probability value. Example D6 may also include the features of Example D5.

Example D7 is the processor package of Example D1, further comprising a global event controller in communication with the BW core, the global event controller for configuring the TELUT store with predetermined T*E values before the LV generator begins to generate the FVs and BVs.
Example D7 may also include the features of any one or more of Examples D2-D6. Example D8 is the processor package of example D7, wherein the TELUT store comprises at least one TELUT for storing 36 entries. Example D9 is the processor package of example D1, wherein the BW core is used to generate, in parallel, at least two types of probability values from the group consisting of FVs, BVs, EPs, and TPs. Example D9 may also include the features of any one or more of Examples D2-D8. Example D10 is the processor package of example D9, wherein the EP generator is to generate at least one EP for the observation set before the LV generator has finished generating the BVs. Example D11 is the processor package of example D1, further comprising a host core in communication with the BW core. Example E1 is a data processing system that includes a host processor, RAM in communication with the host processor, at least one BW core in communication with the host processor, and an LV generator in the BW core. The LV generator is used to generate FVs and BVs for an observation set. The BW core also includes an EP generator and a TP generator. The EP generator is used to generate EPs for the observation set, and the TP generator is used to generate TPs for the observation set. The BW core also includes a TELUT store for storing preconfigured T*E values for use by the LV generator in generating the FVs and BVs. Example E2 is the data processing system of example E1, wherein the TELUT store enables the TP generator to complete an iteration of the Baum-Welch algorithm without computing T*E values for at least some of the observations in the set of observations. Example E3 is the data processing system of example E1, further comprising at least a first LVTP engine and a second LVTP engine in the BW core. The first LVTP engine includes a first LV generator, for generating FVs for a first subset of observations from the set of observations, and a first TELUT store.
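To illustrate the role of the TELUT store described in Examples D1-D2 and E1-E2, the following Python sketch precomputes the T*E products (transition probability times emission probability) once and then generates the forward variables (FVs) purely by table lookup, so no T*E multiplication is needed per observation during an iteration. The function names and the tiny 2-state model are illustrative assumptions, not taken from the disclosure:

```python
def build_telut(T, E):
    """Precompute the T*E products so the forward recursion becomes pure
    table lookups (the role of the TELUT store in the disclosure).
    T[i][j]: transition prob from state i to state j.
    E[j][o]: prob of emitting symbol o while in state j.
    Returns lut with lut[i][j][o] = T[i][j] * E[j][o]."""
    n_states, n_symbols = len(T), len(E[0])
    return [[[T[i][j] * E[j][o] for o in range(n_symbols)]
             for j in range(n_states)]
            for i in range(n_states)]

def forward_variables(lut, pi, E, obs):
    """Forward variables (FVs): alpha[t][j] = P(obs[0..t], state_t = j),
    computed with lookups into the precomputed table only."""
    n = len(pi)
    alpha = [[pi[j] * E[j][obs[0]] for j in range(n)]]   # initialization
    for t in range(1, len(obs)):
        o = obs[t]
        # alpha[t][j] = sum_i alpha[t-1][i] * T[i][j] * E[j][o],
        # where each T[i][j] * E[j][o] term is a single TELUT lookup
        alpha.append([sum(alpha[-1][i] * lut[i][j][o] for i in range(n))
                      for j in range(n)])
    return alpha
```

An analogous backward recursion for the BVs would reuse the same table, which is the saving that the TELUT store provides.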
The second LVTP engine includes a second LV generator, for generating FVs for a second subset of observations from the set of observations, and a second TELUT store. Also, the first LVTP engine and the second LVTP engine are used to work in parallel to generate the FVs, and the first LV generator and the second LV generator are used to apply, in generating the FVs and BVs, the T*E values stored in the first TELUT and the T*E values stored in the second TELUT, respectively. Example E3 may also include the features of Example E2. Example E4 is the data processing system of example E1, further comprising a control portion in the BW core for comparing the FVs to a threshold and for discarding FVs having values below the threshold. Example E4 may also include the features of any one or more of Examples E2-E3. Example E5 is the data processing system of example E4, wherein the control portion is further for: sorting the FVs during a first timestamp; comparing the FVs to a threshold probability value; and discarding, during a second timestamp, FVs having values below that threshold. Example E6 is the data processing system of Example E4, wherein the control portion is further for: sorting the FVs during a first timestamp; determining a threshold probability value for sorting out a threshold amount of FVs to be retained; and discarding, during a second timestamp, FVs with values below the threshold probability value. Example E6 may also include the features of Example E5. Example E7 is the data processing system of example E1, further comprising a global event controller in communication with the BW core, the global event controller for configuring the TELUT store with predetermined T*E values before the LV generator begins to generate the FVs and BVs.
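The FV pruning scheme of Examples D4-D6 and E4-E6 (sort the FVs during one timestamp, derive a threshold that retains a chosen amount of them, discard the rest during the next timestamp) can be sketched as follows. The function name and the representation of "discarding" as zeroing-out are illustrative assumptions:

```python
def prune_forward_variables(alpha_t, keep):
    """Sketch of the control-portion pruning: sort the FVs for one
    timestamp, pick the threshold probability value that retains `keep`
    of them, and zero out (discard) the rest so that only the surviving
    states are propagated at the next timestamp.
    alpha_t: FVs for one timestamp; keep: number of FVs to retain."""
    if keep >= len(alpha_t):
        return list(alpha_t)                 # nothing to discard
    # the k-th largest FV becomes the threshold probability value
    threshold = sorted(alpha_t, reverse=True)[keep - 1]
    return [v if v >= threshold else 0.0 for v in alpha_t]
```

In hardware, the sort and the discard are described as happening in different timestamps; this single-call sketch only illustrates the selection logic.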
Example E7 may also include the features of any one or more of Examples E2-E6. Example E8 is the data processing system of example E1, wherein the BW core is used to generate, in parallel, at least two types of probability values from the group consisting of FVs, BVs, EPs, and TPs. Example E8 may also include the features of any one or more of Examples E2-E7. Example F1 is an apparatus including a computer-readable medium and instructions in the computer-readable medium that, when executed by a host core in a data processing system that includes a BW subsystem with at least one BW core, cause the BW subsystem to (a) generate FVs and BVs for an observation set, based at least in part on preconfigured T*E values from the TELUT store in the BW core; (b) generate EPs for the observation set; and (c) generate TPs for the observation set. Example F2 is the apparatus of example F1, wherein the TELUT store enables the TP generator to complete an iteration of the Baum-Welch algorithm without computing T*E values for at least some of the observations in the set of observations. In view of the principles and example embodiments described in this disclosure by text and/or illustrations, those skilled in the art will recognize that the described embodiments may be modified in arrangement and detail without departing from the principles described herein. Furthermore, this disclosure uses expressions such as "one embodiment" and "another embodiment" to describe embodiment possibilities. Those expressions, however, are not intended to limit the scope of the present disclosure to particular embodiment configurations. For example, as used herein, those expressions may refer to the same embodiment or to different embodiments, and those different embodiments may be combined into other embodiments. Additionally, the present teachings can be used to advantage in many different kinds of data processing systems.
Such data processing systems may include, but are not limited to, mainframe computers, minicomputers, supercomputers, high performance computing systems, computing clusters, distributed computing systems, personal computers (PCs), workstations, servers, client-server systems, portable computers, laptops, tablets, entertainment devices, audio devices, video devices, audio/video devices (e.g., televisions and set-top boxes), handheld devices, smart phones, telephones, personal digital assistants (PDAs), wearable devices, in-vehicle processing systems, accelerators, systems-on-a-chip (SoCs), and other devices for processing or transmitting information. Accordingly, references to any particular type of data processing system (e.g., a PC) should be construed to encompass other types of data processing systems as well, unless expressly specified otherwise or required by context. A data processing system may also be referred to as an "apparatus." Components of a data processing system may also be referred to as "devices." Also, in accordance with the present disclosure, a device may include instructions and other data that, when accessed by a processor, cause the device to perform particular operations. For purposes of this disclosure, instructions or other data that cause a device to perform operations may be referred to in general as "software" or "control logic." Software that is used during the boot process may be referred to as "firmware." Software that is stored in non-volatile memory may also be referred to as "firmware." Software may be organized using any suitable structure or combination of structures. Accordingly, terms like "program" and "module" are used in general to cover a broad range of software constructs, including, but not limited to, applications, subprograms, routines, functions, procedures, drivers, libraries, data structures, processes, microcode, and other types of software components.
Furthermore, it should be understood that a software module may include more than one component, and those components may cooperate to complete the operations of the module. Also, the operations that the software causes a device to perform may include creating an operating context, instantiating particular data structures, etc. Furthermore, embodiments may include software that is implemented using any suitable operating environment and programming language (or combination of operating environments and programming languages). For example, program code may be implemented in a compiled language, in an interpreted language, in a procedural language, in an object-oriented language, in assembly language, in machine language, or in any other suitable language. A medium that contains data and that allows another component to obtain that data may be referred to as a "machine-accessible medium" or a "machine-readable medium." Accordingly, embodiments may include machine-readable media containing instructions for performing some or all of the operations described herein. Such media may be referred to in general as "apparatus" and in particular as "program products." In one embodiment, software for multiple components may be stored in one machine-readable medium. In other embodiments, two or more machine-readable media may be used to store the software for one or more components. For instance, instructions for one component may be stored in one medium, and instructions for another component may be stored in another medium. Alternatively, a portion of the instructions for one component may be stored in one medium, and the rest of the instructions for that component (as well as instructions for other components) may be stored in one or more other media. Similarly, software that is described above as residing on a particular device in one embodiment may, in other embodiments, reside on one or more other devices.
For example, in a distributed environment, some software may be stored locally, and some may be stored remotely. The machine-readable media of some embodiments may include, without limitation, tangible non-transitory storage components such as magnetic disks, optical disks, magneto-optical disks, dynamic random access memory (RAM), static RAM, non-volatile RAM (NVRAM), read-only memory (ROM), solid state drives (SSDs), phase change memory (PCM), etc., as well as processors, controllers, and other components that include data storage facilities. For purposes of this disclosure, the term "ROM" may be used in general to refer to non-volatile memory devices such as erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash ROM, flash memory, and so forth. Also, operations that are described in one embodiment as being performed on one particular device may, in other embodiments, be performed by one or more other devices. In addition, even though one or more example processes have been described with regard to particular operations performed in a particular sequence, numerous modifications could be applied to those processes to derive numerous alternative embodiments of the present invention. For example, alternative embodiments may include processes that use fewer than all of the disclosed operations, processes that use additional operations, and processes in which the individual operations disclosed herein are combined, subdivided, rearranged, or otherwise altered. It should also be understood that the hardware and software components depicted herein represent functional elements that are reasonably self-contained, so that each can be designed, constructed, or updated substantially independently of the others. In alternative embodiments, components may be implemented as hardware, software, or combinations of hardware and software for providing the functionality described and illustrated herein.
For example, in some embodiments, some or all of the control logic for implementing the described operations may be implemented in hardware logic circuitry, such as, for example, with an application-specific integrated circuit (ASIC) or with a programmable gate array (PGA). Similarly, some or all of the control logic may be implemented as microcode in an integrated circuit chip. Also, terms such as "circuit" and "circuitry" may be used interchangeably herein. Those terms, and terms like "logic," may be used to refer to analog circuitry, digital circuitry, processor circuitry, microcontroller circuitry, hardware logic circuitry, hard-wired circuitry, programmable circuitry, state machine circuitry, any other type of hardware component, or any suitable combination of hardware components. Additionally, components that are described as being coupled to each other, in communication with each other, responsive to each other, or the like need not be in continuous communication with each other or directly coupled to each other, unless expressly specified otherwise. Likewise, when one component is described as receiving data from or sending data to another component, that data may be sent or received through one or more intermediate components, unless expressly specified otherwise. In addition, some components of a data processing system may be implemented as adapter cards with interfaces (e.g., connectors) for communicating with a bus. Alternatively, devices or components may be implemented as embedded controllers, using components such as programmable or non-programmable logic devices or arrays, ASICs, embedded computers, smart cards, and the like. For purposes of this disclosure, the term "bus" includes pathways that may be shared by more than two devices, as well as point-to-point pathways. Similarly, terms such as "line," "pin," etc. should be understood as referring to a wire, a set of wires, or any other suitable conductor or set of conductors.
For example, a bus may include one or more serial links, a serial link may include one or more lanes, a lane may be composed of one or more differential signaling pairs, and the changing characteristics of the electricity that those conductors are carrying may be referred to as "signals." Also, for purposes of this disclosure, the term "processor" denotes a hardware component that is capable of executing software. For instance, a processor may be implemented as a central processing unit (CPU) or as any other suitable type of processing element. A CPU may include one or more processing cores, and a device may include one or more processors. A processor package may also be referred to as a "processor." Other embodiments may be implemented in data and may be stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations according to the present disclosure. Still further embodiments may be implemented in a computer-readable storage medium including information that, when manufactured into an SoC or other processor, is to configure the SoC or other processor to perform one or more operations according to the present disclosure. One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic units within the processor, and which, when read by a machine, cause the machine to fabricate logic units to perform the techniques described herein. The instructions representing various logic units may be referred to as "IP cores," and they may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic units or the processor.
One or more aspects of at least one embodiment may include a machine-readable medium containing instructions or design data which defines the structures, circuits, apparatuses, processors, and/or system features described herein. For instance, design data may be formatted in a hardware description language (HDL). In view of the wide variety of useful permutations that may be readily derived from the example embodiments described herein, this detailed description is intended to be illustrative only, and should not be construed as limiting the scope of coverage.
Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed. |
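The write path summarized above (issue a replicated write, pause, resume as soon as the first acknowledgement arrives, and let the remaining acknowledgements complete in the background) can be sketched in Python. The function name, the `send_write` callable, and the `acks` list are illustrative assumptions, not API names from the disclosure:

```python
import threading

def partially_synchronized_write(targets, send_write, key, block, acks):
    """Sketch of a partially synchronized write: issue the data block to
    every storage target, block only until the FIRST acknowledgement, and
    let subsequent acknowledgements arrive while the workload keeps running.
    send_write(target, key, block) is an assumed callable that returns once
    that replica has durably stored the block; acks collects acknowledged
    targets so the caller can inspect the subsequent acks later."""
    first_ack = threading.Event()
    lock = threading.Lock()

    def issue(target):
        send_write(target, key, block)   # returns once this replica is durable
        with lock:
            acks.append(target)          # later acks land while workload runs
        first_ack.set()                  # wakes the paused workload

    for t in targets:
        threading.Thread(target=issue, args=(t,), daemon=True).start()
    first_ack.wait()                     # "pause execution of the workload"
    # returning here corresponds to resuming execution after the initial
    # acknowledgement, before the other replicas have acknowledged
```

A caller could compare timestamps recorded alongside `acks` to measure the elapsed time between the initial and subsequent acknowledgements, as the claims describe.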
WHAT IS CLAIMED IS:

1. A managed node to manage partially synchronized writes, the managed node comprising:
a network communicator to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network; and
a data manager to pause execution of the workload;
wherein the network communicator is further to receive an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and
the data manager is further to resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices.

2. The managed node of claim 1, wherein:
the network communicator is further to receive a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and
the data manager is further to determine an elapsed time period between the initial acknowledgement and the subsequent acknowledgement.

3. The managed node of claim 2, wherein:
the data manager is further to determine whether the elapsed time period satisfies a predefined threshold time period; and
the network communicator is further to send, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.

4. The managed node of claim 3, wherein the data manager is further to:
receive an assignment of the workload from an orchestrator server; and
receive, with the assignment, an indication of the predefined threshold time period.

5.
The managed node of claim 2, wherein the data manager is further to:
determine whether the elapsed time period satisfies a predefined threshold time period; and
determine to await at least two acknowledgements in response to future write requests before resumption of the workload.

6. The managed node of claim 1, wherein the data manager is further to:
receive an assignment of the workload from an orchestrator server;
receive, with the assignment, an indication of whether to enable partially synchronized writes; and
wherein to resume execution of the workload comprises to:
determine whether the assignment indicates to enable partially synchronized writes; and
resume execution in response to a determination that the assignment indicates to enable partially synchronized writes.

7. The managed node of claim 1, wherein to issue the write request to multiple data storage devices comprises to issue the write request to multiple data storage devices in different failure domains.

8. The managed node of claim 1, wherein to issue the write request to write a data block comprises to send a key associated with the data block, wherein the key uniquely identifies the data block.

9. The managed node of claim 1, wherein to issue the write request to multiple storage devices comprises to issue the write request to one or more data storage devices of a different managed node.

10. The managed node of claim 1, wherein to resume execution of the workload comprises to:
determine a number of partially synchronized write requests that have been issued, wherein each partially synchronized write request is a write request for which only one acknowledgement has been received;
determine whether the number of partially synchronized write requests satisfies a threshold number of allowable partially synchronized write requests; and
resume, in response to a determination that the number of partially synchronized write requests satisfies the threshold number, execution of the workload.

11.
The managed node of claim 10, wherein the data manager is further to receive an indication of the threshold number from an orchestrator server.

12. The managed node of claim 11, wherein:
the network communicator is further to receive a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and
the data manager is further to determine an elapsed time period between the initial acknowledgement and the subsequent acknowledgement, determine whether the elapsed time period satisfies a predefined threshold time period, and reduce, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, the threshold number of allowable partially synchronized write requests.

13. A method for managing partially synchronized writes, the method comprising:
issuing, by a managed node, a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network;
pausing, by the managed node, execution of the workload;
receiving, by the managed node, an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and
resuming, by the managed node, execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices.

14. The method of claim 13, further comprising:
receiving, by the managed node, a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and
determining, by the managed node, an elapsed time period between the initial acknowledgement and the subsequent acknowledgement.

15.
The method of claim 14, further comprising:
determining, by the managed node, whether the elapsed time period satisfies a predefined threshold time period; and
sending, by the managed node and in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.

16. The method of claim 15, further comprising:
receiving, by the managed node, an assignment of the workload from an orchestrator server; and
receiving, by the managed node, with the assignment, an indication of the predefined threshold time period.

17. The method of claim 14, further comprising:
determining, by the managed node, whether the elapsed time period satisfies a predefined threshold time period; and
determining, by the managed node, to await at least two acknowledgements in response to future write requests before resumption of the workload.

18. The method of claim 13, further comprising:
receiving, by the managed node, an assignment of the workload from an orchestrator server;
receiving, by the managed node, with the assignment, an indication of whether to enable partially synchronized writes; and
wherein resuming execution of the workload comprises:
determining whether the assignment indicates to enable partially synchronized writes; and
resuming execution in response to a determination that the assignment indicates to enable partially synchronized writes.

19. The method of claim 13, wherein issuing the write request to multiple data storage devices comprises issuing the write request to multiple data storage devices in different failure domains.

20. The method of claim 13, wherein issuing the write request to write a data block comprises sending a key associated with the data block, wherein the key uniquely identifies the data block.

21.
The method of claim 13, wherein issuing the write request to multiple storage devices comprises issuing the write request to one or more data storage devices of a different managed node.

22. The method of claim 13, wherein resuming execution of the workload comprises:
determining, by the managed node, a number of partially synchronized write requests that have been issued, wherein each partially synchronized write request is a write request for which only one acknowledgement has been received;
determining, by the managed node, whether the number of partially synchronized write requests satisfies a threshold number of allowable partially synchronized write requests; and
resuming, by the managed node and in response to a determination that the number of partially synchronized write requests satisfies the threshold number, execution of the workload.

23. The method of claim 22, further comprising receiving, by the managed node, an indication of the threshold number from an orchestrator server.

24. One or more computer-readable storage media comprising a plurality of instructions that, when executed by a managed node, cause the managed node to perform the method of any of claims 13-23.

25. A managed node comprising means for performing the method of any of claims
TECHNOLOGIES FOR PERFORMING PARTIALLY SYNCHRONIZED WRITES

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 15/396,284, entitled "TECHNOLOGIES FOR PERFORMING PARTIALLY SYNCHRONIZED WRITES," which was filed on December 30, 2016, and which claims priority to U.S. Provisional Patent Application No. 62/365,969, filed July 22, 2016; U.S. Provisional Patent Application No. 62/376,859, filed August 18, 2016; and U.S. Provisional Patent Application No. 62/427,268, filed November 29, 2016.

BACKGROUND

[0002] In a data center, a compute node executing a workload may request to write a data block to data storage. To provide redundancy in the data storage, the compute node may send the data block through a network to multiple data storage devices that may be located in different locations, such that if any one of the data storage devices becomes disconnected from the network or otherwise unavailable, the compute node may rely on one of the other data storage devices to access the data. In doing so, the compute node typically pauses execution of the workload until the compute node has received a confirmation from the multiple networked data storage devices that the data has been successfully committed to non-volatile storage. At that point, the compute node determines that the data has been made "durable" (e.g., able to withstand at least one failure at one of the locations). However, while the process enhances the resiliency of the data, the time consumed in receiving the acknowledgements may adversely affect the quality of service (e.g., latency, etc.) of the workload.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale.
Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

[0004] FIG. 1 is a diagram of a conceptual overview of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0005] FIG. 2 is a diagram of an example embodiment of a logical configuration of a rack of the data center of FIG. 1;

[0006] FIG. 3 is a diagram of an example embodiment of another data center in which one or more techniques described herein may be implemented according to various embodiments;

[0007] FIG. 4 is a diagram of another example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0008] FIG. 5 is a diagram of a connectivity scheme representative of link-layer connectivity that may be established among various sleds of the data centers of FIGS. 1, 3, and 4;

[0009] FIG. 6 is a diagram of a rack architecture that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1-4 according to some embodiments;

[0010] FIG. 7 is a diagram of an example embodiment of a sled that may be used with the rack architecture of FIG. 6;

[0011] FIG. 8 is a diagram of an example embodiment of a rack architecture to provide support for sleds featuring expansion capabilities;

[0012] FIG. 9 is a diagram of an example embodiment of a rack implemented according to the rack architecture of FIG. 8;

[0013] FIG. 10 is a diagram of an example embodiment of a sled designed for use in conjunction with the rack of FIG. 9;

[0014] FIG. 11 is a diagram of an example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0015] FIG. 12 is a simplified block diagram of at least one embodiment of a system for performing partially synchronized writes among a set of managed nodes;

[0016] FIG. 13 is a simplified block diagram of at least one embodiment of a managed node of the system of FIG. 12;

[0017] FIG. 14 is a simplified block diagram of at least one embodiment of an environment that may be established by a managed node of FIGS. 12 and 13; and

[0018] FIGS. 15-16 are a simplified flow diagram of at least one embodiment of a method for managing partially synchronized writes that may be performed by a managed node of FIGS. 12-14.

DETAILED DESCRIPTION OF THE DRAWINGS

[0019] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

[0020] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

[0021] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).

[0022] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

[0023] FIG. 1 illustrates a conceptual overview of a data center 100 that may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 1, data center 100 may generally contain a plurality of racks, each of which may house computing equipment comprising a respective set of physical resources. In the particular non-limiting example depicted in FIG.
1, data center 100 contains four racks 102A to 102D, which house computing equipment comprising respective sets of physical resources 105A to 105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A to 105D that are distributed among racks 102A to 102D. Physical resources 106 may include resources of multiple types, such as - for example - processors, co-processors, accelerators, field-programmable gate arrays (FPGAs), memory, and storage. The embodiments are not limited to these examples.[0024] The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as dual in-line memory modules (DIMMs), is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance.
Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.[0025] Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, application specific integrated circuits (ASICs), etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information.[0026] The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. 
For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.[0027] FIG. 2 illustrates an exemplary logical configuration of a rack 202 of the data center 100. As shown in FIG. 2, rack 202 may generally house a plurality of sleds, each of which may comprise a respective set of physical resources. In the particular non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 to 204-4 comprising respective sets of physical resources 205-1 to 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 is representative of - for example - rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4 comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments are not limited to this example. Each sled may contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulatable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate.[0028] FIG.
3 illustrates an example of a data center 300 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. In the particular non-limiting example depicted in FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various embodiments, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D. In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment, such as robotic maintenance equipment, to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replace a failed sled, upgrade a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 to 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context.[0029] FIG. 4 illustrates an example of a data center 400 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 4, data center 400 may feature an optical fabric 412. Optical fabric 412 may generally comprise a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to (and receive signals from) each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in a same rack and sleds in other racks.
In the particular non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A to 402D. Racks 402A to 402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 comprises a total of eight sleds. Via optical fabric 412, each such sled may possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A may possess signaling connectivity with sled 404A-2 in rack 402A, as well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 that are distributed among the other racks 402B, 402C, and 402D of data center 400. The embodiments are not limited to this example.[0030] FIG. 5 illustrates an overview of a connectivity scheme 500 that may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as any of example data centers 100, 300, and 400 of FIGS. 1, 3, and 4. Connectivity scheme 500 may be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514. Dual-mode optical switching infrastructure 514 may generally comprise a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via a same unified set of optical signaling media, and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 may be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 may generally comprise high-radix switches. In some embodiments, dual-mode optical switches 515 may comprise multi-ply switches, such as four-ply switches.
In various embodiments, dual-mode optical switches 515 may feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices. In some embodiments, dual-mode optical switches 515 may constitute leaf switches 530 in a leaf-spine architecture additionally including one or more dual-mode optical spine switches 520.[0031] In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand) via optical signaling media of an optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example.[0032] FIG. 6 illustrates a general overview of a rack architecture 600 that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1 to 4 according to some embodiments. As reflected in FIG. 6, rack architecture 600 may generally feature a plurality of sled spaces into which sleds may be inserted, each of which may be robotically-accessible via a rack access region 601. In the particular non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 to 603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose connector modules (MPCMs) 616-1 to 616-5.[0033] FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG.
7, sled 704 may comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when sled 704 is inserted into a sled space such as any of sled spaces 603-1 to 603-5 of FIG. 6. Sled 704 may also feature an expansion connector 717. Expansion connector 717 may generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718. By coupling with a counterpart connector on expansion sled 718, expansion connector 717 may provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718. The embodiments are not limited in this context. [0034] FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7. In the particular non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 to 803-7, which feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7 include respective primary regions 803-1A to 803-7A and respective expansion regions 803-1B to 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region may generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region may generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module.[0035] FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8 according to some embodiments. In the particular non-limiting example depicted in FIG.
9, rack 902 features seven sled spaces 903-1 to 903-7, which include respective primary regions 903-1A to 903-7A and respective expansion regions 903-1B to 903-7B. In various embodiments, temperature control in rack 902 may be implemented using an air cooling system. For example, as reflected in FIG. 9, rack 902 may feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 to 903-7. In some embodiments, the height of the sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 may generally comprise relatively slow, large diameter cooling fans as compared to fans used in conventional rack configurations. Running larger diameter cooling fans at lower speeds may increase fan lifetime relative to smaller diameter cooling fans running at higher speeds while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., not arranged serially in the direction of air flow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.).[0036] MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds.
In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.[0037] MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as - or similar to - dual-mode optical switching infrastructure 514 of FIG. 5. In various embodiments, optical connectors contained in MPCMs 916-1 to 916-7 may be designed to couple with counterpart optical connectors contained in MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of optical cabling 922-1 to 922-7. In some embodiments, each such length of optical cabling may extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 may be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to an optical switching infrastructure via MPCMs, the resources typically spent in manually configuring the rack cabling to accommodate a newly inserted sled can be saved.[0038] FIG. 10 illustrates an example of a sled 1004 that may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9 according to some embodiments. Sled 1004 may feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space.
Coupling MPCM 1016 with such a counterpart MPCM may cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM. This may generally enable physical resources 1005 of sled 1004 to source power from an external source, via power connector 1016B and power transmission media 1024 that conductively couples power connector 1016B to physical resources 1005.[0039] Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 may be capable both of Ethernet protocol communications and of communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 may include one or more optical transceiver modules 1027, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context.[0040] Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250W), as described above with reference to FIG.
9, in some embodiments, a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005. It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context.[0041] FIG. 11 illustrates an example of a data center 1100 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A may be implemented to facilitate management of a physical infrastructure 1100A of data center 1100. In various embodiments, one function of physical infrastructure management framework 1150A may be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A may feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system may support features such as failure prediction/prevention capabilities and capacity planning capabilities. In some embodiments, physical infrastructure management framework 1150A may also be configured to manage authentication of physical infrastructure components using hardware attestation techniques. 
For example, robots may verify the authenticity of components before installation by analyzing information collected from a radio frequency identification (RFID) tag associated with each component to be installed. The embodiments are not limited in this context. [0042] As shown in FIG. 11, the physical infrastructure 1100A of data center 1100 may comprise an optical fabric 1112, which may include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 may be the same as - or similar to - optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and may provide high-bandwidth, low-latency, multi-protocol connectivity among sleds of data center 1100. As discussed above, with reference to FIG. 1, in various embodiments, the availability of such connectivity may make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. In some embodiments, for example, one or more pooled accelerator sleds 1130 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of accelerator resources - such as co-processors and/or FPGAs, for example - that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.[0043] In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs).
In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump away or two switch jumps away in the spine-leaf network architecture described above with reference to FIG. 5. The embodiments are not limited in this context.[0044] In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138.
Examples of cloud services 1140 may include - without limitation - software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.[0045] In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide quality of service (QoS) management capabilities for cloud services 1140. The embodiments are not limited in this context.[0046] As shown in FIG. 12, an illustrative system 1210 for performing partially synchronized writes, also referred to herein as semi-sync durable writes, to non-volatile memory (e.g., to multiple data storage devices in different failure domains) among a set of managed nodes 1260 includes an orchestrator server 1240 in communication with the set of managed nodes 1260. Each managed node 1260 may be embodied as an assembly of resources (e.g., physical resources 206), such as compute resources (e.g., physical compute resources 205-4), storage resources (e.g., physical storage resources 205-1), accelerator resources (e.g., physical accelerator resources 205-2), or other resources (e.g., physical memory resources 205-3) from the same or different sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.) or racks (e.g., one or more of racks 302-1 through 302-32).
Each managed node 1260 may be established, defined, or "spun up" by the orchestrator server 1240 at the time a workload is to be assigned to the managed node 1260 or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node 1260. The system 1210 may be implemented in accordance with the data centers 100, 300, 400, 1100 described above with reference to FIGS. 1, 3, 4, and 11. In the illustrative embodiment, the set of managed nodes 1260 includes managed nodes 1250, 1252, and 1254. While three managed nodes 1260 are shown in the set, it should be understood that in other embodiments, the set may include a different number of managed nodes 1260 (e.g., tens of thousands). The system 1210 may be located in a data center and provide storage and compute services (e.g., cloud services) to a client device 1220 that is in communication with the system 1210 through a network 1230. The orchestrator server 1240 may support a cloud operating environment, such as OpenStack, and assign workloads to the managed nodes 1260 for execution.[0047] The managed nodes 1260 may execute the workloads, such as in virtual machines or containers, on behalf of a user of the client device 1220. Managed nodes 1260 executing respective workloads may issue separate requests to write data, referred to herein as data blocks, and/or to read data blocks. To make storage of the data blocks "durable", a managed node 1260 may issue requests to multiple data storage devices (e.g., of the present managed node 1260 and/or of other managed nodes 1260) to store the same data block. 
By being connected with its own corresponding network interface controller, each data storage device is, in the illustrative embodiment, in a different failure domain, such that an incident that causes a network disconnection or other unavailability of a data storage device in one of the failure domains will not affect the availability of a data storage device in another of the failure domains. The various network interface controllers associated with the data storage devices in the different failure domains each include a power loss protected buffer, and the received data block is initially written to the power loss protected buffer before it is subsequently written to the corresponding data storage device. After the data block is written to the power loss protected buffer, and before the data block has been stored in the data storage device, the corresponding network interface controller issues an acknowledgement message, indicating successful storage of the data block, through the network 1230 to the network interface controller of the sled on which the compute resources executing the workload are located. Rather than waiting to receive multiple acknowledgement messages (e.g., associated with the various data storage devices), the managed node 1260 executing the workload may resume execution of the workload after receiving only one acknowledgement message. As such, the workload continues operations while further acknowledgement messages are received and the data blocks are written from the power loss protected buffers of the network interface controllers to the corresponding data storage devices. By reducing the amount of time spent waiting for acknowledgments to write requests, the managed node 1260 may improve the quality of service of the workload.[0048] Referring now to FIG.
13, the managed node 1260 may be embodied as any type of compute device capable of performing the functions described herein, including executing a workload, writing data blocks, and reading data blocks. For example, the managed node 1260 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system. As shown in FIG. 13, the illustrative managed node 1260 includes a central processing unit (CPU) 1302, a main memory 1304, an input/output (I/O) subsystem 1306, communication circuitry 1308, and one or more data storage devices 1314. Of course, in other embodiments, the managed node 1260 may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the main memory 1304, or portions thereof, may be incorporated in the CPU 1302.[0049] The CPU 1302 may be embodied as any type of processor capable of performing the functions described herein. The CPU 1302 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the CPU 1302 may be embodied as, include, or be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
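The semi-sync durable write flow described in paragraph [0047] can be sketched as follows. This is an illustrative sketch, not the patented implementation: all names (write_replica, semi_sync_write, the domain labels) are hypothetical, and network transmission to the power loss protected buffers is simulated with short delays. The key property shown is that the workload-side call resumes as soon as the first acknowledgement arrives, while the remaining replica writes complete concurrently.

```python
import asyncio
import random

async def write_replica(domain: str, block: bytes) -> str:
    # Stand-in for sending the block to the NIC's power loss protected
    # buffer in one failure domain; the ack is issued once the block is
    # buffered, not once it is persisted to the data storage device.
    await asyncio.sleep(random.uniform(0.001, 0.01))
    return f"ack:{domain}"

async def semi_sync_write(block: bytes, domains: list) -> tuple:
    tasks = [asyncio.create_task(write_replica(d, block)) for d in domains]
    # Resume after the FIRST acknowledgement rather than waiting for all.
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    first_ack = next(iter(done)).result()
    # The workload would continue executing here; this sketch still drains
    # the remaining acks before returning so the event loop exits cleanly.
    remaining = await asyncio.gather(*pending)
    return first_ack, list(remaining)

if __name__ == "__main__":
    first, rest = asyncio.run(
        semi_sync_write(b"data-block", ["domain-a", "domain-b", "domain-c"])
    )
    print(first, rest)
```

In a real node the remaining acknowledgements would be consumed asynchronously in the background instead of being awaited before returning.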
As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the CPU 1302 may include portions thereof located on the same sled or different sled. Similarly, the main memory 1304 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. In some embodiments, all or a portion of the main memory 1304 may be integrated into the CPU 1302. In operation, the main memory 1304 may store various software and data used during operation, such as data blocks, synchronization management data indicative of the number and status of partially synchronized writes at any given time, a map of locations of data blocks among different data storage devices 1314 of the managed node 1260 and/or other managed nodes 1260, operating systems, applications, programs, libraries, and drivers. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the main memory 1304 may include portions thereof located on the same sled or different sled.[0050] The I/O subsystem 1306 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1302, the main memory 1304, and other components of the managed node 1260. For example, the I/O subsystem 1306 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
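The "synchronization management data indicative of the number and status of partially synchronized writes" mentioned above could take many forms; the following is one hypothetical sketch (none of these class or field names come from the disclosure). It tracks, per in-flight data block, how many acknowledgements are expected and received, so the node can distinguish "workload may resume" (one ack, per the semi-sync scheme) from "fully durable" (all replicas acknowledged).

```python
from dataclasses import dataclass

@dataclass
class WriteState:
    replicas: int        # acks expected, one per failure domain
    acks: int = 0        # acks received so far

    @property
    def may_resume(self) -> bool:
        # Semi-sync: a single acknowledgement is enough to resume the workload.
        return self.acks >= 1

    @property
    def fully_durable(self) -> bool:
        return self.acks >= self.replicas

class SyncManager:
    """Hypothetical tracker for partially synchronized writes."""

    def __init__(self) -> None:
        self.inflight: dict = {}

    def begin(self, block_id: str, replicas: int) -> None:
        self.inflight[block_id] = WriteState(replicas)

    def ack(self, block_id: str) -> WriteState:
        state = self.inflight[block_id]
        state.acks += 1
        if state.fully_durable:
            del self.inflight[block_id]  # nothing left to track
        return state
```

For example, after `begin("blk", 3)` and one `ack("blk")`, the state reports `may_resume` but not `fully_durable`; after the third ack the entry is retired.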
In some embodiments, the I/O subsystem 1306 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1302, the main memory 1304, and other components of the managed node 1260, on a single integrated circuit chip.[0051] The communication circuitry 1308 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1230 between the managed node 1260 and another compute device (e.g., the orchestrator server 1240 and/or one or more other managed nodes 1260). The communication circuitry 1308 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.[0052] The illustrative communication circuitry 1308 includes a network interface controller (NIC) 1310, which may also be referred to as a host fabric interface (HFI). The NIC 1310 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the managed node 1260 to connect with another compute device (e.g., the orchestrator server 1240 and/or physical resources of one or more managed nodes 1260). In some embodiments, the NIC 1310 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1310 may include a processor (not shown) local to the NIC 1310. In such embodiments, the local processor of the NIC 1310 may be capable of performing one or more of the functions of the CPU 1302 described herein. 
Additionally, the NIC 1310 includes a power loss protected buffer 1312 which may be embodied as any volatile local memory device that, when a power loss imminent condition is detected, may write any data present in the power loss protected buffer to non-volatile memory (e.g., to one or more of the data storage devices 1314). The power loss protected buffer 1312 may include an independent power supply in some embodiments, such as capacitors or batteries that allow the power loss protected buffer 1312 to operate for a period of time even after power to the managed node 1260 has been interrupted. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the communication circuitry 1308 may include portions thereof located on the same sled or different sled. In the illustrative embodiment, the NIC 1310 in every sled having physical storage resources 205-1 (e.g., data storage devices 1314) includes the power loss protected buffer 1312. [0053] The one or more illustrative data storage devices 1314, may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, solid-state drives (SSDs), hard disk drives, memory cards, and/or other memory devices and circuits. Each data storage device 1314 may include a system partition that stores data and firmware code for the data storage device 1314. Each data storage device 1314 may also include an operating system partition that stores data files and executables for an operating system. In the illustrative embodiment, each data storage device 1314 includes non-volatile memory. Non-volatile memory may be embodied as any type of data storage capable of storing data in a persistent manner (even if power is interrupted to the non-volatile memory). For example, in the illustrative embodiment, the non-volatile memory is embodied as Flash memory (e.g., NAND memory). 
In other embodiments, the non-volatile memory may be embodied as any combination of memory devices that use chalcogenide phase change material (e.g., chalcogenide glass), or other types of byte-addressable, write-in-place non-volatile memory, ferroelectric transistor random-access memory (FeTRAM), nanowire-based non-volatile memory, phase change memory (PCM), memory that incorporates memristor technology, magnetoresistive random-access memory (MRAM) or Spin Transfer Torque (STT)-MRAM.[0054] Additionally, the managed node 1260 may include one or more peripheral devices 1316. Such peripheral devices 1316 may include any type of peripheral device commonly found in a compute device such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.[0055] The client device 1220 and the orchestrator server 1240 may have components similar to those described in FIG. 13, with the exception that the power loss protected buffer 1312 may be absent in the client device 1220 and/or the orchestrator server 1240. The description of those components of the managed node 1260 is equally applicable to the description of components of the client device 1220 and the orchestrator server 1240 and is not repeated herein for clarity of the description. 
Further, it should be appreciated that any of the client device 1220 and the orchestrator server 1240 may include other components, subcomponents, and devices commonly found in a computing device, which are not discussed above in reference to the managed node 1260 and not discussed herein for clarity of the description.[0056] As described above, the client device 1220, the orchestrator server 1240 and the managed nodes 1260 are illustratively in communication via the network 1230, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.[0057] Referring now to FIG. 14, in the illustrative embodiment, the managed node 1260 may establish an environment 1400 during operation. The illustrative environment 1400 includes a network communicator 1420 and a data manager 1430. Each of the components of the environment 1400 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1400 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1420, data manager circuitry 1430, etc.). It should be appreciated that, in such embodiments, one or more of the network communicator circuitry 1420 or the data manager circuitry 1430 may form a portion of one or more of the CPU 1302, the main memory 1304, the I/O subsystem 1306, the communication circuitry 1308, and/or other components of the managed node 1260.
In the illustrative embodiment, the environment 1400 includes buffer data 1402 which may be embodied as any data (e.g., data blocks) present in the power loss protected buffer 1312 of the NIC 1310 if the managed node 1260 has received a request to write a data block to non-volatile memory (e.g., a data storage device 1314). Additionally, the environment includes synchronization management data 1404 which may be embodied as any data indicative of the number of partially synchronized write requests associated with a workload executed by the managed node 1260 at any given time and the status of each partially synchronized write request, such as how many acknowledgements are expected to be received and how many acknowledgements have been received. The synchronization management data 1404 may also include data indicative of measured time periods between received acknowledgements for each write request (e.g., an amount of time that has elapsed between an initial acknowledgement and a subsequent acknowledgement), the number of allowable partially synchronized write requests that may be outstanding at any given time, and/or other measurements and settings. The environment 1400, in the illustrative embodiment, also includes persistent data 1406 which may be embodied as any data that has been written to non-volatile memory (e.g., one or more data storage devices 1314) of the managed node 1260. Additionally, in the illustrative embodiment, the environment 1400 includes a data map 1408 which may be embodied as any data indicative of locations where data blocks have been stored in the data storage devices 1314 (i.e., non-volatile memory) of the managed node 1260 and/or in one or more other managed nodes 1260. In the illustrative embodiment, each data block is identified by a key (e.g., a unique identifier, such as an alphanumeric code), such that the key and the corresponding data block form a key-value pair.
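The data maintained in the environment 1400 can be illustrated with a minimal sketch; all class, field, and value names below are hypothetical and chosen for illustration only, not taken from the specification:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class WriteStatus:
    """Per-request entry in the synchronization management data 1404."""
    acks_expected: int                       # acknowledgements expected for this write
    acks_received: int = 0                   # acknowledgements received so far
    first_ack_time: Optional[float] = None   # for measuring inter-acknowledgement delay


@dataclass
class Environment:
    """Sketch of environment 1400: sync management data 1404 and data map 1408."""
    outstanding_writes: Dict[str, WriteStatus] = field(default_factory=dict)
    allowed_partial_writes: int = 5          # allowable concurrent partially synchronized writes
    data_map: Dict[str, str] = field(default_factory=dict)  # key -> storage device location


# Each data block is identified by a key, forming a key-value pair with its location.
env = Environment()
env.outstanding_writes["block-42"] = WriteStatus(acks_expected=2, acks_received=1)
env.data_map["block-42"] = "storage-device-1314-a"
```

The map of keys to locations mirrors the data map 1408, while the per-write status entries mirror the acknowledgement counts tracked in the synchronization management data 1404.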
[0058] In the illustrative environment 1400, the network communicator 1420, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the managed node 1260, respectively, and to assist in performing partially synchronized writes through the network 1230. To do so, the network communicator 1420 is configured to receive and process data packets from one system or computing device (e.g., the orchestrator server 1240, a managed node 1260, etc.) and to prepare and send data packets to another computing device or system (e.g., another managed node 1260). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1420 may be performed by the communication circuitry 1308, and, in the illustrative embodiment, by the NIC 1310. In the illustrative embodiment, the network communicator 1420 includes a buffer manager 1422, which, in the illustrative embodiment, is configured to store a received data block from a write request in the power loss protected buffer 1312, send an early acknowledgement message through the network 1230 in response to the write request, indicating that the data block has been successfully stored, and coordinate subsequently writing the data block to the non-volatile memory (e.g., from the buffer data 1402 to the persistent data 1406).[0059] The data manager 1430, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage writing and reading of data to resources of the managed node 1260 (e.g., the data storage devices 1314) and/or to and from other managed nodes 1260, including managing partially synchronized writes.
To do so, in the illustrative embodiment, the data manager 1430 includes a write synchronization manager 1432, a local data servicer 1434, a remote data servicer 1436, and a map manager 1438. The write synchronization manager 1432, in the illustrative embodiment, is configured to manage partially synchronized writes, such as by determining whether the managed node is to perform partially synchronized writes for a given workload, coordinating issuing write requests and pausing execution of the workload until an acknowledgement to the write request has been received, tracking the status of the other write requests, such as by tracking how many partially synchronized write requests are outstanding at any given time and how long the delay is between the first acknowledgement and a second acknowledgement for a given write request, and adjusting thresholds to change the number of concurrent partially synchronized writes that are allowed in the future, based on the delays between the acknowledgements. [0060] The local data servicer 1434, in the illustrative embodiment, is configured to write data blocks and associated keys to the one or more data storage devices 1314 of the managed node 1260 and/or read data blocks from the one or more data storage devices 1314 of the managed node 1260. The remote data servicer 1436, in the illustrative embodiment, is configured to write data blocks and/or read data blocks to and from the data storage devices 1314 of one or more other managed nodes 1260 by issuing corresponding requests and receiving corresponding responses through the network 1230. 
As such, the local data servicer 1434 and the remote data servicer 1436, in the illustrative embodiment, are configured to interoperate with the write synchronization manager 1432 to coordinate performing partially synchronized writes.[0061] The map manager 1438, in the illustrative embodiment, is configured to track where data blocks are stored among the data storage devices 1314 of the managed node 1260 and/or other managed nodes 1260. In doing so, the map manager 1438 may store keys in association with location identifiers, such as unique identifiers of data storage devices 1314 in which the data blocks are stored, and/or logical block addresses of the data blocks.[0062] It should be appreciated that each of the write synchronization manager 1432, the local data servicer 1434, the remote data servicer 1436, and the map manager 1438 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof and may be distributed across multiple sleds. For example, the write synchronization manager 1432 may be embodied as a hardware component, while the local data servicer 1434, the remote data servicer 1436, and the map manager 1438 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.[0063] Referring now to FIG. 15, in use, the managed node 1260 may execute a method 1500 for managing partially synchronized writes to improve the quality of service of a workload. The method 1500 begins with block 1502, in which the managed node 1260 determines whether to manage partially synchronized writes. In the illustrative embodiment, the managed node 1260 determines to manage partially synchronized writes if the managed node 1260 is powered on and has access to (e.g., locally and/or through the network 1230) the one or more data storage devices 1314.
In other embodiments, the managed node 1260 may determine whether to manage partially synchronized writes based on other factors. Regardless, in response to a determination to manage partially synchronized writes, in the illustrative embodiment, the method 1500 advances to block 1504 in which the managed node 1260 may receive an assignment of a workload from the orchestrator server 1240. In doing so, as indicated in block 1506, the managed node 1260 may receive an indication of whether to allow partially synchronized writes for the workload. For example, the decision of whether to enable the partially synchronized writes may be an option that a customer may select as part of an agreement with an operator of the data center. As such, the orchestrator server 1240 may provide, to the managed node 1260, an indication of the selection when the orchestrator server 1240 assigns the workload to the managed node 1260. As indicated in block 1508, the managed node 1260 may also receive an indication of an allowable time period that may elapse between an initial and a subsequent acknowledgement of a write request that was sent to different data storage devices 1314 connected to the network 1230. The indication of the allowable time period may be a number of microseconds, or any other measure of time. In receiving the assignment of the workload, the managed node 1260 may also receive an indication of a threshold number of partially synchronized writes to allow concurrently, as indicated in block 1510. For example, the managed node 1260 may receive an indication to allow up to five partially synchronized writes at any given time for the workload before requiring any subsequent write requests to be acknowledged at least twice before resuming execution of the workload.[0064] Subsequently, the method 1500 advances to block 1512 in which the managed node 1260 executes the assigned workload (e.g., using physical compute resources 205-4 of the sled 204-4, such as the CPU 1302).
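The per-workload settings received with the assignment (blocks 1506, 1508, and 1510) can be sketched as a simple record; the field names and example values below are hypothetical illustrations, not part of the specification:

```python
from dataclasses import dataclass


@dataclass
class WorkloadAssignment:
    """Hypothetical sketch of the indications received with a workload assignment."""
    allow_partial_sync_writes: bool   # block 1506: customer-selected option
    allowable_ack_gap_us: int         # block 1508: allowed delay between initial and subsequent ack
    max_concurrent_partial: int       # block 1510: concurrent partially synchronized writes allowed


# Example assignment: partially synchronized writes enabled, up to five outstanding
# at any given time, with an allowable inter-acknowledgement gap of 500 microseconds.
assignment = WorkloadAssignment(
    allow_partial_sync_writes=True,
    allowable_ack_gap_us=500,
    max_concurrent_partial=5,
)
```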
In executing the assigned workload, the managed node 1260 may receive a request from an application associated with the workload to write a data block to non-volatile memory, as indicated in block 1514. In block 1516, the managed node 1260 determines whether a write request has been received. If not, the method 1500 loops back to block 1512 in which the managed node 1260 continues executing the assigned workload. However, referring back to block 1516, if the managed node 1260 instead determines that a request to write a data block was received, the method advances to block 1518 in which the managed node 1260 issues a write request with the data block to multiple data storage devices 1314 connected to the network 1230 (e.g., to physical storage resources 205-1 on different sleds and/or racks). In doing so, the managed node 1260 pauses execution of the workload, as indicated in block 1520. Further, as indicated in block 1522, the managed node 1260, in the illustrative embodiment, issues the write request to data storage devices 1314 that are in different failure domains. For example, the managed node 1260 may send the write request to different data storage devices 1314 that are each connected to the network 1230 by a different network interface controller 1310, such that the disconnection of any one of the data storage devices 1314 from the network 1230 does not affect the availability of any of the other data storage devices 1314. As indicated in block 1524, in issuing the write request, the managed node 1260 may issue the write request to one or more data storage devices 1314 that are part of a different managed node 1260. Additionally, as indicated in block 1526, in sending the data block with the request, the managed node 1260 may send a key that uniquely identifies the data block, so that the combination of the key and the data block form a key-value pair. Subsequently, the method 1500 advances to block 1528 of FIG. 
16 in which the managed node 1260 receives (e.g., through the network 1230 to the network interface controller 1310 of the sled on which compute resources 205-4 executing the workload are located) an initial acknowledgement for the write request indicating that the data block was successfully stored.[0065] Referring now to FIG. 16, in receiving the initial acknowledgement for the write request, the managed node 1260 may start a timer to determine the amount of time that elapses until the managed node 1260 receives (e.g., through the network 1230 to the network interface controller 1310 of the sled on which compute resources 205-4 executing the workload are located) a subsequent acknowledgement to the write request. In block 1532, the managed node 1260 determines whether performing a partially synchronized write is allowed (e.g., whether the managed node 1260 should wait until a second acknowledgement is received before resuming execution of the workload). In doing so, the managed node 1260 may determine whether the assignment of the workload allows partially synchronized writes to be performed for the workload (e.g., based on the indicator received in block 1506), as indicated in block 1534. The managed node 1260 may also determine whether the threshold number of partially synchronized writes to allow (e.g., the threshold number received by the managed node 1260 in block 1510) is satisfied, as indicated in block 1536. Initially, if the threshold number is greater than zero, then the threshold will be satisfied, since no partially synchronized writes have been counted by the managed node 1260 yet.[0066] In block 1538, the managed node 1260 takes further actions based on the determination in block 1532 of whether a partially synchronized write is allowed. 
If the managed node 1260 determines that a partially synchronized write is not allowed, the workload remains paused and the method 1500 advances to block 1544 in which the managed node 1260 receives one or more subsequent acknowledgements for the write request. Referring back to block 1538, if the managed node 1260 instead determines that a partially synchronized write is allowed, the method 1500 advances to block 1540, in which the managed node 1260 resumes execution of the workload. By resuming execution of the workload before receiving a subsequent acknowledgement, the write request becomes a partially synchronized write. As indicated in block 1542, the managed node 1260 may increase the number (e.g., a number in the write synchronization management data 1404) of partially synchronized writes that are presently outstanding for the workload. When execution of the workload is resumed, the workload may request additional writes and the managed node 1260 may, in response, perform the operations described above (e.g., in a separate thread). Afterwards, the method 1500 advances to block 1544 in which the managed node 1260 receives one or more subsequent acknowledgements for the write request.[0067] As indicated in block 1546, in receiving the one or more subsequent acknowledgements, the managed node 1260 may decrease the number of partially synchronized writes that are presently outstanding (e.g., if the acknowledgements are in response to a partially synchronized write). The managed node 1260 may also stop the timer that was started in block 1530 and compare the elapsed time period to the allowable time period that was indicated with the assignment of the workload in block 1508, as indicated in block 1548. 
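The acknowledgement handling described above (blocks 1530 through 1548) can be sketched as follows; the class and method names are hypothetical, and the sketch assumes the allowance indicator and concurrency threshold from blocks 1506 and 1510:

```python
import time


class WriteSynchronizationManager:
    """Minimal sketch: decide whether a write may proceed as a partially
    synchronized write, and track the delay between acknowledgements."""

    def __init__(self, allow_partial: bool, max_concurrent: int):
        self.allow_partial = allow_partial    # indicator from block 1506
        self.max_concurrent = max_concurrent  # threshold from block 1510
        self.outstanding = 0                  # partially synchronized writes in flight
        self.first_ack_time = {}              # write id -> time of initial acknowledgement

    def on_initial_ack(self, write_id: str) -> bool:
        self.first_ack_time[write_id] = time.monotonic()  # block 1530: start timer
        # Block 1532: partially synchronized write allowed only if the assignment
        # permits it and the concurrency threshold is satisfied.
        if self.allow_partial and self.outstanding < self.max_concurrent:
            self.outstanding += 1   # block 1542: one more outstanding partial write
            return True             # block 1540: resume the workload now
        return False                # block 1538: remain paused for a subsequent ack

    def on_subsequent_ack(self, write_id: str) -> float:
        if self.outstanding > 0:
            self.outstanding -= 1   # block 1546: one fewer outstanding partial write
        # Block 1548: stop the timer and report the elapsed inter-ack period.
        return time.monotonic() - self.first_ack_time.pop(write_id)


mgr = WriteSynchronizationManager(allow_partial=True, max_concurrent=1)
resumed_early = mgr.on_initial_ack("w1")  # below threshold: workload resumes early
blocked = mgr.on_initial_ack("w2")        # threshold reached: workload stays paused
```

The elapsed period returned by `on_subsequent_ack` is what the managed node would then compare against the allowable time period from block 1508.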
If the elapsed time period exceeds the allowable time period (e.g., the elapsed time period does not satisfy the allowable time period), the managed node 1260 may issue a request to one or more network devices (e.g., switches) to increase the priority of write requests over other network traffic, as indicated in block 1550. Additionally or alternatively, the managed node 1260 may decrease the threshold number of partially synchronized writes to allow concurrently (e.g., from five to four).

EXAMPLES

[0068] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0069] Example 1 includes a managed node to manage partially synchronized writes, the managed node comprising a network communicator to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network; and a data manager to pause execution of the workload; wherein the network communicator is further to receive an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and the data manager is further to resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices.[0070] Example 2 includes the subject matter of Example 1, and wherein the network communicator is further to receive a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and the data manager is further to determine an elapsed time period between the initial acknowledgement and the subsequent acknowledgement.
[0071] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the data manager is further to determine whether the elapsed time period satisfies a predefined threshold time period; and the network communicator is further to send, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.[0072] Example 4 includes the subject matter of any of Examples 1-3, and wherein the data manager is further to receive an assignment of the workload from an orchestrator server; and receive, with the assignment, an indication of the predefined threshold time period.[0073] Example 5 includes the subject matter of any of Examples 1-4, and wherein the data manager is further to determine whether the elapsed time period satisfies a predefined threshold time period; and determine to await at least two acknowledgements in response to future write requests before resumption of the workload.[0074] Example 6 includes the subject matter of any of Examples 1-5, and wherein the data manager is further to receive an assignment of the workload from an orchestrator server; receive, with the assignment, an indication of whether to enable partially synchronized writes; and wherein to resume execution of the workload comprises to determine whether the assignment indicates to enable partially synchronized writes; and resume execution in response to a determination that the assignment indicates to enable partially synchronized writes.[0075] Example 7 includes the subject matter of any of Examples 1-6, and wherein to issue the write request to multiple data storage devices comprises to issue the write request to multiple data storage devices in different failure domains.[0076] Example 8 includes the subject matter of any of Examples 1-7, and wherein to issue the write request to write a data block comprises to 
send a key associated with the data block, wherein the key uniquely identifies the data block.[0077] Example 9 includes the subject matter of any of Examples 1-8, and wherein to issue the write request to multiple storage devices comprises to issue the write request to one or more data storage devices of a different managed node.[0078] Example 10 includes the subject matter of any of Examples 1-9, and wherein to resume execution of the workload comprises to determine a number of partially synchronized write requests that have been issued, wherein each partially synchronized write request is a write request for which only one acknowledgement has been received; determine whether the number of partially synchronized write requests satisfies a threshold number of allowable partially synchronized write requests; and resume, in response to a determination that the number of partially synchronized write requests satisfies the threshold number, execution of the workload.[0079] Example 11 includes the subject matter of any of Examples 1-10, and wherein the data manager is further to receive an indication of the threshold number from an orchestrator server.[0080] Example 12 includes the subject matter of any of Examples 1-11, and wherein the network communicator is further to receive a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and the data manager is further to determine an elapsed time period between the initial acknowledgement and the subsequent acknowledgement, determine whether the elapsed time period satisfies a predefined threshold time period, and reduce, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, the threshold number of allowable partially synchronized write requests.[0081] Example 13 includes the subject matter of any of Examples 1-12, and wherein the network communicator is further to send, in response to a 
determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.[0082] Example 14 includes a method for managing partially synchronized writes, the method comprising issuing, by a managed node, a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network; pausing, by the managed node, execution of the workload; receiving, by the managed node, an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and resuming, by the managed node, execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices.[0083] Example 15 includes the subject matter of Example 14, and further including receiving, by the managed node, a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and determining, by the managed node, an elapsed time period between the initial acknowledgement and the subsequent acknowledgement.[0084] Example 16 includes the subject matter of any of Examples 14 and 15, and further including determining, by the managed node, whether the elapsed time period satisfies a predefined threshold time period; and sending, by the managed node and in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.[0085] Example 17 includes the subject matter of any of Examples 14- 16, and further including receiving, by the managed node, an assignment of the workload from an orchestrator server; and receiving, by the managed 
node, with the assignment, an indication of the predefined threshold time period.[0086] Example 18 includes the subject matter of any of Examples 14- 17, and further including determining, by the managed node, whether the elapsed time period satisfies a predefined threshold time period; and determining, by the managed node, to await at least two acknowledgements in response to future write requests before resumption of the workload.[0087] Example 19 includes the subject matter of any of Examples 14- 18, and further including receiving, by the managed node, an assignment of the workload from an orchestrator server; receiving, by the managed node, with the assignment, an indication of whether to enable partially synchronized writes; and wherein resuming execution of the workload comprises determining whether the assignment indicates to enable partially synchronized writes; and resuming execution in response to a determination that the assignment indicates to enable partially synchronized writes.[0088] Example 20 includes the subject matter of any of Examples 14-19, and wherein issuing the write request to multiple data storage devices comprises issuing the write request to multiple data storage devices in different failure domains.[0089] Example 21 includes the subject matter of any of Examples 14-20, and wherein issuing the write request to write a data block comprises sending a key associated with the data block, wherein the key uniquely identifies the data block.[0090] Example 22 includes the subject matter of any of Examples 14-21, and wherein issuing the write request to multiple storage devices comprises issuing the write request to one or more data storage devices of a different managed node.[0091] Example 23 includes the subject matter of any of Examples 14-22, and wherein resuming execution of the workload comprises determining, by the managed node, a number of partially synchronized write requests that have been issued, wherein each partially synchronized 
write request is a write request for which only one acknowledgement has been received; determining, by the managed node, whether the number of partially synchronized write requests satisfies a threshold number of allowable partially synchronized write requests; and resuming, by the managed node and in response to a determination that the number of partially synchronized write requests satisfies the threshold number, execution of the workload. [0092] Example 24 includes the subject matter of any of Examples 14-23, and further including receiving, by the managed node, an indication of the threshold number from an orchestrator server.[0093] Example 25 includes the subject matter of any of Examples 14-24, and further including receiving, by the managed node, a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; determining, by the managed node, an elapsed time period between the initial acknowledgement and the subsequent acknowledgement; determining, by the managed node, whether the elapsed time period satisfies a predefined threshold time period; and reducing, by the managed node and in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, the threshold number of allowable partially synchronized write requests.[0094] Example 26 includes the subject matter of any of Examples 14-25, and further including sending, by the managed node and in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.[0095] Example 27 includes one or more computer-readable storage media comprising a plurality of instructions that, when executed by a managed node, cause the managed node to perform the method of any of Examples 14-26.[0096] Example 28 includes a managed node comprising means for issuing 
a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network; means for pausing execution of the workload; means for receiving an initial acknowledgement associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and means for resuming execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices.

[0097] Example 29 includes the subject matter of Example 28, and further including means for receiving a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; and means for determining an elapsed time period between the initial acknowledgement and the subsequent acknowledgement.

[0098] Example 30 includes the subject matter of any of Examples 28 and 29, and further including means for determining whether the elapsed time period satisfies a predefined threshold time period; and means for sending, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.
[0099] Example 31 includes the subject matter of any of Examples 28-30, and further including means for receiving an assignment of the workload from an orchestrator server; and means for receiving, with the assignment, an indication of the predefined threshold time period.

[00100] Example 32 includes the subject matter of any of Examples 28-31, and further including means for determining whether the elapsed time period satisfies a predefined threshold time period; and means for determining to await at least two acknowledgements in response to future write requests before resumption of the workload.

[00101] Example 33 includes the subject matter of any of Examples 28-32, and further including means for receiving an assignment of the workload from an orchestrator server; means for receiving, with the assignment, an indication of whether to enable partially synchronized writes; and wherein the means for resuming execution of the workload comprises means for determining whether the assignment indicates to enable partially synchronized writes; and means for resuming execution in response to a determination that the assignment indicates to enable partially synchronized writes.

[00102] Example 34 includes the subject matter of any of Examples 28-33, and wherein the means for issuing the write request to multiple data storage devices comprises means for issuing the write request to multiple data storage devices in different failure domains.

[00103] Example 35 includes the subject matter of any of Examples 28-34, and wherein the means for issuing the write request to write a data block comprises means for sending a key associated with the data block, wherein the key uniquely identifies the data block.

[00104] Example 36 includes the subject matter of any of Examples 28-35, and wherein the means for issuing the write request to multiple storage devices comprises means for issuing the write request to one or more data storage devices of a different managed node.

[00105] Example 37
includes the subject matter of any of Examples 28-36, and wherein the means for resuming execution of the workload comprises means for determining a number of partially synchronized write requests that have been issued, wherein each partially synchronized write request is a write request for which only one acknowledgement has been received; means for determining whether the number of partially synchronized write requests satisfies a threshold number of allowable partially synchronized write requests; and means for resuming, in response to a determination that the number of partially synchronized write requests satisfies the threshold number, execution of the workload.

[00106] Example 38 includes the subject matter of any of Examples 28-37, and further including means for receiving an indication of the threshold number from an orchestrator server.

[00107] Example 39 includes the subject matter of any of Examples 28-38, and further including means for receiving a subsequent acknowledgement associated with one of the other data storage devices after the workload has been resumed; means for determining an elapsed time period between the initial acknowledgement and the subsequent acknowledgement; means for determining whether the elapsed time period satisfies a predefined threshold time period; and means for reducing, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, the threshold number of allowable partially synchronized write requests.

[00108] Example 40 includes the subject matter of any of Examples 28-39, and further including means for sending, in response to a determination that the elapsed time period does not satisfy the predefined threshold time period, a request to at least one network device to increase a priority of write requests relative to other network traffic.
Embodiments of the invention are generally directed to systems, methods, and apparatuses for using the same memory type in an error check mode and a non-error check mode. In some embodiments, a memory device includes at least one split bank pair of memory banks. If the memory device is in the error check mode, then, in some embodiments, data is stored in one memory bank of the split bank pair and the corresponding error check bits are stored in the other memory bank of the split bank pair. A register bit on the memory device indicates whether it is in the error check mode or the non-error check mode. Other embodiments are described and claimed.
1. A memory device comprising: a split bank pair of memory banks including a first memory bank and a second memory bank, wherein, if the memory device is in an error correction mode, data is stored in the first memory bank and corresponding error correction bits are stored in the second memory bank; and a register bit to indicate whether the memory device is in the error correction mode or a non-error correction mode.

2. The memory device of claim 1, further comprising: mapping logic to map the error correction bits corresponding to the data stored in the first memory bank to the second memory bank.

3. The memory device of claim 2, wherein the mapping logic comprises: mapping logic to map the error correction bits to the upper 1/M of the second memory bank.

4. The memory device of claim 3, wherein M is eight.

5. The memory device of claim 4, wherein the mapping logic includes logic to drive a portion of a column address associated with the data to a logic high.

6. The memory device of claim 5, wherein the logic to drive a portion of the column address associated with the data to a logic high comprises: logic to drive column address bits 8 through 10 to a logic high.

7. The memory device of claim 5, wherein the register bit to indicate whether the memory device is in the error correction mode or the non-error correction mode is a mode register set (MRS) register bit.

8. The memory device of claim 5, wherein the mapping logic further comprises: mask logic to mask at least a portion of the column address.

9. The memory device of claim 1, wherein the memory device comprises a dynamic random access memory device.

10. A method comprising: determining whether a memory device is in an error correction mode or a non-error correction mode, the memory device having at least one split bank pair of memory banks; writing data to a first bank of the split bank pair; and writing error correction bits associated with the data to a second bank of the split bank pair.

11. The method of claim 10, wherein writing the error correction bits associated with the data to the second bank comprises: storing the data in a buffer of the memory device; activating the same row in each of the first bank and the second bank; and selecting a column for the error correction bits based, at least in part, on a column address received from a host.

12. The method of claim 11, wherein selecting a column for the error correction bits based, at least in part, on the column address received from the host comprises: forcing a specified portion of the column address to a logic high to map the error correction bits to the upper 1/M of the second bank.

13. The method of claim 12, wherein forcing a specified portion of the column address to a logic high to map the error correction bits to the upper 1/M of the second bank comprises: forcing column address bits 8 through 10 to a logic high to map the error correction bits to the upper 1/8 of the second bank.

14. The method of claim 12, further comprising: reading the data from the first bank; and reading the error correction bits associated with the data from the second bank.

15. The method of claim 10, wherein the memory device comprises a dynamic random access memory device.

16. A system comprising: a host to control a memory subsystem; and a memory device coupled to the host through an interconnect, the memory device including: a split bank pair of memory banks including a first memory bank and a second memory bank, wherein, if the memory device is in an error correction mode, data is stored in the first memory bank and corresponding error correction bits are stored in the second memory bank, and a register bit to indicate whether the memory device is in the error correction mode or a non-error correction mode.

17. The system of claim 16, wherein the interconnect comprises at least one of: a point-to-point interconnect; and a multi-drop interconnect.

18. The system of claim 16, further comprising: mapping logic to map the error correction bits to the second bank.

19. The system of claim 18, wherein the mapping logic to map the error correction bits to the second bank comprises: mapping logic to map the error correction bits to the upper 1/M of the second memory bank.

20. The system of claim 19, wherein M is eight.
System, method, and device for supporting an error correction mode and a non-error correction mode using the same memory type

Technical Field

Embodiments of the present invention generally relate to the field of integrated circuits and, more particularly, to systems, methods, and devices for supporting an error correction mode and a non-error correction mode using the same memory type.

Background

Memory devices are susceptible to errors, such as transient (or soft) errors. If these errors are not handled properly, they can cause the computing system to malfunction. Redundant information in the form of error correction codes (ECC) can be used to improve the reliability of the overall system. However, redundant information increases the storage requirements of the memory system and, therefore, its cost. As a result, ECC is typically used only in high-end or mission-critical systems; lower-cost systems forgo ECC and provide a level of reliability suitable for their intended use.

In some cases, additional bits of storage are added to the system by adding memory devices (e.g., dynamic random access memory (DRAM) devices). For example, a system that uses eight DRAMs to store data can use a ninth DRAM to store check bits. In other cases, the additional bits are stored in a variant DRAM designed specifically for use in ECC systems. For example, a non-ECC DRAM can have a capacity of 256 Mbits and 16 outputs, while the ECC variant of the same DRAM can have a capacity of 288 Mbits and 18 outputs. In both examples, the ECC system has 12.5% more storage capacity than the corresponding non-ECC system.

Using different DRAM devices in an ECC system has a number of drawbacks. For example, there is an increase in the costs associated with designing, manufacturing, and stocking two (or more) variants of a DRAM device.
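The 12.5% figure for both conventional approaches can be checked directly. A minimal sketch (the function name is ours, not the document's):

```python
def ecc_overhead(data_capacity: float, total_capacity: float) -> float:
    """Fraction of extra storage devoted to check bits, relative to the data capacity."""
    return (total_capacity - data_capacity) / data_capacity

# Nine DRAMs (eight for data, one for check bits) vs. eight DRAMs:
assert ecc_overhead(8, 9) == 0.125
# A 288 Mbit ECC-variant DRAM vs. its 256 Mbit non-ECC counterpart:
assert ecc_overhead(256, 288) == 0.125
```

Both conventional schemes therefore carry the same 12.5% overhead; the drawback is not the overhead itself but the need for extra devices, pins, and board space.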
In addition, ECC-variant DRAM devices are larger than their non-ECC counterparts and are therefore more difficult to manufacture; adding bits to the ECC-variant DRAM reduces the yield of the device and thus increases its cost. Another disadvantage of using two (or more) variants of a DRAM device is that the memory controller connected to the DRAM devices must support additional pins (e.g., ECC pins). Moreover, because the connector of an ECC-variant DRAM module is larger than that of its non-ECC counterpart, it consumes more space on the motherboard.

Brief Description of the Drawings

Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which like reference numerals refer to similar elements.

FIG. 1 is a high-level block diagram illustrating selected aspects of a computing system implemented according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating selected aspects of a dynamic random access memory (DRAM) implemented according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating selected aspects of storing data bits and error correction bits in a split bank pair according to an embodiment of the present invention.
FIG. 4 is a block diagram illustrating an example of an address mapping between data bits and error correction bits according to an embodiment of the present invention.
FIG. 5 illustrates selected aspects of a read data frame according to an embodiment of the present invention.
FIGS. 6A and 6B illustrate selected aspects of a write data frame sequence according to an embodiment of the present invention.
FIG. 7 is a block diagram illustrating selected aspects of an electronic system according to an embodiment of the present invention.
FIG. 8 is a block diagram illustrating selected aspects of an electronic system according to an alternative embodiment of the present invention.

Detailed Description

Embodiments of the present invention are generally directed to systems, methods, and devices for supporting an error correction mode and a non-error correction mode using the same memory type. In some embodiments, the memory device includes at least one split bank pair having a first memory bank and a second memory bank. In the error correction mode, the data bits can be stored in one bank while the corresponding error correction bits are stored in the other bank. The memory device can be configured to support either mode using register bits (e.g., a mode register set (MRS) register bit). In some embodiments, supporting both the error correction mode and the non-error correction mode has minimal impact on the interface with the memory controller; essentially the same signaling, pin count, and burst length can be used as in a system that supports only the non-error correction mode.

FIG. 1 is a high-level block diagram illustrating selected aspects of a computing system implemented according to an embodiment of the present invention. The computing system 100 includes a requester 102, a memory controller (or host) 110, a memory device 130, and an interconnect 120. The memory controller 110 controls, at least in part, the transfer of information between the requester 102 and the memory device 130. The requester 102 may be a processor (e.g., a central processing unit and/or a core), a service processor, an input/output device (e.g., a peripheral component interconnect (PCI) Express device), the memory itself, or any other element of the system 100.
In some embodiments, the memory controller 110 and the requester 102 are on the same die.

In the illustrated embodiment, the memory controller 110 includes error correction logic 112, a mode indicator 114, and address mapping logic 116. The error correction logic 112 uses redundant information to protect data from specified errors. In some embodiments, the error correction logic 112 implements an error correction code (ECC).

As discussed further below, in some embodiments the memory device 130 may operate in an error correction mode or a non-error correction mode. When operating in the error correction mode, the memory device 130 stores both data bits and corresponding error correction bits (e.g., ECC bits). When operating in the non-error correction mode, (substantially) the full capacity of the memory device 130 is used to store data bits. The mode indicator 114 provides an indication of whether the memory device 130 is operating in the error correction mode or the non-error correction mode. In some embodiments, the mode indicator 114 includes one or more register bits.

In some embodiments, the memory device 130 applies different address mappings to read/write data depending on whether it is in the error correction mode or the non-error correction mode. For example, the address mapping used in the error correction mode maps the error correction bits (e.g., ECC bits) into the memory array. The address mapping logic 116 enables the memory controller 110 to know the address mapping used by the memory device 130, and may be any logic capable of providing an indication of the address mapping for a number of memory cells.

The memory device 130 may be any of a wide range of devices, including a dynamic random access memory (DRAM). In some embodiments, the memory device 130 is organized into one or more split bank pairs 140.
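The mode indicator can be thought of as a single register bit consulted on every access. A hedged sketch (the class name, bit position, and power-up default are illustrative assumptions; the text only requires "one or more register bits"):

```python
class ModeIndicator:
    """Toy model of mode indicators 114/132: one register bit selects the mode."""

    ECC_MODE_BIT = 0x1  # assumed bit position, not specified by the text

    def __init__(self) -> None:
        # Assumed power-up default: non-error-correction mode.
        self.register = 0

    def set_error_correction_mode(self, enabled: bool) -> None:
        if enabled:
            self.register |= self.ECC_MODE_BIT
        else:
            self.register &= ~self.ECC_MODE_BIT

    def in_error_correction_mode(self) -> bool:
        return bool(self.register & self.ECC_MODE_BIT)

mode = ModeIndicator()
assert not mode.in_error_correction_mode()
mode.set_error_correction_mode(True)
assert mode.in_error_correction_mode()
```

In the actual device this bit would live in a mode register set (MRS) register, written by the host during initialization.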
A split bank pair refers to a pair of memory banks that can be configured either as a single bank or as two separate banks. In some embodiments, each bank in a split bank pair has its own row decoder and column decoder.

In some embodiments, each bank of a split bank pair provides one page of memory. For example, bank 0A provides page 142 and bank 0B provides page 144. A "bank" refers to an array of memory cells provided by a memory device. Pages 142 and 144 can collectively provide a logical page 146, where the term "logical page" refers to a logical combination of the pages of two or more physical banks. In some embodiments, pages 142 and 144 each provide one kilobyte (KB) of memory, and logical page 146 provides a net effective page size of 2 KB.

In the illustrated embodiment, the memory device 130 includes a mode indicator 132, a posted write buffer 134, a partial write mask 136, and column address generation logic 138. The mode indicator 132 provides an indication of whether the memory device 130 is operating in the error correction mode or the non-error correction mode. In some embodiments, the mode indicator 132 includes one or more bits of a register, such as a mode register set (MRS). The posted write buffer 134 is a buffer in which data is posted before being written to the memory core of the memory device 130. The partial write mask 136 provides a write mask for data written to the memory core; in some embodiments, it is used to access the error correction bits associated with data stored in the memory device 130. In some embodiments, the column address generation logic 138 generates column address information for the error correction bits associated with data stored in the memory device 130.

FIG. 2 is a block diagram illustrating selected aspects of a dynamic random access memory (DRAM) implemented according to an embodiment of the present invention.
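The pairing of two 1 KB physical pages into one 2 KB logical page can be sketched as a toy model (the class and attribute names are ours):

```python
PAGE_BYTES = 1024  # 1 KB per physical page, per the text

class SplitBankPair:
    """Two physical banks whose simultaneously open pages form one logical page."""

    def __init__(self) -> None:
        self.page_a = bytearray(PAGE_BYTES)  # e.g., page 142 from bank 0A
        self.page_b = bytearray(PAGE_BYTES)  # e.g., page 144 from bank 0B

    def logical_page_size(self) -> int:
        # Logical page 146 spans both physical pages.
        return len(self.page_a) + len(self.page_b)

pair = SplitBankPair()
assert pair.logical_page_size() == 2048  # net effective 2 KB logical page
```

Because each bank has its own row and column decoders, the two halves of the logical page can also be addressed independently, which is what the error correction mode exploits below.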
The DRAM 200 includes 16 memory banks (0A through 7B), or 8 split bank pairs (e.g., split bank pair 0A/0B). In some embodiments, the DRAM 200 may be configured as a ×4 or a ×8 DRAM. In ×4 mode, the DRAM 200 provides 16 banks (0A through 7B), each providing 64 bits of data to 4 data (DQ) pins. In ×8 mode, the DRAM 200 provides 8 split bank pairs, each providing 128 bits of data to 8 DQ pins.

In some embodiments, the DRAM 200 may be configured to operate in an error correction mode (e.g., an ECC mode) or a non-error correction mode. When operating in the error correction mode, the DRAM 200 leverages its split bank structure by storing data in one bank of a split bank pair (e.g., bank 0A) and storing the corresponding error correction bits (e.g., ECC bits) in the other bank of the pair (e.g., bank 0B).

FIG. 3 is a block diagram illustrating selected aspects of storing data bits and error correction bits in a split bank pair according to an embodiment of the present invention. The split bank pair 300 includes bank 0A and bank 0B. In some embodiments, data is stored in the lower (M-1)/M (e.g., 7/8) of the cells of each bank, while the corresponding error correction bits are stored in the upper 1/M (e.g., 1/8) of the cells of the other bank of the split bank pair 300. For example, the error correction bits covering the data stored in bank 0A may be stored in the upper 1/8 of the cells of bank 0B (302). Similarly, the error correction bits covering the data stored in bank 0B may be stored in the upper 1/8 of the cells of bank 0A (304). In some embodiments, the error correction bits are error correction code (ECC) bits.

In some embodiments, a host (e.g., memory controller 110 shown in FIG. 1) addresses a particular bank of a split bank pair to identify the bank that receives/provides the data bits.
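With M = 8 and the 1 KB page size given earlier, the 7/8 : 1/8 split works out as follows (the 8-check-bits-per-64-data-bits ratio is stated later in the text; variable names are ours):

```python
PAGE_BYTES = 1024
M = 8

data_region = PAGE_BYTES * (M - 1) // M   # lower 7/8 of each bank holds data
ecc_region = PAGE_BYTES // M              # upper 1/8 of the companion bank holds check bits
assert (data_region, ecc_region) == (896, 128)

# One ECC byte (8 bits) covers 8 data bytes (64 bits), so the data region needs:
ecc_bytes_needed = data_region // 8
assert ecc_bytes_needed == 112            # fits in the 128-byte region, with cells left over
```

The spare cells (128 available versus 112 needed) are consistent with the unused cells in the FIG. 4 mapping discussed below.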
If the memory device is in the error correction mode, it uses its internal logic (e.g., the partial write mask 136 and column address generation logic 138 shown in FIG. 1) to access the error correction bits corresponding to the data bits. Access to the data bits and the corresponding error correction bits is discussed further below with reference to FIGS. 4-6B.

FIG. 4 is a block diagram illustrating an example of an address mapping between data bits and error correction bits according to an embodiment of the present invention. In the illustrated embodiment, the data bits are stored in the lower 7/8 of a page provided by one bank of a split bank pair, as shown at 410. The corresponding error correction bits are stored in the upper 1/8 of the cells of the other bank (430) of the split bank pair. For example, the ECC bits covering bytes 0-7 are stored in memory cell 896, as shown by reference numeral 432. Similarly, the ECC bits covering bytes 128-135 are stored in memory cell 897, as shown by reference numeral 434. In some embodiments, certain cells in the upper 1/8 of bank 430 (e.g., cell 903) are not used, as shown by reference numeral 436. In some embodiments, the error correction bits covering bytes 8-15 are stored in cell 905, and the sequence repeats itself.

Referring again to FIG. 1, a sequence of events for reading from the DRAM is illustrated. The memory controller 110 provides the row address 124 to the memory device 130. Based, at least in part, on the row address 124, the memory device 130 activates the same row in both banks of a split bank pair. For example, based on row address 124, the memory device 130 opens rows 150 and 152 of banks 0A and 0B.

The memory controller 110 provides the column address 122 to the memory device 130 (e.g., using a column address strobe (CAS) frame). The memory device 130 uses the column address 122 to access the data bits from the appropriate bank (e.g., bank 0A).
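Only two points of the FIG. 4 mapping are stated explicitly (bytes 0-7 map to cell 896, bytes 128-135 to cell 897). A sketch that reproduces just those stated examples, assuming consecutive 128-byte data blocks map to consecutive ECC cells; the full internal mapping, including which cells go unused, is not completely specified in the text:

```python
ECC_BASE = 896  # first cell of the upper 1/8 of a 1024-byte page

def ecc_cell_for_first_group(block: int) -> int:
    """Cell holding the ECC byte for data bytes block*128 .. block*128 + 7
    (assumption: one cell per 128-byte block for these offset-0 groups)."""
    return ECC_BASE + block

assert ecc_cell_for_first_group(0) == 896   # covers bytes 0-7 (reference 432)
assert ecc_cell_for_first_group(1) == 897   # covers bytes 128-135 (reference 434)
```

How this layout falls out of the hardware becomes clearer in the column-address generation described next: forcing the upper column address bits high lands every ECC access in the region starting at cell 896.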
In some embodiments, based at least in part on the column address 122, the memory device 130 internally generates a column address for the error correction bits; that is, it generates a column address into the bank in which the error correction bits are stored. In some embodiments, the memory device 130 forces a portion of the column address 122 high (e.g., to a logic one) to access the upper 1/M of the bank in which the error correction bits are stored.

For example, in some embodiments, the column address 122 includes eight column address (CA) bits, CA3 through CA10. In such an embodiment, the memory device 130 may access the error correction bits by forcing column address bits CA8, CA9, and CA10 high and accessing 8 bytes from the appropriate bank (e.g., bank 0B). The memory device 130 may then use the actual values of CA8, CA9, and CA10 to identify one of those 8 bytes. For example, if the actual value of CA8-CA10 is "000", the memory device 130 selects the first of the 8 bytes as the byte containing the error correction bits. Similarly, if CA8-CA10 is "001", the memory device 130 selects the second byte. The memory device 130 may then provide the read data and its associated error correction bits to the memory controller 110. In some embodiments, CA3-CA7 are taken from the read CAS frame.

FIG. 5 illustrates selected aspects of a read data frame according to an embodiment of the present invention. If the system (e.g., system 100 shown in FIG. 1) is in the error correction mode, then, in some embodiments, a 64-bit data transfer is performed over two consecutive frames. For example, frames 502 and 504 transfer 64 data bits in unit intervals (UI) 0 through 15 and 8 error correction (e.g., ECC) bits in unit intervals 16 and 17.

In some embodiments, two reads are performed in parallel and 128 data bits are transferred in four frames.
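The internal column-address generation described above can be sketched as bit manipulation on an address whose significant bits are CA3-CA10 (the function name and tuple return are ours; the bit operations follow the text):

```python
def ecc_column_access(col_addr: int) -> tuple[int, int]:
    """Return (forced_addr, byte_index) for an internal ECC access.

    forced_addr: the column address with CA8-CA10 driven to logic high,
                 steering the access into the upper 1/8 of the bank.
    byte_index:  the original CA8-CA10 value, selecting one of the
                 8 bytes fetched from that region.
    """
    forced_addr = col_addr | (0b111 << 8)   # drive CA8..CA10 to logic high
    byte_index = (col_addr >> 8) & 0b111    # original CA8..CA10 select 1 of 8 bytes
    return forced_addr, byte_index

# CA8-CA10 == "000" selects the first of the 8 fetched bytes:
assert ecc_column_access(0b000 << 8)[1] == 0
# CA8-CA10 == "001" selects the second byte:
assert ecc_column_access(0b001 << 8)[1] == 1
# The forced address always has CA8-CA10 high:
assert ecc_column_access(0)[0] & (0b111 << 8) == (0b111 << 8)
```

Because the forced address always has its top three CA bits set, every ECC access lands in the same upper 1/8 of the companion bank, regardless of where the data access landed.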
For example, in the illustrated embodiment, frames 502 and 504 transfer the first 64 data bits (e.g., d0-d63), while frames 506 and 508 transfer another 64 data bits (e.g., d64-d127). The error correction bits covering data bits d0-d63 are transferred in UI 16 and 17 of frame 506, and the error correction bits covering data bits d64-d127 are transferred in UI 16 and 17 of frame 508. In alternative embodiments, the read frames may have a different structure and/or a different number of frames may be transferred.

Referring again to FIG. 1, a sequence of events for writing data to a memory device (e.g., a DRAM) is illustrated. Error correction bits (e.g., ECC bits 126) and data bits (e.g., data bits 128) are transferred from the memory controller 110 and stored in the buffer 134 (e.g., a posted write buffer). In addition, the memory controller 110 provides a row address 124 and a column address 122 (e.g., as part of a write CAS frame).

Based, at least in part, on the row address 124, the memory device 130 activates the same row (e.g., rows 150 and 152) in both banks of a split bank pair 140. Based on the data in the write CAS frame (e.g., the bits of the column address 122 and the bank address fields), the data bits 128 are written to one bank of the split bank pair 140. Based, at least in part, on the column address 122, the memory device 130 internally generates a column address for the error correction bits. In some embodiments, the column address for the error correction bits is generated by forcing CA8-CA10 high and using CA4-CA10 from the write CAS frame. In some embodiments, CA0-CA2 are not used.

In general, the number of error correction bits is only a fraction of the number of data bits. For example, 8 error correction bits can be used to cover 64 data bits.
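The bookkeeping for the FIG. 5 read transfer and the check-bit ratio can be verified directly (the per-UI widths are inferred from the unit-interval ranges given above, not stated explicitly in the text):

```python
# Two consecutive frames carry UI 0-17: data in UI 0-15, ECC bits in UI 16-17.
data_bits, ecc_bits = 64, 8
data_uis, ecc_uis = 16, 2

assert data_bits // data_uis == 4     # 4 data bits per unit interval (inferred)
assert ecc_bits // ecc_uis == 4       # 4 ECC bits per unit interval (inferred)
assert ecc_bits / data_bits == 0.125  # same 12.5% ratio as the background examples
```

Notably, the extra check bits ride in two additional unit intervals of the existing frames, rather than on extra pins, which is how the scheme avoids changing the controller interface.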
Accordingly, the memory device 130 can generate a partial write mask to mask 56 bits and write the 8 error correction bits to the column address generated by forcing CA8-CA10 high and taking CA4-CA10 from the write CAS frame.

FIGS. 6A and 6B illustrate selected aspects of a write data frame sequence according to an embodiment of the present invention. The write sequence shown in FIGS. 6A and 6B can be used in a system having two memory devices on parallel memory channels. Each device sees all four frames and is assigned either D0-D63 or D64-D127. The allocation of data bits to memory devices is discussed further below.

In some embodiments, the write sequence includes the following frames: write header (Wh) 602, ECC write frame (We) 604, write data 1 (Wd1) 606, and write data 2 (Wd2) 608. In some embodiments, each frame is 6 bits wide (labeled 0 through 5) and 9 unit intervals deep (e.g., UI 0-8 or UI 9-17). Wh 602 includes header information for the write sequence and some data bits.

We 604 transfers the error correction bits covering the associated data bits (e.g., ECC bits 610 shown in UI 12-14). In some embodiments, the error correction bits (e.g., ECC bits) are transferred to the memory device 130 using the partial write mask encoding. That is, We 604 may have the same command encoding as a partial write mask frame (Wm), except that the mask bits are replaced by error correction bits (e.g., ECC bits 610 shown in UI 12-14). ECC bits ECC0-ECC7 cover data bits D0-D63, and ECC bits ECC8-ECC15 cover data bits D64-D127. In some embodiments, the We frame 604 is required for all write data transfers when the system is operating in the error correction mode.

Wd1 606 and Wd2 608 transfer the remaining data bits for the write operation. Data bits D0-D63 are used by one memory device, and D64-D127 are used by the other. In some embodiments, a register bit within each memory device determines which data bits that device picks out.
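The "mask 56 bits, write 8" internal ECC write can be sketched as mask construction (the bit convention is ours: a set bit means the bit is masked, i.e., not written):

```python
def ecc_partial_write_mask(ecc_byte_index: int) -> int:
    """64-bit write mask for the internal ECC write: mask 56 bits and
    leave only the 8 bits of the target ECC byte writable.
    Convention (assumed here): a set bit means the bit is masked."""
    assert 0 <= ecc_byte_index < 8
    mask = (1 << 64) - 1                        # start with all 64 bits masked
    mask &= ~(0xFF << (8 * ecc_byte_index))     # unmask the target ECC byte
    return mask

m = ecc_partial_write_mask(0)
assert bin(m).count("1") == 56   # 56 bits remain masked
assert m & 0xFF == 0             # byte 0 is writable
```

Reusing the existing partial-write-mask machinery in this way is what lets the We frame share its command encoding with the ordinary Wm frame.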
For example, an MRS register bit can be used to allocate data bits to a memory device.

FIG. 7 is a block diagram illustrating selected aspects of an electronic system according to an embodiment of the present invention. The electronic system 700 includes a processor 710, a memory controller 720, a memory 730, an input/output (I/O) controller 740, radio frequency (RF) circuitry 750, and an antenna 760. In operation, the system 700 sends and receives signals using the antenna 760, and these signals are processed by the various elements shown in FIG. 7. The antenna 760 may be a directional antenna or an omnidirectional antenna. As used herein, the term omnidirectional antenna refers to any antenna having a substantially uniform pattern in at least one plane. For example, in some embodiments, the antenna 760 may be an omnidirectional antenna such as a dipole antenna or a quarter-wavelength antenna. Also for example, in some embodiments, the antenna 760 may be a directional antenna such as a parabolic dish antenna, a patch antenna, or a Yagi antenna. In some embodiments, the antenna 760 may include multiple physical antennas.

The radio frequency circuitry 750 communicates with the antenna 760 and the I/O controller 740. In some embodiments, the RF circuitry 750 includes a physical interface (PHY) corresponding to a communication protocol. For example, the RF circuitry 750 may include modulators, demodulators, mixers, frequency synthesizers, low noise amplifiers, power amplifiers, and the like. In some embodiments, the RF circuitry 750 may include a heterodyne receiver; in other embodiments, it may include a direct conversion receiver. For example, in embodiments with multiple antennas 760, each antenna may be coupled to a corresponding receiver. In operation, the RF circuitry 750 receives communication signals from the antenna 760 and provides analog or digital signals to the I/O controller 740.
In addition, the I/O controller 740 may provide signals to the RF circuitry 750, which operates on the signals and then transmits them via the antenna 760.

The processor 710 may be any type of processing device. For example, the processor 710 may be a microprocessor, a microcontroller, or the like. Further, the processor 710 may include any number of processing cores or any number of separate processors.

The memory controller 720 provides a communication path between the processor 710 and the other elements shown in FIG. 7. In some embodiments, the memory controller 720 is part of a hub device that also provides other functions. As shown in FIG. 7, the memory controller 720 is coupled to the processor 710, the I/O controller 740, and the memory 730.

The memory 730 may include multiple memory devices. These memory devices may be based on any type of memory technology. For example, the memory 730 may be random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), non-volatile memory such as FLASH memory, or any other type of memory. In some embodiments, the memory 730 may support an error correction mode and a non-error correction mode.

The memory 730 may represent a single memory device or multiple memory devices on one or more modules. The memory controller 720 provides data to the memory 730 through the interconnect 722 and receives data from the memory 730 in response to read requests. Commands and/or addresses may be provided to the memory 730 through the interconnect 722 or through a different interconnect (not shown). The memory controller 720 may receive data to be stored in the memory 730 from the processor 710 or from another source, and may provide data received from the memory 730 to the processor 710 or to another destination. The interconnect 722 may be a bidirectional or a unidirectional interconnect.
The interconnect 722 may include a plurality of parallel conductors. The signals can be differential or single-ended. In some embodiments, the interconnect 722 operates using a forwarded, multi-phase clock scheme. The memory controller 720 is also connected to the I/O controller 740 and provides a communication path between the processor 710 and the I/O controller 740. The I/O controller 740 includes circuitry for communicating with I/O circuits such as a serial port, a parallel port, a universal serial bus (USB) port, and the like. As shown in FIG. 7, the I/O controller 740 provides a communication path to the RF circuit 750. FIG. 8 is a block diagram illustrating selected aspects of an electronic system according to an alternative embodiment of the present invention. The electronic system 800 includes a memory 730, an I/O controller 740, an RF circuit 750, and an antenna 760, all of which have been described above with reference to FIG. 7. The electronic system 800 further includes a processor 810 and a memory controller 820. As shown in FIG. 8, the memory controller 820 may be on the same chip as the processor 810. In some embodiments, the memory controller 820 includes replay logic (e.g., the replay logic 310 shown in FIG. 3) to detect a prescribed error, perform an automatic fast reset, and replay a specific transaction. As described above with reference to the processor 710 (FIG. 7), the processor 810 may be any type of processor. Example systems represented by FIGS. 7 and 8 include desktop computers, laptop computers, servers, mobile phones, personal digital assistants, digital home systems, and so on. Elements of embodiments of the present invention may also be provided as a machine-readable medium for storing machine-executable instructions. 
Machine-readable media can include, but are not limited to, flash memory, compact discs, compact disc read-only memory (CD-ROM), digital versatile/video disc (DVD) ROM, random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, propagation media, or other types of machine-readable media suitable for storing electronic instructions. For example, embodiments of the present invention may be downloaded as a computer program, which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to "one embodiment" or "an alternative embodiment" in different parts of the specification are not necessarily all referring to the same embodiment. Moreover, the particular features, structures, or characteristics may be combined as appropriate in one or more embodiments of the present invention. Similarly, it should be appreciated that in the foregoing description of embodiments of the present invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof in order to simplify the present disclosure and aid in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be construed as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. 
Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single foregoing disclosed embodiment. Accordingly, the claims following the detailed description are hereby expressly incorporated into this detailed description. |
Various embodiments are generally directed to an apparatus, method and other techniques to send a power operation initiation indication to the accelerator device via the subset of the plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more infrastructure devices, receive a response from the accelerator device, the response to indicate to the processor that the accelerator device is ready for the power operation, and cause the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices. |
1. An apparatus to dynamically control accelerator devices, comprising: a multi-chip package (MCP) comprising a processor and an accelerator device, the accelerator device comprising infrastructure devices, and the processor coupled with the accelerator device via a subset of a plurality of interconnects, the processor to: send a power operation initiation indication to the accelerator device via an interconnect of the subset of the plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more of the infrastructure devices, receive a response from the accelerator device, the response to indicate to the processor that the accelerator device is ready for the power operation, and cause the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.
2. The apparatus of claim 1, the processor to receive a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.
3. The apparatus of claim 1, the processor to assert a power enable pin to send the power operation initiation indication over the interconnect to the accelerator device to enable power for the one or more infrastructure devices, receive the response comprising a sideband ready message from the accelerator device, and send a power-on configuration message to configure one or more of the infrastructure devices.
4. The apparatus of claim 3, the processor to receive a sideband complete message indicating the configuration of the one or more infrastructure devices completed, de-assert a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device, and send an indication to a basic input/output system (BIOS) indicating completion of the power operation.
5. The apparatus of claim 1, the processor to send the power operation initiation indication over the interconnect comprising a sideband reset message to disable the power for the one or more infrastructure devices, receive the response comprising a sideband reset acknowledgment message, and cause the power operation via asserting the power reset pin and de-asserting the power enable pin.
6. The apparatus of claim 5, the processor to send an indication to a basic input/output system (BIOS) indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin.
7. The apparatus of claim 1, comprising a BIOS coupled with the processor via a second subset of the plurality of interconnects, the BIOS to receive an indication to enable or disable the accelerator device from an operating system or virtual operating system and send a second power operation indication to the processor based on the indication via a mailbox command communicated via the subset of the plurality of interconnects.
8. The apparatus of claim 7, comprising: a management controller coupled with a scheduler, the management controller to receive an indication to enable or disable the accelerator device from the scheduler, and to cause the operating system or virtual operating system to send the indication to enable or disable the accelerator device to the BIOS.
9. The apparatus of claim 1, comprising: a management controller; and a basic input/output system (BIOS), the management controller and the BIOS coupled with the MCP.
10. A computer-implemented method to dynamically control accelerator devices, comprising: sending a power operation initiation indication to an accelerator device via a subset of a plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more of infrastructure devices of the accelerator device; receiving a response from the accelerator device, the response to indicate to a processor that the accelerator device is ready for the power operation; and causing the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.
11. The computer-implemented method of claim 10, comprising receiving a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.
12. The computer-implemented method of claim 10, comprising performing the power operation to enable power for the one or more infrastructure devices comprising: asserting a power enable pin to send the power operation initiation indication to the accelerator device; receiving the response comprising a sideband ready message from the accelerator device; and sending a power-on configuration message to configure one or more of the infrastructure devices.
13. The computer-implemented method of claim 12, comprising: receiving a sideband complete message indicating the configuration of the one or more infrastructure devices completed; de-asserting a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device; and sending an indication to the BIOS indicating completion of the power operation.
14. The computer-implemented method of claim 10, comprising performing the power operation to disable the one or more infrastructure devices comprising: sending the power operation initiation indication comprising a sideband reset message; receiving the response comprising a sideband reset acknowledgment message; and causing the power operation via asserting the power reset pin and de-asserting the power enable pin.
15. The computer-implemented method of claim 14, comprising sending an indication to a BIOS indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin. |
TECHNICAL FIELD

Embodiments described herein generally include techniques to dynamically enable and disable accelerator devices in compute environments.

BACKGROUND

As markets progress towards machine learning, artificial intelligence, perceptual computing, etc., processing products become more specialized and are tailored to these market segments. One of the current silicon solutions to promote this trend is the integration of accelerators into a traditional processor die to create Multi-Chip Package (MCP) solutions. Usage of these accelerators is workload dependent, and there have been proven real-time use cases where the accelerators are in an idle state and consume unnecessary power from the overall platform budget.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A illustrates a first example of a node.
FIG. 1B illustrates a second example of a node.
FIG. 2A illustrates an example of a processing flow.
FIG. 2B illustrates an example of a second processing flow.
FIG. 3 illustrates an example of a system.
FIG. 4A illustrates an example of a third processing flow.
FIG. 4B illustrates further operations of the third processing flow.
FIG. 5 illustrates an example of a third logic flow.
FIG. 6 illustrates an example embodiment of a computing architecture.

DETAILED DESCRIPTION

As previously discussed, usage of these accelerator devices is workload dependent, and there have been proven real-time use cases where the accelerators are in an idle state and consume unnecessary power from the overall platform budget. The discussion herein may be related to dynamically enabling and disabling power for accelerator devices and/or one or more infrastructure devices of the accelerator devices. The goal of shifting power from an inactive component of a CPU to an active one has gotten a lot of attention recently. 
Prior solutions include enabling power balancing between CPU cores and sometimes across CPU sockets through tailored power management algorithms using telemetry information from thermal sensors, performance monitors, etc. These solutions are typically hardware autonomous with some control given to an operating system (OS), or are OS driven. While some of these dynamic power management solutions can be extended to accelerator devices, doing so requires tailoring of an accelerator die to have these advanced power management features and does not eliminate the idle power consumption. In addition to the power issue, to achieve reconfiguration of the MCP (disabling the accelerator as a whole due to workload, or changing the accelerator's internal configuration), the platform including the MCP would have to recycle through a warm reset and sometimes a cold reset, which is not an acceptable solution for a server or processing node where customers want 99.99% uptime for these platforms. Thus, embodiments are directed to dynamically controlling the accelerator devices based on workload resource requirements, which may be provided and controlled by a scheduler device. For example, embodiments include the scheduler reading workload metadata that contains workload resource requirements based on workloads for processing by a data center. The workload resource requirements may specify which processing and memory requirements are needed to process the workloads, and the scheduler may determine accelerator devices, resources, and circuitry to enable and/or disable based on the workload resource requirements. Moreover, the scheduler may determine which nodes including the accelerator devices, resources, and circuitry are available to perform power operations, e.g., enabling and disabling power for accelerator devices. 
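The scheduler's mapping of workload resource requirements onto per-node enable/disable decisions can be sketched as follows. This is an illustrative assumption, not a disclosed interface: the shape of the workload metadata (an "accelerators" list per workload) and the node capability map are hypothetical names chosen for the example.

```python
def plan_power_operations(workload_metadata, nodes):
    """Map workload resource requirements onto per-node power operations.

    `workload_metadata` lists the accelerator types each pending workload
    requires; `nodes` maps a node id to the accelerator types it hosts.
    Both structures are assumptions made for this sketch.
    """
    # Collect every accelerator type some pending workload requires.
    required = set()
    for workload in workload_metadata:
        required.update(workload.get("accelerators", []))

    plan = {}
    for node_id, hosted in nodes.items():
        # Enable accelerators a pending workload needs; disable idle ones
        # so they stop drawing from the overall platform power budget.
        plan[node_id] = {
            "enable": sorted(set(hosted) & required),
            "disable": sorted(set(hosted) - required),
        }
    return plan

plan = plan_power_operations(
    [{"accelerators": ["crypto"]}, {"accelerators": ["gpu"]}],
    {"node-a": ["gpu", "crypto"], "node-b": ["ffu"]},
)
```

The scheduler could then weigh the resulting plan against load balancing and any service level agreement before directing a node's management controller and operating system.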
In some instances, the scheduler may determine which nodes are to perform the power operations based on load balancing and/or on a service level agreement (SLA) that is associated with a workload. In one example, the scheduler may communicate with a node to cause performance of a power operation. More specifically, the scheduler directs a management controller and an operating system of the node to perform a power operation for one or more accelerator device(s). The management controller, utilizing the operating system, may perform one or more operations in preparation to perform the power operation. For example, when disabling accelerator devices, the operating system may offload processes and workloads from the accelerator devices to be disabled. Similarly, in preparation for enabling accelerator devices, the operating system determines workloads and processes to execute on the accelerator devices once the accelerator devices are enabled. Further, either the operating system or a basic input/output system (BIOS) performs a hot-plug flow, and the BIOS determines the CPU and the accelerator device to perform the power operation. The BIOS may issue a power operation indication to the CPU associated with the accelerator device. Embodiments include the CPU and accelerator device performing the power operation, as will be discussed in more detail. The CPU may also notify the BIOS of completion of the power operation. Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description thereof. 
The intention is to cover all modifications, equivalents, and alternatives consistent with the claimed subject matter. FIG. 1A illustrates an example embodiment of a node 101 in which aspects of the present disclosure may be employed to process data and perform dynamic enablement and disablement of accelerator devices. The node 101 is a computing device, such as a personal computer, desktop computer, tablet computer, netbook computer, notebook computer, laptop computer, a mobile computing device, a server, server farm, blade server, a rack-based server, a rack-based processing board, and so forth. In embodiments, the node 101 includes devices, circuitry, memory, storage, and components to process data and information. In the illustrated example, the node 101 includes a multi-chip package (MCP) 102 having a central processing unit (CPU) 110, one or more accelerator devices 112-x, where x may be any positive integer, and package memory 114-z, where z may be any positive integer. The package memory 114 may be volatile memory, such as cache, that can be used by the other components of the MCP 102 to process information and data. The MCP 102 may include additional circuitry and registers. Moreover, the MCP 102 may include more than one CPU 110, and each of the CPUs 110 may include a number of processing cores to process data. 
The node 101 further includes a basic input/output system (BIOS) 120, a management controller 122, memory 124, storage 132 having an operating system 130, and interface(s) 134. In embodiments, the CPU 110 is implemented using any processor or logic device and may be one or more of any type of computational element, such as but not limited to, a microprocessor, a processor, central processing unit, digital signal processing unit, dual core processor, mobile device processor, desktop processor, single core processor, a system-on-chip (SoC) device, complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processor or processing circuit on a single chip or integrated circuit. The CPU 110 may be connected to and communicate with the other elements of the MCP 102 via interconnects 103, as will be discussed in more detail below with respect to FIG. 1B. The CPU 110 may also include cache, registers, and other circuitry. Embodiments are not limited in this manner. In embodiments, the MCP 102 includes one or more accelerator device(s) 112-x, where x may be any positive integer. An accelerator device 112 may be a hardware (processor) accelerator device designed to provide hardwired logic to accelerate specific processing tasks, such as graphics, mathematical operations, cryptographic operations, media processing, image processing, and so forth. Examples of an accelerator device 112 include a graphics processing unit (GPU), a cryptographic unit, a physics processing unit (PPU), a fixed function unit (FFU), and the like. Embodiments are not limited in this manner. In embodiments, the node 101 includes a memory 124 coupled with the MCP 102 via one or more interconnects. 
The memory 124 may be one or more of volatile memory including random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), double data rate synchronous dynamic RAM (DDR SDRAM), SDRAM, DDR1 SDRAM, DDR2 SDRAM, DDR3 SDRAM, single data rate SDRAM (SDR SDRAM), DDR3, DDR4, and so forth. Embodiments are not limited in this manner, and other memory types may be contemplated and be consistent with embodiments discussed herein. For example, the memory 124 may be a three-dimensional crosspoint memory device, or other byte addressable write-in-place nonvolatile memory devices. In embodiments, the memory devices may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin-transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin-Orbit Transfer) based device, a thyristor-based memory device, or a combination of any of the above, or other memory. In embodiments, the node 101 includes storage 132, which can be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In embodiments, storage 132 may include technology to increase the storage performance and enhanced protection for valuable digital media when multiple hard drives are included, for example. 
Further examples of storage 132 may include a hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of DVD devices, a tape device, a cassette device, or the like. The embodiments are not limited in this context. The storage 132 may include an operating system 130 or system software that manages the node's 101 hardware and software resources and provides common services for computer programs, software applications, and hardware components. The operating system 130 may be a Windows® based operating system, an Apple® based operating system, a Unix® based operating system, and so forth. In some embodiments, the operating system 130 may not be stored in storage 132, but may be loaded and run from the memory 124. In another example, the operating system 130 may load from a network and run from memory 124. In some embodiments, the node 101 may not include storage 132, and embodiments are not limited in this manner. The operating system 130 also performs operations to enable performance of the power operation for the accelerator device(s) 112. For example, the operating system 130 may receive a notification and data from a scheduler notifying the operating system 130 of the power operation. The operating system 130 may perform one or more operations in preparation for the power operation. For example, when disabling an accelerator device 112, the operating system 130 may offload processes and workloads from the accelerator devices 112 to be disabled. Similarly, in preparation for enabling an accelerator device 112, the operating system 130 determines which workloads and processes to execute on the accelerator device 112 once the accelerator devices are enabled. In some instances, the operating system 130 performs a hot-plug flow in preparation of performing a power operation. 
The operating system 130 may cause a hot-plug flow if the input/output (I/O) link between the CPU 110 and an accelerator device 112 is managed by the operating system 130, e.g., the operating system can control the I/O link. If the power operation enables power to an accelerator device 112, the hot-plug flow includes initializing any interconnects or buses, reserving memory, configuring/setting registers, and loading any additional software (drivers) to support the accelerator device 112, for example. If the power operation disables power to an accelerator device 112, the hot-plug flow includes configuring registers, enabling memory to be released, and so forth. The operating system 130 also indicates to the BIOS 120 to enable or disable the accelerator devices 112. In one example, the operating system 130 may utilize an Advanced Configuration and Power Interface (ACPI) message or interrupt to notify the BIOS 120 to cause the power operation. In embodiments, the node 101 includes one or more interface(s) 134 to communicate data and information with other nodes and compute systems, for example. An interface 134 may be capable of communicating via a fabric network or an Ethernet network, optically and/or electrically. The node 101 may receive data from one or more of a data center, a management server, a scheduler, and so forth. The received data and information may cause a power operation to be performed on the node 101. Other examples of an interface 134 include Universal Serial Bus (USB) ports/adapters, IEEE 1394 Firewire ports/adapters, and so forth. Examples of other interfaces 134 include parallel interfaces, serial interfaces, and bus interfaces. Embodiments are not limited in this manner. In embodiments, the node 101 also includes a BIOS 120, which is firmware stored in non-volatile memory. The BIOS 120 controls and performs hardware initialization during the booting process of the node 101, and provides runtime services for operating systems and programs. 
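The hot-plug flow described above can be sketched as an ordered list of steps. This is a minimal illustration only: the step names and the `link_os_managed` flag are assumptions for the sketch, and the real flow is platform specific.

```python
def hot_plug_flow(enable, link_os_managed):
    """Return the ordered hot-plug steps for a power operation.

    `enable` selects the enable vs. disable variant; `link_os_managed`
    models whether the OS controls the I/O link (if not, control is
    handed to the BIOS, which performs the hot-plug flow instead).
    """
    if not link_os_managed:
        # BIOS-managed (or OS-invisible) link: the OS hands control over.
        return ["handoff_to_bios"]
    if enable:
        return [
            "initialize_interconnects",
            "reserve_memory",
            "configure_registers",
            "load_drivers",
            "notify_bios_enable",  # e.g., via an ACPI message or interrupt
        ]
    return [
        "configure_registers",
        "release_memory",
        "notify_bios_disable",
    ]
```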
The BIOS 120 may be an Extensible Firmware Interface (EFI) device, a Unified EFI (UEFI) device, a platform-independent Open Firmware Institute of Electrical and Electronics Engineers 1275 (IEEE-1275) device, and so forth. In embodiments, the BIOS 120 performs one or more operations to enable the power operation. For example, if the I/O link between the CPU 110 and the accelerator devices 112 is BIOS 120 managed or not visible to the operating system 130, e.g., the operating system 130 cannot control the I/O link, the BIOS 120 receives control of the power operation processing flow from the operating system 130 and the BIOS performs the hot-plug flow for the accelerator device(s) 112. The BIOS 120 may determine the CPU 110, accelerator devices 112, and infrastructure devices on which to perform the power operation by reading one or more registers, such as the CAPID register(s). Moreover, the BIOS 120 may issue a power operation indication to the CPU 110 associated with the accelerator device 112 to perform the operation. In embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, to cause the power operation. Embodiments are not limited in this manner. In embodiments, the BIOS 120 may perform one or more provisioning operations during boot time to support dynamic enablement/disablement of the accelerator device 112. 
For example, the BIOS 120 and CPU 110 may treat each of the accelerator devices 112 like a Peripheral Component Interconnect (PCI) host bridge and perform the following provisioning operations including, but not limited to, padding of PCI bus ranges, padding of the memory mapped I/O low (MMIOL) range and memory mapped I/O high (MMIOH) range, pre-allocating credits/buffers in the CPU 110 for the accelerator device 112, disabling the I/O link 151 connecting to the accelerator device 112, generating a system management interrupt (SMI) when an accelerator device is enabled, and performing master aborting to allow access to the padded PCI Bus, MMIOL and MMIOH ranges without affecting system operations. Embodiments are not limited in this manner. FIG. 1B illustrates a second example of node 101 detailing interconnects 103 between the components including the BIOS 120, the CPU 110, and one or more accelerator devices 112-x having infrastructure device(s) 114-x. As will be discussed in more detail below, the components may communicate data with each other via one or more of the interconnects 103 to perform power operations to enable and disable accelerator devices 112 and/or the one or more infrastructure devices 114. The one or more infrastructure devices 114 include one or more of integrated processors (IPs), field-programmable gate array(s) (FPGA(s)), one or more cores, calculation units, registers, application-specific integrated circuits (ASICs), and other processing circuitry. In embodiments, the power operation may dynamically enable and/or disable accelerator devices 112 and/or infrastructure devices 114. For example, one or more of the multiple accelerator devices 112 may be enabled, while one or more other accelerator devices 112 may be disabled. Similarly, each of the accelerator devices 112 may include one or more infrastructure devices 114 that can be individually and dynamically controlled. 
One or more of the infrastructure devices 114 of a particular accelerator device 112 may be enabled, while one or more other infrastructure devices 114 of the accelerator device 112 may be disabled. As previously mentioned, the BIOS 120 initiates the power operation for the accelerator devices 112 based on data received from the operating system 130 and/or the management controller 122. In embodiments, the BIOS 120 may receive control from the operating system 130 to perform the power operation, which may be triggered by an ACPI interrupt or message. The BIOS 120 determines the power operation to perform, e.g., which accelerators 112 and/or infrastructure devices 114 to enable/disable, based on a reading of one or more registers (CAPID registers). The BIOS 120 may issue a power operation indication to the CPU 110 associated with the accelerator. In embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, sent to the CPU 110 via a sideband link 165 to cause the power operation. The power operation indication may initiate the power operation on the CPU 110. The CPU 110 may determine the power operation to perform, e.g., enable/disable one or more accelerator devices 112 and/or infrastructure devices 114, based on the power operation indication. In one example, the power operation indication may indicate enabling power for accelerator device(s) 112 and infrastructure device(s) 114. In another example, the power operation indication may indicate disabling power for accelerator device(s) 112 and infrastructure device(s) 114, as will be discussed in more detail below. In some embodiments, the CPU 110 also determines which of one or more accelerator devices 112-x and/or infrastructure device(s) 114-x on which to perform the power operation. The CPU 110 may send a power operation initiation indication to the accelerator device 112 via one or more of a plurality of interconnects 103. 
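A power operation indication of this kind could be modeled as a single command word that the CPU decodes into an enable/disable action plus device selections. The bit layout below is purely an assumption chosen for illustration; the disclosure does not specify the BIOS2Pcode command format.

```python
# Assumed layout (illustrative only): bit 31 = enable (1) / disable (0),
# bits 30..16 = accelerator device select mask, bits 15..0 = infrastructure
# device select mask.
ENABLE_BIT = 1 << 31

def encode_power_operation(enable, accel_mask, infra_mask):
    """Pack an enable/disable action and device masks into a 32-bit word."""
    word = ((accel_mask & 0x7FFF) << 16) | (infra_mask & 0xFFFF)
    return (word | ENABLE_BIT) if enable else word

def decode_power_operation(command):
    """Split a 32-bit command word into (enable?, accel mask, infra mask)."""
    enable = bool(command & ENABLE_BIT)
    accel_mask = (command >> 16) & 0x7FFF
    infra_mask = command & 0xFFFF
    return enable, accel_mask, infra_mask
```

With such an encoding, the CPU's power management firmware could select which accelerator devices 112-x and infrastructure devices 114-x the operation targets before raising the corresponding pins or sideband messages.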
Moreover, the power operation initiation indication indicates a power operation to the accelerator device 112 including infrastructure devices 114. In one example, when enabling power for an accelerator device 112, the CPU 110 asserts a power enable pin 153, such as a CD_PWRGOOD pin, to send the power operation initiation indication to the accelerator device 112. Note that embodiments may include opposite logic, e.g., de-assertion of a pin to indicate enabling of power, and be consistent with embodiments discussed herein. The accelerator device 112 may receive the power initiation indication or detect the assertion of the power enable pin 153. In embodiments, the accelerator device 112 sends, and the CPU 110 receives, a response to the power initiation indication to indicate that the accelerator device 112 is ready for configuration and the power operation. The response may be a sideband ready message including an infrastructure ready parameter, such as CDRDY2CPU [INFRA], from the accelerator device 112 communicated via a sideband link 157. The infrastructure ready parameter indicates that the infrastructure devices 114 are ready for configuration and enablement of power. The CPU 110 may receive the sideband ready message from the accelerator device 112 and determine the accelerator device 112 is ready for configuration and enablement of power. The CPU 110 sends a power-on configuration message to configure one or more of the infrastructure devices 114 via the sideband link 157. The power-on configuration message includes which of the one or more infrastructure devices 114 of the accelerator device 112 to enable power for and a configuration for the infrastructure devices 114. 
The configuration may be specified by the BIOS 120 and provided in the power operation indication. The accelerator device 112 receives the power-on configuration message and performs one or more configuration operations for the infrastructure devices 114, e.g., ensuring registers are cleared and/or set with proper starting values, circuitry is in the proper state, and so forth. The accelerator device 112 sends, and the CPU 110 receives, a sideband complete message, such as CDRDY2CPU [HOST], indicating the configuration of the one or more infrastructure devices completed. The sideband complete message may be sent via the sideband link 157 between the accelerator device 112 and the CPU 110. Moreover, the CPU 110 receives the sideband complete message and causes the power operation, e.g., enablement of power in this example. In embodiments, the CPU 110 may cause the power operation by de-asserting a power reset pin 155, such as the CD_RESET_N pin. The accelerator device 112 may detect the de-assertion of the power reset pin 155, which causes power to be applied to the one or more infrastructure devices 114. The CPU 110 sends an indication to the BIOS 120 indicating completion of the power operation. In some embodiments, the BIOS 120 may poll the mailbox command to detect completion of the power operation sent via the sideband link 165. In another example, the power operation indication may indicate disabling power for accelerator device(s) 112 and infrastructure device(s) 114. To disable power, the BIOS 120 communicates, and the CPU 110 receives, a power operation indication via at least one of a plurality of interconnects 103. The power operation indication initiates the power operation for the accelerator device 112. In some embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, communicated via a sideband link 165 from the BIOS 120 to the CPU 110.
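The power-enable handshake described above — CD_PWRGOOD assertion, the CDRDY2CPU [INFRA] ready message, the power-on configuration exchange, the CDRDY2CPU [HOST] completion message, and finally CD_RESET_N de-assertion — can be sketched as a small simulation. The pin and message names come from the text; the classes and method names are illustrative assumptions, not an actual driver interface.

```python
# Minimal simulation of the enable handshake between the CPU and an
# accelerator device. Pin/message names follow the text; everything else
# (class shapes, method names) is a hypothetical sketch.

class Accelerator:
    def __init__(self):
        self.configured = False
        self.infra_powered = False

    def on_pwrgood_asserted(self) -> str:
        # Infrastructure devices are ready for configuration and power-on.
        return "CDRDY2CPU[INFRA]"

    def on_power_on_config(self, config: dict) -> str:
        # Clear/set registers, put circuitry in the proper state, etc.
        self.configured = True
        return "CDRDY2CPU[HOST]"

    def on_reset_deasserted(self):
        # De-assertion of CD_RESET_N applies power to infrastructure devices.
        self.infra_powered = True

class CPU:
    def enable_accelerator(self, acc: Accelerator, config: dict) -> str:
        ready = acc.on_pwrgood_asserted()        # assert CD_PWRGOOD
        assert ready == "CDRDY2CPU[INFRA]"
        done = acc.on_power_on_config(config)    # sideband power-on config
        assert done == "CDRDY2CPU[HOST]"
        acc.on_reset_deasserted()                # de-assert CD_RESET_N
        return "POWER_OPERATION_COMPLETE"        # reported back to the BIOS
```

The returned string stands in for the completion indication the CPU sends to the BIOS over sideband link 165.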
The power operation indication may have been provisioned by the operating system and may be based on resource requirements provided by a data center. In this example, the power operation indication indicates disabling power for the accelerator device 112 including infrastructure devices 114. In embodiments, the CPU 110 may receive the power operation indication and process the power operation indication. More specifically, the CPU 110 including the firmware and circuitry, such as power management firmware, determines the power operation indication is to disable power for the accelerator device 112 and infrastructure devices 114. In some embodiments, the CPU 110 may determine for which of one or more accelerator devices 112-x and/or infrastructure devices 114-x to disable power. The CPU 110 may send a power operation initiation indication to the accelerator device 112, via one or more of a plurality of interconnects. In one example, when disabling power for an accelerator device 112, the CPU 110 may send a sideband reset message, such as Sx_Warn, via a sideband link 157 to the accelerator device 112. The accelerator device 112 may receive the power operation initiation indication and perform a number of operations, including conducting an internal reset sequence to reset the infrastructure devices 114, enabling any context flushing, causing debug mode quiescing, and power gating any integrated processors. The accelerator device 112 sends, and the CPU 110 receives, a response to indicate that the accelerator device 112 is ready for the power operation. In embodiments, the response is a sideband reset acknowledgment message, such as Sx_Warn_Ack, from the accelerator device 112 communicated via a sideband link 157. The CPU 110 may receive the sideband reset acknowledgment message from the accelerator device 112 and determine the accelerator device 112 is ready for disablement of power.
The CPU 110 asserts the power reset pin 155, e.g., CD_RESET_N, which is detected by the accelerator device 112. Asserting the power reset pin causes the infrastructure devices 114 to be under reset, and phase-locked loop devices are shut down. The CPU 110 also de-asserts the power enable pin 153, e.g., CD_PWRGOOD, causing the accelerator device's 112 fully integrated voltage regulators to shut down and the accelerator device 112 to be in a low power level. The CPU 110 may send an indication to the BIOS 120 indicating completion of the power operation via the sideband link 165. In some embodiments, the BIOS 120 may poll the mailbox command to detect completion of the power operation via the sideband link 165. FIG. 2A illustrates an example of a processing flow 200 that may be representative of some or all of the operations executed by one or more embodiments described herein. For example, the processing flow 200 may illustrate operations performed by a CPU 110, an accelerator device 112, and a BIOS 120. However, embodiments are not limited in this manner, and one or more other components may perform operations to enable and support the operations discussed in this processing flow 200. At line 202, the BIOS 120 communicates, and the CPU 110 receives, a power operation indication via at least one of a plurality of interconnects. The power operation indication initiates the power operation for the accelerator device. In some embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, communicated via a sideband link from the BIOS 120 to the CPU 110. The power operation indication may have been provisioned by the operating system and may be based on resource requirements provided by a data center. The power operation indication indicates enabling or disabling power for the accelerator device 112 and infrastructure devices.
Enabling power for the accelerator device 112 can include causing an exit from a lower power level, such as a sleep state (Sx) as defined by ACPI, into a higher power level or operating state. Disabling power for the accelerator device 112 can include causing entry into a lower power level, or an Sx state, and an exit from a higher power level or operating state. In the illustrated example of FIG. 2A, the power operation includes enabling power for the accelerator device 112 and the power operation indication indicates enabling power for the accelerator device 112. In embodiments, the CPU 110 may receive the power operation indication and process the power operation indication. More specifically, the CPU 110 including firmware and circuitry, such as power management firmware, determines the power operation indication is to enable power for the accelerator device 112, for example. In some embodiments, the CPU 110 may determine for which of one or more accelerator devices 112-x to enable power. At line 204, the CPU 110 may send a power operation initiation indication to the accelerator device 112, via one or more of a plurality of interconnects. Moreover, the power operation initiation indication indicates a power operation to be performed on the accelerator device 112 including infrastructure devices 114. In one example, when enabling power for an accelerator, the CPU 110 asserts a power enable pin, such as a CD_PWRGOOD pin, to send the power operation initiation indication to the accelerator device 112. The accelerator device 112 may receive the power initiation indication or detect the assertion of the power enable pin. At line 206, the accelerator device 112 sends, and the CPU 110 receives, a response to indicate that the accelerator device 112 is ready for configuration and the power operation.
In embodiments, the response is a sideband ready message including an infrastructure ready parameter, such as CDRDY2CPU [INFRA], from the accelerator device communicated via a sideband link between the accelerator device 112 and the CPU 110. The infrastructure ready parameter indicates that the infrastructure devices 114 are ready for configuration and enablement of power. The CPU 110 may receive the sideband ready message from the accelerator device 112 and determine the accelerator device 112 is ready for configuration and enablement of power. At line 208, the CPU 110 sends a power-on configuration message to configure one or more of the infrastructure devices 114. The power-on configuration message indicates which of the one or more infrastructure devices 114 of the accelerator device 112 to enable power for and a configuration for the infrastructure devices 114. The configuration may be specified by the BIOS 120 and provided in the power operation indication. In embodiments, the power-on configuration message is communicated via a sideband link between the CPU 110 and the accelerator device 112. The accelerator device 112 receives the power-on configuration message and performs one or more configuration operations for the infrastructure devices 114, e.g., ensuring registers are cleared and/or set with proper starting values, circuitry is in the proper state, and so forth. At line 210, the accelerator device 112 sends, and the CPU 110 receives, a sideband complete message, such as CDRDY2CPU [HOST], indicating the configuration of the one or more infrastructure devices completed. The sideband complete message may be sent via the sideband link between the accelerator device 112 and the CPU 110. At line 212, the CPU 110 receives the sideband complete message and causes the power operation, e.g., enablement of power in this example. In embodiments, the CPU 110 may cause the power operation by de-asserting a power reset pin, such as CD_RESET_N.
The accelerator device 112 may detect the de-assertion of the power reset pin and power the one or more infrastructure devices 114. At line 214, the CPU 110 sends an indication to the BIOS 120 indicating completion of the power operation. In some embodiments, the BIOS 120 may poll the mailbox command to detect completion of the power operation. FIG. 2B illustrates an example of a processing flow 250 that may be representative of some or all of the operations executed by one or more embodiments described herein to disable/reduce power for an accelerator device 112 and infrastructure devices 114. For example, the processing flow 250 may illustrate operations performed by a CPU 110, an accelerator device 112, and a BIOS 120. However, embodiments are not limited in this manner, and one or more other components may perform operations to enable and support the operations discussed in this processing flow 250. At line 252, the BIOS 120 communicates, and the CPU 110 receives, a power operation indication via at least one of a plurality of interconnects. The power operation indication initiates the power operation for the accelerator device 112. In some embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, communicated via a sideband link from the BIOS 120 to the CPU 110. The power operation indication may have been provisioned by the operating system and may be based on resource requirements provided by a data center. The power operation indication indicates enabling or disabling power for the accelerator device 112, as previously discussed. In the illustrated example of FIG. 2B, the power operation includes disabling power for the accelerator device 112, and the power operation indication indicates disabling power for the accelerator device 112. In embodiments, the CPU 110 may receive the power operation indication and process the power operation indication.
More specifically, the CPU 110 including the firmware and circuitry, such as power management firmware, determines the power operation indication is to disable power for the accelerator device 112, for example. In some embodiments, the CPU 110 may determine for which of one or more accelerator devices 112-x to disable power. At line 254, the CPU 110 may send a power operation initiation indication to the accelerator device 112, via one or more of a plurality of interconnects. In one example, when disabling power for an accelerator device 112, the CPU 110 may send a sideband reset message, such as Sx_Warn, via a sideband link to the accelerator device 112. The accelerator device 112 may receive the power operation initiation indication and, at line 256, perform a number of operations, including conducting an internal reset sequence to reset the infrastructure devices 114, enabling any context flushing, causing debug mode quiescing, and power gating any integrated processors. At line 258, the accelerator device 112 sends, and the CPU 110 receives, a response to indicate that the accelerator device 112 is ready for the power operation. In embodiments, the response is a sideband reset acknowledgment message, such as Sx_Warn_Ack, from the accelerator device 112 communicated via a sideband link. The CPU 110 may receive the sideband reset acknowledgment message from the accelerator device 112 and determine the accelerator device 112 is ready for disablement of power. At line 260, the CPU 110 asserts the power reset pin, e.g., CD_RESET_N, which is detected by the accelerator device 112. Asserting the power reset pin causes the infrastructure devices 114 to be under reset, and phase-locked loop devices are shut down. At line 262, the CPU 110 may de-assert the power enable pin, e.g., CD_PWRGOOD, causing the accelerator device's 112 fully integrated voltage regulators to shut down and the accelerator device 112 to be at a lower power level.
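The disable sequence of processing flow 250 described above — the Sx_Warn/Sx_Warn_Ack sideband exchange, followed by CD_RESET_N assertion and CD_PWRGOOD de-assertion — can be sketched as a companion simulation. Again, the message and pin names follow the text while the class and method names are illustrative assumptions.

```python
# Minimal simulation of the disable handshake of processing flow 250.
# Message/pin names follow the text; class shapes are a hypothetical sketch.

class Accelerator:
    def __init__(self):
        self.infra_in_reset = False
        self.regulators_on = True

    def on_sx_warn(self) -> str:
        # Internal reset sequence, context flushing, debug-mode quiescing,
        # and power gating of any integrated processors.
        return "Sx_Warn_Ack"

    def on_reset_asserted(self):
        # CD_RESET_N asserted: infrastructure under reset, PLLs shut down.
        self.infra_in_reset = True

    def on_pwrgood_deasserted(self):
        # CD_PWRGOOD de-asserted: fully integrated voltage regulators shut down.
        self.regulators_on = False

class CPU:
    def disable_accelerator(self, acc: Accelerator) -> str:
        ack = acc.on_sx_warn()             # sideband reset message (Sx_Warn)
        assert ack == "Sx_Warn_Ack"
        acc.on_reset_asserted()            # assert CD_RESET_N
        acc.on_pwrgood_deasserted()        # de-assert CD_PWRGOOD
        return "POWER_OPERATION_COMPLETE"  # completion indication to the BIOS
```

Note the ordering mirrors the flow: the CPU waits for the acknowledgment before touching either pin, so the accelerator has already quiesced when reset is applied.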
At line 264, the CPU 110 may send an indication to the BIOS indicating completion of the power operation. In some embodiments, the BIOS 120 may poll the mailbox command to detect completion of the power operation. FIG. 3 illustrates an example of a first system 300 that may be representative of a type of computing system in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 3, the system may contain a plurality of racks 302-z, where z may be any positive integer, each of which may house computing equipment comprising a respective set of nodes 101-y, where y may be any positive integer, which may be distributed among the racks 302-z. As previously discussed, a node 101 may include resources, such as MCPs including CPUs and accelerator devices, to process data and workloads in the first system 300. In some embodiments, the nodes 101-y may be circuit boards on which components such as MCPs having CPUs, accelerators, memory, and other components are mounted. The nodes 101-y may be configured to mate with power and data communication cables in each rack 302-z to be coupled with networking 308 to communicate with a scheduler device 306 and a data center 304. The networking 308 may utilize a fabric network architecture that supports multiple other network architectures including Ethernet and Omni-Path. For example, the nodes 101-y may be coupled to switches and other networking equipment via optical fibers. However, other embodiments may utilize twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). The data center 304 and scheduler 306 may, in use, pool resources of the nodes 101-y, such as the memory, accelerator devices, CPUs, and data storage drives, to process workloads on an as-needed basis. In embodiments, the data center 304 includes a number of devices and components to monitor and control processing of workloads for the system 300.
The data center 304 includes scheduler 306, which may be implemented in hardware, software, or a combination thereof, to process workload requests and dynamically allocate workloads for processing by the nodes 101. In one example, the scheduler 306 receives usage information from the various nodes 101, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the nodes 101 based on this information. The scheduler 306 determines which accelerator devices and infrastructure devices are required to process a workload and notifies the corresponding nodes 101 having the accelerator devices and infrastructure devices with an indication of the power operation. The allocation of workloads may be based on a service level agreement (SLA) associated with the workload, workload resource requirements, and load balancing algorithms. The scheduler 306 can determine workload resource requirements by reading workload metadata that contains the workload resource requirements. A component of the data center 304 or a control system may generate workload metadata and send it to the scheduler 306, for example. The workload resource requirements may specify which processing and memory requirements are needed to process the workload, and the scheduler 306 may determine accelerator devices, resources, and circuitry to enable and/or disable based on the requirements. Moreover, the scheduler 306 may determine which nodes 101 include the accelerator devices, resources, and circuitry, are available, and are to perform power operations. The scheduler 306 directs a management controller of a node 101 to perform a power operation to enable or disable one or more accelerator devices and infrastructure devices based on the workload resource requirements. FIG. 4 illustrates an example of a processing flow 400 that may be representative of some or all of the operations executed by one or more embodiments described herein.
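The scheduler decision described above — reading workload resource requirements from metadata and choosing, per node, which accelerator devices to enable or disable — can be sketched as a simple planning function. The metadata fields and data shapes here are illustrative assumptions; a real scheduler would also weigh SLAs and load balancing as the text notes.

```python
# Sketch of the scheduler's planning step: compare workload resource
# requirements against each node's accelerator inventory and emit
# enable/disable power operations. Field names are hypothetical.

def plan_power_operations(workload_meta: dict, nodes: dict) -> list:
    """Return (node_id, accel_id, operation) tuples for a workload.

    workload_meta: e.g. {"required_accelerators": ["fpga", "gpu"]}
    nodes: {node_id: {accel_id: {"type": str, "enabled": bool}}}
    """
    required = set(workload_meta.get("required_accelerators", []))
    ops = []
    for node_id, accels in nodes.items():
        for accel_id, info in accels.items():
            needed = info["type"] in required
            if needed and not info["enabled"]:
                ops.append((node_id, accel_id, "enable"))   # power up for workload
            elif not needed and info["enabled"]:
                ops.append((node_id, accel_id, "disable"))  # reclaim power budget
    return ops
```

Each resulting tuple corresponds to a power operation the scheduler would direct a node's management controller to perform.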
For example, the processing flow 400 may illustrate operations performed to determine power operations to perform on one or more accelerator devices and to cause the power operation to be performed. At block 402, the processing flow 400 includes a scheduler reading workload metadata that contains workload resource requirements based on a workload. For example, a data center component or control system may send workload metadata via a network to a node for processing a workload, which may be read by a scheduler. The workload resource requirements may specify which processing and memory requirements are required to process the workload, and the scheduler may determine accelerator devices, resources, and circuitry to enable and/or disable based on the requirements. Moreover, the scheduler may determine which nodes include the accelerator devices, resources, and circuitry, are available, and are to perform power operations. In some instances, the scheduler may determine which nodes are to perform the power operations based on load balancing and/or on a service level agreement (SLA) that is associated with a workload. In embodiments, the processing flow 400 includes the scheduler directing a management controller to perform a power operation for one or more accelerator(s) at block 404. The management controller may include a virtual management monitor (VMM) (libvirt), a BMC, or a management engine using a serial management interface or out-of-band message system (OOBMSM) to perform the power operation. The power operation may include enabling and/or disabling accelerator devices and infrastructure devices. Further, at block 406, the scheduler notifies the operating system or virtual management monitor (VMM) of the power operation. At block 406, the management controller utilizing an operating system or VMM may perform one or more operations in preparation to perform the power operation.
For example, when disabling accelerator devices, the operating system or VMM may offload processes and workloads from the accelerator devices to be disabled. Similarly, in preparation of enabling accelerator devices, the operating system or VMM determines workloads and processes to execute on the accelerator devices once the accelerator devices are enabled. In embodiments, the processing flow 400 includes the operating system or VMM performing a hot-plug flow at block 408 if the input/output (I/O) link between the CPU and accelerator to perform the power operation is operating system managed or visible, e.g., the operating system can control the I/O link. Further, at block 410, the processing flow includes the operating system or VMM informing the BIOS to perform the power operation. Alternatively, if the I/O link between the CPU and accelerator is BIOS managed or not visible to the operating system, e.g., the operating system cannot control the I/O link, the operating system hands control off to the BIOS at block 412. For example, the operating system may send a message to the BIOS via an interconnect or interrupt. At block 414, the BIOS performs the hot-plug flow. At block 416, the processing flow 400 includes the BIOS reading a CPU capability identification (CAPID) to determine the CPU and the accelerator to perform the power operation. The CAPID may be a register associated with the CPU that the BIOS reads to determine the accelerator, for example. At block 418, the BIOS may issue a power operation indication to the CPU associated with the accelerator. In embodiments, the power operation indication may be a mailbox command, such as a BIOS2Pcode mailbox command, to cause the power operation. In embodiments, the processing flow 400 includes the CPU and accelerator performing the power operation at block 420. The one or more operations performed at block 420 may be consistent with the operations discussed above with respect to flows 200 and 250 in FIGs. 2A and 2B, respectively. More specifically, if the power operation is to enable an accelerator, one or more operations discussed in processing flow 200 may be performed. If the power operation is to disable an accelerator, one or more operations discussed in processing flow 250 may be performed. At block 422, the BIOS detects completion of the power operation. For example, the CPU may send an indication to the BIOS indicating the power operation has completed. In some instances, the BIOS may poll for the mailbox command completion. Further, at block 424, the BIOS may notify the operating system that the power operation has completed for the accelerator. For example, the BIOS may set one or more registers that may be read by the operating system to determine the power operation is complete. In embodiments, the processing flows discussed above with respect to FIGs. 2A, 2B, and 4 and systems 100 and 150 may be utilized to perform the below-illustrated example usages. For example, the above-discussed systems and flows may be utilized for resource-biased workload scheduling and platform budget rebalancing to increase MCP and CPU performance. For example, a scheduler may determine the resources required for running the upcoming workloads/jobs. As indicated in processing flows 200, 250, and 400, the scheduler may notify and send a request to a management controller and operating system. The operating system or VMM, and the BIOS, may translate the request and issue a mailbox command to perform a power operation. The CPU will enable or disable the accelerator devices based on the resources required. In one example, in the case of an accelerator being offline, the CPU may utilize Serial Voltage Identification (SVID) control to read the power information from the platform voltage regulators. The SVID information indicates the energy that is available for the CPU to use based on the accelerator being disabled.
The CPU may use this SVID information to rebalance/redistribute the energy to resources, such as cores, graphic processors, and other accelerator devices. Also, any accelerator-specific motherboard voltage regulators (MBVRs) can be turned on/off using SVID by the CPU. In another example, the above-discussed processing flows 200, 250, and 400, and systems 100, 150, and 300 enable accelerator configuration changes without platform warm or cold resets. For example, an accelerator device may include a configurable number of infrastructure devices, e.g., multiple graphic processing devices within a graphics accelerator, and only a certain number of these are needed for a workload. Previously, power management supported states such as C6 or D3 may save some power, but do not give optimized performance since the credit buffers programmed within these dies still need to account for full resource allocation. This essentially means that the accelerator forgoes some of the performance. To re-adjust this configuration, instead of letting the entire platform go through a warm or cold reset cycle as done traditionally, embodiments discussed herein are used to perform a recycle, e.g., a disable and enable or re-enable, for the particular accelerator, and the infrastructure is reprogrammed to achieve optimal performance. In another example, the above-discussed processing flows 200, 250, and 400, and systems 100, 150, and 300 may be used to perform static accelerator enabling/disabling. In this example, the CPU and MCP hardware mechanism explained above is used by the BIOS to boot the accelerator devices as a part of BIOS start-up. Instead of bringing up the accelerator devices along with the CPU, the above-discussed processing flows provide a mechanism to enable the BIOS to make a boot time decision on the accelerator and whether to enable or disable the accelerator. FIG. 5 illustrates an example of a first logic flow 500 that may be representative of some or all of the operations executed by one or more embodiments described herein. For example, the logic flow 500 may illustrate operations performed by a node, as described herein. At block 505, the logic flow 500 may include sending a power operation initiation indication to an accelerator device via the subset of the plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more infrastructure devices. More specifically, at block 510, the logic flow 500 includes receiving a response from the accelerator device, the response to indicate to the processor that the accelerator is ready for the power operation. At block 515, the logic flow 500 includes causing the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices. To enable power for one or more accelerator devices and/or infrastructure devices, one or more operations discussed with respect to FIG. 2A may be performed. Similarly, to disable power for one or more accelerator devices and/or infrastructure devices, one or more operations discussed with respect to FIG. 2B may be performed. FIG. 6 illustrates an embodiment of an exemplary computing architecture 600 suitable for implementing various embodiments as previously described. In embodiments, the computing architecture 600 may include or be implemented as part of a node, for example. As used in this application, the terms "system" and "component" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600.
For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the unidirectional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces. The computing architecture 600 includes various common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth. The embodiments, however, are not limited to implementation by the computing architecture 600. As shown in FIG. 6, the computing architecture 600 includes a processing unit 604, a system memory 606 and a system bus 608.
The processing unit 604 can be any of various commercially available processors. The system bus 608 provides an interface for system components including, but not limited to, the system memory 606 to the processing unit 604. The system bus 608 can be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. Interface adapters may connect to the system bus 608 via a slot architecture. Example slot architectures may include without limitation Accelerated Graphics Port (AGP), Card Bus, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), and the like. The computing architecture 600 may include or implement various articles of manufacture. An article of manufacture may include a computer-readable storage medium to store logic. Examples of a computer-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like.
Embodiments may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. The system memory 606 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. In the illustrated embodiment shown in FIG. 6, the system memory 606 can include non-volatile memory 610 and volatile memory 612. A basic input/output system (BIOS) can be stored in the non-volatile memory 610. The computer 602 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal (or external) hard disk drive (HDD) 614, a magnetic floppy disk drive (FDD) 616 to read from or write to a removable magnetic disk 616, and an optical disk drive 620 to read from or write to a removable optical disk 622 (e.g., a CD-ROM or DVD). The HDD 614, FDD 616 and optical disk drive 620 can be connected to the system bus 608 by an HDD interface 624, an FDD interface 626 and an optical drive interface 626, respectively.
The HDD interface 624 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.

The drives and associated computer-readable media provide volatile and nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 610, 612, including an operating system 630, one or more application programs 632, other program modules 634, and program data 636. In one embodiment, the one or more application programs 632, other program modules 634, and program data 636 can include, for example, the various applications and components of the system 100.

A user can enter commands and information into the computer 602 through one or more wire/wireless input devices, for example, a keyboard 638 and a pointing device, such as a mouse 640. Other input devices may include microphones, infra-red (IR) remote controls, radio-frequency (RF) remote controls, game pads, stylus pens, card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, track pads, sensors, styluses, and the like. These and other input devices are often connected to the processing unit 604 through an input device interface 642 that is coupled to the system bus 608, but can be connected by other interfaces such as a parallel port, IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth.

A monitor 644 or other type of display device is also connected to the system bus 608 via an interface, such as a video adaptor 646. The monitor 644 may be internal or external to the computer 602.
In addition to the monitor 644, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth.

The computer 602 may operate in a networked environment using logical connections via wire and wireless communications to one or more remote computers, such as a remote computer 648. The remote computer 648 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 602, although, for purposes of brevity, only a memory/storage device 650 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 652 and larger networks, for example, a wide area network (WAN) 654. Such LAN and WAN networking environments are commonplace in offices and companies and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet.

When used in a LAN networking environment, the computer 602 is connected to the LAN 652 through a wire and/or wireless communication network interface or adaptor 656. The adaptor 656 can facilitate wire and/or wireless communications to the LAN 652, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 656.

When used in a WAN networking environment, the computer 602 can include a modem 658, or is connected to a communications server on the WAN 654, or has other means for establishing communications over the WAN 654, such as by way of the Internet. The modem 658, which can be internal or external and a wire and/or wireless device, connects to the system bus 608 via the input device interface 642.
In a networked environment, program modules depicted relative to the computer 602, or portions thereof, can be stored in the remote memory/storage device 650. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.

The computer 602 is operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques). This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies, among others. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions).

The various elements of the devices as previously described with reference to FIGS. 1-6 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. However, determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

The detailed disclosure now turns to providing examples that pertain to further embodiments.
Examples one through thirty-three provided below are intended to be exemplary and non-limiting.

In a first example, a system, a device, an apparatus, and so forth may include a multi-chip package comprising a processor and an accelerator device, the accelerator device comprising infrastructure devices, and the processor coupled with the accelerator device via a subset of a plurality of interconnects, the processor to send a power operation initiation indication to the accelerator device via the subset of the plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more of the infrastructure devices, receive a response from the accelerator device, the response to indicate to the processor that the accelerator device is ready for the power operation, and cause the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.

In a second example and in furtherance of the first example, a system, a device, an apparatus, and so forth to include processing circuitry to receive a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.

In a third example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include processing circuitry to assert a power enable pin to send the power operation initiation indication to the accelerator device to enable power for the one or more infrastructure devices, receive the response comprising a sideband ready message from the accelerator device, and send a power-on configuration message to configure one or more of the infrastructure devices.

In a fourth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to
include processing circuitry to receive a sideband complete message indicating the configuration of the one or more infrastructure devices completed, de-assert a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device, and send an indication to the BIOS indicating completion of the power operation.

In a fifth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include processing circuitry to send the power operation initiation indication comprising a sideband reset message to disable the power for the one or more infrastructure devices, receive the response comprising a sideband reset acknowledgment message, and cause the power operation via asserting the power reset pin and de-asserting the power enable pin.

In a sixth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include processing circuitry to send an indication to the BIOS indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin.

In a seventh example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include the BIOS coupled with the processor via a second subset of the plurality of interconnects, the BIOS to receive an indication to enable or disable the accelerator device from an operating system or virtual operating system, and send a second power operation indication to the processor based on the indication via a mailbox command communicated via the subset of the plurality of interconnects.

In an eighth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include a management controller coupled with a scheduler, the management controller to receive an indication to enable or disable the accelerator device from the scheduler, and to cause the operating system or
virtual operating system to send the indication to enable or disable the accelerator device to the BIOS.

In a ninth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include a management controller, and a basic input/output system (BIOS), the management controller and the BIOS coupled with the MCP.

In a tenth example and in furtherance of any previous example, a computer-implemented method may include sending a power operation initiation indication to an accelerator device via a subset of a plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more infrastructure devices of the accelerator device, receiving a response from the accelerator device, the response to indicate to a processor that the accelerator device is ready for the power operation, and causing the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.

In an eleventh example and in furtherance of any previous example, a computer-implemented method may include receiving a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.

In a twelfth example and in furtherance of any previous example, a computer-implemented method may include asserting a power enable pin to send the power operation initiation indication to the accelerator device, receiving the response comprising a sideband ready message from the accelerator device, and sending a power-on configuration message to configure one or more of the infrastructure devices.

In a thirteenth example and in furtherance of any previous example, a computer-implemented method may include receiving a sideband complete message indicating the configuration of the one or more
infrastructure devices completed, de-asserting a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device, and sending an indication to the BIOS indicating completion of the power operation.

In a fourteenth example and in furtherance of any previous example, a computer-implemented method may include sending the power operation initiation indication comprising a sideband reset message, receiving the response comprising a sideband reset acknowledgment message, and causing the power operation via asserting the power reset pin and de-asserting the power enable pin.

In a fifteenth example and in furtherance of any previous example, a computer-implemented method may include sending an indication to the BIOS indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin.

In a sixteenth example and in furtherance of any previous example, a computer-implemented method may include receiving, via a basic input/output system (BIOS), an indication to enable or disable the accelerator from an operating system or virtual operating system, and sending, via the BIOS, a second power operation indication to the processor based on the indication via a mailbox command.

In a seventeenth example and in furtherance of any previous example, a computer-implemented method may include receiving, via a management controller, an indication to enable or disable the accelerator from a scheduler based on a workload resource requirement, and causing the operating system or the virtual operating system to send the indication to enable or disable the accelerator device.

In an eighteenth example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to send a power operation initiation indication to an accelerator device of a
multi-chip package (MCP) via a subset of a plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more infrastructure devices of the accelerator device, receive a response from the accelerator device, the response to indicate to a processor of the MCP that the accelerator device is ready for the power operation, and cause the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.

In a nineteenth example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to receive a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.

In a twentieth example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to assert a power enable pin to send the power operation initiation indication to the accelerator device, receive the response comprising a sideband ready message from the accelerator device, and send a power-on configuration message to configure one or more of the infrastructure devices.

In a twenty-first example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to receive a sideband complete message indicating the configuration of the one or more infrastructure devices completed, de-assert a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device, and send an indication to
the BIOS indicating completion of the power operation.

In a twenty-second example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to send the power operation initiation indication comprising a sideband reset message, receive the response comprising a sideband reset acknowledgment message, and cause the power operation via asserting the power reset pin and de-asserting the power enable pin.

In a twenty-third example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to send an indication to the BIOS indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin.

In a twenty-fourth example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to receive, via a basic input/output system (BIOS), an indication to enable or disable the accelerator from an operating system or virtual operating system, and send, via the BIOS, a second power operation indication to the processor based on the indication via a mailbox command.

In a twenty-fifth example and in furtherance of any previous example, a non-transitory computer-readable storage medium, comprising a plurality of instructions, that when executed, enable processing circuitry to receive, via a management controller, an indication to enable or disable the accelerator from a scheduler based on a workload resource requirement, and cause the operating system or the virtual operating system to send the indication to enable or disable the accelerator device.

In a twenty-sixth example and in furtherance of any previous example, a system, a device, an apparatus,
and so forth to include means for sending a power operation initiation indication to an accelerator device via a subset of a plurality of interconnects, the power operation initiation indication to indicate a power operation to be performed on one or more infrastructure devices of the accelerator device, means for receiving a response from the accelerator device, the response to indicate to a processor that the accelerator device is ready for the power operation, and means for causing the power operation to be performed on the accelerator device, the power operation to enable or disable power for the one or more of the infrastructure devices.

In a twenty-seventh example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for receiving a power operation indication from a basic input/output system (BIOS) coupled via a second subset of the plurality of interconnects, the power operation indication to initiate the power operation for the accelerator device.

In a twenty-eighth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for asserting a power enable pin to send the power operation initiation indication to the accelerator device, means for receiving the response comprising a sideband ready message from the accelerator device, and means for sending a power-on configuration message to configure one or more of the infrastructure devices.

In a twenty-ninth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for receiving a sideband complete message indicating the configuration of the one or more infrastructure devices completed, means for de-asserting a power reset pin based on the sideband complete message to cause the power operation comprising enabling power for the accelerator device, and means for sending an indication to the BIOS indicating completion of the power operation.

In a
thirtieth example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for sending the power operation initiation indication comprising a sideband reset message, means for receiving the response comprising a sideband reset acknowledgment message, and means for causing the power operation via asserting the power reset pin and de-asserting the power enable pin.

In a thirty-first example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for sending an indication to the BIOS indicating completion of the power operation based upon completion of de-asserting the power enable pin and asserting the power reset pin.

In a thirty-second example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for receiving an indication to enable or disable the accelerator from an operating system or virtual operating system, and means for sending a second power operation indication to the processor based on the indication via a mailbox command.

In a thirty-third example and in furtherance of any previous example, a system, a device, an apparatus, and so forth to include means for receiving an indication to enable or disable the accelerator from a scheduler based on a workload resource requirement, and means for causing the operating system or the virtual operating system to send the indication to enable or disable the accelerator device.

Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
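As a rough illustration only, the power-enable and power-disable handshakes recited in the third through sixth examples above can be sketched in Python. The class name, pin model, and sideband message strings below are hypothetical assumptions for the sketch, not part of the disclosed apparatus.

```python
# Hypothetical sketch of the power-operation handshakes described above.
# Power-on: assert the power-enable pin, wait for a sideband READY, send
# power-on configuration, wait for COMPLETE, then de-assert power-reset.
# Power-off: sideband RESET/RESET_ACK exchange, then assert power-reset
# and de-assert power-enable. All identifiers here are illustrative.

class Accelerator:
    def __init__(self):
        # Pin state as driven by the processor; reset starts asserted.
        self.pins = {"power_enable": False, "power_reset": True}
        self.configured = False

    def sideband(self, message):
        # Accelerator-side responses to processor sideband messages.
        if message == "POWER_ON_INIT":
            return "READY"
        if message == "POWER_ON_CONFIG":
            self.configured = True
            return "COMPLETE"
        if message == "RESET":
            return "RESET_ACK"
        raise ValueError("unexpected message: " + message)


def power_on(accel):
    """Enable power for the accelerator's infrastructure devices."""
    log = []
    accel.pins["power_enable"] = True              # assert power-enable pin
    log.append("assert power_enable")
    if accel.sideband("POWER_ON_INIT") == "READY":
        log.append("sideband READY received")
    if accel.sideband("POWER_ON_CONFIG") == "COMPLETE":
        log.append("sideband COMPLETE received")
    accel.pins["power_reset"] = False              # de-assert power-reset pin
    log.append("notify BIOS: power-on complete")
    return log


def power_off(accel):
    """Disable power after the accelerator acknowledges the reset message."""
    log = []
    if accel.sideband("RESET") == "RESET_ACK":
        log.append("sideband RESET_ACK received")
    accel.pins["power_reset"] = True               # assert power-reset pin
    accel.pins["power_enable"] = False             # de-assert power-enable pin
    log.append("notify BIOS: power-off complete")
    return log


accel = Accelerator()
on_log = power_on(accel)
off_log = power_off(accel)
print(on_log)
print(off_log)
```

The sketch keeps the pin toggles and sideband exchanges in the order the examples recite them; a real implementation would gate each step on hardware acknowledgment rather than an in-process return value.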
Further, some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.

What has been described above includes examples of the disclosed architecture.
It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. |
A touch screen monitor having a display monitor and an ultrasonic device. The ultrasonic device may include a sensor array using piezoelectric sensors to detect the surface topology of a biological or other object that is in contact with a surface of the display monitor. The display monitor may be an LCD or LED monitor.
What is claimed is:

1. A touch screen display comprising: a display monitor for providing a visual image; and an ultrasonic device able to emit an ultrasonic energy wave, and able to detect reflected ultrasonic energy.

2. The touch screen display of claim 1, wherein the display monitor includes light emitting diodes for providing the visual image.

3. The touch screen display of claim 1, wherein the display monitor includes a liquid crystal display for providing the visual image.

4. The touch screen display of claim 1, wherein the ultrasonic device includes a piezoelectric transmitter for emitting the ultrasonic energy wave.

5. The touch screen display of claim 1, wherein the ultrasonic device includes a piezoelectric hydrophone array for detecting reflected ultrasonic energy.

6. The touch screen display of claim 1, wherein the ultrasonic device includes a thin-film transistor receiver for detecting reflected ultrasonic energy.

7. The touch screen display of claim 1, wherein the display monitor is comprised of layers and the ultrasonic device is comprised of at least one layer, and the ultrasonic device is attached to one of the layers of the display monitor.

8. The touch screen display of claim 1, wherein the ultrasonic device includes a plurality of receivers for detecting reflected ultrasonic energy.

9. The touch screen display of claim 8, wherein each ultrasonic energy receiver is located among elements of the display monitor comprising a pixel.
ULTRASONIC TOUCH SENSOR WITH A DISPLAY MONITOR

Cross-Reference to Related Application

[0001] This application claims the benefit of priority to U.S. provisional patent application serial number 61/594,330, filed on February 2, 2012.

Field of the Invention

[0002] The present invention relates to devices and methods of collecting information about an object that is in contact with a display.

Background of the Invention

[0003] In the prior art, touch screen monitors are commonly used to assist users with selecting items displayed on a monitor. Selecting items is commonly performed using a pointing object, such as a stylus or a finger. Such touch screen monitors often employ a capacitance sensor to identify the location at which the pointing object touches the display monitor. The identified location is then compared to the location of images displayed on the monitor in order to determine what the user is identifying.

[0004] Although these prior art touch screen monitors have become reliable and inexpensive, the prior art devices do not incorporate any built-in sensing elements suitable for measuring a touch event reliably, and although many of these prior art devices are fine for dry and clean environments, they often fail in dirty, wet or adverse conditions.

Summary of the Invention

[0005] The invention may be embodied as a touch screen display having a display monitor for providing a visual image, and an ultrasonic device able to emit an ultrasonic energy wave, and able to detect reflected ultrasonic energy. The display monitor may include light emitting diodes for providing the visual image, or a liquid crystal display for providing the visual image.

[0006] The ultrasonic device may include a piezoelectric transmitter for emitting the ultrasonic energy wave. Also, the ultrasonic device may include a piezoelectric detector, such as a hydrophone array, for detecting reflected ultrasonic energy.
The detector may include a thin-film transistor receiver for detecting reflected ultrasonic energy.

[0007] The display monitor may be comprised of layers of components, and the ultrasonic device may be comprised of at least one layer. The ultrasonic device may be attached to one or more of the display monitor layers.

[0008] The ultrasonic device may include a plurality of receivers for detecting reflected ultrasonic energy. In one embodiment of the invention, each ultrasonic energy receiver is located among elements of the display monitor comprising a pixel.

Brief Description Of The Drawings

[0009] For a fuller understanding of the nature and objects of the invention, reference should be made to the accompanying drawings and the subsequent description. Briefly, the drawings are:

FIG. 1, which is an exploded view of a device in which an on-cell ultrasonic device has been integrated into a backlit LCD display monitor to create a touch screen display according to the invention.

FIG. 2, which is an exploded view of a device in which an on-cell ultrasonic device has been integrated into an OLED display to create a touch screen display according to the invention.

FIG. 3, which is an exploded view of a device in which an in-cell ultrasonic device has been integrated into a backlit LCD display to create a touch screen display according to the invention.

FIG. 4, which is an exploded view of a device in which an in-cell ultrasonic device has been integrated into an OLED display to create a touch screen display according to the invention.

[0010] In the drawings, the following reference numbers can be found, and they represent:

1 scuff resistant glass
2 continuous electrode (e.g. TCF (transparent conductive film, such as IZO, ITO, etc.))
3 PVDF or PVDF-TrFE piezoelectric polymer
5 color filter glass (5A, 5B and 5C are simply the 3 RGB color filters within the glass)
6 TFT (Thin Film Transistor) circuit
7 TFT substrate (e.g.
glass)
8 piezoelectric transmitter
9 polarizing filter
10 liquid crystal
11 back lighting panel
12 electrode pad (e.g. TCF)
13 continuous electrode (e.g. TCF)
14 optically transparent non-conductive, filler material

Further Description of the Invention

[0011] The present invention relates to ultrasonic scanning devices and display monitors. Information about an object that is in contact with the display monitor is gathered by means of ultrasonic energy. Ultrasonic energy is sent toward a surface of the display monitor where a pointing object may contact the display monitor. When the ultrasonic energy reaches the pointing object, at least some of the ultrasonic energy is reflected toward an ultrasonic energy receiver. The receiver detects the reflected energy, and transmits a signal indicating that reflected energy was sensed. Using the transmitted signal, information about the object is determined. That information may include one or more of the following: (a) the location of the pointing object, (b) information about the texture of the surface of the pointing object, and/or (c) information about the structure of features present in the pointing object, but which are not on the surface of the pointing object.

[0012] In one embodiment of the invention, an ultrasonic device is attached to a display monitor. For example, the ultrasonic device may be laminated to a portion of the display monitor. The combination of the ultrasonic device and the display monitor is referred to herein as a "touch screen display". The touch screen display may be used to determine the location of the pointing object at a first time, and then determine the location of the pointing object at a second time, in order to track movement of the pointing object and thereby cause a cursor to be displayed on the display monitor for purposes of identifying an image and thereby selecting an option (such as a software application) represented by the identified image.
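The location-and-tracking behavior described in paragraph [0012] can be illustrated with a small sketch. This is hypothetical code, not the patented implementation: it assumes each ultrasonic receiver in a regular grid reports a reflected-energy amplitude, and localizes the touch as an amplitude-weighted centroid of receivers above a threshold.

```python
# Illustrative sketch only: locate a pointing object from per-receiver
# reflected-energy amplitudes, then compare two sample times to obtain
# the cursor movement described above. Grid layout, threshold value,
# and function names are assumptions for this example.

def touch_location(amplitudes, threshold=0.5):
    """Return the (row, col) amplitude-weighted centroid of receivers
    whose reflected energy exceeds the threshold, or None if no touch."""
    hits = [(r, c, a)
            for r, row in enumerate(amplitudes)
            for c, a in enumerate(row)
            if a > threshold]
    if not hits:
        return None
    total = sum(a for _, _, a in hits)
    row = sum(r * a for r, _, a in hits) / total
    col = sum(c * a for _, c, a in hits) / total
    return (row, col)


# Two frames sampled at a first and a second time: the touch moves
# one receiver to the right between samples.
frame_t1 = [[0.0, 0.0, 0.0],
            [0.0, 0.9, 0.0],
            [0.0, 0.0, 0.0]]
frame_t2 = [[0.0, 0.0, 0.0],
            [0.0, 0.0, 0.9],
            [0.0, 0.0, 0.0]]

p1 = touch_location(frame_t1)
p2 = touch_location(frame_t2)
delta = (p2[0] - p1[0], p2[1] - p1[1])   # cursor movement between samples
print(p1, p2, delta)
```

Repeating this per frame yields the movement track that drives the cursor; a real device would also debounce and scale receiver coordinates to display pixels.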
[0013] The pointing object may include identifying characteristics which can be used to identify the owner of the pointing object. For example, the pointing object may be a finger, and the identifying characteristics may be the fingerprint. The touch screen display may be used to detect the fingerprint in order to identify the user of the touch screen display. In this manner, the touch screen display may be made available only to authorized users, or the touch screen display may be caused to display images in a manner that is believed to be preferred by that particular user. In this manner, the touch screen display may be personalized to a particular user's preferences. [0014] The phrase "in-cell" touch screen display is used herein to refer to a touch screen display that has the ultrasonic device located within a group of elements that collectively make up a pixel of the display monitor. For example, each ultrasonic receiver is located among elements of the display monitor comprising a single pixel. [0015] The phrase "on-cell" touch screen display is used herein to refer to a touch screen that has the ultrasonic device coupled to a surface of one of the layers that comprise the display monitor. For example, in such an on-cell touch screen display, the layer of the display monitor to which the ultrasonic device is attached may be a layer that is typically exposed, or may be an internal layer of the display monitor. For purposes of this disclosure, the phrase "out-cell" touch screen display is used to refer to a particular type of "on-cell" touch screen display, whereby the ultrasonic device is attached to a layer of the display monitor that is not internal to the display monitor, exclusive of any protective, scuff resistant surface layer of the display monitor.
[0016] The ultrasonic device may be an ultrasonic fingerprint imaging system, such as those that use an ultrasonic sensor to capture information about a fingerprint that can then be compared to previously obtained fingerprint information for identification purposes, and/or used to display a visible image of the fingerprint. The ultrasonic sensor transmits an ultrasonic pulse or collection of pulses, and then detects a reflected portion of the transmitted pulse(s). Such ultrasonic fingerprint imaging systems are relatively simple and reliable. An example of one such system is model 203 manufactured by the Ultra-Scan Corporation. [0017] The ultrasonic device may employ a plurality of detectors to detect the energy reflected by the pointing object. Each detector may be individually calibrated to remove fixed pattern noise effects that may be characteristic of the components that make up the ultrasonic device, the display monitor, or both. These effects may include variations between the detectors that may arise from differences in the amplifiers, as well as variations arising from the manufacturing process (e.g. glue, contaminants, etc.). The variations in ultrasonic attenuation caused by variations between pixels of the display monitor will be detected as a non-changing portion of the fixed pattern noise received by the ultrasonic sensor, and such fixed pattern noise can be removed during analysis of the signals that are transmitted by the receiver to indicate that reflected energy was sensed by the receiver. Once the fixed pattern noise is removed, a "clean" signal is yielded that is representative of the surface being analyzed by the ultrasonic sensor. [0018] The ultrasonic device may include an electronic control system that supplies timing signals. Some of these timing signals may be used to cause the ultrasonic device to emit an ultrasonic energy pulse.
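The per-detector calibration described above amounts to storing the non-changing (fixed pattern) component measured with nothing on the platen and subtracting it from later readings. A minimal sketch of that idea follows; the function names and flat per-detector list representation are illustrative assumptions, as the document does not prescribe an implementation.

```python
def calibrate_fixed_pattern(baseline_frames):
    # Average per-detector readings over frames captured with no
    # object on the platen; the non-changing component of these
    # readings is the fixed pattern noise.
    n = len(baseline_frames)
    return [sum(samples) / n for samples in zip(*baseline_frames)]

def clean_signal(raw, fixed_pattern):
    # Subtract the stored baseline, leaving only the portion of the
    # signal that varies with the object touching the surface.
    return [r - f for r, f in zip(raw, fixed_pattern)]
```

Because the baseline is stored per detector, this also absorbs fixed differences between individual amplifiers and pixel-to-pixel attenuation variations, as the passage notes.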
Others of these timing signals may be used in a process commonly referred to as "range gating", in which a determination is made regarding which of the reflected ultrasonic energy that is detected by the ultrasonic device is related to the surface on which the pointing object may be placed. A discussion of range gating may be found in many reliable texts on sonar, radar, or ultrasonic non-destructive testing. [0019] The timing signals may coordinate pulse generation initiation and TFT sensor signal readout, which may then be further processed into an image of an object that is in contact with the protective plastic film platen. [0020] Display monitors currently on the market include those that use light emitting diodes and liquid crystal displays for presenting a visible image to a user. Such display monitors are light-weight, thin, flat, reliable and inexpensive. When such a display monitor is combined with an ultrasonic device, the resulting touch screen display offers the ability to use a finger to point to an image on the display, and provide capabilities like those currently offered by touchpads used in conjunction with personal computers and personal digital assistants. [0021] Having provided an overview of the invention, additional details will now be provided. [0022] There is no requirement that the resolution of the display monitor and the resolution of the ultrasonic device must be the same. This allows for systems where, for example, the resolution of the display monitor may be 100 dots per inch and the ultrasonic device may be 10 dots per inch, or any other combination that is convenient to the application. In-cell systems, however, put the receivers of the ultrasonic device within the 3-color group that comprises a color display monitor pixel, and thus the addition of an ultrasonic receiver to the 3-color display pixel components normally has a one-to-one receiver-to-pixel group relationship, but a one-to-one association is not required.
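Range gating, as described above, keeps only the reflected energy whose arrival time matches the expected round trip to the platen surface and discards echoes from other depths. A minimal sketch of the windowing step (parameter names and units are illustrative assumptions, not from the document):

```python
def range_gate(samples, sample_rate_hz, round_trip_s, window_s):
    # Keep only the echo samples whose arrival time falls inside a
    # gate centred on the expected round-trip time to the platen
    # surface; reflections from other depths are discarded.
    start = max(round((round_trip_s - window_s / 2) * sample_rate_hz), 0)
    stop = round((round_trip_s + window_s / 2) * sample_rate_hz)
    return samples[start:stop]
```

In practice the gate center would be derived from the acoustic path length through the display stack and the sound speed of its materials.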
For example, omission of ultrasonic receiver groups from some display monitor pixels would allow for different pitch spacing for the display monitor and ultrasonic device. [0023] An embodiment of an on-cell touch display with a piezoelectric imaging system coupled to an LCD display monitor is depicted in Figure 1. To an edge-lit backlighting panel 11 is attached a piezoelectric film transmitter 8. On the surface of the backlighting panel 11 opposite to the piezoelectric transmitter 8, a TFT 6 on a glass substrate 7 is attached. Above this is a layer of liquid crystal material 10. Immediately above this is a transparent conductive film (TCF) layer 13 affixed to a color filter 5, the top of which also has a layer of conductive TCF 2. To this TCF layer 2 is attached a layer of piezoelectric polymer 3 (or copolymer). A pattern of individual TCF pads 12 is applied to the piezoelectric polymer layer 3 or alternately a polarizing filter 9, and the outer surface receives a layer of scuff resistant glass or plastic 1. The resulting touch screen display operates in a fashion like most LCD displays, and a voltage between the TFT patterned TCF electrode [not explicitly shown but part of the TFT itself] on the TFT and the continuous common plane electrode 13 allows each display pixel to turn on or off using light polarizers. If light that has been passed through a polarizing filter then passes through a second polarizer oriented at 90 degrees to the first, the light will be blocked completely and will not pass through the second polarizer. The LCD display uses a fixed polarizing filter that is typically a sheet of plastic; the second polarizing filter is the liquid crystal material itself. If a voltage is applied, it polarizes the light thereby preventing light from being emitted, and if no voltage is applied light is permitted to pass. The ultrasound features come into play when the piezoelectric film transmitter issues an ultrasonic energy pulse.
The ultrasonic energy pulse travels through the various layers to the exterior facing surface (in this case, the scuff resistant glass or plastic) where at least part of the ultrasonic energy pulse then reflects down again, bringing with it information about the ultrasonic impedance of the surface and any objects that are in contact with the surface. The reflected ultrasonic energy pulse is detected by the hydrophone array that is made up of the piezoelectric polymer film 3 and the two TCF electrode layers that contact it, both the continuous electrode 2 and the electrode array 12. Trace conductors interconnect the electrode array 12 with electronics (not shown), allowing the ultrasonic device to produce and transmit a signal corresponding to the individual ultrasonic signals associated with each ultrasonic array receiver element of the hydrophone array. [0024] Figure 2 depicts an alternate embodiment of an on-cell touch screen display. In that embodiment, the display's liquid crystal layer 10 and associated TCF electrodes associated with the display monitor are not needed. The backlight layer 11 is also not needed because the TFT display contains OLED elements that directly light up and illuminate the display. In this case the ultrasonic transmitter may be affixed to the back of the TFT substrate glass. [0025] Figure 3 depicts another embodiment of the invention. In this embodiment, the touch screen monitor is an in-cell touch display. To an edge-lit backlighting panel 11 is attached a piezoelectric film transmitter 8. On the surface of the backlighting panel 11 opposite to the piezoelectric transmitter 8, a TFT 6 on a glass substrate 7 is attached. This TFT 6 has many circuits. The individual pixels are groups of three LCD control amplifiers and one ultrasonic receiver circuit. The ultrasonic receiver further has a piezoelectric polymer bonded to it. Above this is a layer of liquid crystal material 10.
Above this is a continuous electrode (TCF) 2 that is used as the common electrode for the receiver and as the common electrode for the LCD driver circuits. This TCF may be affixed to a color filter 5. The next layer up (in Fig. 3) in the stack is the polarizing filter 9, and finally the outer surface receives a layer of scuff resistant glass or plastic 1. This display operates in a fashion like most LCD displays, and a voltage between the TFT patterned TCF electrode on the TFT and the continuous common plane electrode 2 allows each display pixel to turn on or off. [0026] Another embodiment of an in-cell touch display according to the invention is depicted in Figure 4. To the back of the substrate of the TFT circuit 7 is attached a piezoelectric film transmitter 8. The TFT circuit 7 may be composed of groups of cells making up individual color pixels, each pixel being comprised of three light emitting cells and one ultrasonic sensor cell. Attached to the ultrasonic sensor cell TFT may be a three layer laminate that is composed of a TCF electrode 2, a layer of piezoelectric polymer 3, and another TCF film electrode 12 that is continuous across the TFT. Optically transparent insulating material 14 may be used above (in Figure 4) the OLEDs to isolate them from the light emitting display circuits and the TCF 2. Shown in Figure 4 above this is a color filter glass 5 to allow red-green-blue display color. A scuff resistant surface layer 1 protects the stack from physical abrasion and mechanical damage. It should be noted that although a one-to-one relationship between light pixel and ultrasonic sensor pixel is described, it would be easy to omit various sensor pixels to change the resolution of the ultrasonic device.
[0027] Although the present invention has been described with respect to one or more particular embodiments, it will be understood that other embodiments of the present invention may be made without departing from the spirit and scope of the present invention. Hence, the present invention is deemed limited only by the appended claims and the reasonable interpretation thereof. |
Systems and methods for operating a low power universal serial bus are described herein. A universal serial bus port includes a link layer and protocol layer that are compatible with a standard USB2 protocol. The link layer and protocol layer control a physical layer for transmitting and receiving data on a pair of signal lines. The physical layer includes a fully-digital Low-Speed/Full-Speed (LS/FS) transceiver to transmit and receive data on the pair of signal lines using single-ended digital communications on the pair of signal lines. |
CLAIMS What is claimed is: 1. A universal serial bus port comprising: a link layer and protocol layer that are compatible with a standard USB2 protocol; the link layer and protocol layer to control a physical layer for transmitting and receiving data on a pair of signal lines; and wherein the physical layer comprises a fully-digital Low-Speed/Full-Speed (LS/FS) transceiver to transmit and receive data on the pair of signal lines using single-ended digital communications on the pair of signal lines. 2. The universal serial bus port of claim 1, wherein the fully-digital LS/FS transceiver comprises digital CMOS components configured for 1.0 Volt signaling. 3. The universal serial bus port of claim 1, wherein the physical layer comprises a high speed (HS) transceiver configured to transmit and receive data on the pair of signal lines using differential signaling. 4. The universal serial bus port of claim 1, wherein the HS transceiver is to use 0.2 Volt differential signaling. 5. The universal serial bus port of claim 1, wherein the physical layer is configured to use single-ended communications while operating in a FS/LS mode and is configured to use differential signaling while operating in a HS mode. 6. The universal serial bus port of claim 4, wherein the universal serial bus port includes a repeater to convert signals received from the physical layer into USB2 compatible signals. 7. The universal serial bus port of claim 1, wherein a device coupled to the universal serial bus port indicates device presence by transmitting a digital ping to the universal serial bus port. 8. The universal serial bus port of claim 7, wherein device disconnect is detected if the universal serial bus port does not receive the digital ping from the device within a specified time period. 9.
A method of operating a universal serial bus host, comprising: detecting presence of a device coupled to the universal serial bus host; determining the data rate capabilities of the device; and if the device is Low-Speed (LS) or Full-Speed (FS) capable, communicating with the device using digital, single-ended signaling on a pair of signal lines between the universal serial bus host and the device. 10. The method of claim 9, comprising using 1.0 Volt digital signaling for LS and FS communications. 11. The method of claim 9, comprising, if the device is High-Speed (HS) capable, communicating with the device using differential signaling on the pair of signal lines. 12. The method of claim 11, comprising using 0.2 Volt differential signaling for HS communications. 13. The method of claim 9, comprising using single-ended communications while operating in a FS/LS mode and using differential signaling while operating in a HS mode. 14. The method of claim 9, comprising converting signals received from the universal serial bus host into USB2 compatible signals to be sent to the device. 15. The method of claim 9, wherein detecting the presence of the device comprises receiving a digital ping from the device. 16. The method of claim 15, comprising detecting device disconnect if the universal serial bus host does not receive the digital ping from the device within a specified time period. 17. A computing device comprising: a universal serial bus host comprising: a link layer and protocol layer that are compatible with a standard USB2 protocol, the link layer and protocol layer to control a physical layer for transmitting and receiving data on a pair of signal lines to communicate with a device; and wherein the physical layer comprises a fully-digital Low-Speed/Full-Speed (LS/FS) transceiver to transmit and receive data on the pair of signal lines using single-ended digital communications on the pair of signal lines. 18.
The computing device of claim 17, wherein the fully-digital LS/FS transceiver comprises digital CMOS components configured for 1.0 Volt signaling and no analog components. 19. The computing device of claim 17, wherein the physical layer comprises a high speed (HS) transceiver configured to transmit and receive data on the pair of signal lines using 0.2 Volt differential signaling. 20. The computing device of claim 17, wherein the device coupled to the universal serial bus host indicates device presence by transmitting a digital ping to the universal serial bus host and wherein device disconnect is detected if the universal serial bus host does not receive the digital ping from the device within a specified time period. |
A LOW POWER UNIVERSAL SERIAL BUS BACKGROUND The methods and systems disclosed herein relate to an input/output (I/O) signaling protocol. More specifically, a low-voltage, low-power solution for Universal Serial Bus 2.0 (USB2) is disclosed. USB is an industry protocol designed to standardize the interfaces between computer devices for communication and supplying electrical power. The USB2 protocol has enjoyed widespread adoption in nearly every computing device, and has received tremendous support in terms of technology development with well-established intellectual property (IP) portfolios and standardized software infrastructure. The standard USB2 specification uses 3.3 Volt analog signaling for communications between two USB2 ports. The 3.3 Volt signal strength tends to introduce integration challenges because some advanced semiconductor processes are moving toward very low geometries, with the result that the gate oxide of a CMOS transistor is no longer able to tolerate higher voltages, such as 3.3 Volts. In addition, the standard USB2 specification results in relatively high levels of power consumption in both idle and active states. As a result, USB2 may not be suitable for devices that place stringent specifications on I/O power consumption, such as mobile platforms. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a block diagram of a universal serial bus architecture in accordance with embodiments; Fig. 2 is a block diagram of a universal serial bus physical layer with High-Speed (HS), Low-Speed (LS), and Full-Speed (FS) capability; Fig. 3 is a block diagram of the eUSB2 physical layer with Low-Speed or Full-Speed capability; Fig. 4 is a timing diagram of a SYNC pattern used in Low-Speed or Full-Speed mode; Fig. 5 is a timing diagram of an End-of-Packet (EOP) pattern in Low-Speed or Full-Speed mode; Figs. 6A and 6B are timing diagrams showing an example of eUSB2 signal timing; Fig. 7 is a timing diagram of a Low-Speed Keep Alive signal; Fig.
8 is a timing diagram of a device disconnect detection technique for Full-Speed or Low-Speed operation during L0; Fig. 9 is a timing diagram of a device disconnect detection technique for High-Speed mode during L0 state; Fig. 10 is a timing diagram showing an example of a device connect detection technique; and Fig. 11 is a timing diagram showing an example of a device connect detection scheme in which the device declares High-Speed capability. DESCRIPTION OF THE EMBODIMENTS Embodiments described herein relate to improved signaling techniques that provide a lower signal voltage and reduced power consumption compared to standard USB2. The improved signaling techniques may be used in a new USB protocol, which may be referred to herein as embedded USB2 (eUSB2). The signaling techniques described herein can be used to support the standard USB2 operation at the protocol level. Furthermore, the signaling techniques described herein may use a simplified physical layer architecture as compared to the standard USB2 physical layer architecture. The simplified physical layer architecture disclosed herein can support Low-Speed (LS) operation, Full-Speed (FS) operation, or High-Speed (HS) operation. During High-Speed operation, the link is operated using low-swing differential signaling, for example, 0.2 Volt differential signaling as opposed to the 0.4 Volt differential signaling used in standard USB2. During Low-Speed or Full-Speed operation, the simplified PHY architecture enables the use of a fully digital communication scheme. For example, the simplified PHY architecture can use 1 Volt CMOS circuitry, as opposed to the 3.3 Volt CMOS signaling used in standard USB2. In a fully digital communication scheme, the analog components typically used in standard USB2, such as current sources and operational amplifiers, are eliminated. Embodiments can support a native mode and a repeater mode.
Native mode, as referred to herein, describes operation wherein both the host and device ports implement an eUSB2 PHY and communicate based on eUSB2 signaling. The native mode may be used in cases in which backward compatibility with the standard USB2 is not needed. For example, the native mode may be used for chip to chip communications wherein both chips are soldered to a motherboard. The repeater mode allows eUSB2 to support standard USB2 operation with the use of a half-duplex repeater device. The repeater mode of operation is described further in relation to co-pending Patent Application Serial Number , filed on June 30, 2012, titled "A Clock-Less Half-Duplex Repeater," which is incorporated by reference herein in its entirety for all purposes. Embodiments described herein support a new device presence detection scheme that can be used for low-voltage signaling protocols and results in very low power consumption while in idle mode. The standard USB2 specifications utilize a device passive pull-up and a host passive pull-down to detect device connect and determine the mode of operation. Thus, the USB2 link maintains a constant direct current (DC) path, formed by the device passive pull-up and host passive pull-down, when the link is idle. The wire voltage is read by the host to determine the connect status of the device. Due to the pull-up and pull-down resistors, the standard USB2 consumes approximately 600 μW of power when the link is in idle mode. The new digital disconnect detection techniques described herein use a device ping to indicate device presence during idle (LPM-L1 or Suspend) rather than a device pull-up. By eliminating the device pull-up for detecting device presence, the link power consumption while in the idle state can be eliminated. For example, the resulting power consumption of the link may be reduced to the power consumption that results from leakage current.
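The host-side disconnect rule described here — treat the device as disconnected when no digital ping arrives within the specified period — can be sketched as a simple timeout monitor. The class name, injected clock, and timeout value are illustrative assumptions; the text does not give the actual period.

```python
import time

class PingMonitor:
    """Host-side view of the digital-ping presence scheme: the device
    periodically drives a ping instead of holding a pull-up, so the
    idle bus carries no DC current."""
    def __init__(self, timeout_s, now=time.monotonic):
        self.timeout_s = timeout_s
        self.now = now
        self.last_ping = now()

    def ping_received(self):
        # Called whenever the host detects a device ping on the bus.
        self.last_ping = self.now()

    def device_connected(self):
        # Disconnect is declared when no ping has arrived within
        # the specified time period.
        return (self.now() - self.last_ping) < self.timeout_s
```

Injecting the clock makes the timeout behavior testable without real elapsed time; hardware would implement the same rule with a counter reset on each received ping.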
Furthermore, the eUSB2 protocol in accordance with embodiments makes use of 1 Volt signaling for Full-Speed and Low-Speed operations instead of 3.3 Volt. The 1 Volt transistors generally have a higher pin leakage current compared to 3.3 Volt transistors, which have thicker gate oxide. To reduce current flow through the pull-up and pull-down resistors, the resistance of the pull-down resistors and pull-up resistors could be increased. However, increasing the resistance of the pull-down resistors and pull-up resistors could result in the active buffer not being able to override the strengthened pull-ups. The new device detection scheme in accordance with embodiments uses an active buffer driver on the downstream device to actively drive the eD+ or eD- signal lines to indicate device presence, instead of the pull-up resistors. Thus, the use of an active buffer to override the strengthened pull-ups can be eliminated. In some embodiments, the pull-up resistors can be eliminated. Present USB2 specifications also make use of a sideband wire to detect an On-The-Go (OTG) device, which is routed to an on-chip general purpose input buffer (GIO). In accordance with embodiments, detection of an OTG device can be accomplished through the use of an in-band OTG detect mechanism. Thus, the sideband wire used to detect OTG capability can be eliminated, reducing the GIO pin count. Fig. 1 is a block diagram of a universal serial bus architecture in accordance with embodiments. The eUSB2 architecture may be used in any suitable electronic device, including desktop computers, laptop computers, tablets, and mobile phones, among others. The eUSB2 architecture 100 may contain a standard USB2 segment 102 and an eUSB2 segment 104 in accordance with embodiments. The standard USB2 segment 102 may include a protocol layer 106 and a link layer 108. The protocol layer 106 is used for managing the transfer of information between a device and a host.
For example, the protocol layer 106 is used to determine how to structure information packets. The link layer 108 is used for creating and maintaining a channel of communication (or link) between the device and the host. The link layer 108 also controls the flow of information and the power management status of the link. In embodiments, both the protocol layer 106 and the link layer 108 operate in accordance with standard USB2 communication protocols. The eUSB2 segment 104 contains a physical layer (PHY) 110 unique to the eUSB2 architecture 100. The physical layer 110 can interface with the link layer 108 through any suitable interface 112, such as a USB 2.0 Transceiver Macrocell Interface (UTMI), or UTMI with extensions (UTMI+), among others. The physical layer 110 may include a pair of eUSB2 data lines 114, referred to herein as eD+ 116 and eD- 118. The data lines are used to transmit signals between an upstream port and a downstream port. Depending on the particular operating mode, the physical layer 110 is configured to transmit data on the data lines 114 using differential signaling, single-ended digital communications, or some combination thereof, as explained further below. For example, while operating in high speed, differential signaling may be used to transmit data, while single-ended digital communications may be used to transmit control signals. While operating in low speed or full speed, single-ended digital communications may be used to transmit data and control signals. The functions and behaviors of eD- and eD+ may vary depending on the data rate of the device. The physical layer 110 may also include a Serial Interface Engine (SIE) 120 for translating USB information packets to be used by the protocol layer 106. The Serial Interface Engine 120 includes a Serial-In, Parallel-Out (SIPO) block 122 for converting incoming serial data received via the signal lines 114 into parallel data for transmitting to the link layer 108.
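The serial/parallel conversion the SIPO block performs (and its Parallel-In, Serial-Out counterpart) is straightforward bit regrouping. A minimal behavioral sketch follows; the function names are illustrative, and bits are ordered least-significant first, as USB transmits bytes LSB first.

```python
def sipo(bits, width=8):
    # Serial-In, Parallel-Out: group the recovered serial bit stream
    # into parallel words (LSB transmitted first, as in USB).
    words = []
    for i in range(0, len(bits), width):
        chunk = bits[i:i + width]
        words.append(sum(b << n for n, b in enumerate(chunk)))
    return words

def piso(words, width=8):
    # Parallel-In, Serial-Out: flatten parallel words back into a
    # serial bit stream for transmission on the data lines.
    return [(w >> n) & 1 for w in words for n in range(width)]
```

In hardware these would be shift registers clocked at the line rate on the serial side and at the interface word rate on the parallel side.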
The Serial Interface Engine 120 also includes a Parallel-In, Serial-Out (PISO) block for converting outgoing parallel data received from the link layer 108 into serial data for transmission onto the signal lines 114. The physical layer 110 can also include a Data Recovery Circuit (DRC) 126 and a Phase-Locked Loop (PLL) 128 for recovering data received via the signal lines 114. The physical layer 110 also includes a number of transmitters 130 and receivers 132 for controlling the signal lines 114. For the sake of simplicity, a single transmitter 130 and receiver 132 pair are shown in Fig. 1. However, it will be appreciated that the physical layer 110 can include any suitable number of transmitters 130 and receivers 132 used to implement the various embodiments described herein. The physical layer 110 is described more fully in relation to Figs. 2 and 3 and the accompanying descriptions. Fig. 2 is a block diagram of a universal serial bus physical layer with High-Speed (HS), Low-Speed (LS), and Full-Speed (FS) capability. In embodiments, the HS, FS, and LS data rates correspond to the data rates specified by the USB2 protocol. For example, during LS operation the PHY may provide a data rate of approximately 1.5 Mbit/s, during FS operation the PHY may provide a data rate of approximately 12 Mbit/s, and during HS operation the PHY may provide a data rate of approximately 480 Mbit/s. The eUSB2 PHY 200 can include both a Low-Speed/Full-Speed (LS/FS) transceiver 202 and a High-Speed (HS) transceiver 204. In embodiments, the PHY 200 also includes a pair of pull-down resistors 206 used for device connect detection. The LS/FS transceiver 202 and HS transceiver 204 are communicatively coupled to the eUSB2 signal lines 208, which include eD+ 210 and eD- 212.
The HS transceiver 204 and LS/FS transceiver 202 may be configured to selectively take control of the signal lines 208 depending on the data rate capabilities of the upstream device connected to the PHY 200. Techniques for determining the data rate capabilities of the upstream device are described further below. The LS/FS transceiver 202 may include a pair of single-ended digital transmitters 214 and a pair of single-ended digital receivers 216. These components act as the output and input, respectively, for single-ended signaling. In single-ended signaling, each of the signal lines eD+ 210 and eD- 212 can transmit separate signal information. This is in contrast to the standard USB2 implementation, in which LS/FS operations use differential signaling. In differential signaling, information is transmitted through two complementary signals carried on the pair of signal lines eD+ 210 and eD- 212. The translation of the physical signals transmitted over the signal lines 208 into binary signal data may be accomplished using any suitable technique, such as Non-return-to-zero, inverted (NRZI). The LS/FS transceiver 202 may be fully digital, meaning that the analog components typically present in USB2 LS/FS circuitry, such as operational amplifiers and current sources, are eliminated. The single-ended digital transmitters 214 and the single-ended digital receivers 216 may be digital CMOS (Complementary Metal-Oxide-Semiconductor) components that operate with a signaling voltage of 1.0 Volt, as compared to the standard 3.3 Volt signaling for USB2. The Low-Speed/Full-Speed idle state (SE0) is maintained by the pull-down resistors 206 implemented at the downstream port. To ensure a swift transition to the idle state, the port shall drive the bus to SE0 before disabling its transmitters. The HS transceiver 204 may be an analog transceiver configured for low-swing differential signaling.
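NRZI, named above as the line coding, works by transitions rather than absolute levels: a 0 data bit toggles the line state, a 1 data bit holds it. A short sketch of the encode/decode pair (function names and the 0/1 level representation are illustrative; in USB terms, level 1 here stands for the idle J state):

```python
def nrzi_encode(bits, idle_level=1):
    # NRZI as used by USB: a 0 bit toggles the line state, a 1 bit
    # keeps it; the bus idles at J (represented here as level 1).
    level = idle_level
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

def nrzi_decode(levels, idle_level=1):
    # Recover data bits: no transition -> 1, transition -> 0.
    prev = idle_level
    bits = []
    for lv in levels:
        bits.append(1 if lv == prev else 0)
        prev = lv
    return bits
```

Because information rides on transitions, NRZI is insensitive to an inverted pair, and long runs of 0 data bits guarantee regular transitions for clock recovery (USB handles long runs of 1s separately, with bit stuffing).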
For example, the HS transceiver may operate with a signaling voltage of 0.2 Volts, as compared to the 0.4 Volts used in USB2; thus a reduced power consumption is achieved during data transmission. The HS transceiver 204 can include a High-Speed transmitter 230 for data transmission, a High-Speed receiver 232 for data reception, and a squelch detector 234 for detection of link status, i.e., HS active and HS idle. Additionally, in some embodiments, the HS transceiver 204 may also include an HS receiver termination 236 to minimize the signal reflection at the receiver, leading to improved signal integrity. During the HS operating mode, wherein the HS transceiver 204 is enabled, the PHY 200 communicates data using differential signaling and can also transmit control signals using single-ended communications. The HS transceiver 204 and LS/FS transceiver 202 are both controlled by the link layer 108, which interfaces with the PHY 200 through the interface 112. Various data and control lines from the interface 112 are coupled to the transceivers 202 and 204. For example, as shown in Fig. 2, enable signals 218, 224, 244, and 238 are used to selectively enable the LS/FS transmitters 214, the LS/FS receivers 216, the HS receiver 232, or the HS transmitter 230, respectively. Complementary driver inputs 240 and 242 are coupled to the HS transmitter 230 for driving the HS transmitter to output data and/or control signals to the signal lines 208. A receiver output 246 is coupled to the HS receiver 232 for receiving data transmitted to the PHY 200 via the signal lines 208. A squelch detector 248, upon detecting the start of an HS data packet, disables the SE receiver 216 and enables the HS receiver 232 and, optionally, the receiver termination 236. Positive and negative receiver outputs 226 and 228 are coupled to the LS/FS receivers 216 for receiving data transmitted to the PHY 200 via the signal lines 208.
Positive and negative driver inputs 220 and 222 are coupled to the LS/FS transmitters 214 for driving the LS/FS transmitters to output data and/or control signals to the signal lines 208. In embodiments, the device port (not shown) will have an eUSB interface with a physical layer substantially similar to the physical layer 200. In such an embodiment, the host and device both use the eUSB protocol. In embodiments, the device port may be a standard USB2 port with a standard USB2 physical layer. In such an embodiment, a repeater may be used to translate the eUSB signals sent from the host to standard USB2 signals. For example, the repeater may be configured to translate signals such as device connect, device disconnect, data rate negotiation, and the like. The repeater may also be used to recondition the voltages of the eUSB signals to the voltages used in standard USB2. The operations of the repeater are described further in relation to co-pending Patent Application Serial Number . Fig. 3 is a block diagram of a universal serial bus physical layer with Low-Speed or Full-Speed capability. As shown in Fig. 3, the eUSB2 physical layer 300 can include a fully digital single-ended transceiver 302 without also including a High-Speed analog transceiver. It may function similarly to the eUSB PHY 200 shown in Fig. 2, but does not have the capability to operate at High Speed (HS). The LS/FS PHY 300 may include an SE transceiver 302, a set of pull-down resistors 304, and a pair of eUSB2 data lines 306. Fig. 4 is a timing diagram of a SYNC pattern used in Low-Speed or Full-Speed mode. The SYNC pattern 400 may be used with the PHY 200 (Fig. 2) and the PHY 300 (Fig. 3) to mark the beginning of a packet sent from one port to another. As shown in Fig. 4, the SYNC pattern may use single-ended communication, which is suitable for digital CMOS operation.
In accordance with embodiments, eUSB2 drives the SYNC pattern on eD- 404 while maintaining logic '0' on eD+ 402 through the pull-down resistors 206. As shown in Fig. 4, SYNC is indicated when the data line eD+ 402 is pulled down to logic '0' and, during that time, the data line eD- 404 transmits a pattern of KJKJKJKK. In High-Speed, the SYNC pattern (not shown) is similar to that of standard USB2, with the voltage swing redefined. In High-Speed, neither data line eD+ 402 nor eD- 404 is held at logic '0', as High-Speed utilizes differential signaling. Instead, both data lines may toggle the SYNC pattern, for example, the series KJKJKJKK. Fig. 5 is a timing diagram of an End-of-Packet (EOP) pattern in Low-Speed or Full-Speed mode. The EOP pattern 500 is used to signify the end of the data packet sent from one port to another. In accordance with embodiments, the EOP pattern 500 is indicated by 2 UIs of logic '1' at eD+ and one UI of SE0, while eD- maintains logic '0' through the pull-down resistors 304. Single-ended 0 (SE0) describes a signal state in which both eD- and eD+ are at logic '0'. Sending EOP on eD+, accompanied by SYNC and packet data being transmitted at eD-, makes a three-state (J, K, SE0) representation of a standard USB2 packet possible. The EOP pattern in accordance with embodiments described herein contrasts with standard USB2, in which the EOP pattern would be indicated by 2 UIs of SE0 followed by 1 UI of J. The High-Speed eUSB2 EOP pattern (not shown) is similar to that of standard USB2 except that the voltage swing is redefined. High-Speed EOP is indicated by 8 UIs of consecutive J or K. SOF EOP is indicated by 40 UIs of consecutive J or K. Figs. 6A and 6B are timing diagrams showing an example of eUSB2 signal timing. In embodiments, single-ended signaling is used for LS/FS packet transmission in L0 mode.
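The LS/FS SYNC and EOP line states described above can be tabulated per unit interval (UI). This is a sketch only: the numeric J/K level mapping and the function names are assumptions for illustration, not values taken from the specification.

```python
J, K = 0, 1  # assumed single-ended levels for J and K; illustrative only

def ls_fs_sync_lines():
    """LS/FS SYNC: eD+ is held at logic 0 by the pull-downs while
    eD- carries the KJKJKJKK pattern (one list entry per UI)."""
    ed_minus = [K, J, K, J, K, J, K, K]
    ed_plus = [0] * len(ed_minus)
    return ed_plus, ed_minus

def ls_fs_eop_lines():
    """LS/FS EOP: 2 UIs of logic 1 then 1 UI of SE0 on eD+,
    while eD- stays at logic 0 through the pull-downs."""
    return [1, 1, 0], [0, 0, 0]
```

Listing both wires side by side makes the three-state (J, K, SE0) packet representation easy to see: data activity lives on one wire while the other is parked at logic 0.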
The term L0 describes a mode of operation in which a connection between the host and the device is active, enabling the host to communicate with the device. Single-ended signaling may also be used for interactions between two ports in different link states (not including L0), and for a host to issue control messages at any link state. When an LS/FS packet is transmitted, the SYNC pattern 400 and packet data are transmitted at eD- 604 while eD+ is held at logic '0', and the SE0 of the EOP pattern 500 is transmitted at eD+ while eD- is held at logic '0'. When the host initiates a control message, the control message may begin with SE1. Single-ended 1 (SE1) describes a signal state in which both eD- and eD+ are at logic '1'. The difference in signal timing and format at the beginning of a transmission of data packets versus a transmission of control messages allows a device in L0 to distinguish whether a received packet is a data packet or a control message before proceeding to process the packet. In embodiments, the downstream port interprets the signaling from an upstream port based on its previous state of packet transaction or link state. Fig. 6A is a timing diagram of an LS/FS Start of Packet (SOP) pattern 602 sent from a downstream port (Host) to an upstream port (Device). As shown in Fig. 6A, the SOP pattern 602 is indicated by using eD- 604 to transmit the SYNC pattern and packet data, while eD+ 606 remains at logic '0'. When all of the packets have been transmitted, eD+ 606 may be used to transmit EOP while eD- 604 remains at logic '0'. Fig. 6B is a timing diagram of a control message pattern 608 sent from a downstream port (Host) to an upstream port (Device). As shown in Fig. 6B, the start of control message (SOC) pattern 608 is indicated when a downstream port drives an SE1 pulse 610 for a defined period of time as a signature for the SOC message. Following the SE1 pulse 610, a control message can be encoded within an active window 612 using a series of pulses.
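The distinction drawn above — an LS/FS data packet opens with SYNC activity on eD- while a control message opens with SE1 — can be sketched as a first-sample classifier. The single-UI sampling model and the function name are hypothetical simplifications for illustration.

```python
def classify_first_state(ed_plus, ed_minus):
    """Classify incoming L0 traffic from the first observed line state:
    SE1 (both wires high) opens a control message; activity on eD- with
    eD+ held low opens an LS/FS data packet; SE0 (both low) is idle."""
    if ed_plus == 1 and ed_minus == 1:
        return "control message"  # control messages begin with SE1
    if ed_plus == 0 and ed_minus == 1:
        return "data packet"      # SYNC / packet data carried on eD-
    return "idle"                 # SE0 or no recognizable opening state
```

This mirrors how a device in L0 can decide how to process a transmission before the body of the packet arrives.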
During this active window 612, eD+ 606 may be driven at logic '1' while a number of pulses 614 may be activated on eD- 604. The number of pulses 614 may determine the nature of the control message. Control message signaling is described further in co-pending Patent Application Serial Number , filed on June 30, 2012, titled "Explicit Control Message Signaling," which is incorporated by reference herein in its entirety for all purposes. In embodiments, single-ended signaling is also used for host and device interactions during power-up, Reset, Suspend, and L1. Suspend, as used herein, describes a control message sent to the device from the host to temporarily disable link activity in order to limit power consumption. While in Suspend, the device may still accept a Resume control message or a Reset control message from the host. L1, as used herein, describes a mode that may perform similarly to Suspend in some eUSB2 and USB2 embodiments. Resume, as used herein, describes a control message from the host that signals the device to re-enter L0 mode from Suspend or L1. Reset, as used herein, describes a control message sent from the host to set the device in a default unconfigured state. Fig. 7 is a timing diagram of a Low-Speed Keep Alive signal. LS Keep Alive 700 is a control message sent periodically during L0 to prevent a Low-Speed peripheral device from entering Suspend. As seen in Fig. 7, the Keep Alive signal 700 may include an SE1 pulse 702, an active window 704 on eD+ 705 with no pulses on eD- 706, and an EOP signal 708. Device Disconnect Mechanism As explained above, standard USB2 uses a device pull-up and host pull-down mechanism to detect device connect or device disconnect when operating at LS/FS, or in L1 or Suspend. The wire voltage from the voltage divider network formed by the pull-up resistors and the pull-down resistors 206 is read by the host to determine device connect status.
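The control-message framing described above — an SE1 signature, then an active window with eD+ held high while a message-selecting number of pulses appears on eD- (zero pulses in the Keep Alive of Fig. 7) — can be sketched as follows. The window length, pulse spacing, and function name are illustrative assumptions, not values from the specification.

```python
def encode_control_window(num_pulses, window_uis=8):
    """Return (eD+, eD-) per-UI levels: one SE1 signature UI, then an
    active window with eD+ held at 1 while eD- carries num_pulses pulses.
    A count of zero models the LS Keep Alive (no pulses on eD-)."""
    assert 2 * num_pulses <= window_uis, "pulses must fit the window"
    ed_plus = [1] + [1] * window_uis    # SE1, then eD+ driven high
    ed_minus = [1] + [0] * window_uis   # SE1, then pulses inserted below
    for i in range(num_pulses):
        ed_minus[1 + 2 * i] = 1         # one pulse every other UI (assumed)
    return ed_plus, ed_minus
```

The receiving port only needs to count pulses within the window to identify which control message was sent.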
This results in constant DC power being wasted in LS/FS, in L1, or in Suspend. The invention eliminates the idle power by having the link in single-ended 0 (SE0) during the idle state, in which case both of the data wires, eD+ and eD-, are held to ground by the downstream port. Accordingly, little or no idle power is consumed during the idle state. During the standard USB2 idle state, referred to as "idle J", both the pull-up and pull-down are enabled, resulting in wasted power. In embodiments, the pull-up from the device may be eliminated. Upon resuming from Suspend, the host requests the device to transmit a device ping to re-affirm connectivity. A Disconnect event will be detected if the host does not receive the digital ping signal from the device. Fig. 8 is a timing diagram of a device disconnect detection technique for Full-Speed or Low-Speed operation during L0. As shown in Fig. 8, a digital ping mechanism 800 may be used to accomplish device disconnect detection during L0 at LS/FS operation. The device ping 802 may be defined as a 1-UI logic '1' at eD- in FS or LS mode. As shown in Fig. 8, after detecting an EOP signal 806 on eD+ following a packet, the upstream port may transmit the device ping 802 on eD- 804 within a specified time limit (for example, 3 UIs) upon detecting the start of the EOP signal 806. Depending on the phase and frequency offset between the remote bit clock and local bit clock, the device ping 802 may actually be transmitted as early as 1 UI and as late as more than 2 UIs. After sending the digital ping 802 back to the host, the device may enter Idle mode 812. To confirm connectivity, the upstream port may transmit the device ping 802 periodically on every frame period. Transmitting the device ping 802 in a periodic fashion allows the host to be aware of the device presence even when there is no data traffic between the host and device, thus preventing the device from being disconnected.
The downstream port may declare device disconnect during L0 if it has not received any packet, and has not received any device ping, for three consecutive frame periods. In embodiments, the downstream (host) port performs disconnect detection during resume from L1 or Suspend. In response, the upstream (device) port sends the digital ping signal upon resume to declare the connected state during L1 or Suspend. For a device sending a digital ping to declare connect while in L1 or Suspend, the device drives eD+ to send the digital ping. For a device sending a digital ping to declare connect while in L1 or Suspend, the device drives eD- to send the digital ping. Fig. 9 is a timing diagram of a device disconnect detection technique for High-Speed mode during the L0 state. Standard USB2 HS uses an analog approach to detect device disconnect. Specifically, standard USB2 uses envelope detection during the EOP (End of Packet) of the SOF (Start of Frame) for disconnect detection. The use of envelope detection requires an analog comparator and an accurate reference voltage. To facilitate this type of disconnect detection, the EOP of the SOF is extended to 40 UIs such that the envelope detector has enough time to detect the disconnect event if the device is disconnected. In embodiments, eUSB uses a digital ping mechanism 900 to accomplish device disconnect detection during L0 at High-Speed. The device ping 902 may be transmitted periodically by the device during L0 idle to announce its presence and prevent being disconnected. By using a digital ping mechanism rather than envelope detection, various analog components, such as the envelope detector, can be removed, resulting in a simplified physical layer architecture. The mechanism for disconnect detection in L1 or Suspend for a High-Speed device may be the same as for Full-Speed. As shown in Fig. 9, a packet of data 904 finishes transmitting at t0, and is succeeded by an EOP signal 906. At t1, the EOP signal 906 has finished.
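The frame-counting rule described above — disconnect is declared after three consecutive frame periods with neither a packet nor a device ping — can be sketched as a small host-side monitor. The class and method names are hypothetical; only the three-frame threshold comes from the description.

```python
class DisconnectMonitor:
    """Host-side disconnect detection sketch: declare disconnect when
    neither a packet nor a device ping arrives for three consecutive
    frame periods, per the scheme described above."""
    LIMIT = 3  # consecutive silent frames before declaring disconnect

    def __init__(self):
        self.silent_frames = 0
        self.disconnected = False

    def end_of_frame(self, saw_packet_or_ping):
        """Call once per frame period; returns the disconnect verdict."""
        if saw_packet_or_ping:
            self.silent_frames = 0   # any traffic or ping resets the count
        else:
            self.silent_frames += 1
            if self.silent_frames >= self.LIMIT:
                self.disconnected = True
        return self.disconnected
```

Because a periodic ping resets the counter, a connected but otherwise idle device is never falsely declared disconnected.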
At t2, if no other activity has occurred, the device may send the device ping 902 to announce its presence to the downstream (host) port. The device ping 902 may contain 8 UIs of consecutive J or K. At t3, the device ping 902 has finished transmitting. The upstream port, while in L0, may transmit at least one device ping 902 at specified time intervals (for example, every microframe period of 125 µs) if the upstream port's transmitter is in L0 idle. The downstream port may declare disconnect of the device if it has not received any packets or pings from the device for three consecutive microframe periods. In native mode, the upstream device may not be required to report device disconnect during L1 or Suspend. This allows the device to completely power down the transmitter during this power management state and maximize power saving. Upon Resume, the upstream port may send a digital ping and the downstream port may perform disconnect detection routines. When operating in repeater mode, device disconnect is detected by the repeater and reported to the Host. Device disconnect may be reported in Suspend or L1 when operating in repeater mode. When the repeater detects a disconnect event of a standard USB2 device, the repeater will convey the message to the host eUSB2 port through Single-ended Disconnect Signaling (SEDISC), wherein both of the signal lines, eD+ and eD-, are driven to logic '1' for a specified amount of time. Once the host observes SEDISC, the link state machine will transition to the Connect link state from the Suspend/L1 link state. The disconnect process used during repeater mode is described further in relation to co-pending Patent Application serial number . It is to be understood that implementation of the device disconnect detection techniques described herein is not restricted to only eUSB2 implementations.
In embodiments, the disconnect detection techniques described above can be applied to any Input/Output (I/O) standard used in an advanced deep-submicron process, or any I/O standard that supports multiple data rates and modes of operation. Device Connect and Mode of Operation Detection Device connect detection enables the host port to determine when a device has been coupled to the host port. The detection of a device connect also involves a process that enables the host and device to declare their data rate capabilities to one another, for example, whether the host and/or device have LS capability, FS capability, and/or HS capability. As explained above, standard USB2, which uses 3.3V signaling, utilizes a device passive pull-up and a host passive pull-down to detect device connect. The host port may have a 15 kΩ pull-down enabled by default. When no device is connected, both data wires D+ and D- are pulled low. When connected, a device will have a 1.5 kΩ pull-up on either wire, depending on the device's data rate. The host can determine the device's data rate by judging which wire is pulled high. Additionally, standard USB2 specifications indicate the ability to detect On-The-Go (OTG) devices through a sideband wire called an ID pin, which is connected to an on-chip GIO. For operations that use lower signaling voltages, the standard connect detection scheme may not be feasible, as the pull-down resistors and pull-up resistors would have to be significantly strengthened, such that an active buffer may not be able to override the pull-up resistors. In embodiments, the eUSB2 connect event is generated by using the LS/FS transmitters 214 (Fig. 2) of the device port to drive one of the signal lines, either eD+ 210 or eD- 212, to logic '1'. Furthermore, during connect and connect detection, eD+ 210 and eD- 212 form a dual-simplex link to allow a Host and a device to interact with each other without causing contention.
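For contrast with the eUSB2 scheme, the standard USB2 divider arithmetic described above can be checked numerically. The 3.3 V, 1.5 kΩ, and 15 kΩ values come from the description; the function name and return format are illustrative assumptions.

```python
def usb2_wire_state(fs_hs_connected=False, ls_connected=False,
                    v=3.3, r_pullup=1_500, r_pulldown=15_000):
    """Voltage-divider view of standard USB2 connect detection: a device
    1.5 kΩ pull-up against the host 15 kΩ pull-down pulls one wire to
    about 3.0 V; which wire is high reveals the device's data rate."""
    v_high = v * r_pulldown / (r_pullup + r_pulldown)  # 3.3 * 15/16.5 = 3.0 V
    if fs_hs_connected:
        return {"D+": v_high, "D-": 0.0}   # FS/HS device: pull-up on D+
    if ls_connected:
        return {"D+": 0.0, "D-": v_high}   # LS device: pull-up on D-
    return {"D+": 0.0, "D-": 0.0}          # no device: both wires pulled low
```

At lower signaling voltages this divider margin collapses, which is why eUSB2 replaces the passive pull-up with an actively driven connect indication.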
For example, if a FS or HS device is connected, eD+ will be driven to logic '1' by the FS transmitter at the device side, while eD- remains pulled down to logic '0', and the FS receiver at the device side is enabled to detect any state change at eD- driven by the FS transmitter at the Host side. In embodiments, the passive pull-up resistors on the device port may be eliminated. Additionally, the device detection scheme 1000 may include an in-band mechanism to detect OTG capability without using a sideband wire, thus reducing GIO pin count. Fig. 10 is a timing diagram showing an example of a device connect detection technique. In the example shown in Fig. 10, interactions occur between a downstream port and an upstream port in native mode at Full-Speed. Other embodiments considered by this process may include Low-Speed data rates or interactions between a downstream port in peripheral repeater mode and an upstream port on a Dual Role Device. At t0, or power-up, the ports may enable their pull-down resistors. The downstream port may disable its transmitters and enable its receivers at both eD+ and eD-. At t1, the upstream port may drive eD+ or eD- to logic '1', depending on the speed to be declared by the upstream port. For example, as shown in Fig. 10, if a device is Full-Speed or High-Speed capable, it may only drive logic '1' at eD+ and enable its receiver at eD-, which is not driven by the upstream port. If the upstream port has only Low-Speed capabilities, it may drive logic '1' at eD- and enable its receiver at eD+, which is not driven by the upstream port. At t2, the downstream port may declare device connect and acknowledge the device. The acknowledgement process may vary depending on the declared capabilities of the upstream device at time t1. For example, if the downstream port has detected logic '1' at eD+ and logic '0' at eD- for the duration of TATTDB, as shown in Fig.
10, the downstream port drives logic '1' at eD- for TACK. If it has detected logic '0' at eD+ and logic '1' at eD- for the duration of TATTDB, it drives logic '1' at eD+ for TACK and declares Low-Speed device connect. In other words, the in-band handshaking mechanism is configured as a dual-simplex link to ensure that the acknowledgement is driven on the signal line opposite the signal line that was used by the upstream device to declare its presence. In the scenario shown in Fig. 10, the downstream port is receiving a device presence signal on eD+. Thus, the handshake signal traverses eD-. In this way, the link partners do not drive the signal wires simultaneously, thus avoiding wire contention. In standard USB2, the active driver of a host is expected to override the wire state, which is held at a weak high by a passive pull-up at the upstream device. Also at t2, the upstream port may respond upon receiving acknowledgement from the downstream port. If the upstream port is Full-Speed or High-Speed, it may drive logic '0' at eD+ upon detecting the Host acknowledgement at eD-, disable its transmitter, and also enable its receiver at eD+, thus concluding connect. In the case where a Host function is connected by the repeater in the repeater mode, eD+ may be continuously driven to logic '1' until the repeater has detected logic '0' at eD-, which is when a dual-role host port has detected a host function connected to its micro-AB receptacle. If the downstream port has detected logic '1' at eD+ and logic '0' at eD- for the duration of TATTDB, the downstream port may start acknowledgement by driving logic '1' at eD-, as shown in Fig. 10 at t2. During the time period indicated by TACK, the downstream port may continue monitoring eD+. If, at the end of acknowledgement at t3, eD+ remains logic '1', the downstream port may declare that a host function is connected.
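The dual-simplex acknowledgement rule described above — acknowledge on the line opposite the one the upstream device declared on — reduces to a small decision function. The function name and return labels are illustrative.

```python
def downstream_ack(ed_plus, ed_minus):
    """After the TATTDB debounce interval, pick the acknowledgement wire
    opposite the one the upstream device is driving, so the two link
    partners never drive the same wire (dual-simplex, no contention)."""
    if (ed_plus, ed_minus) == (1, 0):
        return ("FS/HS connect", "ack on eD-")  # device declared on eD+
    if (ed_plus, ed_minus) == (0, 1):
        return ("LS connect", "ack on eD+")     # device declared on eD-
    return ("no connect", None)                 # SE0: nothing attached yet
```

This is the key contrast with standard USB2, where the host's active driver is expected to override a wire held weakly high by the device pull-up.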
If the downstream port has detected that eD+ transitioned to logic '0' before t3, it may declare an FS or HS device connected. At t4, the downstream port may issue a Reset message. The upstream port may reset its control message decoder upon detection of SE1. At t5, the downstream port may continue Reset by maintaining SE0 based on the pull-down resistors. The upstream port may complete Reset decoding and enter Reset. At t6, the downstream port may drive an EOP to conclude Reset if the device is Low-Speed or Full-Speed. If the device is Low-Speed or Full-Speed only, the device monitors Reset until its completion. At t7, the downstream port may conclude Reset by driving SE0 and enter Reset recovery. At t8, the ports are ready for initialization. Returning to t6, if the device has declared Full-Speed capability, speed negotiation commences at t6 to determine whether the device is High-Speed capable. High-Speed negotiation is described below in relation to Fig. 11. Fig. 11 is a timing diagram showing an example of a device connect detection scheme in which the device declares High-Speed capability. The speed negotiation is accomplished with single-ended signaling, from when the device starts indicating that it is High-Speed capable, to when the downstream port acknowledges, to when the device's receiver termination is turned on and the device is ready for High-Speed operation. Up to t6 of Fig. 11, the device connect detection operations are the same as in Low-Speed/Full-Speed, which is described in relation to Fig. 10. If the device is High-Speed, the following operation takes place. At t6, after the upstream port detects Reset, the device drives logic '1' at eD+ to represent device Chirp, if it is High-Speed capable. The optional receiver terminations 236 (Fig. 2) at both the downstream and upstream ports are disabled until t9.
At t7, after the downstream port detects the device Chirp, the downstream port starts driving logic '1' at eD- to represent host Chirp and prepares the downstream PHY 200 for High-Speed operation. At t8, the upstream port shall have its High-Speed PHY 200 ready for operation after detecting the host Chirp. To prepare the upstream port for High-Speed operation, the upstream port drives eD+ to logic '0', disables its single-ended transmitter at eD+ after TSE0_DR, and enables its single-ended receiver at eD+. At t9, the downstream port drives logic '0' at eD- to signal the completion of speed detection, and the PHY is ready for High-Speed operation. Also at t9, the upstream port enters L0 by enabling its optional receiver termination and squelch detector. At t10, the downstream port concludes Reset. At this time, the link is in the L0 state. It is to be understood that implementation of the device connect and mode of operation detection techniques described herein is not restricted to only eUSB2 implementations. In embodiments, the detection techniques described above can be applied to any Input/Output (I/O) standard used in an advanced deep-submicron process, or any I/O standard that supports multiple data rates and modes of operation. Although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and order of circuit elements or other features illustrated in the drawings or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments. In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different or similar.
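The t6 through t10 chirp handshake above can be summarized as an ordered event list with a simple order check. The step labels paraphrase the description; the data structure and function are illustrative only.

```python
# Single-ended High-Speed negotiation steps, paraphrased from the t6..t10
# sequence described above (labels are illustrative summaries).
HS_NEGOTIATION = [
    ("t6", "device drives eD+ to 1 (device Chirp)"),
    ("t7", "host drives eD- to 1 (host Chirp), readies its HS PHY"),
    ("t8", "device drives eD+ to 0, swaps its SE transmitter for a receiver"),
    ("t9", "host drives eD- to 0; device enables termination, enters L0"),
    ("t10", "host concludes Reset; link is in L0"),
]

def in_expected_order(observed):
    """Check that observed step labels follow the t6..t10 sequence as a prefix."""
    expected = [label for label, _ in HS_NEGOTIATION]
    return observed == expected[:len(observed)]
```

A port model built on this table can verify that chirp events arrive in order before enabling its High-Speed circuitry.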
However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary. In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. An embodiment is an implementation or example of the inventions. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. The various appearances "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. 
If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element. Although flow diagrams or state diagrams may have been used herein to describe embodiments, the inventions are not limited to those diagrams or to corresponding descriptions herein. For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein. The inventions are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present inventions. Accordingly, it is the following claims including any amendments thereto that define the scope of the inventions. |
A digital pen comprising: an electric circuit, an acoustic transmitter, detached from the electric circuit, and configured to transmit acoustic signals, and a resilient holder, configured to mechanically press the electric circuit into electrical contact with the transmitter, so as to electrically connect the electric circuit and the transmitter. |
A digital pen comprising: an electric circuit; an acoustic transmitter, detached from said electric circuit, and configured to transmit acoustic signals; and a resilient holder, configured to mechanically press said electric circuit into electrical contact with said transmitter, so as to electrically connect said electric circuit and said transmitter. The digital pen of claim 1, wherein said resilient holder further comprises: a base and extensions arising perpendicularly therefrom and configured for location of a first electrical circuit and a second electrical circuit thereon within the confines of a housing, and such as to bring about an electrical contact between said first and second electrical circuits due to said confinement within said housing. The digital pen of claim 1 or claim 2, wherein said transmitter is an ultrasound transducer. The digital pen of any one of claims 1 to 3, wherein said resilient holder is electrically conductive. The digital pen of claim 2, wherein said extensions impart a U shape to said resilient holder. The digital pen of any one of the preceding claims, further comprising a housing configured to apply mechanical pressure on said resilient holder, thereby to bring about said electrical contact, or wherein said housing comprises a changeable cover element. The digital pen of any one of the preceding claims, further comprising a plurality of infrared emitters, deployed on a plurality of positions on the digital pen, for emitting infrared light. The digital pen of any one of the preceding claims, further comprising a switch assembly having two switching points for pressing said assembly to achieve first and second switching modes respectively, the assembly further having a third mode selectable upon said two switching points being pressed substantially simultaneously.
The digital pen of claim 8, wherein said switch assembly comprises a switching rod balanced about a fulcrum, wherein said fulcrum is resiliently configured to retain said switching rod at either one of a higher levered position and a lower received position, and wherein said two switching points being pressed substantially simultaneously has the effect of lowering said lever into said received position. The digital pen of any preceding claim, further comprising: a pen tip, said acoustic transmitter being located in proximity to said pen tip; and a smooth contact switch, configured to smoothly actuate the digital pen upon transmission thereto of a writing pressure from said pen tip. The digital pen of claim 10, wherein said smooth contact switch comprises: a resilient element, mounted on a first side of an open electric circuit, and disconnected from a second side of said electric circuit, said resilient element being compressible into a position where said resilient element contacts said second side of said electric circuit, thereby closing said electric circuit, upon applying said writing pressure compressing said circuit closing element into said position. The digital pen of claim 11, wherein said resilient element comprises a conductive additive, for making said resilient element electrically conductive. The digital pen of any one of the preceding claims comprising: an elongated body terminating in a writing tip; a writing element protruding from said writing tip; and a rotating cover, adjacently mounted to said writing tip, covering said writing element upon being rotated in one direction, and exposing said writing element upon being rotated in a direction opposite to said one direction. The digital pen of any one of the preceding claims, comprising two acoustic signal transmitters, each acoustic signal transmitter being configured to transmit an acoustic signal, the transmitters being positioned apart from one another on the digital pen.
A digital pen of any one of the preceding claims comprising: an acoustic wave guide, positioned adjacent to said acoustic transmitter, said acoustic wave guide comprising a plurality of fins radiating outwardly in a direction away from said acoustic signal transmitter. The digital pen of claim 15, wherein said fins are positioned so as to spatially divide the region about said signal transmitter into a plurality of directional sectors, so as to substantially isolate acoustic signals transmitted by said acoustic transmitter through one of said sectors from acoustic signals transmitted from said acoustic transmitter through remaining ones of said sectors. A digital pen system, comprising: a digital pen having an elongated body terminating in a writing tip, a writing element protruding from said writing tip, an electric circuit, an acoustic signal transmitter detached from said digital pen and deployed adjacent to said writing tip and configured to transmit an acoustic signal, and a resilient holder, configured to mechanically press said electric circuit into electrical contact with said transmitter, so as to electrically connect said electric circuit and said transmitter; at least one receiving unit for receiving said acoustic signal from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signal for determining presence of said digital pen in a predefined area, and to trigger a predefined functionality upon said determining presence; and a map, configured to graphically map a predefined area, so as to assist a user in positioning the digital pen in said predefined area.
A digital pen system comprising: a digital pen having an electric circuit, an acoustic transmitter configured to transmit acoustic signals, detached from said electric circuit, and a resilient holder, configured to press said electric circuit into contact with said transmitter upon applying a mechanical pressure to said resilient holder, so as to electrically connect said electric circuit and said transmitter; at least one receiving unit for receiving said acoustic signals from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signals for determining location of said digital pen. A digital pen system comprising: a digital pen according to any one of claims 1 to 16; at least one receiving unit, configured to receive said acoustic signals from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signals for determining location of said digital pen. |
WO 2006/100682 PCT/IL2006/000373

METHOD AND SYSTEM FOR DIGITAL PEN ASSEMBLY

FIELD AND BACKGROUND OF THE INVENTION

The present invention relates to acoustic positioning methods, and more particularly, but not exclusively, to a method and an apparatus for data entry using an acoustic signal transmitting pen input device.

Digital writing instruments, interchangeably referred to herein as Digital Pens, regardless of whether they actually write on paper, can be used to capture pen strokes on paper and digitize them. For example, with a digital pen, pen strokes may be converted by handwriting recognition software to a digitally stored record of the writing. In this way, a laborious, tedious step in modern written communication, namely the manual transcribing of handwriting into a computerized word processor, is eliminated, greatly increasing productivity.

Sensing a time-dependent position of the pen and converting the positions to pen strokes may be used for input of digital representations of the pen strokes to a handwriting recognition device. As known in the art, ultrasonic systems can be used in which a special pen generates or alters an ultrasonic signal as the pen is moved across a piece of paper. The ultrasonic signal is sensed by receivers and correlated to a position vis-à-vis each receiver, as the outputs of the receivers are triangulated and correlated to absolute pen positions. A sequence of pen positions can then be digitized for input into handwriting recognition engines.

An advantage of ultrasonic systems is that the user of the ultrasonic signal emitting device can use the device to write on an ordinary piece of paper that is placed on or nearby a base station, which receives the ultrasonic signals and converts the signals to alpha-numeric characters.

There are many methods currently known in the art for data entry using an acoustic impulse transmitting pen input device. US Patent No.
4,814,552, to Stefik, filed on December 2, 1987, entitled "Ultrasonic position input device", describes an input device, or stylus, for entering hand drawn forms into a computer, comprising a writing instrument, a pressure switch for determining whether the instrument is in contact with the writing surface, an acoustic transmitter for triangulating the position of the stylus on the surface, and a wireless transmitter for transmitting data and timing information to the computer. In operation, the stylus described by Stefik transmits an infrared signal, which the system receives immediately, and an ultrasound pulse, which two microphones receive after a delay which is a function of the speed of sound and the distance of the stylus from each microphone.

US Patent No. 6,654,008, to Ikeda, filed on November 27, 2001, entitled "Electronic whiteboard and penholder used for the same", describes an electronic whiteboard capable of being drawn on using marker pens of several colors, and a penholder for use in such an electronic whiteboard. In Ikeda's patent, an infrared light emitting unit emits infrared light containing color information of the marker pen, an ultrasonic wave emitting unit emits the ultrasonic wave, and color information changeover means changes over color information depending on the color of the marker pen. The electronic whiteboard main body receives the infrared light and ultrasonic wave emitted from the penholder, and issues information about a position of the penholder depending on the reception timing of the infrared light and ultrasonic wave.

US Patent No. 6,876,356, to Zloter, filed on March 18, 2002, entitled "Digitizer pen", describes a digitizer pen system including a pen having a means protruding from the pen's writing tip, for preventing fingers from blocking communication with a base unit. US Patent No.
6,184,873, to Ward, filed on January 20, 1998, entitled "Pen positioning system", describes a pen positioning system including a pen. The pen has multiple output elements and is adapted to accurately determine the location of the pointing tip of the pen in relation to an electronic tablet. The output elements, preferably ultrasonic transmitters having distinct frequencies, are located a fixed distance from each other, and are also related in space to the pointing tip of the pen. A detection system is used to receive the output signals from the output elements, isolate the output signals from each other, and process them independently, to determine the location of the output elements and of the pointing tip of the pen.

US Patent No. 6,703,570, to Russel, filed on May 10, 2000, entitled "Digital pen using ultrasonic tracking", describes a digital pen system. Russel's system includes an elongated pen defining a writing tip, and an ultrasonic transducer oriented on the pen to direct frames of ultrasonic energy outwardly from the pen, with each frame including plural receive pulses. The digital pen system in Russel's patent further includes two or more detectors positioned on a base, such as a laptop computer, for receiving the pulses, with each pulse being associated with at least one pulse time of arrival (TOA) relative to at least one detector. Russel's system further includes a processor positioned on the base, receiving signals from the detectors, and outputting position signals representative of positions of the pen, based on the received signals.

However, there are inherent problems in current acoustical technology and in the implementation of the current acoustical technology in digital pens, such as the digital pens described in the patents cited hereinabove. Among the disadvantages of current acoustic technology are: lack of accuracy, lack of multi-device support, high power consumption, etc.
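The time-of-arrival triangulation principle described in the background above (ultrasound delays at two receivers converted to distances, then intersected to a pen position) can be sketched as follows. This is an illustrative sketch only: the function names, the receiver geometry, and the 343 m/s room-temperature speed of sound are assumptions, not details taken from the cited patents.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C (illustrative assumption)

def pen_position(toa1, toa2, baseline):
    """Triangulate a 2-D pen position from two ultrasound times of arrival.

    Assumes an instantaneous reference (e.g. an infrared pulse) marks t = 0,
    that the receivers sit at (0, 0) and (baseline, 0), and that the pen
    lies in the half-plane y >= 0 (the writing-surface side).
    """
    r1 = SPEED_OF_SOUND * toa1  # distance from receiver 1
    r2 = SPEED_OF_SOUND * toa2  # distance from receiver 2
    # Intersection of the two range circles about the receivers:
    x = (r1 ** 2 - r2 ** 2 + baseline ** 2) / (2.0 * baseline)
    y_squared = r1 ** 2 - x ** 2
    if y_squared < 0:
        raise ValueError("inconsistent ranges: circles do not intersect")
    return x, math.sqrt(y_squared)

# A pen 10 cm from each of two receivers spaced 12 cm apart sits on the
# perpendicular bisector of the baseline: x = 0.06 m, y = 0.08 m.
x, y = pen_position(0.10 / SPEED_OF_SOUND, 0.10 / SPEED_OF_SOUND, 0.12)
```

In a real system the reference time, temperature-dependent speed of sound, and receiver geometry all need calibration; the sketch only shows the geometric core of the triangulation step.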
These problems have implications for the mechanical design of existing acoustic impulse transmitting data entry devices. Apart from that, there are manufacturing problems related to the assembly of the acoustic transmitter and its incorporation in a digital pen or the like. For instance, such problems may arise in connecting an acoustic transmitter to a flexible printed circuit board (PCB). There are also marketing issues, such as differentiation between products by changing their appearance, while keeping the functional parts the same.

There is thus a widely recognized need for, and it would be highly advantageous to have, an apparatus or a method devoid of the above limitations.

SUMMARY OF THE INVENTION

According to one aspect of the present invention there is provided a digital pen comprising: an electric circuit; an acoustic transmitter, detached from said electric circuit, and configured to transmit acoustic signals; and a resilient holder, configured to mechanically press said electric circuit into electrical contact with said transmitter, so as to electrically connect said electric circuit and said transmitter.

Preferably, the resilient holder comprises a base and extensions arising perpendicularly therefrom and configured for location of a first electrical circuit and a second electrical circuit thereon within the confines of a housing, and such as to bring about an electrical contact between said first and second electrical circuits due to said confinement within said housing.

Preferably, the digital pen further comprises a switch assembly having two switching points for pressing said assembly to achieve first and second switching modes respectively, the assembly further having a third mode selectable upon said two switching points being pressed substantially simultaneously.
Preferably, the digital pen further comprises a pen tip, said acoustic transmitter being located in proximity to said pen tip; and a smooth contact switch, configured to smoothly actuate the digital pen upon transmission thereto of a writing pressure from said pen tip.

Preferably, the digital pen comprises an elongated body terminating in a writing tip; a writing element protruding from said writing tip; and a rotating cover, adjacently mounted to said writing tip, covering said writing element upon being rotated in one direction, and exposing said writing element upon being rotated in a direction opposite to said one direction.

There is also disclosed herein a digital pen comprising: an elongated body terminating in a writing tip; a writing element protruding from said writing tip; an acoustic transmitter deployed adjacent to said writing tip, configured to transmit an acoustic signal; and an elongated housing covering said elongated body, said elongated body being movable inside said elongated housing for exposing and for covering said writing element.

Preferably, the digital pen comprises: an acoustic wave guide, positioned adjacent to said acoustic transmitter, said acoustic wave guide comprising a plurality of fins radiating outwardly in a direction away from said acoustic signal transmitter.

There is also disclosed herein a receiving unit for receiving an acoustic signal from a digital pen, comprising: at least two ultrasound receivers, for receiving ultrasound signals from the digital pen; and an electric circuit connected to said ultrasound receivers, and configured to extract ultrasound signals received by said ultrasound receivers, said extraction comprises referencing a reference model comprising data pertaining to expected reference signals.
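The "reference model" extraction mentioned above is not detailed at this point in the text. One common way to extract a known ultrasound waveform from a noisy receiver stream is matched filtering (peak of the cross-correlation against the expected reference signal); the sketch below is an illustrative interpretation of that idea, not the patent's implementation, and the template shape, offsets, and function names are assumptions.

```python
import numpy as np

def detect_arrival(received, template):
    """Estimate the sample index at which a known reference waveform
    (the 'expected reference signal') arrives in a received stream,
    by locating the peak of the cross-correlation (matched filter)."""
    # 'valid' mode slides the template across the stream without padding,
    # so the argmax is directly the arrival offset in samples.
    scores = np.correlate(received, template, mode="valid")
    return int(np.argmax(scores))

# Toy example: a short sinusoidal template buried at offset 40 in noise.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 0.25 * np.arange(16))
stream = 0.1 * rng.standard_normal(128)
stream[40:56] += template
offset = detect_arrival(stream, template)  # recovers the known offset, 40
```

Converting the recovered sample offset to a distance (and then to a position, via triangulation) would additionally require the sample rate and the speed of sound.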
According to another aspect of the present invention there is provided a digital pen system, comprising: a digital pen having an elongated body terminating in a writing tip, a writing element protruding from said writing tip, an electric circuit, an acoustic signal transmitter detached from said digital pen and deployed adjacent to said writing tip and configured to transmit an acoustic signal, and a resilient holder, configured to mechanically press said electric circuit into electrical contact with said transmitter, so as to electrically connect said electric circuit and said transmitter; at least one receiving unit for receiving said acoustic signal from said digital pen; a processor, associated with said at least one receiving unit, configured to process said received acoustic signal for determining presence of said digital pen in a predefined area, and to trigger a predefined functionality upon said determining presence; and a map, configured to graphically map said predefined area, so as to assist a user in positioning the digital pen in said predefined area.

According to yet another aspect of the present invention there is provided a digital pen system comprising: a digital pen having an electric circuit, an acoustic transmitter configured to transmit acoustic signals, detached from said electric circuit, and a resilient holder, configured to press said electric circuit into contact with said transmitter upon applying a mechanical pressure to said resilient holder, so as to electrically connect said electric circuit and said transmitter; at least one receiving unit for receiving said acoustic signals from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signals for determining location of said digital pen.
There is also disclosed herein a digital pen system comprising: a digital pen having an acoustic transmitter, configured to transmit acoustic signals, and a switch assembly having two switching points for pressing said assembly to achieve first and second switching modes respectively, said assembly further having a third mode selectable upon said two switching points being pressed substantially simultaneously; at least one receiving unit, configured to receive said acoustic signals from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signals for determining location of said digital pen.

There is also disclosed herein a digital pen system comprising: a digital pen having an acoustic transmitter, configured to transmit acoustic signals, and a smooth contact switch configured to actuate the digital pen upon applying a pressure on said smooth contact switch; at least one receiving unit, configured to receive said acoustic signals from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said received acoustic signals, for determining location of said digital pen, said smooth contact switch comprises a resilient element, mounted on a first side of an open electric circuit and disconnected from a second side of said electric circuit, said resilient element being compressible into a position where said resilient element contacts said second side of said electric circuit, thereby closing said electric circuit, upon applying a writing pressure compressing said resilient element into said position.
There is also disclosed herein a digital pen system comprising: a digital pen having an acoustic transmitter, configured to transmit acoustic signals; at least one receiving unit, said receiving unit having at least two ultrasound receivers, configured to receive an ultrasound signal from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said ultrasound signals, for extracting location of said digital pen, said extraction comprises referencing a reference model comprising data pertaining to expected reference signals.

There is also disclosed herein a digital pen system comprising: a digital pen having an acoustic transmitter, configured to transmit acoustic signals; at least one receiving unit, said receiving unit having a housing, and at least two acoustic signal receivers positioned inside said housing, less than 65 mm apart from each other, and configured to receive an acoustic signal from said digital pen; and a processor, associated with said at least one receiving unit, configured to process said acoustic signal, for determining location of said digital pen.

There is also disclosed herein a smooth contact switch, comprising: a resilient element, mounted on a first side of an open electric circuit and disconnected from a second side of said electric circuit, said resilient element being compressible into a position where said resilient element contacts said second side of said electric circuit, thereby closing said electric circuit, upon applying pressure compressing said resilient element into said position.
There is also disclosed herein a digital sleeve, mountable on a writing instrument, the digital sleeve comprising: an acoustic signal transmitter, configured to transmit an acoustic signal; and a writing sensor, connected to said acoustic transmitter, configured to detect a predefined movement of the writing instrument in relation to the digital sleeve and to actuate said acoustic signal transmitter upon said detection.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.

Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware, or by software on any operating system of any firmware, or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is herein described, by way of example only, with reference to the accompanying drawings.
With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.

In the drawings:

Fig. 1 is a simplified block diagram illustrating a digital pen, according to a preferred embodiment of the present invention.

Figs. 2a and 2b are exemplary depictions of a resilient holder, deployed inside a digital pen, according to a preferred embodiment of the present invention.

Fig. 3 is an exemplary depiction of a digital pen having a switch assembly comprising two switches, according to a preferred embodiment of the present invention.

Fig. 4a is a simplified block diagram schematically illustrating an exemplary switch assembly mechanical design, according to a preferred embodiment of the present invention.

Fig. 4b is a diagram showing a first exemplary cover element for a switch assembly, according to a preferred embodiment of the present invention.

Fig. 4c is a diagram showing a second exemplary cover element for a switch assembly, according to a preferred embodiment of the present invention.

Figs. 5a-1 and 5a-2 are simplified block diagrams depicting a touch switch, according to a preferred embodiment of the present invention.

Fig. 5b is a simplified diagram illustrating an adhesive having a vent, according to a preferred embodiment of the present invention.

Fig.
6a is a simplified diagram illustrating a first digital pen having a changeable cover element, according to a preferred embodiment of the present invention.

Fig. 6b is a simplified diagram illustrating a second digital pen having a changeable cover element, according to a preferred embodiment of the present invention.

Fig. 6c is a simplified diagram illustrating a third digital pen having a changeable cover element, according to a preferred embodiment of the present invention.

Fig. 6d is a simplified diagram illustrating a fourth digital pen having a changeable cover element, according to a preferred embodiment of the present invention.

Fig. 7a is a simplified block diagram illustrating a first retractable digital pen, according to a preferred embodiment of the present invention.

Fig. 7b is a simplified diagram illustrating a second retractable digital pen, according to a preferred embodiment of the present invention.

Fig. 8a is a simplified block diagram illustrating a second retractable digital pen, according to a preferred embodiment of the present invention.

Fig. 8b is a simplified diagram illustrating a second retractable digital pen, according to a preferred embodiment of the present invention.

Fig. 9 is a simplified block diagram schematically illustrating a digital pen having two acoustic transmitters, according to a preferred embodiment of the present invention.

Fig. 10 is a diagram schematically illustrating a digital sleeve for a writing instrument, according to a preferred embodiment of the present invention.

Figs. 11a-11e are schematic depictions of a digital pen's grating for a writing instrument, according to a preferred embodiment of the present invention.

Fig. 12 is a schematic depiction of a first receiving unit for receiving an acoustic signal from a digital pen, according to a preferred embodiment of the present invention.

Fig.
13 is a schematic depiction of a second receiving unit for receiving an acoustic signal from a digital pen, according to a preferred embodiment of the present invention.

Fig. 14 is a simplified block diagram illustrating a digital pen system, according to a preferred embodiment of the present invention.

Fig. 15 is a simplified block diagram illustrating a decoding unit, according to a preferred embodiment of the present invention.

Fig. 16 is a simplified block diagram illustrating exemplary components of a mathematical model for incorporating into a maximum likelihood detector, according to a preferred embodiment of the present invention.

Fig. 17 is a two-part graph showing an exemplary correlation function, according to a preferred embodiment of the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present embodiments comprise a digital pen, a digital sleeve, a receiving unit, and a digital pen system. The principles and operation of a digital pen, a digital sleeve, a receiving unit, and a digital pen system according to the present invention may be better understood with reference to the drawings and accompanying description.

The present invention attempts to overcome drawbacks of traditional technologies, some of which are described hereinabove in the field and background section. The present invention attempts to improve current technologies by introducing and implementing new ideas into the design of a viable product, be it a digital pen, a digital sleeve, a receiver for acoustic signals transmitted from a digital pen, or a digital pen system.

Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings.
The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.

Reference is now made to Fig. 1, which is a simplified block diagram illustrating a digital pen, according to a preferred embodiment of the present invention.

A digital pen 1000, according to a preferred embodiment of the present invention, includes at least one acoustic transmitter 100, preferably deployed adjacent to the pen's tip, and an electric circuit 110, such as a flexible printed circuit board (PCB), which includes a connection to an electric power source, such as a miniature battery.

Preferably, the acoustic transmitter 100 is an ultrasound transducer. Optionally, the ultrasound transducer is a piezoelectric transducer, which converts electrical energy into ultrasound signals. Piezoelectric crystals have the property of changing size when an electric voltage is applied to them. By applying an alternating electric voltage (AC) to a piezoelectric crystal, the crystal is caused to oscillate at very high frequencies, producing ultrasound signals comprised of very high frequency sound waves. Preferably, the ultrasound transducer is made of Polyvinylidene Fluoride (PVDF), which is a flexible plastic polymer bearing piezoelectric properties.

The acoustic transmitter 100 is electrically connected to the circuit 110, which may be detached from the transmitter 100, say to allow assembling an ink refill inside the pen. However, the acoustic transmitter 100 is too sensitive to allow heating for soldering, for electrically connecting the transmitter 100 to the electric circuit 110, or even the attachment of plastic to the transmitter 100. Though screwing the transmitter in place is an option, it is not suitable for fast, high volume production.
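As a small numerical illustration of the AC-drive principle described above (an alternating voltage makes the piezoelectric crystal oscillate and emit ultrasound), a transmit burst might be synthesized as below. The 40 kHz frequency, the 1 MHz sample rate, the burst length, and the function name are illustrative assumptions; the patent does not specify drive parameters.

```python
import math

def ultrasound_burst(freq_hz=40_000.0, cycles=8, sample_rate=1_000_000.0):
    """Synthesize one sinusoidal drive burst: an alternating voltage at an
    ultrasonic frequency, sampled so it can be fed to a DAC/amplifier that
    drives the piezoelectric transducer."""
    n_samples = int(cycles * sample_rate / freq_hz)
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

burst = ultrasound_burst()  # 8 cycles at 40 kHz sampled at 1 MHz → 200 samples
```

In practice the burst would be shaped (windowed) and amplified to the transducer's rated drive voltage; the sketch only shows the waveform-generation step.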
A preferred embodiment of the present invention attempts to overcome the difficulty in electrically connecting the transmitter 100 and the electric circuit 110, using a resilient holder 120. The resilient holder 120 presses the electric circuit 110 into contact with the acoustic transmitter 100, upon applying a mechanical pressure on the resilient holder 120, so as to electrically connect the electric circuit 110 and the acoustic transmitter 100.

Optionally, the resilient holder 120 is further configured to push the flexible PCB into a position which allows the placement of components, such as IR transmitters, in certain positions. Preferably, the resilient holder 120 allows the digital pen 1000 to be smaller than digital pens known in the art.

Reference is now made to Fig. 2a, which is an exemplary depiction of a resilient holder deployed inside a digital pen, according to a preferred embodiment of the present invention.

A digital pen 2000, according to a preferred embodiment of the present invention, has an elongated body terminating in a writing tip, a writing element 220 protruding from the writing tip, an acoustic transmitter 210 deployed adjacent to the writing tip, and an electric circuit 240, such as a flexible PCB (flexible printed circuit board) or conductors.

The digital pen's writing element may be, but is not limited to, an ink refill, a pencil tip, a marker, etc. The digital pen may also include an eraser. The digital pen may also allow a user to change the color of writing, say using the switch assembly described in detail herein below. Optionally, the writing element is rather a sharpened tip which does not physically write.
Preferably, the digital pen 2000 further includes a resilient holder 250 pressing the flex PCB 240 (or the conductors) into contact with the acoustic transmitter 210, for electrically connecting the flex PCB 240 and the acoustic transmitter 210 (and to the pen body). Preferably, the resilient holder 250 may be made of conductive material, in order to increase electrical conductivity between the flex PCB 240 and the acoustic transmitter 210. Optionally, the electric conductivity between the flex PCB 240 and the acoustic transmitter 210 may be increased by deploying gold contacts on the resilient holder 250.

The resilient holder 250 secures electrical contact between the flex PCB 240 and the acoustic transmitter 210, optionally as a result of mechanical pressure applied on the resilient holder 250, say from the pen's housing.

In a preferred embodiment, the resilient holder 250 is in the shape of a "U": solid above and open below, such that the pen body is kept tight by the bottom part of the resilient holder 250. The "U" shaped resilient holder 250 comprises a base and extensions arising perpendicularly from the base, and is configured for location of the flex PCB 240, or any other first electric circuit, and a second electric circuit, within the confines of a housing. The resilient holder 250 brings about an electric contact between the two electric circuits, due to the confinement within the housing, thereby connecting the two circuits. By connecting the two circuits, the resilient holder 250 electrically connects the flexible PCB 240 and the acoustic transmitter 210.

Preferably, the resilient holder 250 has some elasticity, so as to enable easy assembly, by putting all the parts in place and sliding the resilient holder 250 into position. The mechanical force is kept by elastic lugs on the upper part consisting of the base described hereinabove.
The elastic lugs push the holder up, while the bottom part is secured to the pen's body, as explained hereinabove.

Reference is now made to Fig. 2b, which shows an exemplary digital pen having a resilient holder, according to a preferred embodiment of the present invention.

A digital pen 2000 has a resilient holder 290, as described hereinabove. The resilient holder 290 further has leaf springs 292. The leaf springs 292 are configured to apply pressure on an acoustic transmitter's ribbon, thereby connecting the acoustic transmitter to the flexible PCB 295, as described hereinabove.

Reference is now made to Fig. 3, which is an exemplary depiction of a digital pen having a switch assembly comprising two switches, according to a preferred embodiment of the present invention.

A digital pen 3000, according to a preferred embodiment of the present invention, includes at least one acoustic transmitter 310, preferably an ultrasound transducer. The digital pen 3000 further comprises a switch assembly 320 having at least two switches. The digital pen 3000 has a certain mode which a user may select by pressing at least two of the switches substantially simultaneously.

Optionally, the switch assembly 320 is mechanically designed, according to techniques known in the art, with a position associated with the certain mode of the pen. Preferably, the position is accessible only when the user presses the two switches of the switch assembly 320 simultaneously, or almost simultaneously.

Reference is now made to Fig. 4a, which is a block diagram schematically illustrating an exemplary switch assembly mechanical design, according to a preferred embodiment of the present invention.

A switch assembly, according to a preferred embodiment of the present invention, has two switches 410, 420, mounted on a switching rod 450. The switching rod is balanced about a fulcrum 470. Preferably, the fulcrum 470 is urged up by a spring.
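In software terms, the mode selection of such a two-switch assembly (first switch, second switch, or a third mode when both are pressed substantially simultaneously) can be sketched as follows. The time window used to judge "substantially simultaneously" is an illustrative assumption; the patent leaves the tolerance to the mechanical or electrical design.

```python
def resolve_mode(t_first, t_second, window_s=0.05):
    """Map press timestamps (seconds since start, None if not pressed)
    to a switch-assembly mode.

    Mode 1: only the first switch pressed.
    Mode 2: only the second switch pressed.
    Mode 3: both pressed within `window_s` of each other
            ("substantially simultaneously").
    """
    if t_first is not None and t_second is not None:
        if abs(t_first - t_second) <= window_s:
            return 3
        # Presses far apart in time: treat the earlier one as intended.
        return 1 if t_first < t_second else 2
    if t_first is not None:
        return 1
    if t_second is not None:
        return 2
    return None  # idle, no switch pressed

# Both switches pressed 10 ms apart select the third mode:
mode = resolve_mode(1.000, 1.010)  # → 3
```

The mechanical design in Fig. 4a achieves the same effect without software: the fulcrum only yields when pressure arrives on both switches at once.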
A user may push the first switch 410, thus putting the assembly in a first position (a), or push the second switch 420, thus putting the assembly in a second position (b), or toggle between the two positions (a, b). When a user pushes the two switches 410, 420 at the same time, or almost at the same time, the pressure applied on the two switches 410, 420 simultaneously pushes the fulcrum 470 against the spring, and puts the assembly in a third position (c), where both switches are pressed. The third position may be associated with a certain mode, as described hereinabove.

Optionally, the switch assembly 320 is electrically designed, according to techniques known in the art, with a position associated with the certain mode of the pen. The position is accessible only when the user presses the two switches of the switch assembly 320 substantially simultaneously.

Reference is now made to Fig. 4b, which is a block diagram showing a first exemplary cover element for a switch assembly, according to a preferred embodiment of the present invention.

Switch assembly 320 may further have a cover element 4000. The cover element 4000 has right and left protrusions 4100 (or regressions), guiding a user to press one of the two switches 410, 420, as explained in greater detail hereinabove. The cover element 4000 further has a central protrusion 4200 (or regression), guiding the user to apply a pressure substantially simultaneously on the two switches 410, 420, thus bringing the switch assembly into the third position, described hereinabove.

Reference is now made to Fig. 4c, which is a block diagram showing a second exemplary cover element for a switch assembly, according to a preferred embodiment of the present invention.

Cover element 4500 is mounted on the switch assembly, such that the protrusions (regressions) are positioned above the two switches 4520, described hereinabove.

Reference is now made to Fig.
5a, which is a simplified block diagram depicting a touch switch, according to a preferred embodiment of the present invention.

A digital pen according to a preferred embodiment of the present invention includes a smooth contact switch, configured to actuate the digital pen upon applying a mechanical pressure on the smooth contact switch. The smooth touch switch is assembled inside the digital pen, such that the mechanical pressure is applied on the smooth contact switch when the pen touches a surface, such as a sheet of paper, say when a user writes using the digital pen.

Preferably, the applied pressure may be very small, preferably less than twenty five grams. More preferably, the switch activation travel distance is very small (say, less than 0.1 mm), and is not sensed by the user who uses the pen.

The smooth touch switch may be mounted on an open electric circuit, such as a flexible printed circuit board (PCB), a regular circuit, two detached circles of conductive material, etc. As a result, no wires or ribbons are needed to connect the switch to the electrical circuit.

In a preferred embodiment, a concentric adhesive 510 with electrical conduction properties is applied on a resilient element 530, electrically connected to one side of an open electric circuit.

The upper part of the smooth touch switch is the flat and compressible resilient element 530 having conducting properties, mounted on the adhesive 510, as shown in a cross sectional view (5a-2) along the AA line of the bottom view (5a-1). Optionally, the resilient element is made of a conductive material, or the conductive properties are given to the resilient element 530 by adding an additive, such as a conductive ink or glue, to the resilient element 530.
A pressure in the center of the upper part of the resilient element 530 of the smooth touch switch compresses the resilient element 530 into a position forming an electric path from a second side of the open electric circuit, through the resilient element 530, through the concentric adhesive 510, and to the first side of the electric circuit, thus closing the electric circuit, thereby actuating the digital pen. The actuating pressure is controlled by the thickness and inner diameter of the concentric adhesive 510, and by the thickness of the resilient element 530. Optionally, the resilient element is made of Polyethylene Terephthalate (PET) material covered with conductive ink, and the concentric adhesive is a very thin layer, of no more than 0.1 mm, made of 3M™ Z-Axis, or similar products. Optionally, the resilient element 530 is made of conductive metal. Reference is now made to Fig. 5b, which is a simplified block diagram illustrating an adhesive having a vent, according to a preferred embodiment of the present invention. Preferably, the adhesive 5100 used in the smooth touch switch, described hereinabove, includes vent holes 5150, for relieving air pressure trapped, upon compressing the resilient element 530, inside the cavity formed by the resilient element 530, the adhesive, and the electric circuit closed by the smooth touch switch. Reference is now made to Fig. 6a, which is a simplified diagram illustrating a first digital pen having a changeable cover element, according to a preferred embodiment of the present invention. Preferably, a digital pen according to a preferred embodiment of the present invention has an inner structure which holds the functional parts together, and a housing having a changeable cover element (skin). The inner part may hold an acoustical transducer, IR emitters, an electric circuit such as a flexible PCB, switches, etc. 
The housing covers the inner part and has some mechanical interfaces which allow its connection to the inner part. The housing may have additional functional properties, such as a battery holder. In a preferred embodiment, there is introduced a variety of colorful and fashioned changeable cover elements, thus providing a range of covers (skins) for the digital pen. Optionally, a manufacturer of the pen assembles the pen with one cover element of a variety of cover elements, and the end user does not change the cover element. Preferably, an end-user is allowed to change the cover element of the housing, thus giving the digital pen a different appearance and a different feel or texture. For example, a digital pen 6100 has a housing which includes a central changeable cover element (skin) 610, connected to a battery support chassis 611 on one side, and to a pen tip 612, on the other side. Optionally, the central changeable cover element (skin) 610 is connected to the battery support chassis 611 and the pen tip 612 utilizing snap locks 615, visible or hidden, as known in the art. Reference is now made to Fig. 6b, which is a simplified diagram illustrating a second digital pen having a changeable cover element, according to a preferred embodiment of the present invention. A digital pen 6200 has a housing which includes a changeable cover element (skin) 6210, connected to a pen tip 6220. Reference is now made to Fig. 6c, which is a simplified diagram illustrating a third digital pen having a changeable cover element, according to a preferred embodiment of the present invention. A digital pen 6300 has a housing which includes a changeable cover element (skin) 6310, connected to a battery cover 6320. Reference is now made to Fig. 6d, which is a simplified diagram illustrating a fourth digital pen having a changeable cover element, according to a preferred embodiment of the present invention. 
A digital pen 6400 has a housing which includes an upper changeable cover element (skin) 6410, connected to a lower cover 6420. Preferably, the digital pen is a retractable digital pen, allowing the writing element at the tip of the pen to be covered, say the tip of an ink cartridge deployed inside the pen (or a stylus, or a pencil). Reference is now made to Fig. 7a, which is a simplified block diagram illustrating a first retractable digital pen, according to a preferred embodiment of the present invention. According to a preferred embodiment, the digital pen 700 has a rotating part 710 which moves forward or backwards when a user rotates the part 710. The rotating part 710 moves forward and covers a writing element 720, protruding from the tip 715 of the digital pen 700, when the user rotates the part 710 in one direction. The rotating part 710 moves backwards, and exposes the writing element 720, as the user rotates the part 710 in an opposite direction. Optionally, the rotational movement of the rotating part 710 is transformed into a linear movement where the rotating part 710 moves forward, for covering the writing element 720, or backwards, for exposing the writing element 720. The transformation may be facilitated by a helical track, guiding the rotating part 710, as known in the art. Reference is now made to Fig. 7b, which is a simplified diagram illustrating a second retractable digital pen, according to a preferred embodiment of the present invention. A digital pen's housing includes a skin 7120 and a retractable tip 7110. The retractable tip 7110 is connected by a spiral mechanism 7100 to the skin 7120. The spiral mechanism 7100 causes a linear movement of the tip in and out. The rotational motion is applied by the user between the tip 7110 and the skin 7120. Reference is now made to Fig. 8a, which is a simplified block diagram illustrating a third retractable digital pen, according to a preferred embodiment of the present invention. 
A retractable digital pen 800 comprises an elongated housing 805, covering an elongated body 820 terminating in a writing tip, wherefrom a writing element 810, such as the tip of an ink refill, protrudes. The elongated body 820 may be moved forward, to expose the writing element 810, and backwards, to cover the writing element inside the housing 805 of the digital pen 800. Optionally, the elongated body 820 is urged backwards by a spring 830, thus pushing the elongated body 820 into a position where the writing element 810 is covered by the housing. Preferably, the elongated body 820 is securable into a position where the writing element 810 is exposed, by a securing means 850. Optionally, a snap, a lock, etc., may be used for locking the elongated body on the edge of the housing 805. Optionally, a digital pen according to a preferred embodiment may have moving parts, such as a refill, a skeleton, a tip, a battery housing, any other part, or a combination thereof. The movement between the moving parts may be facilitated utilizing designs similar to the designs described above with reference to Figs. 7-8. According to a preferred embodiment of the present invention, several infrared (IR) emitters are placed at several points on the digital pen, for more robustness. As a result, if one of the IR emitters is covered, say by the hand of a user while holding the digital pen, the other emitters maintain the link with a receiver. Examples of the possible points on the digital pen where the IR emitters may be deployed include, but are not limited to: the bottom part of the digital pen, the upper part, the top of the pen, a flexible PCB installed in the digital pen (as described hereinabove), etc. Preferably, the housing of the digital pen includes a soft material such as rubber, so as to provide better convenience for a user holding the digital pen. Reference is now made to Fig. 
8b, which is a simplified block diagram illustrating a fourth retractable digital pen, according to a preferred embodiment of the present invention. A digital pen may have a retractable skeleton 7250, pushed by a button 7270 mounted on top of the digital pen, utilizing a locking mechanism 7200. Reference is now made to Fig. 9, which is a block diagram schematically illustrating a digital pen having two acoustic transmitters, according to a preferred embodiment of the present invention. A digital pen 900 may have two acoustic transmitters 930. Installing two acoustic transmitters in a digital pen may have several advantages, which may include, but are not limited to, the following: 1) Allowing the receiver to estimate the five dimensional (5D) location of the pen, which includes the three dimensional location and leaning angles of the digital pen, or a six dimensional (6D) location of the pen, which includes the five dimensional (5D) location as well as data relating to rotation of the digital pen. 2) Estimating more accurately the writing element's position, and compensating for the distance difference between the transducers and the position of the writing element. 3) Allowing gaming functions, using the digital pen as a joystick. Reference is now made to Fig. 10, which is a diagram schematically illustrating a digital sleeve for a writing instrument, according to a preferred embodiment of the present invention. The digital sleeve 1000 comprises an acoustic signal transmitter, for transmitting an acoustic signal. The digital sleeve 1000 may also comprise an electric circuit, a power source, or other elements, as described for a digital pen hereinabove. A digital sleeve 1000, according to a preferred embodiment, may be mounted on a regular writing instrument 1100, such as a pen, a pencil, a marker, etc. According to a preferred embodiment of the present invention, the digital sleeve 1000 may be worn on a finger. 
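Advantages 1) and 2) above can be sketched numerically: once a receiver has resolved the 3D positions of the two transducers (a hypothetical input here), the line through them gives the pen's leaning angle, and extrapolating along it compensates for the transducer-to-tip offset. The function names and the fixed tip offset are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def estimate_tip_position(upper_xyz, lower_xyz, tip_offset):
    """Extrapolate the writing-tip position along the pen axis defined by
    the 3D positions of the two acoustic transmitters."""
    upper = np.asarray(upper_xyz, dtype=float)
    lower = np.asarray(lower_xyz, dtype=float)
    axis = lower - upper
    axis /= np.linalg.norm(axis)          # unit vector pointing toward the tip
    return lower + tip_offset * axis

def tilt_angle_deg(upper_xyz, lower_xyz):
    """Leaning angle of the pen axis relative to the vertical (z) axis."""
    axis = np.asarray(lower_xyz, dtype=float) - np.asarray(upper_xyz, dtype=float)
    cos_t = abs(axis[2]) / np.linalg.norm(axis)
    return float(np.degrees(np.arccos(cos_t)))
```

For a vertical pen the tilt is zero and the tip lies directly below the lower transducer; rotation (the sixth dimension) would require an asymmetry between the two transmitters and is not modeled in this sketch.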
For example, Epos Technologies™ provides a stylus-at-your-fingertip product. Preferably, the digital sleeve 10000 further includes a writing sensor 10200. The writing sensor 10200 is configured for detecting a movement (or a friction) of the writing device 10100, relative to the digital sleeve 10000 mounted thereon. That is to say, as a user holding a pen mounted with the sleeve 1000 starts writing with the pen, a relative movement (or friction) occurs between the pen touching a paper and the sleeve 1000. The relative movement (or friction) is sensed by the writing sensor 10200. The writing sensor 10200, in turn, actuates the acoustic transmitter, through electric circuitry. Then, the acoustic transmitter transmits the acoustic signals, say to a receiving unit, as described in greater detail for a digital pen system herein below. Reference is now made to Fig. 11a-e, which are schematic depictions of a digital pen's grating for a writing instrument, according to a preferred embodiment of the present invention. Typically, an acoustic transmitter, specifically an ultrasound transducer, has some irregularities. The irregularities make the transducer not entirely omnidirectional. The irregularities result from a part of the transducer having an inherent defect, because the transducer is made from a rectangular foil laminated to form a cylinder. The lamination forms a passive part which does not radiate acoustic energy. The inherent defect causes the signal in front of the defect to be much weaker than in front of other parts of the ultrasound transducer. Typically, the position of the digital pen is determined utilizing an algorithm, based on a measurement of TOA (time of arrival) of the acoustic signals from the acoustic transmitter. Usually the algorithm compares the TOA of the signals with IR signals transmitted from the digital pen. 
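The TOA comparison can be sketched as follows: since the IR pulse travels at the speed of light, its arrival effectively marks the transmission instant, so the acoustic time of flight is the gap between the IR and ultrasound arrivals. A minimal sketch; the speed-of-sound constant and function name are assumptions for illustration.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

def toa_distance(t_ir_arrival, t_acoustic_arrival, speed=SPEED_OF_SOUND_M_S):
    """Pen-to-receiver distance from the time-of-arrival gap between the
    (effectively instantaneous) IR sync pulse and the acoustic signal."""
    time_of_flight = t_acoustic_arrival - t_ir_arrival
    if time_of_flight < 0:
        raise ValueError("acoustic signal cannot arrive before the IR sync pulse")
    return speed * time_of_flight
```

A 1 ms gap corresponds to about 34 cm, so millimetre-level resolution requires timing the gap to within a few microseconds.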
As a result of the inherent defect, the sum of the signals received at a given point in the space surrounding the acoustic transmitter has a phase shift, in comparison to other points at a similar distance away from the acoustic transmitter. A digital pen, according to a preferred embodiment, includes an acoustic wave guide, positioned adjacent to an acoustic transmitter of the digital pen. Preferably, the acoustic wave guide comprises a plurality of fins 1110 radiating outwardly in a direction away from the acoustic signal transmitter. More preferably, the fins 1110 are positioned so as to spatially divide the space surrounding the acoustic transmitter into directional sectors. The fins 1110 substantially isolate acoustic signals transmitted by the acoustic transmitter through one of the sectors from acoustic signals transmitted from the acoustic transmitter through the other sectors. That is to say, to eliminate the shift in location, the fins 1110 are positioned so as to divide the space around the acoustic transmitter into sectors, such that each sector is decoupled or isolated from the other sectors. As a result of the division of the space around the acoustic transmitter into significantly isolated sectors, the phase shift is significantly eliminated. The elimination of the phase shift may improve the results of acoustic signal correlation based position decoding techniques. However, the amplitude of the sum of signals transmitted through each point in one of the sectors around the acoustic transmitter is reduced, as signals from the other sectors are significantly eliminated from the sector. Optionally, the grating around the acoustic transmitter may be designed differently than the above described fin design. 
For example, the grating may comprise a spiral opening keeping a single opening, a grating coming upwards combined with a grating coming downwards (keeping an opening to free air in between), etc. According to a preferred embodiment of the present invention, there is provided a receiver configured to receive acoustic signals transmitted from a digital pen, to be used for determining the location of the digital pen, say for automatically digitizing hand writing carried out using the digital pen. Reference is now made to Fig. 12, which is a schematic depiction of a first receiving unit for receiving an acoustic signal from a digital pen, according to a preferred embodiment of the present invention. A receiving unit 1200, configured to receive acoustic signals from a digital pen, may have a metal plate 1210 mounted on the body 1220 of the receiving unit, for securing the receiving unit 1200 to a sheet of paper. Pressing one end 1210-a of the metal plate makes the other end 1210-b open a gap between the other end 1210-b and the body 1220 of the receiving unit 1200. Through the opened gap, a sheet of paper may be inserted between the plate's end 1210-b and the body 1220 of the receiving unit 1200. Releasing the pressed end 1210-a of the metal plate makes the other end 1210-b return to its natural position and exert a force on the paper sheet, which is pressed between the plate's end 1210-b and the body 1220 of the receiving unit 1200. The metal plate 1210 and the body 1220 of the receiving unit 1200 may have additional non-flat surface properties (such as rubber pads) which allow more friction between the paper and the receiving unit's body 1220. Preferably, the metal plate 1210 may be shaped so as to cause a slight deformation of the paper, in order to have a better grip of the paper sheet. One or more receiving unit(s) 1200 may be fitted on the paper sheet's center, or on the sheet's edges. 
Preferably, the receiving unit body 1200 and plate have stoppers 1212 that fit the 90 degrees of a paper sheet's corner (and hold the receiving unit at 45 degrees). The placement of the receiving unit 1200 on the corner of the paper sheet, instead of on the sheet's middle, has several benefits, such as: repeatability; accuracy - a receiving unit placed on the corner has a better perspective, improving its accuracy; and fewer dead zones - the operating angle of a receiving unit placed at the corner of the paper sheet is much smaller than when a receiving unit is placed in the middle of the paper. Reference is now made to Fig. 13, which is a schematic depiction of a second receiving unit for receiving an acoustic signal from a digital pen, according to a preferred embodiment of the present invention. A receiving unit 1300, according to a preferred embodiment of the present invention, includes two microphones 1330. Optionally, the two microphones are ultrasound receivers, as known in the art. Preferably, the two microphones are electret microphones, or alternatively MEMS microphones. Electret microphones are miniature microphones that work on condenser microphone principles, as known in the art, but have permanently charged polymer diaphragms. Electret microphones have miniature preamplifiers built in, and require low voltage direct current (DC) power (typically from a 1.5 to 18 volt battery). Electret microphones are widely used in hand held devices, such as mobile computer games, mobile phones, etc. The receiving unit 1300 further includes an electric circuit. The electric circuit is configured to extract the ultrasound signal received by the microphones 1330, say by implementing frequency down conversion, signal filtration, signal amplification techniques, or other methods. Some of the methods used by the electric circuit are described in greater detail in the applicant's International Application No. 
PCT/IL03/00309, entitled "Method and system for obtaining positional data", filed on April 14, 2003. According to a preferred embodiment of the present invention, the two microphones 1330 are positioned at a distance of less than 65 mm from each other. The signals received from the two microphones 1330, positioned less than 65 mm away from one another, may be processed for generating positional data relating to the digital pen. The processing may be carried out using decoding methods, say utilizing models of the transmitted and received signals, as described in greater detail herein below. According to a preferred embodiment of the present invention, a processor, connected with one or more receiving unit(s), is configured to process acoustic signals, received at the receiving unit(s), for determining presence of the digital pen in a predefined area. Preferably, the processor may be configured to trigger a predefined functionality when a user places the digital pen in a predefined area. Optionally, the user may be provided a printed map or menus, and position the receiving unit(s) on the map or menus. When the user positions the digital pen on an icon, representing the predefined area, printed on the paper, the digital pen is present in the predefined area. Consequently, the predefined functionality is triggered by the processor. For example, the user may be provided a printed menu having drawn icons such as an eraser, a marker, etc. The user may deploy the receiving unit(s) on the printed menus. If the user places the digital pen on the eraser icon, the processor switches into an erasing mode and the digital pen functions as an eraser. If the user places the digital pen on the marker icon, the processor switches into a marker mode and the digital pen functions as a marker. Preferably, the housing 1320 of the receiving unit 1300 is used as an assembly jig. 
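With two range estimates from the closely spaced microphones, a planar pen position can be recovered by intersecting two circles. A minimal sketch, assuming microphones at (0, 0) and (baseline, 0) and a pen in the half plane in front of the unit; this generic solver is an illustration, not the application's decoding method.

```python
import math

def position_from_ranges(r1, r2, baseline):
    """2D position from two ranges to microphones at (0, 0) and (baseline, 0),
    taking the intersection with y > 0 (the pen is in front of the unit)."""
    x = (r1 ** 2 - r2 ** 2 + baseline ** 2) / (2.0 * baseline)
    y_sq = r1 ** 2 - x ** 2
    if y_sq < 0:
        raise ValueError("inconsistent ranges for this baseline")
    return x, math.sqrt(y_sq)
```

Note that a baseline under 65 mm makes this plain geometric solution sensitive to range errors for distant pens, which presumably motivates the model-based decoding methods described below.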
A worker assembling the receiving unit 1300 may insert the microphones 1330 into their position inside the body 1320, and solder a printed electric circuit board (PCB) into a position inside the body 1320. The worker may then connect the PCB to the microphones 1330. Optionally, the receiving unit 1300 may be removably attached to another item, such as a paper clipboard used by a student, etc. Preferably, the housing 1320 of the receiving unit 1300 includes a changeable cover element. The changeable element may provide a user of the receiving unit 1300, a manufacturer of the receiving unit 1300, or both, with the option to change the color and appearance of the receiving unit 1300. Optionally, the housing 1320 of the receiving unit 1300 may also house a serial interface cable, rolled in and out from the housing. Preferably, a connector at the end of the interface cable may be clipped to the housing 1320. The housed interface cable helps to keep the receiving unit compact. Reference is now made to Fig. 14, which is a simplified block diagram illustrating a digital pen system, according to a preferred embodiment of the present invention. A digital pen system 1400 includes a digital pen 1410, and one or more digital pen receiver(s) 1420, as described in greater detail hereinabove. The system 1400 further includes a processor 1450, communicating with the receiving unit(s) 1420. The processor 1450 is configured to process acoustic signals, transmitted from the digital pen 1410 and received by the receiving unit(s) 1420. Through the processing of the received acoustic signals, the processor 1450 determines the location of the digital pen 1410. Optionally, the processing further includes determining the presence of the digital pen 1410 in a predefined area, and triggering a predefined functionality upon the determined presence in the predefined area, as described hereinabove. 
According to a preferred embodiment of the present invention, the location of the digital pen is determined from the acoustic signals transmitted from the digital pen utilizing a decoding algorithm. The decoding algorithm may be implemented in a decoding unit 1470. The decoding unit 1470 may be implemented as a part of the processor 1450, as a part of a device communicating with the processor 1450, as a part of the receiving unit(s) 1420, etc. Reference is now made to Fig. 15, which is a simplified block diagram illustrating a decoding unit, according to a preferred embodiment of the present invention. A decoding unit 70 includes a correlator 71, a maximum likelihood detector 72, which uses a mathematical signal model 77 of the channel, a path estimator 73, and a transmitter timing estimator 76. The maximum likelihood detector 72 generates most likely distance data, relating to the distance of the digital pen from a receiving unit, based on the acoustic signals received from the digital pen, and feeds the path estimator 73 with the most likely distance data. The maximum likelihood detector 72 estimates the transmitter position and feeds the path estimator 73 with several options for the location of the transmitter, each option having a probability associated therewith. The path estimator 73 further uses previously calculated possible positions (and their probabilities) from a sampling bank 75, provided by the transmitter timing estimator 76, in order to choose the right estimated coordinates 74 of the position of the transmitter. The decoding algorithm is used to convert digitized versions of the digital pen's acoustic signals into position coordinates for passing to a local computer operating system, a computer application, or the like. The decoding algorithm preferably takes into account the relatively low sampling frequency capabilities likely to be available, by carrying out frequency down conversion. 
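The role of the correlator 71 can be illustrated with a standard matched-filter TOA estimate: slide the known transmitted template over the received record and take the lag of the correlation peak. This is a generic sketch of correlation-based decoding; the sampling rate and burst shape below are invented for the example.

```python
import numpy as np

def correlate_toa(received, template, fs):
    """Time of arrival (seconds) as the lag of the cross-correlation peak
    between the received record and the known template."""
    corr = np.correlate(received, template, mode="valid")
    return int(np.argmax(np.abs(corr))) / fs

# Usage sketch: a short ultrasonic burst buried in a longer record
# at a 200-sample delay.
fs = 44100
template = np.sin(2 * np.pi * 20000 * np.arange(64) / fs)
received = np.zeros(1000)
received[200:264] = template
```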
Preferably, the path estimator 73 uses known in the art methods of interpolation, for compensating for the relatively low sampling rate. In addition, the algorithm preferably includes an ability to handle noise. The algorithm is preferably adapted for other specific issues involved in the handling of the acoustic signals transmitted from the digital pen. Traditional position location methods concentrate on the use of very short and energetic acoustic signals as the location signal. In order to achieve good resolution, the traditional methods dictate high sampling frequencies, typically higher than 400 KHz, in order to be able to find such short location signals and not miss them entirely. By contrast, the present embodiments preferably do not use sampling rates higher than 44.1 KHz, since such frequencies are incompatible with the installed base of sound processing equipment, such as the electret microphones. Furthermore, it is recommended to keep the beacon signal sound frequency higher than 20 KHz, that is, within the ultrasonic range, so that users do not hear it. In another preferred embodiment of the invention, the sampling rate may be higher than the 44.1 KHz, say 100 KHz. This is possible by a receiving unit which is configured for a high sampling rate. The higher sampling rate enables better noise rejection of the audio band and a higher bandwidth of the transmitted signal. A preferred embodiment of the present invention uses a solution in which data is modulated over an ultrasonic carrier signal or waveform. The data can be frequency modulated (FM), or phase modulated (PM), onto the carrier comprising the ultrasonic signal. Optionally, other known methods may be used. The decoding algorithm preferably decodes the modulated signal and reconstructs the original position-information bearing signal from the results of sampling thereof. 
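The FM-over-ultrasonic-carrier scheme and the receiver's frequency down conversion can be sketched as follows. The carrier frequency, deviation, and function names are illustrative assumptions, and a real receiver would follow the mixer with a low-pass filter before resampling.

```python
import numpy as np

def fm_modulate(data, fc, fdev, fs):
    """Frequency modulation: instantaneous frequency fc + fdev*data(t),
    integrated (cumulative sum) into a phase."""
    inst_freq = fc + fdev * np.asarray(data, dtype=float)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.cos(phase)

def down_convert(signal, fc, fs):
    """Complex mix of the band-limited ultrasonic signal down to baseband."""
    n = np.arange(len(signal))
    return signal * np.exp(-2j * np.pi * fc * n / fs)
```

With zero data the modulator reduces to a pure carrier, which is a convenient sanity check for the phase integration.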
In the present embodiment, it is preferred to use band-limited signals in order to achieve a desired resolution level. Preferably, continuous wave (CW) modulations, such as spread spectrum and frequency hopping, are used in acoustic position finding, to overcome reverberation and multi-path effects. A preferred embodiment of the present invention uses the maximum likelihood detector 72, for decoding the signals received from the receiving units, to determine the distances of the digital pen from the individual receiving unit(s). At the maximum likelihood detector 72, the acoustic signals received from the receiving units are compared to reference signals in a look-up table (LUT) 68. The comparison indicates a most likely signal, and from the most likely signal, a distance is determined, as the distance from which the signal was most likely transmitted. The maximum likelihood detector 72 preferably uses a full mathematical signal model 77 of the channel, against which to compare received signals, so that a best match distance can be found. As an alternative, the expected waveform can be sampled at the Nyquist rate, and any timing mismatch between the sampling points can be overcome by extrapolation functions, to reveal the distance. Reference is now made to Fig. 16, which is a simplified block diagram illustrating exemplary components of a mathematical model for incorporating into a maximum likelihood detector, according to a preferred embodiment of the present invention. The model 20 comprises an initial signal sequence S(t), generated in the signal generator, which is fed into the transfer function of the acoustic transmitter 26 with its filter 25. The digital pen 14 is followed by the channel 27. The result is then fed to the reception path in the receiver, which includes a transfer function 29 for the ultrasound receiver, and filtering 30. 
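The LUT comparison amounts to a nearest-template search: score the received signal against the expected waveform stored for each candidate distance and return the best match. The normalized-correlation scoring and toy waveforms below are illustrative assumptions; in the application the reference signals would come from the full channel model 77.

```python
import numpy as np

def most_likely_distance(received, lut):
    """Candidate distance whose expected waveform best matches the received
    signal, scored by normalized correlation (cosine similarity)."""
    rx = received / np.linalg.norm(received)
    def score(ref):
        return float(np.dot(rx, ref / np.linalg.norm(ref)))
    return max(lut, key=lambda d: score(lut[d]))

# Toy LUT: each candidate distance maps to a distinct expected waveform.
t = np.arange(100) / 1000.0
lut = {d: np.sin(2 * np.pi * (50.0 + 300.0 * d) * t) for d in (0.1, 0.2, 0.3)}
```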
The full modeling of the channel is useful in the design of the maximum likelihood detector 72, in that it allows accurate expected signals to be constructed, against which the received acoustic signals, ideally, differ only in phase. The detector (estimator) 70 is then relatively easily able to distinguish the most likely signal, which in turn corresponds to the most likely distance of the digital pen from the receiving unit. Preferably, the infrared (IR) signals transmitted from the IR transmitters, spread on the face of the digital pen, are used to set the start of the delay, and also to synchronize clocks between the digital pen and the receivers. In Fig. 15, the synchronization path 76 is also indicated on the model. A skilled person will appreciate that acoustic signals have differing angular transfer functions. An equalizer may be used in order to compensate for this fact. The skilled person will appreciate that, instead of a model, a look-up table may be used. Furthermore, other detectors may be used, and there are several known decoders of FM signals, such as a PLL (an electronic circuit that consists of a phase detector, a low pass filter and a voltage-controlled oscillator), I/Q demodulation, phase multiplication, etc. Reference is briefly made to Fig. 17, which is a two-part graph showing an exemplary correlation function, according to a preferred embodiment of the present invention. The top part 1710 of the graph shows the function, and the lower part 1720 of the graph is an enlarged or zoomed view of the upper central part of the graph. It is expected that during the life of this patent many relevant devices and systems will be developed, and the scope of the terms herein, particularly of the terms "Digital", "Pen", "Acoustic transmitter", "Ultrasound transducer", "Microphone", and "Processor", is intended to include all such new technologies a priori. 
Additional objects, advantages, and novel features of the present invention will become apparent to one ordinarily skilled in the art upon examination of the following examples, which are not intended to be limiting. Additionally, each of the various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below finds experimental support in the following examples. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. The term "comprise" and variants of that term such as "comprises" or "comprising" are used herein to denote the inclusion of a stated integer or integers but not to exclude any other integer or any other integers, unless in the context or usage an exclusive interpretation of the term is required. 
Reference to prior art disclosures in this specification is not an admission that the disclosures constitute common general knowledge in Australia. |
In various examples, a test system is provided for executing built-in self-test (BIST) according to JTAG and IEEE 1500 on chips deployed in-field. Hardware and software selectively connect onto the IEEE 1500 serial interface for running BIST while the chip is being used in deployment, such as in an autonomous vehicle. In addition to providing a mechanism to connect onto the serial interface, the hardware and software may reduce memory requirements and runtime associated with running the test sequences, thereby making BIST possible in deployment. Furthermore, some embodiments include components configured to store functional states of clocks, power, and input/output prior to running BIST, which permits restoration of the functional states after the BIST. 
CLAIMS What is claimed is: 1. A test system comprising: a test-sequence register including a plurality of bit subsets and forming a path to receive a serial input of a test sequence; an instruction register to control transmission of the serial input into the test-sequence register; a control register to program the instruction register by receiving a first set of data including a first subset identifier of a bit subset in the plurality of bit subsets; a configuration register to program the instruction register by receiving and communicating a second set of data including a second subset identifier of the bit subset; and a data register to receive the test sequence to be written into the serial input. 2. The test system of claim 1, further comprising another register to program the test-sequence register by receiving a skip-bit instruction that specifies one or more bits of the bit subset to retain a predefined reset value. 3. The test system of claim 1, wherein the first set of data, the second set of data, or both the first set of data and the second set of data include a burst-mode instruction, the burst-mode instruction programming the instruction register to loop bits back into the path as a serial input, wherein the bits are in the test-sequence register when the test sequence is transferred into the scan path and are not in the bit subset. 4. The test system of claim 1, wherein the first set of data, the second set of data, or both the first set of data and the second set of data include an auto-increment instruction, the auto-increment instruction programming the instruction register to control a serial input of another bit subset in a same manner as the bit subset, and wherein the other bit subset follows consecutively after the bit subset in the test sequence. 5. The test system of claim 1, wherein the test-sequence register is configured to serially output test results that are shifted into the data register. 6. 
The test system of claim 1, further comprising an access -control application configured to control access to the test-sequence register between the data register and external automatic test equipment.7. The test system of claim 1, wherein the test-sequence register is on a chip, and wherein the test system further comprises a test-sequence-retrieval application on the chip and configured to retrieve the test sequence from on-chip memory.8. The test system of claim 1, wherein the test-sequence register is on a chip, and wherein the test system further comprises a test-sequence-retrieval application maintained off the chip and configured to retrieve the test sequence from off-chip external memory and load the test sequence into on-chip memory.9. The test system of claim 1, wherein the test sequence includes a functional- value-restore instruction configured to, prior to running a logic built-in self-test (LBIST), capture functional values of a clock, a power source, and I/O by: during a captureDR step of a finite state machine (FSM), which controls access to the scan path, capturing the functional values a data chain in a VALS JTAG Register; during a shiftDR stage of the FSM, shifting the data chain in the VALS JTAG Register; during an updateDR step, capturing the functional values in an update latch of the VALS JTAG Register; and after the VALS JTAG Register is programmed, setting all bits in a MASK JTAG Register to a common value.10. 
The test system of claim 1, wherein the test-sequence register is a first test-sequence register, and the system further comprises: a first chip partition including the first test-sequence register; and a second chip partition daisy chained with the first chip partition and including a second test-sequence register, wherein: the first test-sequence register and the second test- sequence register are both configured to receive a broadcast input of the test sequence for running logic built-in self-test (LBIST); the first chip partition and the second chip partition each include a respective MASK Register having bits that are all changed to a common value prior to running the LBIST when functional values of clocks, power, and I/O states are captured; a first MASK register of the first partition has a first quantity of bits that is larger than a second quantity of bits of a second Mask register of the second partition; and the broadcast input provided to both the first chip partition and the second chip partition includes a quantity of common values equal to the first quantity.11. The test system of claim 1, further comprising a JTAG register that is configurable in either broadcast mode or daisy-chain mode by a user-defined instruction in the test sequence.12. The test system of claim 1, wherein the test-sequence register is configured consistent with IEEE 1500, and wherein the test system further comprises a small logic in the always-on domain for each partition.13. 
A method comprising: retrieving, from memory on a chip, a JTAG sequence to perform a built-in self-test (BIST) on the chip when a test portion of the JTAG sequence is input into a data register having a first quantity of bits; loading an instruction portion of the JTAG sequence into one or more first instruction registers that program a second instruction register controlling the data register; and loading the test portion of the JTAG sequence into a data shift register on the chip to shift the test portion into a serial data bus to be transmitted to the data register, the test portion of the JTAG sequence including a second quantity of bits less than the first quantity of bits.14. The method of claim 13, wherein the instruction portion includes a burst-mode instruction that programs the second instruction register to retain non-target bits in the first quantity of bits in a predefined value by shifting the predefined value back into the data register when the test portion is input to the data register.15. The method of claim 13, further comprising retaining one or more bits included among the test portion to a predefined value before shifting into the serial data bus.16. The method of claim 13, wherein the instruction portion includes an auto-increment instruction that programs the instruction register to apply a same set of controls to another test portion of the JTAG sequence.17. The method of claim 13, wherein the one or more first instruction registers comprises one or more BIST instruction registers, and the second instruction register comprises a JTAG instruction register.18. 
A method comprising: prior to running a logic built-in self-test (LBIST) on a chip, copying functional values of at least one of a clock, power source, or I/O state to a first shift register during a capture state of a finite state machine (FSM), the FSM including a first shift state after the capture state; capturing the functional values in update latches of the first shift register during a first update state; in a second shift register, shifting all bits to a respective value during a second shift state, the respective value of all the bits in the second shift register being shifted to a same value that controls an operation of the first shift register; and capturing each of the respective values in an update latch of the second shift register during a second update state, wherein shifting all bits to a common value enables the functional values to be overridden with override data shifted into the first shift register.19. The method of claim 18, wherein the first shift register comprises a VALS JTAG shift register and the second shift register comprises a MASK JTAG shift register.20. The method of claim 18, further comprising skipping the first shift state without executing any cycles by programming an instruction into a BIST instruction register that controls the FSM.21. The method of claim 18, further comprising, broadcasting a JTAG test sequence for running the LBIST to both a first chip partition and a second chip partition by configuring a user - defined JTAG register that is configurable in broadcast mode or daisy-chain mode.22. 
The method of claim 18, further comprising, broadcasting a JTAG test sequence for running the LBIST to both a first chip partition including the first shift register and the second shift register and a second chip partition including a third shift register corresponding to the second shift register, wherein the third shift register includes a first quantity of bits that is larger than a second quantity of bits in the second shift register, and shifting a number of bits into the second shift register equal to the quantity of bits of the third shift register. |
TEST SYSTEM FOR EXECUTING BUILT-IN SELF-TEST IN DEPLOYMENT FOR AUTOMOTIVE APPLICATIONS

BACKGROUND

Computing chips are typically tested by manufacturers prior to deployment to verify whether the chips are functioning properly and whether there are any manufacturing defects. For example, the chips may be tested prior to deployment by using Automated Test Equipment (ATE). However, some chips acquire faults after being deployed due to various factors (e.g., environmental hazards, aging, etc.), and in many chip applications it is important to have in-field, fault-detection capabilities. For example, identifying latent faults on a chip after the chip has been deployed in the field is necessary to comply with some industry requirements, such as the ISO26262 ASIL-C requirement for automotive components (e.g., a chip supporting an automotive platform). In some instances, these chips may include a chip architecture configured according to both the Joint Test Action Group (JTAG) Standard and the Institute of Electrical and Electronics Engineers (IEEE) 1500 Standard for Embedded Core Test (IEEE 1500 SECT or simply IEEE 1500). For example, these chips may be organized into IEEE 1500 clusters, and each cluster may include partitions. Each partition may include JTAG registers that are daisy chained in a serial manner throughout the partition, and at a higher level, throughout the chip.

While conventional systems integrate JTAG with IEEE 1500, these systems do not support built-in self-test (BIST) capabilities, or they require large storage, unwieldy runtimes, or both, which make BIST less effective in real-time deployment scenarios. For example, in some conventional systems, relying on ATE to timely identify latent faults (e.g., faults occurring after the chip is deployed in the field for automotive platforms) is not effective, since it would require either users to have ATE or the chip to be taken out of use to a facility having the chip-specific tester.
Other conventional systems may be executed using BIST, or a combination of BIST and ATE; however, these systems require large storage and execute relatively slowly. That is, a single JTAG chain test sequence can be thousands of bits, and many JTAG registers must be programmed for running tests. As such, the test sequence is very large, requiring large storage in return. In addition, the JTAG registers are on a relatively slow clock (e.g., one bit accessed per cycle), which results in a long runtime when the test sequence is large - thereby removing the chip from operation in deployment for a period of time that is detrimental to the system.

These conventional systems suffer from other drawbacks, in addition to those described above. For instance, BIST is executed in a system where clocks, power, and I/O states are already configured; however, these valid functional states may become corrupted during the logic built-in self-test (LBIST) scan operation, which can affect operations separate from the BIST. In other operations, the existing controls for the JTAG/IEEE 1500 interface do not permit some states of a state machine to be skipped, even when the states are not necessary for a given operation - thereby resulting in unnecessary cycles. Further still, in some conventional systems, multiple JTAG registers may be configured with the same JTAG test sequence, in which case multiple copies of that same JTAG test sequence are stored, resulting in larger memory requirements solely for storing copies of the same sequence.

SUMMARY

Embodiments of the present disclosure relate to a test system for executing in-field BIST on a chip configured according to JTAG and IEEE 1500. For example, the chip may be integrated into a system subject to functional-safety standards (e.g., a chip supporting an automotive platform subject to ISO26262 ASIL-C), and the test system may be used to identify latent faults in the chip after deployment.
In contrast to conventional systems, such as those described above, the current disclosure describes hardware and software implementations that can be deployed to the chip to selectively connect to the IEEE 1500 serial interface for running BIST while the chip is deployed in the field - such as in an autonomous machine application. In addition to providing a mechanism to connect to the serial interface, deployed hardware and software implementations may reduce memory requirements and runtime associated with running the test sequences, thereby making BIST possible while in the field. Furthermore, some embodiments of the current disclosure include components configured to store functional states of clocks, power, and I/O prior to running BIST, which permits restoration of the functional states after the BIST while reducing a likelihood of corruption during BIST. Other embodiments of the present disclosure may permit states of a finite state machine to be programmatically skipped to avoid executing unnecessary cycles. Further still, a broadcast mode may be used in some embodiments, which enables a single instance of a JTAG test sequence to be communicated in parallel to multiple partitions or clusters, such that memory requirements are reduced by storing only a single copy of the JTAG test sequence.

BRIEF DESCRIPTION OF THE DRAWINGS

The present systems and methods for a test system for executing in-field BIST are described in detail below with reference to the attached drawing figures, which are incorporated herein by reference.

FIG. 1 is a block diagram of a computing environment, including a computing platform and automated test equipment (ATE), in accordance with some embodiments of the present disclosure;

FIG. 2 is a block diagram illustrating additional details of the computing platform depicted in FIG.
1, including additional hardware components of the IST module, additional details of the JTAG/IEEE 1500 architecture, and transmission of a BIST test sequence from the off-chip external memory to the registers of the IST module, in accordance with some embodiments of the present disclosure;

FIG. 3 is a block diagram illustrating additional details of the computing platform depicted in FIG. 1, including additional details of BIST test sequences that may be stored in the off-chip external memory, in accordance with some embodiments of the present disclosure;

FIG. 4 illustrates data shifting through JTAG registers when functional states are captured, in accordance with some embodiments of the present disclosure;

FIG. 5 is a flow diagram showing a method of performing BIST, in accordance with some embodiments of the present disclosure; and

FIG. 6 is a flow diagram showing a method of performing BIST, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION

Systems and methods are disclosed related to a test system for executing in-field BIST on a chip configured according to the Joint Test Action Group (JTAG) standard and the Institute of Electrical and Electronics Engineers (IEEE) 1500 standard. In some instances, the chip may be a component of a larger computing platform used in automotive applications (e.g., autonomous machine applications), or other systems subject to functional-safety standards, and the test system may be configured for in-deployment identification of latent faults in the chip. In contrast to conventional systems, such as those described above, the current disclosure describes hardware and software added to the chip to selectively connect onto the IEEE 1500 serial interface for running BIST while the chip is deployed in the field.
In addition to providing a mechanism to connect onto the serial interface, the hardware and software may reduce memory requirements and runtime associated with running the test sequences, thereby making BIST possible while in the field. Furthermore, some embodiments of the current disclosure include components configured to store functional states of clocks, power, and I/O prior to running BIST, which permits restoration of the functional states after the BIST. Although primarily described with respect to JTAG and IEEE 1500, this is not intended to be limiting, and the system and processes herein may be applicable to any testing types, configurations, or standards (e.g., combined with JTAG and IEEE 1500 or used in place of JTAG and IEEE 1500) without departing from the scope of the present disclosure.

The system of the present disclosure stores chip-specific JTAG test sequences directly in system memory accessible by components on the chip (e.g., flash memory, embedded Multi-Media Controller (eMMC) memory, off-chip system memory, or external system memory), and these chip-specific JTAG test sequences may be used to run memory BIST (MBIST), logic BIST (LBIST), and/or other BIST types for identifying latent faults. The present disclosure also includes on-chip hardware and software that access the JTAG test sequences stored in the system memory and that connect to a JTAG/IEEE 1500 serial interface. This is in contrast to some conventional systems that store the chip-specific JTAG test sequences externally and apart from the integrated system, such as on Automated Test Equipment (ATE), or that do not have components for connecting to the JTAG/IEEE 1500 interface to run BIST without ATE.
Furthermore, in contrast to conventional systems that may store JTAG test sequences in external system memory, the present disclosure includes additional hardware and software implementations that reduce storage utilization in addition to runtime.

The off-chip system components, in combination with the on-chip hardware and software of the present disclosure, may operate in various manners. For example, the present disclosure may include off-chip secured software that accesses the JTAG test sequence from the off-chip system memory and that transmits the JTAG test sequence to the on-chip components - which may then load the JTAG test sequence into the JTAG/IEEE 1500 serial interface. The on-chip components of the present disclosure may include any number of hardware components (e.g., finite state machine, registers, etc.) that may receive the JTAG test sequence prior to translation into the JTAG/IEEE 1500 interface, connect to the JTAG/IEEE 1500 serial interface, and read the test output from the JTAG/IEEE 1500 interface.

As indicated herein, the JTAG chain shift register can be very long, in the range of hundreds to several thousand bits, and JTAG chain shift registers are on a relatively slow clock at one bit accessed per cycle. In addition, because of the serial nature of the interface, every time a JTAG chain shift register is shifted in or out, the whole chain must be accessed to avoid corrupting the chain. This is true even when a particular BIST includes writing a new value to only a single bit in the JTAG chain shift register. As such, and in contrast to conventional systems, when a particular BIST is configured, the system of the present disclosure may configure only the bits that need to be overwritten for a particular BIST, and the rest of the bits in the JTAG chain shift register may be retained (or rewritten) at a predefined reset value.
Furthermore, the components of the present disclosure provide the ability to program at the per-bit level. This selective storage of only target bits that need to be written in order to execute a given BIST - as opposed to storing the entire JTAG chain - results in memory-storage savings and faster runtime than conventional approaches.

The present disclosure includes other features that contribute to improved BIST capabilities. For example, as mentioned above, conventional systems do not capture the functional values of clocks, power, and I/O states. In contrast, prior to running LBIST, the system of the present disclosure captures and holds the functional settings - e.g., for configuring clocks, power, and I/O, etc. - to avoid corrupting these states during LBIST scan operations. More specifically, the present disclosure includes software that programs JTAG registers to capture the functional settings prior to running the LBIST scan operation and, in further contrast to conventional test systems, the functional settings can be read after LBIST.

Furthermore, in some BIST operations, a same BIST test sequence may be input to multiple JTAG registers (e.g., where multiple partitions or clusters include a same configuration). Conventional systems often store multiple copies of the same BIST test sequence and provide an individual copy to each JTAG register. In contrast, embodiments of the present disclosure include hardware at the partition level and at the cluster level that permit broadcasting a BIST test sequence such that the BIST test sequences may only have to be stored once and loaded once.
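The capture-and-restore behavior described above can be modeled at a high level. The sketch below is behavioral only - it is not the VALS/MASK register logic itself, and the class and method names are illustrative assumptions, not taken from the disclosure.

```python
class FunctionalStateKeeper:
    """Behavioral sketch (assumed names): capture clock/power/I-O settings
    before LBIST and make them readable again afterward, loosely mirroring
    the VALS/MASK register mechanism described in the text."""

    def __init__(self):
        self._held = None

    def capture(self, functional_values):
        # Analogous to latching functional values into the VALS register's
        # update latches and setting all MASK bits to a common value so the
        # live state can later be overridden/restored.
        self._held = dict(functional_values)

    def restore(self):
        # After LBIST, the captured settings can be read back and reapplied.
        return dict(self._held)

keeper = FunctionalStateKeeper()
keeper.capture({"clock": "pll_config", "power": "on", "io": "functional"})
# ... LBIST scan operation may corrupt the live state here ...
print(keeper.restore()["clock"])  # pll_config
```

The point of the sketch is only the ordering: capture happens before the scan operation, and the held values survive it.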
In addition, using this approach, the BIST test sequence may be sent in parallel to multiple partitions or clusters - thereby decreasing run-time because fewer cycles are used to write the data and further decreasing memory requirements because only a single copy of the JTAG test sequence may need to be stored.

In addition, the JTAG/IEEE 1500 architecture may include power-gated portions along the daisy chain that may prevent access to the chain by breaking the scan path. As such, an aspect of the present disclosure adds, to each partition and cluster, a small logic (e.g., a single bit) in the always-on state, such that when power is gated, the Wrapper Serial Control (WSC) goes through the single-bit path in that always-on domain. As such, even in the lower power mode, the IEEE 1500 chain remains intact.

Referring now to FIG. 1, a block diagram of an example computing platform 100 with a test system is depicted in accordance with an embodiment of the present disclosure. FIG. 1 provides a high-level overview of the computing platform 100 and the test system, and the components will be generally described in the context of FIG. 1 - more specific details are described with respect to some of the other drawing figures. The computing platform 100 may include an integrated system having a first chip 110 and a second chip 112 that include components such as a processor, a central processing unit (CPU), a graphics processing unit (GPU), a system-on-a-chip (SoC), and/or the like. The first chip 110 may include a JTAG/IEEE 1500 architecture 114 (alternatively referred to herein as "architecture 114") and the second chip 112 may include a JTAG/IEEE 1500 architecture 116 (alternatively referred to herein as "architecture 116") that may be configured consistent with the JTAG/IEEE 1500 standards.
The first chip 110 and the second chip 112 may be integrated into the larger computing platform 100, such as a functional computing platform configured to execute operations associated with an autonomous machine (e.g., vehicle, drone, aircraft, water vessel, etc.) or other applications subject to functional-safety standards. The computing platform 100 is a non-limiting example of one type of integrated system, and other embodiments may include fewer chips, more chips, different chips, or a combination thereof, without departing from the scope of the present disclosure.

Generally, the computing platform 100 includes some components on each chip 110 and 112, and for explanatory purposes, these components are labeled in FIG. 1 as being a part of the "on-chip architecture." In addition, the computing platform 100 includes other components that are not necessarily on the chip but that are integrated with the chip as part of the platform 100. These other components are labeled in FIG. 1 as being "off-chip architecture" 102, and both the off-chip components and on-chip components may include hardware and/or software forming part of the test system of the computing platform 100. For example, at a high level, the off-chip and on-chip components of the test system may execute operations to retrieve JTAG test sequences from off-chip external memory 104, and translate the JTAG test sequences into an IEEE 1500 bus 126 to be input into JTAG registers.

As described herein, the JTAG chain shift register can be very long, in the range of hundreds to several thousand bits, and JTAG chain shift registers are on a relatively slow clock at one bit accessed per cycle. In addition, because of the serial nature of the interface, every time a JTAG chain shift register is shifted in or out, the whole chain must be accessed to avoid corrupting the chain. This is true even when a particular BIST includes writing a new value to only a single bit in the JTAG chain shift register.
As such, in contrast to conventional systems, the test system of the present disclosure includes various software and hardware components that connect the external memory 104 and the IEEE 1500 bus 126 and that reduce the runtime and storage usage associated with JTAG testing.

In one aspect of the disclosure, the first chip 110 and the second chip 112 may include an on-chip in-system-test (IST) master 118 and IST master 120, respectively, that communicates with off-chip components to facilitate in-system testing. For example, as described herein, JTAG test sequences for running BIST, MBIST, and LBIST are stored in the off-chip external memory 104, and these JTAG test sequences may be selectively loaded by an off-chip in-system-test (IST) controller 106 into the on-chip IST masters 118 and 120 when a test is to be performed. In addition, each chip 110 and 112 may include an in-system-test (IST) module 122 including hardware and/or software that receives the JTAG test sequence from the on-chip IST master 120 and translates the JTAG test sequence into the IEEE 1500 bus 126. The IST module 122 is one component of an on-chip test master 124, which includes an access control 125 configured to control access to the IEEE 1500 bus 126 between the IST module 122 and one or more other test modules 127. For example, the other test modules 127 may facilitate other types of system testing, including from ATE 128.

Having provided a high-level overview of some of the components of the platform 100 and the test system, reference is now made to FIG. 2 to describe some components of the test system in more detail. FIG. 2 is another block diagram of a portion of the computing platform 100, including the off-chip architecture 102 and the second chip 112 (alternatively referred to herein as "chip 112"). In addition, the chip 112 may include the architecture 116 that is consistent with the JTAG and IEEE 1500 standards.
For example, the architecture 116 may include a series of IEEE 1500 Wrappers/Clusters 132, 134, 136, and 138 (alternatively referred to herein as "clusters 132, 134, 136, and 138") that are daisy chained through wrapper connectors 140, 142, 144, and 146. In addition, each cluster may include a series of daisy chained partitions, as illustrated by the partitions 148, 150, 152, and 154 of the Cluster A 132. Although not illustrated, the clusters 134, 136, and 138 may also include partitions, similar to those of the cluster 132. Each partition may include one or more JTAG registers (e.g., Instruction Register (IR) 156 and Data Register (DR) 158), as well as an IEEE 1500 client (not shown) controlling the registers in that partition. FIG. 2 also depicts a wrapper serial input (WSI), wrapper serial output (WSO), and wrapper serial control (WSC). Each cluster may be associated with a functional unit (logic block) of the chip 112. The wrapper connectors may comprise IEEE 1500 compliant modules (e.g., Wrapper Bypass Registers (WBY) or Wrapper Instruction Registers (WIR)) that can be used to bypass the corresponding cluster connected to a particular wrapper connector. The IEEE 1500 bus 126 provides a path for sending test instructions and test data (e.g., "A" 108) stored in the off-chip external memory 104 to the JTAG registers.

As described herein, the test system of the present disclosure includes various hardware and software that reduce memory usage and runtime associated with running a JTAG testing sequence. In one aspect of the present disclosure, the JTAG registers (e.g., 158), which can be thousands of bits long, are divided into sub-units of bits, and each sub-unit includes a discrete group of bits. For example, the JTAG registers may be divided into DWORDs (32 bits per DWORD) or other sized groups, which may each include a common quantity of bits.
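The division of a long JTAG register chain into fixed-size sub-units can be pictured with a short sketch; the function name is illustrative, not from the disclosure.

```python
DWORD_BITS = 32  # per the text: 32 bits per DWORD

def split_into_dwords(chain_bits):
    """Split a flat list of chain bits into consecutive 32-bit DWORDs,
    zero-padding the final group so every sub-unit has a common quantity
    of bits (the padding choice is an assumption for the sketch)."""
    padded = chain_bits + [0] * (-len(chain_bits) % DWORD_BITS)
    return [padded[i:i + DWORD_BITS] for i in range(0, len(padded), DWORD_BITS)]

# A chain "hundreds to several thousand bits" long, e.g. 3200 bits -> 100 DWORDs.
chain = [0] * 3200
dwords = split_into_dwords(chain)
print(len(dwords))  # 100
```

Indexing the chain this way is what lets later steps address one sub-unit at a time instead of the whole register.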
In addition, the IST module 122 may include hardware configured to translate JTAG test sequences into the IEEE 1500 interface 126 one bit sub-unit at a time (e.g., one DWORD at a time), which may provide a mechanism (described in more detail herein) by which select bit sub-units can be run without translating the entire JTAG chain test sequence.

In one aspect of the present disclosure, the IST module 122 may include multiple registers that receive different portions of a JTAG test sequence, such as a JTAG sequence that is stored in the off-chip external memory 104 and is selectively retrieved for BIST. As depicted in FIG. 2, the IST module 122 includes a control (CTRL) register 160, a configuration (CFG) register 162, a MASK register 164, and a DATA register 166. In one aspect, each of these registers is a 32-bit register, but other register sizes may be used without departing from the scope of the present disclosure. Generally, the CTRL register 160 and the CFG register 162 are configured to receive portions of the JTAG test sequence (e.g., illustratively depicted by path 168A) including instructions for configuring the JTAG registers (e.g., 156 and 158) for a given BIST. For example, the CTRL register 160 and the CFG register 162 may receive instructions specifying which bit sub-group(s) (e.g., DWORD(s)) is to be written into the IEEE 1500 bus 126. In addition, the MASK register 164 may receive bit-level instructions for programming specific bits within a bit sub-group (e.g., DWORD) specified in the instructions provided to the CTRL register 160 and CFG register 162. The DATA register 166 may be configured to receive the test data to be written into the JTAG chain test sequence (or read the output).
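The per-bit selection that the MASK register enables within a target DWORD behaves like a bitwise merge. The following is a minimal sketch of that idea (not the disclosed hardware): bits selected by the mask take the new DATA value, and unselected bits keep their predefined reset value.

```python
def apply_masked_write(current_dword, data, mask):
    """Merge DATA bits into a 32-bit DWORD only where MASK selects them;
    unmasked bits keep their current (e.g., predefined reset) value.
    Sketch of the MASK-register idea, not the actual register behavior."""
    return (current_dword & ~mask & 0xFFFFFFFF) | (data & mask)

# Overwrite only bits 0 and 4 of a DWORD that holds its reset value of zero.
reset_value = 0x00000000
print(hex(apply_masked_write(reset_value, data=0x00000011, mask=0x00000011)))  # 0x11
```

The merge makes the selective-storage scheme concrete: only the masked bits need to be specified for a given BIST.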
Table 1 below provides an example of the types of contents that may be received in the registers when shifting content into the IEEE 1500 bus 126, including the instruction portions for the CTRL register 160 and the CFG register 162 and the test portions for the DATA register.

Using these registers, data may be written into the JTAG chain, or read from the JTAG chain, in various manners. For example, the CTRL register 160 and the CFG register 162 may be configured to trigger a JTAG IR shift to select the target JTAG register chain. Then, for each DWORD, the contents may be written into the DATA register for shifting into the IEEE 1500 bus 126. When reading a sequence from the JTAG chain, the CTRL register 160 and the CFG register 162 may be configured to trigger a JTAG IR shift to select the target JTAG register chain. Then, for each DWORD, the CTRL/CFG registers may be configured with the target DWORD number to shift out the target DWORD value into the DATA register, at which point the DATA register can be read. In some instances, the CTRL register 160 and/or the CFG register 162 may be configured for each DWORD. As such, the IST module 122 may provide a mechanism to access the IEEE 1500 interface (e.g., bus 126) in order to configure JTAG for BIST. In contrast to conventional systems, the registers of the IST module 122 provide the ability to target specific DWORDs within the JTAG registers, and more specifically to target specific bits by using the MASK register, as described in more detail herein.

The test system of the present disclosure may also include on-chip software and hardware that communicate with the off-chip IST controller 106 to receive JTAG test sequences (e.g., A 108) stored in the off-chip external memory 104. In one aspect of the present disclosure, the on-chip IST master 120 includes on-chip IST random access memory (RAM) 170 into which a copy of a test sequence can be loaded.
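The write flow just described - select the target chain with an IR shift, then configure CTRL/CFG and load DATA for each target DWORD - can be sketched as a driver loop. The register interface below is a stand-in assumption; only the ordering of operations follows the text.

```python
class RegisterLog:
    """Minimal stand-in for the CTRL/CFG/DATA registers that just records
    writes, so the ordering of the flow can be inspected."""
    def __init__(self):
        self.log = []
    def write_ctrl(self, **fields):
        self.log.append(("CTRL", fields))
    def write_cfg(self, **fields):
        self.log.append(("CFG", fields))
    def write_data(self, value):
        self.log.append(("DATA", value))

def write_target_dwords(regs, chain_id, target_dwords):
    """Per-DWORD write flow: select the target chain (JTAG IR shift), then
    for each target DWORD configure CTRL/CFG and load DATA for shifting
    into the IEEE 1500 bus. Field names are illustrative assumptions."""
    regs.write_ctrl(select_chain=chain_id)          # triggers the IR shift
    for index, value in sorted(target_dwords.items()):
        regs.write_ctrl(target_dword=index)
        regs.write_cfg(target_dword=index)
        regs.write_data(value)

regs = RegisterLog()
write_target_dwords(regs, chain_id=3, target_dwords={5: 0xDEADBEEF, 9: 0x1})
print(len(regs.log))  # 7 writes: 1 chain select + 3 per target DWORD
```

Reading follows the same shape, except each iteration configures CTRL/CFG with the target DWORD number and then reads the DATA register instead of writing it.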
For example, when it is time to run a test, the off-chip IST controller 106 may retrieve the test sequence from the off-chip external memory 104 and load the JTAG test sequence into the on-chip IST RAM 170. In addition, the on-chip IST master 120 may include an on-chip IST controller 174, which may retrieve the test sequence from the on-chip IST RAM 170 and provide the test sequence to the registers of the IST module (e.g., as illustrated by path 168A). In one aspect, the on-chip IST controller may provide the test sequence on a per-DWORD basis, such that after the on-chip IST controller 174 sends a DWORD, it waits until the IST module signals a readiness to receive another DWORD. In one embodiment, one or more software instructions may trigger operations by the on-chip IST controller 174. For example, a software trigger may be written to a software register in the on-chip IST controller 174 after the on-chip RAM 170 is loaded with the test sequence. Once this register is written, a finite state machine (FSM) 178 of the on-chip IST controller 174 may be triggered and may continue executing all the sequences from the on-chip RAM 170 until an end of the operations, such as when an end of the sequence is reached.

FIG. 2 illustratively depicts at least part of a sequence, with a path shown in dashed lines, by which a test sequence is communicated from the off-chip external memory 104. For example, the off-chip IST controller 106 may retrieve the BIST test sequence A 108 from the off-chip external memory 104 - as depicted by path 168D - and provide the BIST test sequence A to the on-chip RAM by path 168C. A copy of the BIST test sequence A 172 is depicted in the on-chip RAM 170. Further, the on-chip IST controller may fetch the BIST test sequence A 172 from the on-chip RAM (e.g., according to the path 168B) and provide it on a per-DWORD basis to the IST module 122, as depicted by the path 168A.
As described herein, the 1ST module translates the DWORD(s) into the IEEE 1500 bus 126 for input to the JTAG register(s). In some embodiments, the hardware and software of the test system described with respect to FIGs. 1 and 2 may be leveraged in different ways to improve runtime and memory utilization when running BIST. As described herein, in conventional systems, when a JTAG register is shifted in/out, the whole chain needs to be accessed, otherwise the chain may become corrupted. For example, even if the BIST includes programming a single bit in a chain, the whole chain may need to be shifted to avoid corruption. However, storing the entire contents of all the chains, as is typically done in conventional systems, may utilize a significant amount of storage (e.g., millions of bits). For example, if a JTAG chain is one hundred DWORDS long, even if only two DWORDS need to be written for BIST, the number of bits needed for storage of the content identified in Table 1 (above) is 3264 bits (e.g., 32 bits for CTRL Register 160, 32 bits for CFG register 162, and 3200 bits for DATA register 166). In addition, in order to read data out of the IEEE 1500 bus 126, 6400 bits are needed for storage (e.g., 32 bits for CTRL register 160 for each DWORD in the JTAG chain and 32 bits for CFG register 162 for each DWORD in the JTAG chain), since the CTRL register 160 and the CFG register 162 may be configured separately for each DWORD. In contrast to conventional systems, the system of the present disclosure only stores target DWORDS that need to be written to execute a given BIST, in combination with instructions for configuring the CTRL register 160 and the CFG Register 162.
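The storage figures in this example follow directly from the register widths; a quick sketch of the arithmetic (chain length and register sizes taken from the example above):

```python
# Conventional storage cost for a 100-DWORD chain with 32-bit registers,
# as in the example above.
CHAIN_DWORDS = 100
REG_BITS = 32

# Conventional write: one CTRL + one CFG configuration, plus the entire
# chain contents in the DATA register stream.
conventional_write = REG_BITS + REG_BITS + CHAIN_DWORDS * REG_BITS
assert conventional_write == 3264

# Conventional read: CTRL and CFG are re-configured for every DWORD.
conventional_read = CHAIN_DWORDS * (REG_BITS + REG_BITS)
assert conventional_read == 6400
```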
As such, the system of the present disclosure leverages a predefined reset value of each bit in the JTAG chain shift register and uses the organization of the JTAG chain shift register into sub-sets of bits (e.g., DWORDS) to target specific bits and skip non-target bits. More specifically, when configuring the CTRL register 160 and the CFG register 162 to execute BIST, the test system of the present disclosure may include a “burst mode” instruction, which may cause existing values shifted out of the WSO to be looped back to WSI to retain the existing value (e.g., predefined reset value). As such, a burst-write sequence into a JTAG chain may be executed by initially configuring the CTRL register 160 and the CFG register 162 to select burst write mode, in addition to configuring the chain which performs the IR shift. Then, as described herein, for the target DWORD, the CTRL register 160 and the CFG register 162 may be configured to program a target DWORD (see, e.g., Table 1, above), and the DATA register 166 may be programmed with the contents of the DWORD to be shifted into the IEEE 1500 bus 126. Using the burst mode instruction, in combination with the registers in the 1ST module 122, the test system of the present disclosure significantly reduces memory utilization, since none of the non-target DWORDS in a given BIST are stored. For example, if a JTAG chain is 100 DWORDS and if only two DWORDS are written to execute BIST, then 192 bits are needed for storage (e.g., 32 bits for CTRL register 160 for first DWORD, 32 bits for CFG register 162 for first DWORD, 32 bits for DATA register 166 of first DWORD, 32 bits for CTRL register 160 for second DWORD, 32 bits for CFG register 162 for second DWORD, and 32 bits for DATA register 166 of second DWORD). In another aspect of the present disclosure, in addition to providing the ability to program at the DWORD level, the test system may use the MASK register 164 to program each bit within a DWORD.
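The 192-bit figure for the burst-write example can be checked with the same arithmetic; only the two target DWORDs carry any storage cost:

```python
# Burst-write storage from the example above: each target DWORD stores
# its own CTRL, CFG, and DATA configuration; non-target DWORDs cost
# nothing because their reset values are looped back from WSO to WSI.
TARGET_DWORDS = 2
REG_BITS = 32

burst_write = TARGET_DWORDS * (3 * REG_BITS)  # CTRL + CFG + DATA per target
assert burst_write == 192
```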
That is, if only certain bits in a DWORD need to be overwritten for the BIST, then the MASK Register 164 may be programmed to only modify those select bits, and retain the other bits at their predefined reset value. As used in this disclosure, this may be referred to as a “skip-bit instruction,” which identifies single bits in a given DWORD that can retain the predefined reset value. If all bits in a DWORD are to be programmed for a BIST, then the MASK Register 164 can be skipped. As such, the sequence described above for configuring burst mode may include one or more additional operations related to the MASK register 164. More specifically, after the CTRL register 160 and the CFG register 162 are configured to program a target DWORD (see, e.g., Table 1), the MASK register may be programmed with selective bits in the target DWORD. However, this can be skipped if all bits in a DWORD will be programmed. The DATA register 166 may then be programmed with the contents of the DWORD being shifted into the IEEE 1500 bus 126. As such, the MASK register 164 provides for additional storage savings at the bit level. In another aspect of the present disclosure, the amount of overhead needed to configure multiple DWORDS through the CTRL register 160 and the CFG Register 162 is reduced when a given BIST requires programming consecutive DWORDS (e.g., directly adjacent to one another in the chain). That is, as described herein, in some instances the CTRL register 160 and the CFG registers 162 may be configured for each DWORD, and when both registers are 32 bits, this requires 64 bits of storage for each DWORD. In some instances, when consecutive DWORDS are programmed in order to execute BIST, an “auto-increment DWORD” instruction can be configured in the BIST and provided to the CTRL register 160 and the CFG registers 162.
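A minimal sketch of the skip-bit behavior follows, assuming a simple bitwise merge; the function name and the merge expression are illustrative, not the actual hardware implementation:

```python
# Hypothetical skip-bit merge: bits selected by the MASK register take
# the new data, all other bits retain the predefined reset value.
def apply_skip_bit(reset_value, new_data, mask):
    """Return the 32-bit DWORD with masked bits from new_data and the
    remaining bits retained from reset_value."""
    return (new_data & mask) | (reset_value & ~mask & 0xFFFFFFFF)

# Overwrite only the low byte; the upper bits keep their reset value.
result = apply_skip_bit(reset_value=0xAAAA0000,
                        new_data=0x000000FF,
                        mask=0x000000FF)
assert result == 0xAAAA00FF
```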
This instruction may program the 1ST module 122 to automatically translate subsequent DWORDS and skip the CTRL register 160 and the CFG Register 162 writes for the subsequent DWORDS. This approach can be combined with the burst mode and MASK Register 164 to provide combined reductions in memory requirements and runtime for running a given BIST. For example, if a JTAG chain is one hundred DWORDS and if only two consecutive DWORDS are written to execute BIST, then 128 bits are needed for storage using burst_mode and auto-increment (e.g., 32 bits for CTRL register 160 for a first and second DWORD, 32 bits for CFG register 162 for the first and second DWORD, and 64 bits for DATA register 166 for the first and second DWORD). The burst_mode and auto-increment modes also provide significant storage savings for reading a JTAG chain. For example, as described above, to read a chain of one hundred DWORDS using conventional systems, 6400 bits of storage are used. In contrast, using burst-write and auto-increment modes for reading a JTAG chain, the storage requirement to store the read instructions of a one hundred DWORD chain is 64 bits, since the CTRL register 160 and the CFG register 162 may not have to be re-configured for each DWORD (e.g., 32 bits for the CTRL register 160 and 32 bits for the CFG register 162). In such an example, storage size for reading a one hundred DWORD JTAG chain is reduced from 6400 bits to 64 bits. Once the computing platform 100 is manufactured to include the hardware described above, including the JTAG/IEEE 1500 architecture and the test system, BIST can be configured using ATE. In addition, with the ability to program only target DWORDS for a given BIST, the size of the resulting BIST test sequence can be significantly reduced. Once configured, these BIST test sequences can be stored separately in the off-chip external memory 104, such that the off-chip 1ST controller 106 can selectively retrieve a select BIST test sequence.
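The combined burst_mode and auto-increment savings in this example reduce to the following arithmetic:

```python
# Combined burst_mode + auto-increment, per the example above: two
# consecutive target DWORDs share one CTRL and one CFG configuration.
REG_BITS = 32

burst_auto_write = REG_BITS + REG_BITS + 2 * REG_BITS  # CTRL + CFG + 2x DATA
assert burst_auto_write == 128

# Reading a 100-DWORD chain: CTRL and CFG are configured once instead of
# once per DWORD, so the stored read instructions shrink from 6400 bits
# to a single CTRL/CFG pair.
burst_auto_read = REG_BITS + REG_BITS
assert burst_auto_read == 64
```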
For example, referring to FIG. 3, a block diagram showing a more detailed view of the off-chip architecture 102 is depicted. FIG. 3 includes the off-chip external memory 104, which may store BIST test sequences. The BIST test sequences may include BIST test sequence A 108, which may include both an instruction portion (e.g., FSM instruction, CTRL instruction, CFG instruction, and MASK instruction) and a test-data portion (e.g., DATA instruction). In some instances, a test sequence (e.g., BIST test sequence A 108) may have an instruction portion and a data portion related to a single JTAG register. In other aspects, a test sequence (e.g., BIST test sequence A 108) may have an instruction portion and a data portion related to multiple JTAG registers, such that there may be multiple FSM instructions, CTRL instructions, CFG instructions, MASK instructions, and DATA instructions within a single test sequence. When triggered to retrieve the BIST test sequence A 108, the off-chip 1ST controller 106 (e.g., secured software) may retrieve the BIST test sequence A 108 from the memory 104 and communicate the BIST test sequence A 108 to the on-chip architecture 112. For example, the BIST test sequence A 108 may be loaded into the on-chip RAM as described above. The BIST test sequence A 108 can then be used to configure the registers and run BIST on a target portion of the JTAG/IEEE 1500 architecture 116. As such, in contrast to conventional systems that shift the entire chain and specify a deterministic value for all bits on the chain, the test system of the present disclosure permits any target BIST instance or target JTAG registers to be selectively programmed with the discrete BIST test sequences (e.g., 108), while preserving the existing value of all the other registers on the same chain. As mentioned herein, conventional systems do not capture the functional values of clocks, power, and I/O states.
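The instruction-portion/test-data-portion layout described above might be modeled as a simple record; the field names here are illustrative assumptions rather than the stored format:

```python
# Hypothetical in-memory layout of a stored BIST test sequence, mirroring
# the instruction portion (FSM/CTRL/CFG/MASK) and test-data portion (DATA)
# described above. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BistTestSequence:
    fsm_instr: int                          # FSM instruction
    ctrl_instr: int                         # CTRL register configuration
    cfg_instr: int                          # CFG register configuration
    mask_instr: Optional[int] = None        # MASK configuration (optional)
    data_instrs: List[int] = field(default_factory=list)  # target DWORDs

seq_a = BistTestSequence(fsm_instr=0x1, ctrl_instr=0x5, cfg_instr=0x0,
                         data_instrs=[0xDEADBEEF, 0xCAFEF00D])
assert len(seq_a.data_instrs) == 2
```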
For example, conventional approaches permit overriding the functional values using designated JTAG registers (e.g., 158 in FIG. 2), but these conventional approaches are not aware of the already configured functional settings and thus are not able to subsequently maintain or restore the functional settings. In contrast, prior to running LBIST, the system of the present disclosure captures and holds the functional settings for configuring clocks, power, and I/O to avoid corrupting these states during LBIST scan operations. More specifically, the present disclosure includes software that programs JTAG registers to capture the functional settings prior to running the LBIST scan operation, and in contrast to conventional test systems, the functional settings can be read after LBIST. More specifically, referring to FIG. 4, an example JTAG register is provided, such as, without limitation, the JTAG register 158 in FIG. 2. The system of the present disclosure may include software that programs a VALS JTAG register 210 and a MASK JTAG register 212 to capture the functional settings prior to running the LBIST scan operation, such that the functional settings can be read and restored after the test. Some of these operations depicted in FIG. 4 relate specifically to a finite state machine (e.g., finite state machine (FSM) 176 of the 1ST module 122) having various states consistent with the JTAG standard. Prior to overriding the functional data, during the “captureDR” step, the functional values may be written into the VALS JTAG register 210, as depicted by path 214. Then, during the “shiftDR” stage, the entire chain may be shifted through the VALS JTAG register 210, as depicted by arrows 216. Finally, during the “updateDR” step, the functional values in the shift register may be captured into the update latch of the VALS JTAG register 210, as indicated by arrow 218. At this point, the override data can be shifted into the VALS JTAG register 210 in a conventional manner.
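A toy simulation of the captureDR/shiftDR/updateDR behavior of the VALS register follows, under the assumption that the register is a parallel-capture shift register with a separate update latch (class and method names are illustrative):

```python
# Hypothetical model of the VALS JTAG register states described above.
class ValsJtagRegister:
    def __init__(self, length):
        self.shift = [0] * length    # shift-register stage
        self.update = [0] * length   # update-latch stage

    def capture_dr(self, functional_values):
        # captureDR: functional values are captured in parallel into
        # the shift stage.
        self.shift = list(functional_values)

    def shift_dr(self, serial_in):
        # shiftDR, one cycle: serial_in enters, the last bit shifts out.
        serial_out = self.shift[-1]
        self.shift = [serial_in] + self.shift[:-1]
        return serial_out

    def update_dr(self):
        # updateDR: shift-stage contents are latched into the update stage.
        self.update = list(self.shift)

vals = ValsJtagRegister(length=4)
vals.capture_dr([1, 0, 1, 1])
# shiftDR may be skipped entirely when only capturing functional values,
# as the text notes, so updateDR can follow captureDR directly.
vals.update_dr()
assert vals.update == [1, 0, 1, 1]
```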
For example, the MASK JTAG register 212 may initially set all values to “0” while the VALS JTAG register 210 is programmed. After the VALS JTAG register 210 is programmed, all bits in the chain of the MASK JTAG register 212 may be set to “1.” For instance, during “shiftDR” 220 the entire chain is shifted to set each bit to “1,” and during “updateDR” 222 the value captured in the shift register is captured into the update latch. In this aspect of the present disclosure, the capture path into the VALS JTAG register 210, which is unused in conventional systems, may be utilized to capture the functional states of the clocks, power, and I/O, so that the functional values are not corrupted during LBIST. In a further aspect, the present disclosure also includes operations for optimizing the capture of functional values by reducing the capture/hold time of the cycles. More specifically, captureDR 214 and updateDR 218 may include single-cycle operations and shiftDR 216 may include a multi-cycle operation depending on the length of the target JTAG chain. For example, if the target JTAG chain is 1000 bits, the time required to execute the captureDR 214, shiftDR 216, and updateDR 218 operations mentioned above is 1002 cycles (e.g., 1 cycle for captureDR 214, 1000 cycles for shiftDR 216, and 1 cycle for updateDR 218). However, the shiftDR operation (e.g., path 216 in FIG. 4) may not actually be required in order to capture the functional configuration into the VALS JTAG register 210. But as mentioned above, conventional systems do not provide the ability to skip steps in the FSM, which causes unnecessary cycles (e.g., 1000 unnecessary cycles based on the example of a JTAG chain with 1000 bits). In contrast to conventional systems, the present disclosure provides the ability to skip the shiftDR 216 state entirely by programming a bit in the CFG Register 162, which allows the system to omit the cycles that would otherwise be required in the shiftDR 216 state.
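The cycle savings from skipping shiftDR in this example follow from simple counting:

```python
# Cycle arithmetic from the example above: a 1000-bit target chain, with
# captureDR and updateDR each taking one cycle and shiftDR one cycle per bit.
CHAIN_BITS = 1000

with_shift = 1 + CHAIN_BITS + 1   # captureDR + shiftDR + updateDR
assert with_shift == 1002

without_shift = 1 + 1             # shiftDR state skipped via the CFG bit
assert without_shift == 2
```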
This is another example of how the hardware (e.g., CFG register 162) described in FIGs. 1 and 2 is leveraged to improve runtime associated with BIST. In a further aspect of the present disclosure, when running BIST, some of the settings in a JTAG register capture functional settings, while other fields are programmed with new values. This may apply for certain JTAG registers having some fields used to capture functional settings and other fields used to configure settings for MBIST or LBIST. As such, the various aspects described herein, such as burst_mode, auto-increment, MASK Register, and capturing functional values, can be combined to selectively program a few bits while capturing the functional settings of the other bits. Referring back to FIG. 2, the chip 112 may have multiple JTAG registers replicated across partitions or replicated across clusters, such that for some BIST, these JTAG registers may need to be programmed with the same data. For example, the architecture 116 may include 300 partitions that should each receive an MBIST JTAG chain of 200 bits. Conventional test systems may execute more than 60,000 cycles to program the data into the daisy-chained JTAG registers in all the partitions and may require all such data to be stored on-chip for every partition. These conventional approaches may increase both storage and runtime requirements on the system. In contrast to these conventional systems, the test system of the present disclosure may configure a JTAG register in each partition and in each cluster to enable either a broadcast mode - for distributing a JTAG test sequence - or a daisy-chain mode. In broadcast mode, all partitions in a cluster can be broadcasted and all clusters in a chip can be broadcasted.
For example, the instructions that are stored as part of the BIST test sequence (e.g., A 108) may include an instruction to activate a broadcast mode, which is controlled by the partition-level or cluster-level JTAG register organized among the architecture 116. As such, when the broadcast mode is activated by way of the JTAG register receiving the broadcast-mode instruction, all of the partitions (or clusters) can be programmed in parallel with the same BIST test sequence. As such, using broadcast mode, runtime and storage utilization can be reduced significantly. Referring back to the example outlined above (e.g., 300 partitions and BIST JTAG chain of 200 bits), instead of programming a replicated JTAG chain with 60,000 cycles using conventional systems, the broadcast mode of the present disclosure uses just a few hundred shift cycles, since the partitions can be programmed in parallel. In addition, the entire daisy chain contents are not required to be stored on the chip. Instead, it may be sufficient to store just the couple of hundred bits per partition, retrieve the bits once, and broadcast the bits into all the partitions. In some instances, to utilize broadcast mode, the chain length and the contents to be programmed may need to be the same across the partitions/clusters. Also, the entire chain may need to be programmed if there are hurdles to programming selective bits while retaining the other fields. For example, referring back to FIG. 4 describing capture of functional values, test scenarios may arise in which MASK JTAG registers (e.g., 212) in different partitions are different lengths. As explained above, each bit of the MASK JTAG Register may be changed to “1” (e.g., during shiftDR 220), and as such, over-shifting will still effectively program the MASK JTAG Register to the required “1” value (under-shifting would not and would leave the “0” value in some of the bits).
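A back-of-the-envelope comparison of daisy-chain versus broadcast programming for the 300-partition example above (idealized: one shift cycle per bit, ignoring IR-shift and configuration overhead):

```python
# Rough runtime/storage model for the broadcast example above:
# 300 partitions, each with a 200-bit replicated BIST JTAG chain.
PARTITIONS = 300
CHAIN_BITS = 200

# Daisy-chain mode: every partition's chain is shifted through serially.
daisy_chain_cycles = PARTITIONS * CHAIN_BITS
assert daisy_chain_cycles == 60000

# Broadcast mode: all partitions receive the same bits in parallel, so
# only one chain's worth of shift cycles and one partition's worth of
# storage is needed.
broadcast_cycles = CHAIN_BITS
broadcast_storage_bits = CHAIN_BITS
assert broadcast_cycles == 200
assert broadcast_storage_bits == 200
```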
Therefore, in the system of the present disclosure, the maximum chain length among all target partitions receiving data via a broadcast is used to determine how many “1” values to shift into the MASK JTAG Register. For example, a first partition may include a first MASK JTAG Register having a first quantity of bits (e.g., 120), and a second partition may include a second MASK JTAG Register having a second quantity of bits (e.g., 110), which may be less than the first quantity of bits. If a test sequence were broadcasted to the first and second partitions, then a quantity of “1s” equal to the first quantity (e.g., 120) may be shifted into the second MASK JTAG Register to capture the functional values. With continued reference to FIG. 2, the architecture 116 may include one or more SKUs, and some partitions or clusters on the JTAG daisy-chain path may be power gated. In that case, the chain could potentially get blocked and prevent access to the non-power-gated partitions or clusters. In other words, if a partition or cluster is power gated, then it could break the scan path going through it. As such, an aspect of the present disclosure uses the IEEE 1500 WS BYPASS path to reduce the likelihood that the chain will be blocked. More specifically, a small logic block is kept in the always-on domain for every partition and cluster such that when power is gated, WSC goes through the single-bit WS BYPASS path in that always-on domain. As such, even in the lower power mode, the IEEE 1500 chain remains intact. Furthermore, the JTAG test sequence may be configured to access one cluster or one partition at a time, while bypassing the other clusters or partitions through the single-bit bypass path that is on the non-gated power domain.
In other words, even though one cluster or one partition is accessed at a time, the JTAG test sequence may be configured to include instructions and data for all the clusters or partitions because the hardware must support all SKUs - but the on-chip 1ST controller 174 may load test sequences on a per-cluster or per-partition basis. For example, after a JTAG sequence for a partition or cluster is loaded into the IEEE 1500 bus 126, the finite state machine may transition from shiftDR to shiftIR to permit the next partition or cluster to be loaded separately. Even if a cluster or partition is rail-gated, the sequence targeting that cluster or partition may still be executed, and the real programming for that cluster may fail. However, this may not trigger a fault because the rail-gated cluster is not used functionally and does not need to be tested by the test system. Having described various aspects of the present disclosure with respect to FIGs. 1-4, various additional aspects will now be described. When describing these various additional aspects, reference may also be made back to FIGs. 1-4 for illustrative purposes. More specifically, another aspect of the present disclosure includes a test system for performing BIST. The test system may include a test-sequence register (e.g., JTAG register 158) including a plurality of bit subsets (e.g., DWORDS having 32 bits or bit subsets having another quantity of bits) and forming a path (e.g., serial or daisy-chain) configured to receive a serial input of a test sequence. In addition, the test system may include an instruction register (e.g., 156) to control transmission of the serial input into the test-sequence register. The test system may also include a control register (e.g., 160) to program the instruction register by receiving a first set of data (e.g., content of CTRL register identified in Table 1 above) including a first subset identifier (e.g., DWORD id lsb) of a bit subset in the plurality of bit subsets.
The test system may include a configuration register (e.g., 162) to program the instruction register by receiving and communicating a second set of data including a second subset identifier (e.g., DWORD id msb) of the bit subset. Furthermore, the test system may include a data register (e.g., 166) to receive the test sequence (e.g., DATA instruction in BIST test sequence) to be written into the serial input. The test system may include other additional or alternative components. For example, the test system may also include another register (e.g., 164) to program the instruction register (e.g., 156) by receiving a skip-bit instruction that specifies one or more bits of the bit subset to retain a predefined reset value. In other aspects of the test system, a burst_mode instruction may be provided to the CTRL register and/or the CFG register, and the burst_mode instruction may program the instruction register to loop bits back into the path (e.g., WSC path) as a serial input (e.g., where the bits are in the test-sequence register (e.g., 158) when the test sequence is transferred into the path and are not in the bit subset). In another aspect of the test system, an auto-increment instruction may be provided to the CTRL register and/or the CFG register, and the auto-increment instruction may program the instruction register to control a serial input of another bit subset in a same manner as the bit subset (e.g., where the other bit subset follows consecutively after the bit subset in the test sequence). In other aspects of the test system, the test-sequence register (e.g., JTAG register) may be configured to serially output test results that are shifted into the data register (e.g., 166) to be read.
In still another aspect, the test system may further include an access-control application (e.g., access control 125) configured to control access to the test-sequence register (e.g., JTAG register) between the data register (e.g., 166) and external automatic test equipment (e.g., 128). In some embodiments, the test-sequence register may be disposed on a chip (e.g., 112), and the test system may further include a test-sequence-retrieval application (e.g., on-chip 1ST controller 174) on the chip and configured to retrieve the test sequence (e.g., 172) from on-chip memory (e.g., 170). In addition, the test system may include a test-sequence-retrieval application (e.g., off-chip 1ST controller 106) maintained off the chip and configured to retrieve the test sequence (e.g., 108) from off-chip external memory (e.g., 104) and load the test sequence into on-chip memory (e.g., 170). In the test system, the test sequence may include a functional-value-restore instruction that, when run in the JTAG register, captures functional values of a clock, a power source, and I/O prior to running LBIST. For example, during a captureDR operation of a finite state machine (FSM), which controls access to the path (e.g., WSC path), the functional values may be captured (e.g., 214 in FIG. 4) in a data chain in a VALS JTAG register (e.g., 210). In addition, during a shiftDR stage of the FSM, the data chain in the VALS JTAG register may be shifted (e.g., 216 in FIG. 4). Furthermore, during an updateDR operation, the functional values may be captured in an update latch of the VALS JTAG register (e.g., 218). After the VALS JTAG register is programmed, all bits in a MASK JTAG register may be set to a common value (e.g., “1”). In the test system, the test-sequence register may be a first test-sequence register.
The system may also include a first chip partition including the first test-sequence register and a second chip partition daisy chained with the first chip partition and including a second test-sequence register (e.g., second JTAG register). The first test-sequence register and the second test-sequence register may both be configured to receive a broadcast input of the test sequence for running logic built-in self-test (LBIST), such as where the first and second test-sequence registers are duplicates. The first chip partition and the second chip partition may each include a respective MASK JTAG register (e.g., 212) having bits that are all changed to a common value prior to running the LBIST when functional values of clocks, power, and I/O states are captured. When a first MASK JTAG register of the first partition has a first quantity of bits that is larger than a second quantity of bits of a second MASK JTAG register of the second partition, then the broadcast input provided to both the first chip partition and the second chip partition may include a quantity of common values equal to the first quantity. Now referring to FIGs. 5 and 6, each block of methods 500 and 600, described herein, comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The methods 500 and 600 may also be embodied as computer-usable instructions stored on computer storage media. The methods 500 and 600 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, methods 500 and 600 are described, by way of example, with respect to the system 100 of FIGs. 1-3.
However, these methods may additionally or alternatively be executed by any one system, or any combination of systems, including, but not limited to, those described herein. Now referring to FIG. 5, FIG. 5 is a flow diagram showing a method 500 for running BIST, in accordance with some embodiments of the present disclosure. The method 500, at block B502, includes retrieving, from memory on a chip, a JTAG sequence to perform a built-in self-test (BIST) on the chip when a test portion of the JTAG sequence is input into a data register (e.g., JTAG register) having a first quantity of bits. The method 500, at block B504, includes loading an instruction portion of the JTAG sequence into one or more first instruction registers (e.g., 160, 162, etc.). The first instruction registers may program a second instruction register (e.g., 156) controlling the data register. An example of a JTAG sequence is the BIST test sequence A 172, which may be retrieved by the on-chip 1ST controller and loaded into the registers of the 1ST module 122. The JTAG sequence may include a test-data portion for performing the BIST when the test-data portion is translated into the IEEE 1500 bus 126 and input into the JTAG data register (e.g., 158). The JTAG data register includes a first quantity of bits. In addition, the BIST test sequence A 172 also includes an instruction portion that is loaded into the CTRL register 160 and the CFG register 162 for programming a JTAG instruction register (e.g., 156), which may control the JTAG data register 158. The method 500, at block B506, includes loading the test portion of the JTAG sequence into a data shift register (e.g., 166) on the chip to shift the test portion into a serial data bus to be transmitted to the data register. The test portion of the JTAG sequence may include a second quantity of bits (e.g., DWORD), which may be less than the first quantity of bits (e.g., since the DWORD is a bit subset of the JTAG chain).
In this respect, only a portion of the JTAG chain may be written, thereby reducing runtime and memory utilization. Referring now to FIG. 6, FIG. 6 includes another flow diagram showing a method 600 for running BIST, in accordance with some embodiments of the present disclosure. The method 600, at block B602, includes, prior to running a logic built-in self-test (LBIST) on a chip, copying functional values of at least one of a clock, power source, or I/O states to a first shift register (e.g., a VALS JTAG register 210). For example, during a captureDR operation, functional values of a clock, power source, and I/O state(s) may be copied to a shift register. The captureDR state may correspond to a state of a finite state machine (FSM), and the FSM may include a first shiftDR state after the captureDR state. For example, in FIG. 4, reference numeral 214 may represent functional values being copied into the VALS JTAG register 210. The method 600, at block B604, includes capturing the functional values in update latches of the first shift register. For example, the functional values may be captured in update latches of the VALS JTAG shift register 210 during a first updateDR operation (e.g., 218 in FIG. 4). The method 600, at block B606, includes, in a second shift register (e.g., MASK JTAG register), shifting all bits to a common value. For example, during a second shiftDR operation (e.g., arrow 220), the respective value of all the bits in the MASK JTAG register 212 may be shifted to a same value that controls an operation of the VALS JTAG register 210. For example, by shifting all values to “1” in the MASK JTAG register 212, the VALS JTAG register 210 can be programmed with override data. The method 600, at block B608, includes capturing the common value in an update latch of the second shift register. For example, each of the common values is captured in an update latch of the MASK JTAG register 212 during a second updateDR operation (e.g., arrow 222 in FIG.
4), where shifting all bits to a common value may enable the functional values to be overridden with override data shifted into the VALS JTAG register 210. The disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules including routines, programs, objects, components, data structures, etc., refer to code that perform particular tasks or implement particular abstract data types. The disclosure may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure.
Rather, the inventors have contemplated that the claimed subject matter may also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
The present disclosure includes memory blocks erasable in a single level cell mode. A number of embodiments include a memory comprising a plurality of mixed mode blocks and a controller. The controller may be configured to identify a particular mixed mode block for an erase operation and, responsive to a determined intent to subsequently write the particular mixed mode block in a single level cell (SLC) mode, perform the erase operation in the SLC mode. |
What is Claimed is:

1. An apparatus, comprising:
a memory comprising a plurality of mixed mode blocks; and
a controller configured to:
identify a particular mixed mode block for an erase operation; and
responsive to a determined intent to subsequently write the particular mixed mode block in a single level cell (SLC) mode, perform the erase operation in the SLC mode.

2. The apparatus of claim 1, wherein the memory further comprises a plurality of SLC reserve blocks, and wherein the controller is configured to operate the plurality of mixed mode blocks in the SLC mode and at least one of a number of extra level cell (XLC) modes, and write data in the SLC mode to the plurality of SLC reserve blocks using a same trim set used to write data in the SLC mode to the plurality of mixed mode blocks.

3. The apparatus of claim 1, wherein the particular mixed mode block is identified from among a group of free blocks, and wherein the controller is configured to identify which free blocks are in an SLC erased state, which free blocks are in an extra level cell (XLC) erased state, and which free blocks are awaiting erasure.

4. The apparatus of any of claims 1-3, wherein the controller is configured to:
maintain an erase count for the plurality of mixed mode blocks; and
adjust the erase count by different amounts depending on whether a respective block is erased in the SLC mode or in an extra level cell (XLC) mode.

5. The apparatus of any one of claims 1-3, wherein the controller is configured to:
identify the particular mixed mode block for the erase operation in association with a background garbage collection operation; and
perform a foreground garbage collection operation only when an amount of free blocks has reduced to a threshold level.

6. 
An apparatus, comprising:
a memory comprising a plurality of mixed mode blocks, wherein the plurality of mixed mode blocks include a group of free blocks, and wherein the group of free blocks includes:
a first portion of blocks allocated as single level cell (SLC) erased blocks;
a second portion of blocks allocated as extra level cell (XLC) erased blocks; and
a third portion of blocks allocated as ready to be erased blocks.

7. The apparatus of claim 6, wherein:
the SLC erased blocks comprise blocks that have been erased in the SLC mode;
the XLC erased blocks comprise blocks that have been erased in an XLC mode; and
the ready to be erased blocks comprise blocks that have not been erased.

8. The apparatus of claim 6, further comprising a controller configured to:
increment an erase counter by a first amount in response to a first respective mixed mode block being erased in an SLC mode; and
increment the erase counter by a second amount in response to a second respective mixed mode block being erased in an XLC mode.

9. The apparatus of claim 6, further comprising a controller configured to:
maintain the second portion of blocks such that a number of blocks associated with the second portion of blocks is equal to or less than a refresh block threshold limit; and
maintain the third portion of blocks such that a number of blocks associated with the third portion of blocks is less than a number of blocks associated with the second portion of blocks.

10. The apparatus of any of claims 6-9, wherein the third portion of blocks comprises blocks that do not include valid host data.

11. The apparatus of any of claims 6-9, further comprising a controller configured to:
erase the third portion of blocks; and
write data to the third portion of blocks in either the SLC mode or the XLC mode.

12. 
The apparatus of any of claims 6-9, further comprising a controller configured to:
determine that a first block among the third portion of blocks has a lower erase count than a second block among the third portion of blocks;
erase the first block among the third portion of blocks in the SLC mode; and
add the erased first block to the first portion of blocks.

13. An apparatus, comprising:
an array of memory cells comprising a plurality of mixed mode blocks; and
a controller configured to:
determine a respective erase count for mixed mode blocks among the plurality of mixed mode blocks;
allocate a first mixed mode block among the plurality of mixed mode blocks to a first pool of mixed mode blocks based on the respective erase count for the first mixed mode block;
allocate a second mixed mode block among the plurality of mixed mode blocks to a second pool of mixed mode blocks based on the respective erase count for the second mixed mode block; and
allocate a third mixed mode block among the plurality of mixed mode blocks to a third pool of mixed mode blocks based on the respective erase count for the third mixed mode block.

14. The apparatus of claim 13, wherein the first mixed mode block has a lower respective erase count than the respective erase count of the second mixed mode block and the respective erase count of the third mixed mode block.

15. The apparatus of claim 13, wherein the controller is configured to:
erase blocks in the first pool of mixed mode blocks in a single level cell (SLC) mode;
erase blocks in the second pool of mixed mode blocks in an extra level cell (XLC) mode; and
store mixed mode blocks that are ready to be erased in the third pool of mixed mode blocks.

16. The apparatus of claim 15, wherein the controller is further configured to prioritize writing data to blocks that have been erased in the XLC mode.

17. 
A method of operating a memory device, comprising:
allocating a first portion of mixed mode blocks associated with the memory device to be erased in a single level cell (SLC) mode;
allocating a second portion of mixed mode blocks associated with the memory device to be erased in an extra level cell (XLC) mode; and
allocating a third portion of mixed mode blocks associated with the memory device such that the third portion comprises blocks that are ready to be erased.

18. The method of claim 17, further comprising:
erasing blocks in the first portion of mixed mode blocks in the SLC mode; and
writing data to blocks in the first portion of mixed mode blocks in the SLC mode.

19. The method of claim 17, further comprising:
erasing at least one block in the second portion of mixed mode blocks in the SLC mode; and
writing data to the at least one block in the second portion of mixed mode blocks in the SLC mode.

20. The method of any of claims 17-19, further comprising maintaining a number of blocks in the third portion such that the number of blocks in the third portion is less than a number of blocks in the second portion.

21. The method of any of claims 17-19, further comprising:
performing, during idle time of the memory device, garbage collection operations on mixed mode blocks associated with the memory device;
adding a first subset of the garbage collected blocks having an erase count above a threshold erase count to the third portion of mixed mode blocks; and
adding a second subset of the garbage collected blocks having an erase count lower than the threshold erase count to the first portion of mixed mode blocks.

22. 
The method of any of claims 17-19, further comprising:
determining that a first mixed mode block associated with the memory device has a lower erase count than a second mixed mode block associated with the memory device; and
adding the first mixed mode block to the first portion of mixed mode blocks in response to the determination that the first mixed mode block has the lower erase count than the second mixed mode block. |
MEMORY MANAGEMENT

Technical Field

[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to memory management.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can retain stored data when not powered and can include NAND flash memory, NOR flash memory, phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.

[0003] Memory devices can be combined together to form a solid state drive (SSD). An SSD can include non-volatile memory (e.g., NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SRAM), among various other types of non-volatile and volatile memory. Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance, and may be utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.

[0004] An SSD can be used to replace hard disk drives as the main storage volume for a computer, as the solid state drive can have advantages over hard drives in terms of performance, size, weight, ruggedness, operating temperature range, and power consumption. 
For example, SSDs can have superior performance when compared to magnetic disk drives due to their lack of moving parts, which may avoid seek time, latency, and other electromechanical delays associated with magnetic disk drives.

[0005] Memory cells in an array architecture can be programmed to a target (e.g., desired) state. For instance, electric charge can be placed on or removed from the charge storage structure (e.g., floating gate) of a memory cell to program the cell to a particular data state. The stored charge on the charge storage structure of the memory cell can indicate a threshold voltage (Vt) of the cell, and the state of the cell can be determined by sensing the stored charge (e.g., the Vt) of the cell.

[0006] For example, a single level cell (SLC) can be programmed to a targeted one of two different data states, which can be represented by the binary units 1 or 0. Some flash memory cells can be programmed to a targeted one of more than two data states (e.g., 1111, 0111, 0011, 1011, 1001, 0001, 0101, 1101, 1100, 0100, 0000, 1000, 1010, 0010, 0110, and 1110) such that they represent more than one digit (e.g., more than one bit). Cells configured for programming to more than two data states may be referred to as extra level cells (XLC). For example, multi-level cells (MLCs), triple level cells (TLCs), and/or quad-level cells (QLCs) may be referred to generally herein as XLCs. XLCs can provide higher density memories for a given number of memory cells; however, XLCs may have a lower endurance and/or data retention capability as compared to SLCs. 
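The relationship between bits per cell and the number of distinguishable data states described in paragraph [0006] can be illustrated with a short Python sketch (the mode table and function names are illustrative, not part of the claimed apparatus): a cell storing n bits must distinguish 2^n threshold-voltage states.

```python
# Bits stored per cell in each mode; the number of distinguishable
# data states for a cell storing n bits is 2**n.
CELL_MODES = {
    "SLC": 1,  # single level cell: 1 bit, 2 states
    "MLC": 2,  # multi-level cell: 2 bits, 4 states
    "TLC": 3,  # triple level cell: 3 bits, 8 states
    "QLC": 4,  # quad-level cell: 4 bits, 16 states
}

def data_states(mode: str) -> int:
    """Return the number of data states a cell must distinguish in this mode."""
    return 2 ** CELL_MODES[mode]
```

This is why the sixteen four-bit patterns listed above (1111 through 1110) correspond exactly to the states of a QLC, and why packing more states into the same voltage window tends to reduce endurance and retention margins.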
For example, an expected useful life of SLCs may be 50,000 to 100,000 cycles (e.g., program-erase cycles), while an expected useful life of XLCs may be 1,000 to 5,000 cycles.

Brief Description of the Drawings

[0007] Figure 1 illustrates a diagram of a portion of a memory array having a number of physical blocks in accordance with a number of embodiments of the present disclosure.

[0008] Figure 2 is a functional block diagram of an apparatus in the form of a computing system comprising a memory system in accordance with a number of embodiments of the present disclosure.

[0009] Figure 3 illustrates a diagram of a controller in accordance with a number of embodiments of the present disclosure.

[0010] Figure 4 illustrates a diagram of a memory having various portions in accordance with a number of embodiments of the present disclosure.

[0011] Figure 5 illustrates an example flow diagram for memory management in accordance with a number of embodiments of the present disclosure.

Detailed Description

[0012] Apparatuses and methods for memory management are provided. In one or more embodiments of the present disclosure, an apparatus for memory management may include a memory comprising a plurality of mixed mode blocks and a controller. The controller may be configured to identify a particular mixed mode block for an erase operation and, responsive to a determined intent to subsequently write the particular mixed mode block in a single level cell (SLC) mode, perform the erase operation in the SLC mode. As used herein, "mixed mode blocks" are blocks (e.g., memory blocks, memory cells, etc.) that can be operated in either an SLC mode or an XLC mode. The endurance and/or wear ratio of the mixed mode blocks may be affected by which mode the mixed mode block is operated in. 
For example, mixed mode blocks may have a higher performance and/or a higher endurance when operated in an SLC mode as opposed to when they are operated in an XLC mode.

[0013] As used herein, "wear ratio" refers to the number of SLC writes of a mixed mode block in SLC mode that results in the same cell degradation caused by the number of writes in XLC mode. For example, for a wear ratio of 2, two write cycles in the SLC mode would result in a same amount of cell degradation as one write cycle in an XLC mode. In some embodiments, the life of a mixed mode block may be measured in terms of XLC program (e.g., write) cycles and/or erase cycles. In examples where the cell degradation is the same for an SLC mode write as it is for an XLC mode write, writing data to a cell in the SLC mode may have a same endurance cost as writing data to a cell in an XLC mode. For a wear ratio greater than 1, multiple SLC writes may be performed to a memory block with an equivalent endurance cost of one XLC mode writing operation.

[0014] Memory management (e.g., managing high SLC endurance and lesser SLC wear requirements on multi-cursor dynamic SLC cache architecture) in accordance with the present disclosure can increase the performance (e.g., increase the speed) and/or increase the endurance (e.g., increase the lifetime) of the memory, among other benefits. Further, memory management schemes in accordance with the present disclosure can reduce the endurance requirement and/or erase latencies associated with XLC modes of operation. Further, memory management schemes in accordance with the present disclosure can improve the write performance of a memory and/or increase the total bytes written (TBW) of the memory. 
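The wear-ratio arithmetic of paragraph [0013] can be sketched as a simple accounting helper (a hypothetical illustration of the definition, not the controller's actual firmware): with a wear ratio R, each SLC write costs 1/R of an XLC-equivalent program cycle.

```python
def xlc_equivalent_cycles(slc_writes: int, xlc_writes: int, wear_ratio: float) -> float:
    """Express total wear in XLC-equivalent program cycles.

    With a wear ratio of 2, two SLC writes degrade the cell as much as
    one XLC write, so each SLC write counts as 1/wear_ratio of a cycle.
    """
    return slc_writes / wear_ratio + xlc_writes
```

For example, with a wear ratio of 2, four SLC writes plus one XLC write amount to three XLC-equivalent cycles, which is the sense in which block life "measured in terms of XLC program cycles" accommodates many cheap SLC writes.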
As used herein, "endurance" refers to a measure of the useful life of memory (e.g., the number of program operations and/or erase cycles that a memory block can experience without experiencing data retention and/or read issues).

[0015] Memory management schemes in accordance with the present disclosure can increase the performance and/or endurance of the memory as compared with previous memory management approaches. For example, a number of previous memory block management approaches may utilize only a single SLC low endurance write trim, and may be inadequate in a multi-cursor architecture such as a multi-cursor dynamic SLC cache. As used herein, a "dynamic SLC cache" refers to a cache that can be dynamically resized using mixed mode blocks that constitute the bulk of the advertised user size of the drive/card. For example, a size of (e.g., number of blocks associated with) a dynamic SLC cache may be changed on the fly to accommodate various demands on the memory.

[0016] In some embodiments, various trims may also be used to alter the endurance and/or wear ratio of SLCs and/or XLCs. For example, a trim that yields a high endurance may be used for SLCs in some situations, and a trim that yields high wear ratios may be used for SLCs in some other situations. However, alternating between multiple trims may not be desirable, because the frequency at which firmware may toggle between system tables and user blocks may make alternating between multiple trims unstable and/or affect write speeds. In some embodiments, a trim that strikes a balance between a high endurance and a high wear ratio may be used.

[0017] As an additional example, a number of previous approaches may employ one SLC write trim for high endurance and a second SLC write trim for wear ratio; however, this may be inadequate and/or inefficient due to the frequency with which firmware toggles between system tables and user blocks. 
In contrast to a single cursor architecture, where garbage collection operations, host data writes, and system table writes are performed on a single open block (e.g., a single cursor), multi-cursor refers to an architecture where different open memory blocks (e.g., cursors) may have different operations performed thereon. For example, in a multi-cursor architecture, a first open memory block may be used for host data writes, a second open block may be used for folding and/or garbage collection operations, a third open block may be used for system table writes, etc.

[0018] In contrast to some prior approaches, embodiments of the present disclosure may provide for reduced mixed mode block endurance requirements and/or erase latencies with a single SLC trim set. For example, in a multi-cursor memory architecture, system tables may be associated with a dedicated cursor, which may alleviate some of the challenges associated with some previous approaches described above. In addition, in some embodiments, this reduced endurance requirement may allow for improvement to the write performance and/or an increase in the total bytes written (TBW) for the memory. In some embodiments, memory blocks in the memory may be allocated into different groups or pools, with each group or pool corresponding to particular types of memory blocks. For example, one group may contain memory blocks that have been erased in SLC mode, another group may contain memory blocks that have been erased in an XLC mode, and another group may contain memory blocks that are ready to be erased.

[0019] In some embodiments, a multi-cursor architecture may allow for system tables to be assigned to a dedicated cursor that comprises SLC reserved memory blocks. This may reduce an endurance burden associated with assigning system tables to user data memory blocks, and may increase the TBW of the memory device.

[0020] As used herein, "a number of" something can refer to one or more such things. 
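The grouping of free blocks into pools described in paragraph [0018] can be sketched as follows (the `Block` record and pool names are illustrative assumptions; the disclosure does not prescribe a data structure):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Block:
    block_id: int
    erased_mode: Optional[str]  # "SLC", "XLC", or None if awaiting erasure

def pool_free_blocks(free_blocks: List[Block]) -> dict:
    """Partition free blocks into SLC-erased, XLC-erased, and ready-to-erase pools."""
    pools = {"slc_erased": [], "xlc_erased": [], "ready_to_erase": []}
    for blk in free_blocks:
        if blk.erased_mode == "SLC":
            pools["slc_erased"].append(blk)
        elif blk.erased_mode == "XLC":
            pools["xlc_erased"].append(blk)
        else:
            pools["ready_to_erase"].append(blk)
    return pools
```

Keeping the erased state explicit per block is what lets the controller later pick an already-SLC-erased block for an SLC write without paying an erase at write time.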
For example, a number of memory cells can refer to one or more memory cells. Additionally, the designators "N", "B", "R", and "S", as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure.

[0021] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 208 may reference element "08" in Figure 2, and a similar element may be referenced as 308 in Figure 3.

[0022] Figure 1 illustrates a diagram of a portion of a memory array 100 having a number of physical blocks in accordance with a number of embodiments of the present disclosure. Memory array 100 can be, for example, a NAND flash memory array. However, embodiments of the present disclosure are not limited to a particular type of memory or memory array. For example, memory array 100 can be a DRAM array, an RRAM array, or a PCRAM array, among other types of memory arrays. Further, although not shown in Figure 1, memory array 100 can be located on a particular semiconductor die along with various peripheral circuitry associated with the operation thereof.

[0023] As shown in Figure 1, memory array 100 has a number of physical blocks 116-0 (BLOCK 0), 116-1 (BLOCK 1), . . ., 116-B (BLOCK B) of memory cells. The memory cells can be single level cells and/or extra level cells such as, for instance, triple level cells (TLCs) or quadruple level cells (QLCs). As used herein, the term extra level cell (XLC) may be used to refer generally to multilevel cells such as MLCs, TLCs, QLCs, etc. 
The number of physical blocks in memory array 100 may be 128 blocks, 512 blocks, or 1,024 blocks, but embodiments are not limited to a particular multiple of 128 or to any particular number of physical blocks in memory array 100. A first number of blocks 116-0, 116-1, . . ., 116-B can be allocated as a first portion or pool of memory blocks, a second number of blocks 116-0, 116-1, . . ., 116-B can be allocated as a second portion or pool of memory blocks, and/or a third number of blocks 116-0, 116-1, . . ., 116-B can be allocated as a third portion or pool of memory blocks.

[0024] A number of physical blocks of memory cells (e.g., blocks 116-0, 116-1, . . ., 116-B) can be included in a plane of memory cells, and a number of planes of memory cells can be included on a die. For instance, in the example shown in Figure 1, each physical block 116-0, 116-1, . . ., 116-B can be part of a single die. That is, the portion of memory array 100 illustrated in Figure 1 can be a die of memory cells.

[0025] As shown in Figure 1, each physical block 116-0, 116-1, . . ., 116-B contains a number of physical rows (e.g., 120-0, 120-1, . . ., 120-R) of memory cells coupled to access lines (e.g., word lines). The number of rows (e.g., word lines) in each physical block can be 32, but embodiments are not limited to a particular number of rows 120-0, 120-1, . . ., 120-R per physical block. Further, although not shown in Figure 1, the memory cells can be coupled to sense lines (e.g., data lines and/or digit lines).

[0026] Each row 120-0, 120-1, . . ., 120-R can include a number of pages of memory cells (e.g., physical pages). A physical page refers to a unit of programming and/or sensing (e.g., a number of memory cells that are programmed and/or sensed together as a functional group). In the embodiment shown in Figure 1, each row 120-0, 120-1, . . ., 120-R comprises one physical page of memory cells. However, embodiments of the present disclosure are not so limited. 
For instance, in a number of embodiments, each row can comprise multiple physical pages of memory cells (e.g., one or more even pages of memory cells coupled to even-numbered bit lines, and one or more odd pages of memory cells coupled to odd-numbered bit lines). Additionally, for embodiments including XLCs, a physical page of memory cells can store multiple pages (e.g., logical pages) of data, for example, an upper page of data and a lower page of data, with each cell in a physical page storing one or more bits towards an upper page of data and one or more bits towards a lower page of data.

[0027] A program operation (e.g., a write operation) can include applying a number of program pulses (e.g., 16V-20V) to a selected word line in order to increase the threshold voltage (Vt) of the selected cells coupled to that selected word line to a desired program voltage level corresponding to a target (e.g., desired) data state. A sense operation, such as a read or program verify operation, can include sensing a voltage and/or current change of a sense line coupled to a selected cell in order to determine the data state of the selected cell.

[0028] In a number of embodiments of the present disclosure, and as shown in Figure 1, a page of memory cells can comprise a number of physical sectors 122-0, 122-1, . . ., 122-S (e.g., subsets of memory cells). Each physical sector 122-0, 122-1, . . ., 122-S of cells can store a number of logical sectors of data (e.g., data words). Additionally, each logical sector of data can correspond to a portion of a particular page of data. As an example, a first logical sector of data stored in a particular physical sector can correspond to a logical sector corresponding to a first page of data, and a second logical sector of data stored in the particular physical sector can correspond to a second page of data. Each physical sector 122-0, 122-1, . . 
., 122-S, can store system and/or user data, and/or can include overhead data, such as error correction code (ECC) data, logical block address (LBA) data, and recurring error data.

[0029] Logical block addressing is a scheme that can be used by a host for identifying a logical sector of data. For example, each logical sector can correspond to a unique logical block address (LBA). Additionally, an LBA may also correspond to a physical address. A logical sector of data can be a number of bytes of data (e.g., 256 bytes, 512 bytes, or 1,024 bytes). However, embodiments are not limited to these examples.

[0030] It is noted that other configurations for the physical blocks 116-0, 116-1, . . ., 116-B, rows 120-0, 120-1, . . ., 120-R, sectors 122-0, 122-1, . . ., 122-S, and pages are possible. For example, rows 120-0, 120-1, . . ., 120-R of physical blocks 116-0, 116-1, . . ., 116-B can each store data corresponding to a single logical sector which can include, for example, more or less than 512 bytes of data.

[0031] Figure 2 is a functional block diagram of an apparatus in the form of a computing system 201 comprising a memory system 204 in accordance with a number of embodiments of the present disclosure. As used herein, an "apparatus" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example.

[0032] Memory system 204 can be, for example, a solid state drive (SSD). In the embodiment illustrated in Figure 2, memory system 204 includes a host interface 206, a memory (e.g., a number of memory devices 210-1, 210-2, . . ., 210-N) (e.g., solid state memory devices), and a controller 208 (e.g., an SSD controller) coupled to physical host interface 206 and memory devices 210-1, 210-2, . . 
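The logical block addressing scheme of paragraph [0029] can be sketched as a minimal logical-to-physical table (a hypothetical illustration; a real flash translation layer is considerably more elaborate, with out-of-place updates, journaling, and wear awareness):

```python
SECTOR_SIZE = 512  # bytes per logical sector; 256 or 1,024 bytes are also common

class L2PTable:
    """Minimal logical-to-physical mapping kept by a flash translation layer."""

    def __init__(self):
        self._map = {}

    def write(self, lba: int, physical_addr: int) -> None:
        # Flash is written out-of-place, so every write remaps the LBA
        # to the new physical location.
        self._map[lba] = physical_addr

    def lookup(self, lba: int) -> int:
        return self._map[lba]

def byte_offset(lba: int) -> int:
    """Host-visible byte offset of a logical sector."""
    return lba * SECTOR_SIZE
```

The host addresses only LBAs; the table is what lets the same LBA land in different physical sectors over the drive's life.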
., 210-N.

[0033] Memory devices 210-1, 210-2, . . ., 210-N can include, for example, a number of non-volatile memory arrays (e.g., arrays of non-volatile memory cells). For instance, memory devices 210-1, 210-2, . . ., 210-N can include a number of memory arrays analogous to memory array 100 previously described in connection with Figure 1.

[0034] In some embodiments, the memory devices 210-1, . . ., 210-N can include a number of arrays of memory cells (e.g., non-volatile memory cells). The arrays can be flash arrays with a NAND architecture, for example. However, embodiments are not limited to a particular type of memory array or array architecture. As described above in connection with Figure 1, the memory cells can be grouped, for instance, into a number of blocks including a number of physical pages of memory cells. In a number of embodiments, a block refers to a group of memory cells that are erased together as a unit. A number of blocks can be included in a plane of memory cells and an array can include a number of planes. As one example, a memory device may be configured to store 8 KB (kilobytes) of user data per page, 128 pages of user data per block, 2048 blocks per plane, and 16 planes per device.

[0035] In operation, data can be written to and/or read from a memory device of a memory system (e.g., memory devices 210-1, . . ., 210-N of memory system 204) as a page of data, for example. As such, a page of data can be referred to as a data transfer size of the memory system. Data can be transferred to/from a host (e.g., host 202) in data segments referred to as sectors (e.g., host sectors). As such, a sector of data can be referred to as a data transfer size of the host. In some embodiments, NAND blocks may be referred to as erase blocks, with blocks being a unit of erasure and pages being a measure of reads and/or writes.

[0036] Host interface 206 can be used to communicate information between memory system 204 and another device such as a host 202. 
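The example geometry in paragraph [0034] implies a per-device user capacity that can be checked directly (the figures are the example's own, not a product specification; the function name is illustrative):

```python
KB = 1024

def device_user_bytes(bytes_per_page: int, pages_per_block: int,
                      blocks_per_plane: int, planes: int) -> int:
    """Total user-data capacity implied by a page/block/plane geometry."""
    return bytes_per_page * pages_per_block * blocks_per_plane * planes

# 8 KB/page x 128 pages/block x 2048 blocks/plane x 16 planes
capacity = device_user_bytes(8 * KB, 128, 2048, 16)
```

With these numbers each block holds 1 MiB of user data, each plane 2 GiB, and the device 32 GiB in total.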
Host 202 can include a memory access device (e.g., a processor). As used herein, "a processor" can intend a number of processors, such as a parallel processing system, a number of coprocessors, etc. Example hosts can include personal laptop computers, desktop computers, digital cameras, digital recording and playback devices, mobile (e.g., smart) phones, PDAs, memory card readers, interface hubs, and the like.

[0037] Host interface 206 can be in the form of a standardized physical interface. For example, when memory system 204 is used for information storage in computing system 201, host interface 206 can be a serial advanced technology attachment (SATA) physical interface, a peripheral component interconnect express (PCIe) physical interface, or a universal serial bus (USB) physical interface, among other physical connectors and/or interfaces. In general, however, host interface 206 can provide an interface for passing control, address, information (e.g., data), and other signals between memory system 204 and a host (e.g., host 202) having compatible receptors for host interface 206.

[0038] Controller 208 can include, for example, control circuitry and/or logic (e.g., hardware and firmware). Controller 208 can be included on the same physical device (e.g., the same die) as memories 210-1, 210-2, . . ., 210-N. For example, controller 208 can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including physical host interface 206 and memories 210-1, 210-2, . . ., 210-N. Alternatively, controller 208 can be included on a separate physical device that is communicatively coupled to the physical device that includes memories 210-1, 210-2, . . ., 210-N. 
In a number of embodiments, components of controller 208 can be spread across multiple physical devices (e.g., some components on the same die as the memory, and some components on a different die, module, or board) as a distributed controller.

[0039] Controller 208 can communicate with memory devices 210-1, 210-2, . . ., 210-N to sense (e.g., read), program (e.g., write), and/or erase information, among other operations. Controller 208 can have circuitry that may be a number of integrated circuits and/or discrete components. In a number of embodiments, the circuitry in controller 208 may include control circuitry for controlling access across memory devices 210-1, 210-2, . . ., 210-N and/or circuitry for providing a translation layer (e.g., a flash translation layer) between host 202 and memory system 204.

[0040] Controller 208 can control operation of a dedicated region, such as a block addressing portion, of each respective memory device 210-1, 210-2, . . ., 210-N as (e.g., configure a portion of each respective memory device 210-1, 210-2, . . ., 210-N to operate as) a static (e.g., dedicated) single level cell (SLC) cache and/or a dynamic SLC cache. For example, a portion of each respective memory device 210-1, 210-2, . . ., 210-N can be configured to operate as a static cache in SLC mode and/or a dynamic cache in SLC mode. This portion of each respective memory device 210-1, 210-2, . . ., 210-N can be, for example, a first plurality of blocks (e.g., physical blocks) of memory cells in each respective memory, as will be further described herein (e.g., in connection with Figure 3), and may be referred to herein as a first portion of the memory. In addition, portions of each respective memory device 210-1, 210-2, . . 
., 210-N can include a second plurality of blocks, a third plurality of blocks, etc.

[0041] To ensure the highest possible endurance is available for portions of the memory that are written and/or will be written in SLC mode, portions of the memory may be erased in the SLC mode, as SLC erase operations (e.g., erase operations performed in SLC mode) are less destructive than XLC (e.g., TLC, QLC, etc.) erase operations. For example, in a number of embodiments, the memory cells of the first portion (e.g., the memory cells of the first plurality of blocks) can be erased in SLC mode, and in a number of embodiments, the memory cells of the first portion can be written in SLC mode. In both such embodiments, controller 208 can perform erase operations, as well as program and sense operations, on the cells in SLC mode. In some embodiments, the first portion may be configured to achieve a highest possible endurance, and may be used to write system tables, for example. The portion of the memory allocated to system tables may be outside a portion of the memory that is allocated to user data (e.g., a user size).

[0042] As used herein, XLC memory (e.g., XLCs) can refer to memory (e.g., memory cells) that can be programmed to a targeted one of more than two data states (e.g., memory cells that can store more than a single bit of data). For example, XLC memory can refer to memory cells that store two bits of data per cell (e.g., MLCs), memory cells that store three bits of data per cell (e.g., TLCs), and/or memory cells that store four bits of data per cell (e.g., QLCs).

[0043] The second portion of each respective memory 210-1, 210-2, . . ., 210-N can be, for example, a second plurality of blocks (e.g., physical blocks) of memory cells in each respective memory, as will be further described herein (e.g., in connection with Figure 3). 
Controller 208 can perform erase operations, as well as program and sense operations, on the cells of the second portion in SLC or XLC mode.[0044] The size of the second portion of each respective memory 210-1, 210-2, . . ., 210-N can correspond to the quantity of memory cells used by that memory to program data stored in the SLCs of the memory to the XLCs of the memory (e.g., to fold the SLC data to the XLCs). In some embodiments, the first portion may include static blocks that are used for system tables (e.g., system tables outside the user size), and the second portion may include mixed mode user data blocks. The size of the second portion may be configured to support a first amount of user data size in an XLC mode, and the remaining amount of user data size in an SLC mode. In some embodiments, a mixed mode block may be interchangeable and may therefore be used in the SLC mode or the XLC mode.[0045] In some embodiments, the static SLC blocks are never programmed in XLC mode. For example, in some embodiments, SLC endurance of the static SLC blocks may be increased without regard to XLC wear ratio. Accordingly, mixed mode blocks may be used interchangeably in the SLC mode or the XLC mode. In some embodiments, when using a mixed mode block in the SLC mode, XLC wear ratio may be increased without regard to SLC endurance. In some embodiments, a high SLC endurance without regard to XLC wear ratio may be achieved for static SLC blocks, while a low SLC endurance combined with a high XLC wear ratio may be achieved for mixed mode blocks. The low SLC endurance combined with a high XLC wear ratio may be achieved for mixed mode blocks using a single SLC trim set.
In some embodiments, a mixed mode block erased in XLC mode can be used to program in SLC mode, and a mixed mode block erased in SLC mode may not be used to program in XLC mode.[0046] In some embodiments, the controller 208 may be configured to determine that a particular memory block among the plurality of memory blocks is to be written in a single level cell (SLC) mode, and erase data stored in the particular memory block in the SLC mode in response to the determination that the particular memory block is to be written in the SLC mode. The particular memory block may be a host memory block and/or may have been written in an XLC mode prior to the determination that the particular block is to be written in the SLC mode.[0047] In some embodiments, the controller 208 may be configured to increment an SLC erase counter for the particular memory block in response to the data stored in the particular block being erased in the SLC mode. In at least one embodiment, at least one memory block among the plurality of memory blocks may be erased during idle time of the apparatus 204. [0048] The controller 208 may be configured to write data to the particular memory block in the SLC mode after the data stored in the particular memory block is erased in the SLC mode. The controller 208 may be configured to determine a free block count for memory blocks among the plurality of memory blocks. In some embodiments, foreground garbage collection may be invoked in response to the free block count being reduced to below a threshold number of free blocks.[0049] The embodiment illustrated in Figure 2 can include additional circuitry, logic, and/or components not illustrated so as not to obscure embodiments of the present disclosure. For example, memory device 204 can include address circuitry to latch address signals provided over I/O connectors through I/O circuitry.
Address signals can be received and decoded by row decoders and column decoders to access memories 210-1, 210-2, . . ., 210-N.[0050] Figure 3 illustrates a diagram of a controller 308 in accordance with a number of embodiments of the present disclosure. The controller may be analogous to controller 208 illustrated in Figure 2, and may be coupled to a host interface and/or a plurality of memory devices, as illustrated in Figure 2, herein.[0051] The controller 308 may include a memory management component 340, which may comprise a wear leveling 342 component, a garbage collection 344 component, a mapping 346 component, and an erase block tracking 348 component. In some embodiments, the memory management 340 component may further include a trim 350 component, which may include an SLC write 352 component, an SLC erase 354 component, an XLC write 356 component, and an XLC erase 358 component.[0052] In some embodiments, the wear leveling 342 component may be configured to implement wear leveling on one or more blocks associated with the memory device(s) (e.g., memory device(s) 210-1, . . ., 210-N illustrated in Figure 2) to control the wear rate of such memory devices. Wear leveling may reduce the number of process cycles (e.g., program and/or erase cycles) performed on a particular group of blocks by spreading such cycles more evenly over an entire memory array and/or memory device. Wear leveling can include static wear leveling and/or dynamic wear leveling to minimize the amount of valid blocks moved to reclaim a block. For example, static wear leveling may include writing static data to blocks that have high program/erase counts to prolong the life of the block.
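The static wear leveling policy just described (placing long-lived data on blocks that already have high program/erase counts, so the less-worn blocks remain available for frequently rewritten data) can be sketched as follows. This is an illustrative sketch only; the function name and data representation are assumptions, not part of the disclosure.

```python
def pick_block_for_static_data(free_blocks):
    """Static wear leveling sketch: place long-lived (static) data on the
    most-worn free block, preserving low-wear blocks for hot data.

    free_blocks: dict mapping block id -> program/erase count.
    """
    if not free_blocks:
        raise ValueError("no free blocks available")
    # Choose the free block with the highest program/erase count.
    return max(free_blocks, key=free_blocks.get)

# Example: block 7 is the most worn, so static data is written there.
print(pick_block_for_static_data({3: 120, 7: 950, 9: 400}))
```

Dynamic wear leveling would instead steer frequently rewritten data toward the least-worn blocks; the same dictionary could be queried with `min` rather than `max`.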
In some embodiments, wear leveling may include garbage collection operations, which may be implemented by the garbage collection 344 component.[0053] Garbage collection may include reclaiming (e.g., erasing and making available for programming) blocks that have the most invalid pages among blocks in the memory device(s). In some embodiments, garbage collection may include reclaiming blocks with more than a threshold amount (e.g., quantity) of invalid pages. However, if sufficient free blocks exist for a programming operation, then a garbage collection operation may not occur. Garbage collection may generally be performed in the background (e.g., during idle time of the memory); however, in some embodiments, garbage collection may be performed in the foreground, for instance in response to a determination that an amount of free blocks has decreased below a threshold free block count.[0054] In some embodiments, the memory management 340 component may include a mapping 346 component that may be configured to control mapping of memory blocks in the memory device(s). For example, the mapping 346 component may be configured to map bad blocks that are discovered during wear leveling and/or garbage collection operations to blocks that may still accept valid data.[0055] In some embodiments, the controller 308 may be configured to control wear leveling utilizing information that may be determined by the erase block tracking 348 component. For example, the erase block tracking 348 component may be configured to increment a counter associated with each block in response to the block being written and/or erased. In some embodiments, the erase block tracking 348 component may be configured to increment the counter by a different value in response to the block being written or erased in an SLC mode than when the block is written and/or erased in an XLC mode.
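The erase block tracking behavior just described, incrementing a per-block counter by a different value for SLC operations than for XLC operations, might be sketched as below. The class name and the specific weights (1 for SLC, 4 for XLC) are illustrative assumptions; the disclosure only specifies that a "first value" and a "second value" differ.

```python
class EraseBlockTracker:
    """Sketch of the erase block tracking 348 component: per-block wear
    counters, with XLC cycles weighted more heavily than SLC cycles
    because XLC program/erase is more destructive."""

    SLC_WEIGHT = 1   # assumed "first value"
    XLC_WEIGHT = 4   # assumed "second value"

    def __init__(self):
        self.counters = {}  # block id -> accumulated wear value

    def record(self, block_id, mode):
        """Record a write/erase of block_id in 'SLC' or 'XLC' mode."""
        weight = self.SLC_WEIGHT if mode == "SLC" else self.XLC_WEIGHT
        self.counters[block_id] = self.counters.get(block_id, 0) + weight
        return self.counters[block_id]
```

With these assumed weights, one XLC cycle ages a block as much as four SLC cycles, which is one way to express the "wear ratio" discussed above.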
For example, the erase block tracking 348 component may be configured to increment the counter associated with a particular block by a first value in response to the particular block being written and/or erased in the SLC mode, and to increment the counter associated with the particular block by a second value in response to the particular block being written and/or erased in an XLC mode. [0056] The memory management 340 component may further include a trim 350 component. The trim 350 component may include an SLC write 352 component, an SLC erase 354 component, an XLC write 356 component, and/or an XLC erase 358 component. The SLC write 352 component, SLC erase 354 component, XLC write 356 component, and/or XLC erase 358 component may be configured to provide different trims to various blocks based on whether the block is to be (or has been) written and/or erased in an SLC mode or in an XLC mode. In some embodiments, the SLC write 352 component may be used to write SLC data to SLC reserved blocks and to mixed mode blocks using a same SLC trim set.[0057] Figure 4 illustrates a diagram of a memory 410 in accordance with a number of embodiments of the present disclosure. In some embodiments the memory 410 or a portion of the memory 410 can serve as a dynamic SLC cache. Memory 410 can be analogous to memory devices 210-1, 210-2, . . ., 210-N previously described in connection with Figure 2, or may be a portion of memory devices 210-1, 210-2, . . ., 210-N previously described in connection with Figure 2. In some embodiments, memory 410 can include a number of memory arrays analogous to memory array 100 previously described in connection with Figure 1.[0058] As shown in Figure 4, memory 410 can include a first portion 430-1, a second portion 430-2, and a third portion 430-3.
Each respective portion 430-1, 430-2, 430-3 can include a number of blocks (e.g., physical blocks) of memory cells (e.g., portion 430-1 can include a first number of blocks, portion 430-2 can include a second number of blocks, and portion 430-3 can include a third number of blocks). For instance, in the example illustrated in Figure 4, portion 430-1 can include Block_0 through Block_X-1 of memory 410, portion 430-2 can include Block_X through Block_Y-1 of memory 410, and portion 430-3 can include Block_Y through Block_Max of memory 410.[0059] As shown in Figure 4, at least a portion (e.g., portion 430-1) can be smaller (e.g., include fewer blocks of memory cells) than portions 430-2 and 430-3. However, embodiments of the present disclosure are not limited to a particular size for (e.g., number of blocks in) portions 430-1, 430-2, and 430-3. For example, the portions 430-1, 430-2, and 430-3 may be the same size (e.g., may comprise a same number of memory blocks), portion 430-2 may be smaller than portions 430-1 and 430-3, and/or portion 430-3 may be smaller than portions 430-1 and 430-2. Further, although portions 430-1, 430-2, and 430-3 are illustrated as contiguous areas (e.g., as comprising contiguous blocks of memory cells) in Figure 4, embodiments of the present disclosure are not so limited (e.g., portions 430-1, 430-2, and/or 430-3 may comprise non-contiguous blocks of memory cells).[0060] In some embodiments, each portion 430-1, 430-2, 430-3 can represent a set or pool of memory blocks. For example, first portion 430-1 may represent a first set or pool of memory blocks, second portion 430-2 may represent a second set or pool of memory blocks, and third portion 430-3 may represent a third set or pool of memory blocks.
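The contiguous layout of Figure 4 (Block_0 through Block_X-1, Block_X through Block_Y-1, and Block_Y through Block_Max) amounts to a simple partition of the block address space. The following sketch expresses it; X, Y, and Block_Max are configuration values, and the concrete numbers in the usage line are hypothetical.

```python
def partition_blocks(x, y, block_max):
    """Split the block address space into the three portions of Figure 4.

    Returns (portion_430_1, portion_430_2, portion_430_3) as ranges:
      portion 430-1: Block_0 .. Block_X-1
      portion 430-2: Block_X .. Block_Y-1
      portion 430-3: Block_Y .. Block_Max (inclusive)
    """
    assert 0 < x < y <= block_max
    return range(0, x), range(x, y), range(y, block_max + 1)

# Hypothetical example: with X=4, Y=10, Block_Max=15,
# portion 430-3 holds blocks 10 through 15.
p1, p2, p3 = partition_blocks(4, 10, 15)
```

As the text notes, the portions need not be contiguous in practice; a non-contiguous embodiment would track explicit sets of block ids instead of ranges.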
In some embodiments, each set or pool of memory blocks may comprise memory cells with particular features and/or may comprise memory cells with particular types of data.[0061] For example, the first portion 430-1 of memory blocks may include memory cells that have been erased in an SLC mode. The second portion 430-2 of memory blocks may include memory cells that have been erased in an XLC mode, and the third portion 430-3 of memory blocks may include memory cells that are ready to be erased. For example, the third portion 430-3 of memory blocks may include memory cells that do not contain valid data (e.g., memory blocks that do not contain valid host data) that have not yet been erased, but are ready to be erased.[0062] In some embodiments, the size of the second set of memory blocks (e.g., the set of memory blocks that have been erased in an XLC mode) may be set to be equal to a refresh block count. As used herein, a "refresh block count" is a block count that is equal to a threshold number of free memory blocks. The refresh block count may be configurable and/or may be set by a user.[0063] In some embodiments, garbage collection operations may not be invoked unless the free block count is reduced to a number of blocks less than the refresh block count. As an example, the refresh block count may be set to five memory blocks. In this example, if there are five or more free memory blocks available, garbage collection operations will not be performed; however, if the number of free memory blocks is reduced to, for example, four memory blocks, garbage collection operations may be performed. [0064] In some embodiments, various modules may move data from source blocks to XLC target blocks as part of garbage collection operations. The modules may include read disturb, retention, static wear leveling, and/or read error handling modules. The source memory blocks that are freed during garbage collection operations may be added to the second portion 430-2 of memory blocks.
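The garbage collection trigger in paragraph [0063] reduces to a single comparison: collect only when the free block count drops below the refresh block count. A minimal sketch, using the example threshold of five blocks given above (the function name is an assumption):

```python
def should_garbage_collect(free_block_count, refresh_block_count=5):
    """Garbage collection is invoked only when fewer free blocks remain
    than the (configurable) refresh block count."""
    return free_block_count < refresh_block_count
```

Under the example in the text: with five or more free blocks this returns False and no collection occurs; once the count falls to four, it returns True and collection may proceed.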
In some embodiments, a number of remaining memory blocks that are freed as part of background garbage collection operations may be added to the first portion 430-1 of memory blocks. Background garbage collection operations may refer to garbage collection operations that are performed during idle time of the memory device. For example, memory blocks that are freed as part of garbage collection operations that are not added to the second portion 430-2 of memory blocks may be added to the first portion 430-1 of memory blocks.[0065] In some embodiments, the size of the third portion 430-3 of memory blocks (e.g., the set of memory blocks to be erased) may be set to a value that is less than the refresh block count. For example, the size of the third portion 430-3 of memory blocks may be maintained such that the number of memory blocks associated with the third portion 430-3 of memory blocks is less than the number of memory blocks associated with the second portion 430-2 of memory blocks.[0066] In some embodiments, the first portion 430-1 may include memory cells that are erased in an SLC mode. The second portion 430-2 may include memory cells that are erased in an XLC mode, and the third portion 430-3 may include memory cells that are ready to be erased. In some embodiments, the first portion 430-1, the second portion 430-2, and/or the third portion 430-3 may include background cache memory blocks.[0067] The memory 410 may be coupled to a controller 408. The controller 408 may be analogous to controller 208 illustrated and described in connection with Figure 2, and controller 308 illustrated and described in connection with Figure 3, herein.
The controller 408 may be configured to increment an SLC erase counter associated with a memory block among the first portion 430-1 of memory blocks in response to the memory block being erased in an SLC mode, and/or may be configured to increment an XLC erase counter associated with a memory block among the second portion 430-2 of memory blocks in response to the memory block being erased in the XLC mode.[0068] In some embodiments, the controller 408 may be configured to control and/or maintain the number of memory blocks associated with each respective portion 430-1, 430-2, and 430-3. For example, the controller 408 may be configured to maintain the second portion 430-2 of memory blocks such that a number of memory blocks associated with the second portion 430-2 of memory blocks is equal to or less than a refresh block threshold limit. In another example, the controller 408 may be configured to maintain the third portion 430-3 of memory blocks such that a number of memory blocks associated with the third portion 430-3 of memory blocks is less than a number of memory blocks associated with the second portion 430-2 of memory blocks.[0069] The controller 408 may be configured to erase the second portion 430-2 of memory blocks and subsequently write data to the second portion 430-2 of memory blocks in either an SLC mode or an XLC mode.[0070] In some embodiments, the controller 408 may be configured to monitor memory blocks to determine an erase count for respective memory blocks. For example, the controller 408 may be configured to increment a counter associated with each memory block in response to the memory block being erased.
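The pool-maintenance rules of paragraph [0068] amount to two invariants: the XLC-erased pool (430-2) stays at or below the refresh block threshold limit, and the ready-to-erase pool (430-3) stays smaller than the XLC-erased pool. A sketch of checking these invariants (function and parameter names are assumptions):

```python
def pools_within_limits(slc_erased, xlc_erased, ready_to_erase,
                        refresh_limit):
    """Check the pool-size invariants described for controller 408.

    slc_erased, xlc_erased, ready_to_erase: collections of block ids
    for portions 430-1, 430-2, and 430-3 respectively.
    """
    # 430-2 must not exceed the refresh block threshold limit, and
    # 430-3 must remain smaller than 430-2.
    return (len(xlc_erased) <= refresh_limit
            and len(ready_to_erase) < len(xlc_erased))
```

A controller enforcing these invariants would, for example, erase a block from the ready-to-erase pool (moving it to 430-1 or 430-2) whenever the second condition is about to be violated.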
In some embodiments, the controller 408 may increment a first counter (e.g., an SLC erase counter) associated with a particular memory block in response to the particular memory block being erased in the SLC mode. Similarly, the controller 408 may be configured to increment a second counter (e.g., an XLC erase counter) associated with the particular memory block in response to the particular memory block being erased in the XLC mode. The controller may be further configured to prioritize memory blocks for erasing and/or writing data based on the first and/or second erase counter.[0071] In some embodiments, memory blocks associated with the second portion 430-2 may be prioritized for being erased and/or written such that memory blocks associated with the second portion 430-2 are erased and/or written prior to memory blocks in the first portion 430-1 and/or the third portion 430-3 being erased and/or written. Embodiments are not so limited, however, and in some embodiments, memory blocks associated with the first portion 430-1 may be erased and/or written prior to erasing and/or writing memory blocks associated with the second portion 430-2 and/or the third portion 430-3.[0072] Memory blocks with erase counters (SLC erase counters and/or XLC erase counters) that have higher numbers of erases may be moved to the second portion 430-2. In some embodiments, memory blocks associated with the third portion 430-3 may not be immediately erased following garbage collection operations, and may instead be held in the third portion 430-3 until it becomes useful to erase them.
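Keeping separate SLC and XLC erase counters per block and prioritizing less-worn blocks for reuse can be sketched as follows. Combining the two counters by a plain sum is an illustrative assumption; the disclosure only says prioritization is based on the first and/or second counter.

```python
def next_block_to_reuse(blocks):
    """Pick the block to erase/write next, preferring the least-worn one.

    blocks: dict of block_id -> (slc_erase_count, xlc_erase_count).
    The combined wear metric (simple sum) is an assumed policy.
    """
    if not blocks:
        raise ValueError("no candidate blocks")
    return min(blocks, key=lambda b: sum(blocks[b]))

# Block 2 has the lowest combined erase count, so it is reused first.
choice = next_block_to_reuse({1: (10, 2), 2: (1, 1), 3: (0, 9)})
```

A weighted combination (e.g., scaling XLC erases more heavily) would slot in by replacing the `sum` with a weighted expression.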
Memory blocks that are associated with the third portion 430-3 that have low SLC erase counts may be moved to the first portion 430-1 and subsequently erased in the SLC mode.[0073] For example, the controller 408 may be configured to determine that a first memory block among the third portion 430-3 of memory blocks has a lower erase count than a second memory block among the third portion 430-3 of memory blocks. The controller 408 may be further configured to erase the first memory block among the third portion 430-3 of memory blocks in the SLC mode, and add the erased first memory block to the first portion 430-1 of memory blocks. As an example, the controller 408 may be configured to allocate memory blocks to the first portion 430-1, second portion 430-2, and/or third portion 430-3 based on a respective erase count for each memory block.[0074] In some embodiments, the blocks of portions 430-1, 430-2, and 430-3 can be configured to operate as a dynamic single level cell (SLC) cache. That is, the blocks of portions 430-1, 430-2, and 430-3 can be configured to operate as a dynamic cache in SLC mode. However, embodiments are not so limited, and the memory blocks of portion 430-1 may be configured to operate in SLC mode as a dynamic SLC cache for the lifetime of the memory 410.[0075] Figure 5 illustrates an example flow diagram 560 for memory management in accordance with a number of embodiments of the present disclosure. At block 562, an erase count for a particular block may be determined. For example, the number of times the particular block has been erased may be determined. In some embodiments, the number of times the particular block has been erased may include information from a counter that is incremented both when the particular block has been erased in an SLC mode and when the particular block has been erased in an XLC mode.
As described above, the counter may be incremented by different values depending on whether the particular block was erased in an SLC mode or in an XLC mode.[0076] At 564, a determination may be made whether the particular block has an erase count associated therewith that is below a threshold value. For example, it may be determined that the particular block has been erased a certain number of times in the SLC mode and a certain number of times in the XLC mode. The value of the counter may reflect the number of times the particular block has been erased in the SLC mode and the number of times the particular block has been erased in the XLC mode. If the value of the counter is not below a first threshold counter value (e.g., the particular block has been erased more times in the SLC mode and/or the XLC mode than a threshold number of combined erases), the particular block may be added at 565 to a pool of blocks that have been erased in the XLC mode.[0077] If the value of the counter is below a threshold counter value (e.g., the particular block has been erased fewer times in the SLC mode and/or the XLC mode than a threshold number of combined erases), the particular block may be added at 566 to a pool of blocks that are ready to be erased. From the blocks that are in the ready to be erased pool, a determination may once again be made whether a particular block has an erase count associated therewith that is less than a second threshold erase count value.[0078] If the particular block has been erased more times than the second threshold erase count value, the particular block may be held at 569 in the ready to be erased pool. In contrast, if the particular block has been erased fewer times than the second threshold erase count value, the particular block may be added at 568 to an SLC erased pool.
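The decision flow of Figure 5 (blocks 562 through 569) can be condensed into a single routing function: compare a block's erase count against a first threshold to choose between the XLC-erased pool and the ready-to-be-erased path, then against a second (lower) threshold to decide whether the block graduates to the SLC-erased pool or is held. The pool names and the assumption that the second threshold is below the first are illustrative.

```python
def route_block(erase_count, first_threshold, second_threshold):
    """Route a block to a pool per the Figure 5 flow (sketch).

    564/565: count >= first_threshold  -> XLC erased pool
    566/567: count <  first_threshold  -> ready-to-erase path, then:
    569:     count >= second_threshold -> held in ready-to-erase pool
    568:     count <  second_threshold -> SLC erased pool
    """
    if erase_count >= first_threshold:
        return "xlc_erased"
    if erase_count >= second_threshold:
        return "ready_to_erase"
    return "slc_erased"
```

With hypothetical thresholds of 50 and 20: a block erased 100 times lands in the XLC-erased pool, one erased 30 times is held ready to be erased, and one erased 5 times moves to the SLC-erased pool.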
By moving the blocks to various pools based on the erase count value associated with the particular blocks, and by using blocks from the SLC erased pool prior to using blocks from the XLC erased pool or the ready to be erased pool, uniform wear leveling may be achieved. Some embodiments include a method of operating a memory device in accordance with a number of embodiments of the present disclosure. The method can be performed by, for example, controller 208 previously described in connection with Figure 2, controller 308 previously described in connection with Figure 3, or controller 408 described in connection with Figure 4. [0079] The memory system can be, for example, memory devices 210-1, 210-2, . . ., 210-N previously described in connection with Figure 2 and/or memory 410 previously described in connection with Figure 4. That is, the memory can include a first portion (e.g., a first number of blocks), a second portion (e.g., a second number of blocks), and a third portion (e.g., a third number of blocks).[0080] In some embodiments, the method for operating the memory device may include allocating a first portion of memory blocks associated with the memory device to be erased in a single level cell (SLC) mode, allocating a second portion of memory blocks associated with the memory device to be erased in an extra level cell (XLC) mode, and allocating a third portion of memory blocks associated with the memory device such that the third portion comprises memory cells that are ready to be erased.[0081] The method may further include erasing memory blocks in the first portion of memory blocks in the SLC mode, and/or writing data to the memory blocks in the first portion of memory blocks in the SLC mode.
In some embodiments, the method may include erasing at least one memory block in the second portion of memory blocks in the XLC mode, and/or writing data to the at least one memory block in the second portion of memory blocks in the SLC or XLC mode.[0082] The number of memory blocks in the third portion may be maintained such that the number of memory blocks in the third portion is less than a number of memory blocks in the second portion. Garbage collection may be performed on memory blocks associated with the memory device, for example, during idle time of the memory device. In some embodiments, memory blocks that have had garbage collection operations performed thereon may be added to the first portion of memory blocks.[0083] In some embodiments, the method may include determining that a first memory block associated with the memory device has a lower erase count than a second memory block associated with the memory device, and adding the first memory block to the first portion of memory blocks in response to the determination that the first memory block has the lower erase count than the second memory block. [0084] In some embodiments, SLC caching may include using data blocks in mixed mode (e.g., SLC mode and XLC mode). A total number of program/erase cycles accumulated by a mixed mode block may be equal to the number of SLC program cycles used in an SLC caching mode and a number of XLC program cycles used in an XLC storage mode. Programming a block in SLC mode degrades a cell (e.g., consumes part of the lifetime of the NAND) at a smaller rate when compared to XLC mode.[0085] As described herein, in some embodiments, a mixed mode block that is to be written in the SLC mode may be erased in the SLC mode prior to writing in the SLC mode. In some embodiments, an SLC trim set may be used while writing to a block that has been previously erased in an XLC mode.[0086] In some embodiments, blocks erased in the SLC mode are not written in the XLC mode.
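The erase/program compatibility rule stated in paragraphs [0045] and [0086] — a block erased in XLC mode may be programmed in either mode, while a block erased in SLC mode is not programmed in XLC mode — reduces to a small check. A sketch (the function name is an assumption):

```python
def can_program(erased_mode, write_mode):
    """Erase/program compatibility per the described embodiments:
    a block erased in XLC mode can be programmed in SLC or XLC mode;
    a block erased in SLC mode may only be programmed in SLC mode."""
    if erased_mode == "XLC":
        return True
    return write_mode == "SLC"
```

A block allocator would consult this check before pulling a block from the SLC-erased or XLC-erased pool for a given write.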
However, by selectively erasing blocks based on whether the block is intended for SLC caching or storage, blocks may be allocated as they are needed. Although this may increase an erase latency, by allocating various portions of blocks into blocks that have been erased in SLC mode, XLC blocks, and blocks that are ready to be erased, such latencies may be mitigated.[0087] In some embodiments, if there are either SLC erased blocks or blocks that are ready to be erased, host data may be written in the SLC mode. If there are no blocks in either of these portions, host data may be written in the XLC mode using blocks in the XLC portion.[0088] In some embodiments, because solid state drives and mobile workloads may have frequent idle time, as blocks are used the portions may be continuously replenished by either moving blocks that are ready to be erased to the SLC erased portion, or by moving blocks that have comparatively lower erase counts that were garbage collected during the idle time to the SLC erased portion. This may allow for a steady portion of SLC erased blocks and XLC erased blocks to be provided. In addition, a majority of host data may be written to the SLC erased blocks, which may reduce the wear of the blocks and may allow for a higher amount of host data to be written for a given life of the blocks.[0089] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one.
Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0090] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
An apparatus for controlling a screen includes a storage area, an analyzer, and a controller. The storage area stores information indicative of one or more screen usage attributes. The analyzer determines a pattern of usage based on the stored information. The controller automatically changes a density of the screen from a first density to a second density based on the pattern of usage determined by the analyzer. The change to the second density produces a change in a size of one or more items displayed on the screen. |
We claim: 1. An apparatus for controlling a screen, comprising: a storage area to store information including one or more screen usage attributes; an analyzer to determine a pattern of usage based on the stored information; and a controller to automatically change the screen from a first density to a second density based on the pattern of usage to be determined by the analyzer, wherein the change to the second density is to produce a change in a size of one or more items to be displayed on the screen. 2. The apparatus of claim 1, wherein the one or more items to be displayed on the screen include at least one of text, an image, video information, or an icon. 3. The apparatus of claim 1, wherein the change to the second density is to be performed for a website. 4. The apparatus of claim 1, wherein the change to the second density is to be performed for an application. 5. The apparatus of claim 1, wherein the one or more screen usage attributes correspond to one or more types of actions to display information or execute a function, or both. 6. The apparatus of claim 5, wherein the one or more types of actions correspond to at least one of a touch input, mouse input, key or button input, or voice command input. 7. The apparatus of claim 1, wherein the pattern of usage to be determined by the analyzer corresponds to at least one of a number of invalid input(s), a number of valid input(s), or a total number of input(s) to be received during a predetermined period of time. 8. The apparatus of claim 7, wherein the controller is to change the density of the screen from the first density to the second density based on a ratio of the number of invalid input(s) and the number of valid input(s) determined over the predetermined period of time. 9. The apparatus of claim 8, wherein the controller is to change the density of the screen from the first density to the second density based on a comparison of the ratio to a predetermined threshold value. 10. 
The apparatus of claim 1, wherein the second density is lower than the first density. 11. The apparatus of claim 1, wherein the analyzer is located in the controller. 12. The apparatus of claim 1, wherein the controller is to control the storage area to store information indicative of the one or more screen usage attributes for only Internet websites. 13. The apparatus of claim 1, wherein the controller is to control the storage area to store information indicative of the one or more screen usage attributes for only applications. 14. The apparatus of claim 1, wherein the controller is to determine the second density based on a product of the first density and a correction factor, and wherein the correction factor is to be determined based on the pattern of usage. 15. A method for controlling a screen, comprising: storing information including one or more screen usage attributes; determining a pattern of usage based on the stored information; and automatically changing a density of the screen from a first density to a second density based on the pattern of usage, wherein the change to the second density produces a change in a size of one or more items displayed on the screen. 16. The method of claim 15, wherein the one or more screen usage attributes correspond to one or more types of actions to display information or execute functions, or both. 17. The method of claim 15, wherein the pattern of usage corresponds to at least one of a number of invalid input(s), a number of valid input(s), or a total number of input(s) received over a predetermined period of time. 18. The method of claim 17, wherein the density of the screen is changed from the first density to the second density based on a ratio of the number of invalid input(s) and the number of valid input(s) determined over the predetermined period of time. 19.
The method of claim 18, wherein the density of the screen is changed from the first density to the second density based on a comparison of the ratio to a predetermined threshold value. 20. The method of claim 15, wherein the second density is lower than the first density. 21. The method of claim 15, wherein the change to the second density is performed for at least one of a website or an application. 22. A non-transitory computer-readable medium storing a program for controlling a display screen, the program including: first code to store information including one or more screen usage attributes; second code to determine a pattern of usage based on the stored information; and third code to automatically change a density of the screen from a first density to a second density based on the pattern of usage, wherein the change to the second density is to produce a change in a size of one or more items displayed on the screen. 23. The computer-readable medium of claim 22, wherein the one or more screen usage attributes correspond to one or more types of actions to display information or execute functions, or both. 24. The computer-readable medium of claim 22, wherein the pattern of usage is to correspond to at least one of a number of invalid inputs, a number of valid inputs, or a total number of inputs. 25. The computer-readable medium of claim 24, wherein the density of the screen is to be changed from the first density to the second density based on a ratio of the number of invalid inputs and the number of valid inputs determined over a predetermined period of time. 26. An apparatus for controlling a screen, comprising: a storage area to store screen usage attributes; and a controller to determine a pattern of usage based on the stored screen usage attributes and to automatically perform a screen control function based on the pattern of usage, wherein the screen control function is to change an appearance of one or more items displayed on the screen. 27. 
The apparatus of claim 26, wherein the screen control function is to include changing a density of the screen from a first density to a second density, and wherein the change to the second density is to produce a change in size of the one or more items displayed on the screen. 28. The apparatus of claim 26, wherein the pattern of usage is to correspond to at least one of a number of invalid input(s), a number of valid input(s), or a total number of input(s). 29. The apparatus of claim 28, wherein the controller is to perform the screen control function based on a comparison of at least one of the number of invalid input(s), the number of valid input(s), or the total number of input(s) to a predetermined threshold value. 30. The apparatus of claim 26, wherein the storage area is to store the screen usage attributes for at least one of a website or application, and wherein the controller is to perform the screen control function for said at least one website or application.
APPARATUS AND METHOD FOR AUTOMATICALLY CONTROLLING DISPLAY SCREEN DENSITY

FIELD

One or more embodiments described herein relate to controlling a display screen.

BACKGROUND

Smart phones, tablets, notebook computers, and other electronic devices have display screens set to standard factory settings. These settings include screen density, which is generally measured in terms of the number of dots (pixels) per inch or by some other metric and which is related to screen resolution. To provide consumers with large amounts of information at a given time, the screen density on these devices is usually set to a high setting. This setting, however, may not be suitable for some users, especially those with large fingers, poor eyesight, or who otherwise have difficulty performing touch, stylus, or mouse inputs. For these users, attempts at selecting small-sized icons, links, or other functions may result in errors, e.g., selecting an unintended icon or link. On some devices, it may be possible to manually manipulate screen size in order to make icon selection easier. However, the time and inconvenience involved complicates use of the device and may cause additional inaccuracies and mistakes. Also, on these devices, a manual manipulation is required each time a user accesses the same website or application.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows one embodiment of an apparatus for controlling a screen. Figure 2 shows an example of stored screen usage attributes. Figure 3 shows an example of how an unintended touch input may be entered. Figure 4 shows one example of how a screen density change may be performed. Figure 5 shows another example of how a screen density change may be performed. Figure 6 shows pre-stored density values for changing screen density. Figure 7 shows one embodiment of a method for controlling a display screen.

DETAILED DESCRIPTION

Figure 1 shows an embodiment of an apparatus for controlling a screen of an electronic device. 
The screen may be included in a same housing of the device, or the screen may be coupled to the device through an appropriate wired or wireless interface. Examples of the device include a smart phone, tablet, pod-type terminal, electronic reader, game terminal, remote-controller, camera, appliance, digital recorder, global-positioning terminal, notebook or desktop computer, or media player, as well as any other electronic device that operates to display information on a screen. The apparatus includes a storage area 10, an analyzer 20, and a controller 30.

The storage area 10 stores information indicative of one or more screen usage attributes. The attributes may correspond to one or more types of actions performed in displaying information and/or one or more types of actions performed in executing functions based on selected displayed information. According to one embodiment, the screen usage attributes may correspond to different ways commands are entered based on the information displayed on screen 40. Examples of commands that correspond to screen usage attributes include touch inputs made by a finger, stylus, or mouse, drag-and-drop operations, move operations, swipe actions used to contract or expand screen size or to perform other screen-specific functions, and/or activation of one or more keys or buttons for affecting the display of information or performing a function, to name a few. The touch inputs may be used, for example, to select a link, activate a selectable icon, input text or numbers using an electronic keyboard displayed on the screen (where mistakes are commonly made while entering individual letters), and/or perform a text or website editing function (e.g., copy, cut, and/or paste an image or text).

The screen usage attributes may be stored under control of controller 30 or another processor of the electronic device. 
In accordance with one embodiment, the attributes are stored using control software 50 resident within the device, for example, in a read-only or other type of internal memory. The control software causes the storage area to store information indicative of the types of user inputs made during operation over a predetermined period of time. The time period may be programmed into the control software and may or may not be adjustable by a user, for example, through the use of a control menu.

In accordance with one embodiment, the control software causes the storage area to store screen usage attributes for only a certain mode of operation of the electronic device. The mode may correspond to when an internet website is accessed, either directly or through the use of a browser. In this mode, the storage area may store information indicating the types of commands entered. The commands may be entered based on touch inputs or any of the other input techniques previously described, e.g., mouse inputs, inputs made through the pressing or activation of a key or button, or voice command inputs, as well as other input techniques.

According to one technique, the control software may control the storage area to store information identifying the commands entered for all websites accessed and for all browser use. Alternatively, the control software may control the storage area to store information identifying the entered commands for only one or more predetermined websites. The predetermined websites may correspond, for example, to a specific category of websites (e.g., news, sports, streaming media, social networking, etc.) or to specific websites that have been identified. A control menu may be displayed beforehand to allow a user to specify the specific websites, or category of websites, for which screen usage inputs (commands) are to be monitored.

Another mode of operation corresponds to the use of one or more applications, such as those found on a smart phone or pod/pad-type device. 
During this mode, the controller may control the storage area to store screen usage attributes relating to the applications to be executed on the electronic device. As in previous embodiments, the attributes may correspond to commands entered by touch input, key/button input, voice input, etc. The control software may control the storage area to store information identifying these commands for all executable applications or for one or more predetermined applications. The predetermined applications may, for example, correspond to a specific category of applications (e.g., utilities, media, social networking, finance, games, medical, etc.) or to specific applications that have been identified. A control menu may be displayed beforehand to allow a user to specify the specific applications, or category of applications, for which screen usage inputs (commands) are to be monitored.

In accordance with another embodiment, the screen usage attributes may correspond to commands entered for a plurality of operational modes. By storing screen usage attributes in this manner, the storage area will compile a statistical base of data which can be used by the analyzer to identify usage patterns in a manner to be described in greater detail.

According to one embodiment, a sequential list of input commands corresponding to the screen usage attributes may be stored in the storage area. Optionally, the controller may cause the storage area to store other information, including but not limited to the time difference (Δ) between input commands, the website or application corresponding to each input command, and/or the execution results of the commands. In accordance with one embodiment, this additional information may assist the analyzer in determining whether an invalid input or a valid input was made each time a command was entered (e.g., for each screen usage attribute). 
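The optional per-command record described above (command type, time difference, website/application context, and execution result) can be sketched as a minimal storage-area structure. This is an illustrative sketch only: the class and field names below are assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UsageRecord:
    """One stored screen usage attribute (field names are hypothetical)."""
    command: str                # e.g. "touch input", "back", "swipe"
    delta_s: Optional[float]    # time difference (Δ) from the previous command
    context: str                # website or application targeted by the command
    valid: Optional[bool]       # execution result, if known when stored

class AttributeStore:
    """Minimal sketch of storage area 10 logging attributes over a learning period."""

    def __init__(self) -> None:
        self.records: List[UsageRecord] = []

    def log(self, command: str, delta_s: Optional[float] = None,
            context: str = "", valid: Optional[bool] = None) -> None:
        # Append one screen usage attribute in sequential order.
        self.records.append(UsageRecord(command, delta_s, context, valid))

store = AttributeStore()
store.log("touch input", None, "news.example", valid=False)
store.log("back", 1.2, "news.example")
```

The sequential list kept by `AttributeStore` corresponds to the command list of Figure 2; the analyzer can later scan `records` for repetitive patterns.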
Alternatively, the analyzer may determine whether a valid or invalid entry was made based solely on the sequential list of commands. The execution results may include, for example, information indicating whether a command was a valid or invalid input. An invalid input may, for example, access an unintended website or application and/or may correspond to the case where no action was taken. The latter situation may arise when, for example, a touch input failed to select a link on the screen because the user's finger touched a portion of the screen adjacent to the link that corresponds to an unselectable, inactive area. Detection of a valid or invalid input may be determined, for example, by the controller and/or analyzer.

The analyzer 20 performs the function of analyzing the information in the storage area. In performing this function, the analyzer may be programmed to identify specific patterns of usage from the stored information that arise when a command or other type of screen usage attribute produced an invalid input. The analyzer may be implemented, for example, by statistical or data analysis software. This software may be stored in memory 50 or another memory, e.g., one on the same chip as the controller. An example of how the analyzer may identify valid and invalid inputs based on patterns of usage is discussed relative to Figures 2 and 3.

In Figure 2, a partial list of screen usage attributes stored in storage area 10 is provided. These attributes include a sequential list of input commands entered over a predetermined period of time, e.g., a learning period of one week. In this list, a repetitive pattern of "touch input" and "back" commands is stored, as shown by region 80. This pattern may occur, for example, when a user intended to touch one link or selectable icon on a displayed webpage but instead actually touched another link or icon, or completely missed touching any active area on the page. 
Such a case may arise when the original screen density causes the link or icon to be too small in comparison to the size of a user's finger to permit an accurate selection, or touch input. Also, when screen density is too high, a person may be unable to touch an intended link or icon as a result of poor eyesight or because of a poor interface screen design. Figure 3 provides an example of this situation. In this figure, two links are shown that respectively correspond to Article 1 and Article 2. Because of the large size of the user's finger coupled with the high screen density, a touch input by the user may mistakenly select the link for Article 2 when the link for Article 1 was intended.

The control software of the analyzer may be programmed to identify a re-occurring pattern of three successive invalid touch inputs as a pattern of usage indicating invalid input. According to one technique, when this repetitive pattern occurs a predetermined number of times over the one-week period, action may be taken by the controller to automatically change the screen density. The density change may be performed for this website only or generally for all websites to be displayed on the screen. A similar set of control operations may be performed for identifying patterns of usage from an analysis of screen usage attributes relating to an application.

According to another technique, a pattern of usage may be recognized based on other screen usage attributes in the storage area. These additional attributes may include the time difference (Δ) and website/application information. The time difference information may provide an indication of the time between successive touch input commands, and the website/application information may identify the websites and/or applications accessed by a user during the learning time period. 
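The pattern the analyzer looks for in the sequential command list, a run of "touch input" immediately followed by "back" repeated several times in a row (region 80 of Figure 2), can be sketched as below. The run length of three matches the example threshold in the text; the function name and list-of-strings representation are assumptions for illustration.

```python
def count_invalid_episodes(commands, run_length=3):
    """Count episodes where a "touch input" immediately followed by "back"
    repeats run_length or more times consecutively. Region 80 of Figure 2
    would count as three invalid inputs but one invalid input episode."""
    episodes = 0
    run = 0
    i = 0
    while i + 1 < len(commands):
        if commands[i] == "touch input" and commands[i + 1] == "back":
            run += 1          # one more touch-then-back pair in the current run
            i += 2
        else:
            if run >= run_length:
                episodes += 1  # the run that just ended was long enough
            run = 0
            i += 1
    if run >= run_length:
        episodes += 1          # handle a run that reaches the end of the list
    return episodes
```

For example, a log containing three consecutive touch-then-back pairs yields one episode, while a single pair yields none, mirroring how the analyzer distinguishes an occasional correction from a repetitive error pattern.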
Figure 2 provides an example of these additional screen-usage attributes and how they may be used as a basis for identifying a pattern of usage. When the time difference between a successive, repetitive pattern of "touch input" and "back" commands is below a predetermined limit (e.g., 2 seconds), the analyzer may be programmed to identify a pattern of usage of invalid touch inputs. The shortness of this time limit suggests that the user has selected the wrong link and has quickly attempted to correct the problem by touching the Back button or arrow, so that he may try once again to select the correct link. Another pattern of usage may correspond to one or more successive input commands (whether by touch, stylus, or mouse) which produce no action at all, as shown by region 85 in Figure 2.

When these or other patterns of usage have been identified a predetermined number of times during the learning time period, action may be taken by the controller to automatically change the screen density. This decision may be made in a variety of ways.

One way involves receiving information from the analyzer indicating the number of invalid inputs and the number of valid inputs that have occurred over the learning time period. This may be performed on a website-by-website basis, an application-by-application basis, or both, or generally for all websites and/or applications. Based on this information, the screen density may be changed by the controller, either for specific ones of the websites or applications for which the erroneous usage pattern has been repeated more than the predetermined number of times, or for all websites and/or applications in general. The analyzer may also optionally control the storage area to store corresponding information identifying invalid and/or valid inputs. Also, in the foregoing examples, the screen usage attributes and patterns of usage were discussed relative to the selection of links or icons. 
In other embodiments, screen usage attributes and patterns of usage may be identified for other types of commands, including but not limited to attempts at selecting text to be cut, copied, and pasted, swipes to cause different information within a same page to be displayed or different pages to be displayed, and screen expansion, contraction, or move operations, as well as others.

In accordance with one embodiment, a valid input may be determined to occur when an intended action is accomplished on a first attempt, e.g., an intended website was accessed based on only one touch input to that link. Thus, for example, in Figure 2, the website "dudgereport.com" was accessed based on only one touch input 81. Appropriate monitoring software may be used to determine when the valid and invalid inputs occur.

The controller 30 performs one or more screen control functions based on a pattern of usage identified by the analyzer. The screen control functions include automatically changing a density of the screen from a first density to a second density based on the identified pattern(s) of usage. The screen density may be changed on a website-by-website basis, an application-by-application basis, and/or generally for all websites and/or applications.

According to a website-by-website implementation, the controller receives from the analyzer information indicative of the number of valid inputs and the number of invalid inputs that have been identified for a particular website during the predetermined (learning) period. As previously indicated, the invalid inputs correspond to patterns of usage identified by the analyzer, e.g., ones corresponding to regions 80 and 85. Additionally, or alternatively, the controller may receive from the analyzer information indicative of the number of valid input episodes and the number of invalid input episodes that have been identified during the predetermined (learning) period. 
For the sake of clarity, an episode may collectively refer to the identification of a usage pattern that contains invalid inputs. Thus, in Figure 2, region 80 may be considered to have three invalid inputs but only one invalid input episode.

When this information is received from the analyzer, the controller performs a comparison of the numbers to one or more predetermined threshold values. According to one implementation, the controller may compute a ratio of these numbers for comparison to a predetermined threshold value. Based on this comparison, the controller will either automatically perform a screen control function or will not perform this function. For example, if the ratio is greater than a value of 1, then more invalid inputs (or episodes) occurred during the learning period than valid inputs (or episodes). In other embodiments, the threshold value may be less than one. According to another implementation, the controller may compare only the number of invalid inputs (or input episodes) for the particular website to a threshold, and then automatically perform or not perform a screen control function based on the comparison.

According to another implementation, the controller receives from the analyzer information indicative of the number of valid inputs (or episodes) and the number of invalid inputs (or episodes) for all websites accessed during the predetermined (learning) period. The controller may then compute a ratio of these numbers for comparison to a threshold value, or may compare only the number of invalid inputs (or episodes) to a threshold. Similar implementations may be performed for a particular application or group of applications, or generally for all applications.

The change in screen density performed by the controller may be accomplished by automatically setting a default value in a setting menu that corresponds to the adjusted density. The adjusted density may produce a change in the size of the information displayed on the screen. 
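The threshold decision described above, comparing the ratio of invalid to valid inputs (or episodes) against a predetermined value, can be sketched as follows. The function name, the default threshold of 1.0 (the "more invalid than valid" example from the text), and the guard for a zero valid count are illustrative assumptions.

```python
def should_change_density(invalid_count, valid_count, threshold=1.0):
    """Decide whether the controller should trigger the screen control function.

    A ratio of invalid inputs to valid inputs above the threshold triggers an
    automatic density change. The handling of valid_count == 0 is an added
    assumption, not spelled out in the disclosure.
    """
    if valid_count == 0:
        # No valid inputs at all: any invalid input suggests the user struggled.
        return invalid_count > 0
    return invalid_count / valid_count > threshold
```

For example, 5 invalid inputs against 3 valid inputs over the learning period would trigger the change, while 2 invalid against 5 valid would not. The same comparison applies per website, per application, or globally, depending on how the counts were accumulated.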
For example, as shown in Figure 4, when the screen density is changed to a lower value, the links corresponding to Articles 1 and 2 become larger, thereby making it easier for the user to select the intended link with a finger, stylus, or cursor. The change in screen density may also produce a change in screen resolution. In other embodiments, the controller may change the screen density to a higher value, to produce a commensurate change in the size of the information displayed on the screen.

In the foregoing embodiments, the analyzer and controller were described as controlling the display of information for websites or applications. In other embodiments, the analyzer may identify patterns of usage and the controller may control screen density for information displayed in a control screen, management screen, or menu of an electronic device. Figure 5 shows an example of the case where the controller changes the screen density of a settings menu 90 displayed on a mobile terminal from a higher density to a lower density. While this change causes a portion of the settings menu to be left out, the size of the menu is increased to allow for easier and more accurate touch selection of the items in the portion that is displayed.

Also, in Figure 5, the menu is shown to correspond to the limits of the screen. However, in other embodiments, the menu may be smaller than the physical dimensions of the screen. In this case, the controller may control only the screen density of information shown in the menu, with the screen density of other portions of the screen left unadjusted. 
Also, the controller may change screen density selectively for only portions of a screen that do not relate to menus, such as, for example, chat message windows, address entry windows for entering email or text message addresses, windows or areas used for social networking or for receipt of notification messages, video or media player regions, and images, as well as sub-areas dedicated to displaying other types of information on the screen.

The change in screen density to be performed by the controller may be accomplished based on one or more predetermined density values stored in memory for the purpose of improving the ease of making inputs by a user, or for other reasons. For example, in accordance with one embodiment, a plurality of different density values may be stored in memory for selection by the controller. The selection may be performed based on a comparison to predetermined threshold value(s) as previously discussed. For example, as shown in Figure 6, if the comparison shows a difference lying in a first range, the controller may select a first one of the predetermined density values for changing screen density. If the comparison shows a difference lying in a second range different from the first range, then the controller may select a second one of the density values for changing screen density, and so on.

In accordance with another embodiment, the change in screen density may be performed based on an adjusted pixel density computed by the controller (or analyzer). The adjusted pixel density may be considered to be equivalent to a change in screen resolution computed in accordance with Equations (1)-(4):

DP = √(Wp² + Hp²) (1)

where DP corresponds to the diagonal resolution of the screen measured in pixels, Wp corresponds to the display resolution width, and Hp corresponds to the display resolution height.

Current Pixel Density = DP/PS (2)

where the Current Pixel Density may be measured in pixels per inch and PS corresponds to the physical size of the screen, measured on the diagonal. 
Invalid Input Ratio = Invalid Inputs/Total Inputs (3)

Adjusted Pixel Density = Current Pixel Density * K * (Invalid Input Ratio + 1) (4)

where K is a predetermined constant value set by the user or by the control software for the screen.

These equations may be used in accordance with the following example for purposes of changing the density (and thus resolution) of a screen of an iPhone 3GS model. For this phone, consider the case where the display resolution width is 480 pixels and the display resolution height is 320 pixels, or vice versa. Based on Equation (1), the diagonal resolution of the screen is 576 pixels. The physical size of the screen is 3.5 inches (measured on the diagonal). Based on Equation (2), the current pixel density is 164.5714 pixels per inch.

Using a touch screen device driver hook, the touch screen inputs are monitored to be as follows for a predetermined (learning) time period equal to 2 days. The event ACTION_DOWN refers to a touch input made by one finger, and the numbers separated by a comma refer to the x and y screen coordinate positions where the touch occurred.

1. event ACTION_DOWN [#0(pid 0) = 135,179]: No view implemented in the application screen for this event. Hence, an invalid input. Increment invalid input count by 1.

2. event ACTION_DOWN [#0(pid 0) = 135,184]: Valid view implemented. Thus, a valid input. Increment total input count by 1.

3. event ACTION_DOWN [#0(pid 0) = 144,205]: Valid view implemented. Thus, a valid input. Increment total input count by 1.

4. event ACTION_DOWN [#0(pid 0) = 152,227]: Valid view implemented. Thus, a valid input. Increment total input count by 1.

The foregoing data includes touch inputs as screen usage attributes, and the pattern of usage corresponds to whether those inputs are valid or invalid. For the period during which the four usage attributes were stored in memory, there was 1 touch input that produced an invalid result out of a total of 4 touch inputs. 
Based on Equation (3), the invalid input ratio is 0.25. In other words, 25% of all the touch inputs by the user had to be corrected by repeating the touch input action. With this information known, the adjusted screen density may now be computed based on Equation (4), where K = 1:

Adjusted Pixel Density = 164.5714 * (0.25 + 1) * 1 = 205.7143

Thus, in this example, the controller changes the screen density from the initial value of 164 to 205 pixels per inch. Given this adjusted density and maintaining the aspect ratio of the screen, the new screen resolution may be determined and set. As a result, the size of the items (text, icons, links, images, etc.) on the screen may be increased (or decreased), for example, based on the specific characteristics (e.g., finger size) of the user.

The screen control function automatically performed by the controller has been identified as a change in screen density. However, in other embodiments, the controller may automatically perform additional or different screen control functions based on the pattern of usage information output from the analyzer. These additional or different functions may include changing a background or foreground color of the screen or of different portions of the screen, changing the color or appearance of text, icons, or other information on the screen, and/or changing a font size of text on the screen. Also, the controller has been described as computing the ratio and/or performing the threshold-value comparison for purposes of determining whether to automatically perform a screen control function. However, in other embodiments, the analyzer may perform this function and inform the controller of the result.

Figure 7 shows operations performed in accordance with one embodiment of a method for controlling a display screen of an electronic device. The device may correspond to any of those previously mentioned, including ones that either include or are coupled to the screen. 
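The density computation of Equations (1)-(4), applied to the iPhone 3GS example above, can be sketched as follows. This is a sketch under the stated assumptions: the function name is illustrative, and the diagonal resolution is truncated to whole pixels to match the worked example in the text (576 pixels for a 480x320 screen).

```python
import math

def adjusted_pixel_density(width_px, height_px, screen_inches,
                           invalid_inputs, total_inputs, k=1.0):
    """Compute the adjusted pixel density per Equations (1)-(4)."""
    dp = int(math.sqrt(width_px ** 2 + height_px ** 2))     # Eq. (1), truncated
    current = dp / screen_inches                            # Eq. (2), pixels per inch
    invalid_ratio = invalid_inputs / total_inputs           # Eq. (3)
    return current * k * (invalid_ratio + 1)                # Eq. (4)

# Worked example from the text: 480x320 pixels, 3.5-inch diagonal,
# 1 invalid input out of 4 total, K = 1.
density = adjusted_pixel_density(480, 320, 3.5, 1, 4)
# density ≈ 205.7143 pixels per inch, matching the value in the text
```

Note that a higher invalid-input ratio yields a larger adjusted density value under Equation (4); the controller then derives a new screen resolution from this value while maintaining the screen's aspect ratio.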
The method includes setting an option in a control menu of an electronic device that includes or is coupled to the screen. The option in the control menu turns on or otherwise activates a screen control manager for automatically performing a screen control function based on screen usage. (Block 710). The screen control manager may correspond to all or a portion of the apparatus in Figure 1 or another apparatus. (The operation in Block 710 may be optional, as the device may be set by the factory to automatically perform the method without user interaction.)

After the screen control manager has been set, screen usage attributes begin to be monitored for a predetermined period of time. (Block 720). The screen usage attributes may be of one or more predetermined types or may be all screen usage attributes entered into the electronic device. The attributes correspond to any of the types previously discussed, including but not limited to various input or other commands. The commands may be entered using touch inputs, swipes, drag-and-drop operations, expansion or contraction operations, stylus inputs, cursor entries, voice commands, or other types of inputs or commands, including those detected by so-called tactile sensors or image or voice recognition systems, or based on wireless and/or remote control signals.

The types of screen usage attributes may be recognized, for example, by system operating software, and information identifying the screen usage attributes is stored in memory, as previously discussed. (Block 730). Examples of stored screen usage attributes are shown in Figure 2. The predetermined time for the monitoring and storing of the screen usage attributes may be a fixed period of time set in the system operating software or may be a time adjustable by a user, for example, by accessing a corresponding control menu setting. According to one embodiment, the time period may be continuous with no stop period. 
Additionally, the time period may be considered to be a learning time period, after which an assessment is made for purposes of performing a screen control function. Also, the monitoring of screen usage attributes may be performed on a website-by-website basis, an application-by-application basis, a combination thereof, or generally for all websites and/or applications accessed over the learning time period.

The screen usage attributes stored in memory are analyzed to identify one or more patterns of usage. (Block 740). The analysis may be performed in a manner similar to the operations of the analyzer in Figure 1 previously discussed. Moreover, the stored attributes may be analyzed continuously or intermittently throughout the learning period, or the analysis may be performed after the learning period has expired.

Once the pattern(s) of usage have been identified, a decision is made as to whether to perform a screen control function. The screen control function may be automatically performed seamlessly and without user input (perhaps with the exception of the initial activation setting in Block 710. In other embodiments, no such setting may be required; rather, the screen control function may be automatically set by system software without requiring any user intervention.)

In accordance with one embodiment, the decision as to whether to perform a screen control function is based on a comparison of the usage pattern(s) to one or more predetermined threshold values. (Block 750). For example, as previously indicated, a ratio may be computed based on the number of invalid inputs and the number of valid inputs for a given website or application (or generally for all websites and/or applications). If the ratio is greater than a predetermined value (with the number of invalid inputs being divided by the number of valid inputs), then the screen control function may be automatically performed after the learning time period expires. 
Alternatively, the decision on whether to perform a screen control function may be based solely on the number of invalid inputs compared to a threshold value.

The screen control function is performed based on a result of the decision, e.g., the comparison. (Block 760). As previously indicated, the screen control function may be automatically performed under these circumstances and may involve changing a density of the screen from a first density to a second density based on the pattern of usage. This change may produce a corresponding change in the size of one or more links, icons, images, videos, graphical objects, text, or other items displayed on the screen. The change in screen density may be performed for the entire screen or selectively for only one or more portions of the screen, with the screen density of other portions left undisturbed.

The screen control function may also include a change in other screen parameters. For example, as previously discussed, these parameters may include background or foreground color and/or font size, as well as other adjustable parameters of the screen.

Another embodiment corresponds to a computer-readable medium storing a program for performing the operations of one or more embodiments of the method described herein. The program may be part of the operating system software or may be a separate application to be executed by a central processing unit or controller of the electronic device. The medium may be a read-only memory, random access memory, flash memory, disk, or other article capable of storing information. Also, the program may be executed remotely using, for example, a cloud-type processor or through software downloaded for execution through a wired or wireless link.

According to another embodiment, the data in storage area 10 may alternatively or redundantly be stored in a remote medium, such as a cloud-type storage device in communication with the electronic device. 
Also, in Figure 1, the controller and analyzer are shown as separate components. However, in other embodiments, the analyzer may be included within the controller. For example, the same control software may perform the functions of the analyzer and the controller as previously described herein. Any reference in this specification to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments. Also, the features of any one embodiment described herein may be combined with the features of one or more other embodiments to form additional embodiments. Furthermore, for ease of understanding, certain functional blocks may have been delineated as separate blocks; however, these separately delineated blocks should not necessarily be construed as being in the order in which they are discussed or otherwise presented herein. For example, some blocks may be able to be performed in an alternative ordering, simultaneously, etc. Although the present invention has been described herein with reference to a number of illustrative embodiments, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this invention. 
More particularly, reasonable variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the foregoing disclosure, the drawings and the appended claims without departing from the spirit of the invention. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. |
PROBLEM TO BE SOLVED: To provide a process of producing a circuitry device structure that enables a fine spacing distance between a stress-relief structure and a contact window structure, and the structure itself.
SOLUTION: An embodiment includes a substrate, a first metal post 68, and a second metal post. The first metal post 68 is located on the substrate. The maximum horizontal dimension Hw of the first metal post 68 divided by its height Ht is less than 4, and the height of the first metal post 68 is from 20 μm to 300 μm. The second metal post is located on the substrate. The maximum horizontal dimension of the second metal post divided by its height is less than 4, and the distance Hb between the center of the first metal post and the center of the second metal post is from 10 μm to 250 μm.
SELECTED DRAWING: Figure 7a 
1. A line device structure, comprising a substrate, a first metal column, and a second metal column, wherein the first metal column is located on the substrate, the maximum width of the first metal column divided by the height of the first metal column is smaller than 4, and the height of the first metal column is between 20 μm and 300 μm; the second metal column is located on the substrate, the maximum width of the second metal column divided by the height of the second metal column is smaller than 4, and the height of the second metal column is between 20 μm and 300 μm; the line device structure being characterized in that the distance from the center point of the first metal column to the center point of the second metal column is between 10 μm and 250 μm.
2. The line device structure according to claim 1, wherein a first polymer layer having a thickness of 20 μm to 300 μm is formed on the substrate and covers the first metal column and the second metal column.
3. The line device structure according to claim 1, wherein the first metal column comprises a gold layer with a thickness of 30 μm to 100 μm.
4. The line device structure according to claim 1, wherein the first metal column comprises a copper layer with a thickness of 30 μm to 100 μm.
5. The line device structure according to claim 1, wherein the first metal column and the second metal column are connected by a metal connection line.
6. The line device structure according to claim 1, wherein the substrate is a semiconductor substrate and comprises a first metal structure located on the semiconductor substrate, a protective layer located on a metal line and containing a silicon nitride compound, a second metal structure located on the protective layer, and an opening located in the protective layer exposing a first pad of the first metal structure; the second metal structure comprises a second pad connected to the first pad; the position of the first pad viewed from a top-down view is different from the position of the second pad viewed from a top-down view; and the first metal column is located on the second pad.
7. The line device structure according to claim 1, further comprising a projecting block located on the first metal column, wherein the projecting block is connected to a pre-formed external circuit and comprises a gold layer having a thickness of 10 μm to 30 μm.
8. The line device structure according to claim 1, further comprising a projecting block located on the first metal column, wherein the projecting block is connected to a pre-formed external circuit and comprises a tin solder layer having a thickness of 10 μm to 150 μm.
9. The line device structure according to claim 1, further comprising a pad located on the first metal column, wherein the maximum width of the pad is greater than the maximum width of the first metal column, and the pad is used for connection with a wire fabricated in a wire-bonding process.
10. The line device structure according to claim 1, wherein the top surface of the first metal column is used for connection with a wire fabricated in a wire-bonding process.
11. The line device structure according to claim 1, further comprising a metal coil connecting the first metal column and the second metal column.
12. The line device structure according to claim 1, further comprising a second metal structure and a projecting block, wherein the second metal structure includes a pad connected to the first metal column, the position of the pad viewed from a top-down view is different from the position of the first metal column viewed from a top-down view, and the projecting block is located on the pad, is connected to a pre-formed external circuit, and comprises a gold layer having a thickness of 10 μm to 30 μm.
13. The line device structure according to claim 1, further comprising a second metal structure and a projecting block, wherein the second metal structure is located on the substrate and includes a pad connected to the first metal column, the position of the pad viewed from a top-down view is different from the position of the first metal column viewed from a top-down view, and the projecting block is located on the pad, is connected to a pre-formed external circuit, and comprises a tin solder layer having a thickness of 10 μm to 30 μm.
14. The line device structure according to claim 1, further comprising a second metal structure located on the substrate, wherein the second metal structure includes a pad connected to the first metal column, the position of the pad viewed from a top-down view is different from the position of the first metal column viewed from a top-down view, and the pad is used for connection with a wire fabricated in a wire-bonding process.
15. A line device structure, comprising a semiconductor substrate, a first metal column, a second metal column, an insulating layer, a first projecting block, and a second projecting block, wherein the first metal column is located on the semiconductor substrate, the maximum width of the first metal column divided by the height of the first metal column is smaller than 4, and the height of the first metal column is between 20 μm and 300 μm; the second metal column is located on the semiconductor substrate, the maximum width of the second metal column divided by the height of the second metal column is smaller than 4, and the height of the second metal column is between 20 μm and 300 μm; the insulating layer is located on the semiconductor substrate and covers the first metal column and the second metal column; the first projecting block is located on the first metal column or in the insulating layer and is suitable for connection to a pre-formed external circuit; and the second projecting block is located on the second metal column or in the insulating layer and is suitable for connection to a pre-formed external circuit; the line device structure being characterized in that the distance from the center point of the first projecting block to the center point of the second projecting block is between 10 μm and 250 μm.
16. The line device structure according to claim 15, wherein the distance from the center point of the first projecting block to the center point of the second projecting block is between 100 μm and 200 μm.
17. The line device structure according to claim 15, wherein the first metal column comprises a gold layer having a thickness of 20 μm to 300 μm.
18. The line device structure according to claim 15, wherein the first metal column comprises a copper layer having a thickness of 20 μm to 300 μm.
19. The line device structure according to claim 15, wherein the first metal column comprises a gold layer having a thickness of 10 μm to 30 μm.
20. The line device structure according to claim 15, wherein the first metal column comprises a tin solder layer having a thickness of 10 μm to 150 μm.
21. The line device structure according to claim 15, wherein the material of the insulating layer comprises polyimide.
22. The line device structure according to claim 15, further comprising a first metal structure located on the semiconductor substrate, a protective layer containing a silicon nitride compound located on the first metal structure, a second metal structure located on the protective layer, and an opening located in the protective layer exposing a first pad of the first metal structure, wherein the second metal structure includes a second pad connected to the first pad, the position of the first pad viewed from a top-down view is different from the position of the second pad viewed from a top-down view, and the first metal column is located on the second pad.
23. The line device structure according to claim 15, further comprising a metal structure located on the insulating layer and the first metal column, wherein the metal structure includes a pad connected to the first metal column, the position of the pad viewed from a top-down view is different from the position of the first metal column viewed from a top-down view, and the first projecting block is located on the pad. 
Method of manufacturing a line device

The present invention relates to a method of manufacturing line devices, and more particularly to a method of manufacturing line devices that effectively improves the performance of an IC.

Semiconductor wafers are used in the fabrication of ICs with continuously increasing density and shrinking design geometry, providing interconnection and isolation between the different layers of semiconductor devices through a structure of multiple conductive and insulating layers. For example, in large ICs containing active and passive devices such as thin film transistors, CMOS devices, capacitors, chokes, and resistors, the number of electrical connections between the different layered structures and the semiconductor devices must be increased. At the same time, a massive increase in the number of wires is also necessary for a finished IC. These wires therefore pass through the protective layer of the IC chip, are exposed to the outside, and are finally connected to input/output pads. These wires are used to connect with the external contact structures of the chip package.

Wafer level chip scale packaging (WLCSP) is a technology for packaging IC chips at the wafer level, which differs from the traditional process of fabricating a single-unit package after chip dicing. With WLCSP, wafer-level production, package test, and wafer level burn-in (WLBI) are performed before the chips are cut into single units and matched to a final chip carrier package, for example a ball grid array (BGA) package. The advantages are that, by reducing the occupied volume and thickness, smaller dimensions, lighter weight, a relatively simple assembly process, reduced overall production cost, and better electromagnetic properties are obtained. 
In addition, WLCSP simplifies the process of transporting a device from silicon material to the customer site, and as the chip package production volume of ICs increases, the cost is also reduced. However, because manufacturing capability and structural reliability are interrelated, WLCSP faces a very large challenge.

WLCSP can basically be regarded as an extension of the interconnection fabrication process and device protection fabrication process within the wafer fabrication process. In the first step of WLCSP, a post-passivation structure is formed through semiconductor IC redistribution technology, and the pitch of the standard pads is extended. A low-cost solder structure is thereby formed, realizing solder suited to planar alignment. For a presentation of the redistribution technology, see for example Patent Documents 1 to 3, whose applicant is the same as the applicant of the present invention. As disclosed in these patents, a single redistribution line (RDL) layer connects to the input/output pads of the semiconductor structure. The RDL layer is formed on a post-passivation polymer layer or elastic material layer, and a post-like contact window fabricated using a mask fabrication process is formed on the RDL layer. The post-like contact window so formed is laterally independent and entirely unsupported, and the resulting structure is further assembled to the chip carrier package using flip-chip assembly technology. Even though this post-passivation structure and the corresponding fabrication process can solve and improve the spacing problem that exists in the IC package, it more severely limits the integration scale required for the sustained growth of ICs, and it also presents a potential risk of damage caused by induced stress.

Patent Document 4 discloses another WLCSP with an RDL-layer post-passivation structure. 
The RDL layer is formed on the post-passivation polymer layer, another polymer layer covers the RDL layer, and the polymer layer is etched or drilled to form micro-vias; metal then fills the micro-via holes to form internal connections, so-called conductive pillars. However, the upper polymer layer and the lower polymer layer are separated by a chromium-copper layer so as not to be in contact with the RDL layer, and electroless-plated, screen-printed, or stenciled tin-lead is bonded to the protruding tail end of the above-mentioned conductive pillar. Since the conductive pillars extend outside the polymer layer and the top surface of the above structure is not smooth, high-resolution lithography cannot be achieved where the micro-vias are formed by the conductive pillars, the formation of tin-lead by plating is not achieved, and ultimately the distance between contact windows in the IC package is limited. Moreover, this limitation becomes more pronounced as the thickness of the polymer layer increases, even though a thicker polymer layer provides better stress relief. This point is described below. Also, as mentioned above, because the lower polymer layer is separated from the upper polymer layer, the lower polymer layer alone cannot provide better stress relief; the current lower polymer layer is made thin to reduce the lateral run of the RDL layer, so the stress relief is somewhat weakened. This issue is also discussed below.

One challenge in the reliability of the structure is to provide sufficient stress relief to the multilayer structure formed by the above WLCSP, which includes the semiconductor IC chip and the post-passivation structure outside it. By way of example, a thin film bonded on the protective layer is affected by biaxial shear stress induced by heat. 
Equation (1) shows a theoretical simulation of the biaxial shear stress in the post-passivation film, in terms of the physical parameters of the silicon substrate structure in the IC chip:

σppt: biaxial shear stress in the post-passivation thin film
R: radius of curvature of the thermally bowed silicon substrate
YSi: Young's modulus of the silicon substrate
νSi: Poisson's ratio of the silicon substrate
xSi: thickness of the silicon substrate
xppt: thickness of the post-passivation thin film

From the above equation, besides adjusting the Poisson's ratio of the silicon substrate, the biaxial shear stress can be lowered in two ways: (a) decrease xSi, i.e., make the silicon substrate thinner; or (b) increase xppt, i.e., increase the thickness of the post-passivation structure.

FIG. 1 shows a known post-passivation structure 10 that includes one RDL layer 12 and one stress-relief polymer layer 14. The stress-relief polymer layer 14 is also referred to as a stress buffer layer. The polymer layer 14 is formed on the protective layer 16 on the surface of the semiconductor IC chip 18, and an elastic material, epoxy resin, low dielectric constant material, or other polymer material is used. The elastic material mainly provides sufficient mechanical elasticity to the bonding structure, and the polymer layer 14 coated on the IC chip 18, in accordance with the result inferred from equation (1), absorbs and buffers the stress formed in the structure on the IC chip 18, thereby reducing local damage to the IC chip 18; particularly for precise and complex IC chips 18, the reliability of the post-passivation structure 10 increases accordingly. Also, according to the relation in equation (1), the buffering effect improves as the thickness of the polymer layer 14 increases.

However, one problem is often encountered when a thick polymer layer 14 is used. The RDL layer 12 shown in FIG. 
1 is usually made of copper, and connects the input/output pad 20 of the IC chip 18 to an external circuit. Tin-lead bumps or copper conductive pillars are formed on top of the pad 20 simultaneously and separately, so the RDL layer 12 is connected very closely to the underlying package structure, where the package structure may be a single chip carrier. The RDL layer 12 is defined over the polymer layer 14, which presents a bevel 22 with a constant slope: the RDL layer 12 gradually rises from the lower IC plane in which the input/output pad 20 is formed to the higher plane on top of the polymer layer 14. For example, the bevel 22 at the top of the polymer layer 14 is determined by the opening of the thick polymer layer 14 covered in the metallization step. In practical applications, the slope of the bevel 22 varies with the different openings of each polymer layer 14, each opening being determined by the actual manufacturing process conditions and the underlying physical properties and characteristics of the polymer body, for example the wetting contact angle associated with the surface energy of the material. In many situations the slope of the bevel 22 of the polymer layer 14 on the IC protective layer 16 is about 45 degrees, so the RDL layer 12 extends from the pad 20 in the IC to the top of the polymer layer 14 with a fixed amount of lateral run. This lateral run requires the RDL layer 12 to tolerate a certain amount of variation. 
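Returning to equation (1): the equation itself did not survive reproduction here, only its symbol list did. The stated conclusions (stress falls as xSi decreases or as xppt increases) are consistent with the classic Stoney film-stress relation, so purely as an illustration, and assuming a Stoney-type form σppt = YSi·xSi² / (6·(1−νSi)·R·xppt), the two mitigation strategies can be checked numerically. The form of the equation and all numeric values below are assumptions, not taken from the specification.

```python
def stoney_stress(Y_si, nu_si, x_si, x_ppt, R):
    """Stoney-type estimate of film stress (Pa). Equation (1) of the
    specification is not reproduced; this assumes the classic Stoney
    form, which matches the stated trends (stress ~ x_si^2 / x_ppt)."""
    return Y_si * x_si**2 / (6.0 * (1.0 - nu_si) * R * x_ppt)

# Illustrative values: Y_si = 130 GPa, nu_si = 0.28,
# 500 um substrate, 10 um post-passivation film, R = 1 m.
base           = stoney_stress(130e9, 0.28, 500e-6, 10e-6, 1.0)
thin_substrate = stoney_stress(130e9, 0.28, 250e-6, 10e-6, 1.0)  # (a) thinner silicon
thick_film     = stoney_stress(130e9, 0.28, 500e-6, 20e-6, 1.0)  # (b) thicker film
print(thin_substrate < base)  # True: thinning the substrate lowers the stress
print(thick_film < base)      # True: thickening the film lowers the stress
```

Halving xSi quarters the estimated stress, while doubling xppt halves it, which is why the specification favors a thick stress-buffer polymer layer despite the spacing problems it introduces.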
Finally, this tolerance must allow for the slope of each type of bevel 22 formed by polymer layers 14 with different openings, and because the lateral run of each RDL layer is different, the spacing distance between the contact windows is limited. The contact windows here are jointly and separately defined by the tin-lead bumps or copper pillars, and the distance between the contact window structure and the opening in the protective layer increases accordingly. As a result, a minute spacing distance cannot be maintained between the post-passivation structure and the underlying package structure. Conversely, when a thick polymer layer 14 is not employed, insufficient stress relief causes damage to the electrical paths in the precise IC chip due to induced stress. Also, for large conductive pillars, the distance between the input/output structures is limited due to the lack of lateral support. However, a large conductive pillar structure can provide sufficient distance to lower the coupling capacitance generated between the input/output pad 20 and the electromagnetic paths in the IC chip 18, and therefore a large conductive pillar structure is required.

The subject matter presented above shows the problems encountered in reducing the spacing distance of the contact window structures on the post-passivation structure, which also impedes the integration scale of the IC. Taking this into consideration, the present WLCSP and the corresponding fabrication process are submitted to improve the stress relief and at the same time achieve a reduction in the spacing distance of the contact window structures.

[Patent Document 1] U.S. Pat. No. 6,645,136
[Patent Document 2] U.S. Pat. No. 6,784,087
[Patent Document 3] U.S. Pat. No. 6,818,545
[Patent Document 4] U.S. Pat. No. 6,103,552

The main object of the present invention is to provide a method of manufacturing a line device, which can provide stress relief and a reduction in the distance between the contact window structures. 
According to the invention, the spacing distance is less than 250 μm, and the goal of keeping the number of pinholes to less than 400 can also be achieved.

Another object of the present invention is to provide a method of manufacturing a line device comprising a post-passivation structure supported by an RDL, in which a support layer of relatively small thickness, for example a polymer layer, is formed on the protective layer and supports the gaps between the RDL structures, and a support layer of relatively large thickness, for example a polymer layer, supports the gaps between the RDL structures and the adjacent layered package structure.

The present invention provides a method of manufacturing a line device for the above purpose. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a first polymer layer located on the semiconductor base and the metal layer are provided. The first polymer layer is polished. A second polymer layer is formed on the first polymer layer, and the metal layer is exposed at an opening in the second polymer layer.

The present invention provides a method of manufacturing a line device for the above purpose. In the manufacturing process, a semiconductor base and a metal post located on the semiconductor base are provided; the maximum width of the metal post divided by its height is smaller than 4, and the height of the metal post is 20 μm to 300 μm. A first insulating layer is formed on the semiconductor base and covers the metal post. 
A second insulating layer is formed on the first insulating layer, and the metal pillar is exposed in an opening of the second insulating layer.

The present invention proposes, for the above object, a fabrication process of a line device structure and the structure itself. In the fabrication process, a semiconductor wafer, a first metal layer located on the semiconductor wafer, and a polymer layer located on the semiconductor wafer and the first metal layer are provided, wherein the semiconductor wafer includes a plurality of transistors formed by implanting trivalent and pentavalent ions into the semiconductor wafer. The polymer layer is polished. A second metal layer is formed on the polymer layer and the first metal layer. A design definition layer is formed on the second metal layer, and the second metal layer is exposed in an opening in the design definition layer. A third metal layer is formed on the second metal layer exposed in the opening. The design definition layer is removed. The second metal layer, except the portion under the third metal layer, is removed.

The present invention proposes, for the above purpose, a fabrication process of a line device structure and the structure itself, and provides in the fabrication process a semiconductor wafer and a metal pillar located on the semiconductor wafer. The maximum width of the metal pillar divided by its height is less than 4, and the height of the metal pillar is 20 μm to 300 μm; the semiconductor wafer contains a large number of transistors formed by implanting trivalent and pentavalent ions into the semiconductor wafer. An insulating layer is formed on the semiconductor wafer and covers the metal pillar. A first metal layer is formed on the insulating layer and the metal pillar. A design definition layer is formed on the first metal layer, and the first metal layer is exposed in an opening in the design definition layer. 
A second metal layer is formed on the first metal layer exposed in the opening. The design definition layer is removed. The first metal layer, except the portion under the second metal layer, is removed.

The present invention proposes, for the above purpose, a fabrication process of a line device structure and the structure itself. In the fabrication process, a semiconductor base, a first metal layer located on the semiconductor base, and a polymer layer located on the semiconductor base and the first metal layer are provided. The polymer layer is polished. A protruding block is formed on the first metal layer; forming the protruding block includes forming a second metal layer on the polymer layer and the first metal layer, forming a design definition layer on the second metal layer with an opening exposing the second metal layer, forming a third metal layer on the second metal layer exposed in the opening, removing the design definition layer, and removing the second metal layer except the portion under the third metal layer.

The present invention proposes, for the above purpose, a fabrication process of a line device structure and the structure itself, and provides in the fabrication process a semiconductor base and a metal pillar located on the semiconductor base. The maximum width of the metal pillar divided by its height is less than 4, and the height of the metal pillar is between 20 μm and 300 μm. An insulating layer is formed on the semiconductor base and covers the metal pillar. An opening is formed in the insulating layer to expose the metal pillar.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a polymer layer located on the semiconductor base and the metal layer are provided. The polymer layer is polished. 
An opening is formed in the polymer layer to expose the metal layer.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base and a metal column located on the semiconductor base are provided. The maximum width of the metal column divided by its height is less than 4, and the height of the metal column is between 20 μm and 300 μm. An insulating layer is formed on the semiconductor base and covers the metal column. The insulating layer is etched.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal column located on the semiconductor base, and a polymer layer located on the semiconductor base and the metal column are provided. The polymer layer is partially removed to expose the top surface of the metal column, and the height between the top surface and the polymer layer is 10 μm to 150 μm.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a polymer layer located on the semiconductor base and the metal layer are provided. The polymer layer is polished. The polymer layer is etched.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself, and provides a semiconductor base in the manufacturing process. A polymer layer is provided on the semiconductor base, and the depth of an opening in the polymer layer is 10 μm to 300 μm. 
A metal layer is formed on the polymer layer and in the opening, and the metal layer outside the opening is removed.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base and a metal column located on the semiconductor base are provided. The maximum width of the metal column divided by its height is less than 4, and the height of the metal column is between 20 μm and 300 μm. An insulating layer is formed on the semiconductor base and covers the metal column. A protruding block is formed on the metal column. The protruding block connects with an external circuit. A second insulating layer is formed between the semiconductor base and the external circuit.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a first polymer layer located on the semiconductor base and the metal layer are provided. The first polymer layer is polished. A protruding block is formed on the metal layer. The protruding block connects with an external circuit. A second polymer layer is formed between the semiconductor base and the external circuit.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a first polymer layer located on the semiconductor base and the metal layer are provided. The first polymer layer is polished. 
A protruding block is formed on the metal layer; forming the protruding block includes an electroplating process.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base and a metal column located on the semiconductor base are provided. The maximum width of the metal column divided by its height is less than 4, and the height of the metal column is between 20 μm and 300 μm. An insulating layer is formed on the semiconductor base and covers the metal column. A protruding block is formed on the metal column; forming the protruding block includes an electroplating process.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base, a metal layer located on the semiconductor base, and a polymer layer located on the metal layer are provided. The polymer layer is polished. A wire is formed in a wire-bonding process and connected onto the metal layer.

The present invention proposes, for the above purpose, a manufacturing process of a line device structure and the structure itself. In the manufacturing process, a semiconductor base and a metal column located on the semiconductor base are provided. The maximum width of the metal column divided by its height is less than 4, and the height of the metal column is between 20 μm and 300 μm. An insulating layer is formed on the semiconductor base and covers the metal column. A wire is formed in a wire-bonding process and connected onto the metal column.

The present invention presents, for the purpose described above, a manufacturing process of a line device structure and the structure itself, and provides a substrate in the manufacturing process. 
On the substrate, a first metal pillar is placed whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm, and a second metal pillar is placed whose maximum width divided by its height is likewise less than 4 and whose height is 20 μm to 300 μm. The distance from the center of the first metal pillar to the center of the second metal pillar is 10 μm to 250 μm.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a first metal pillar and a second metal pillar, each having a maximum width divided by height of less than 4 and a height of 20 μm to 300 μm. An insulating layer is placed on the semiconductor substrate and covers the first and second metal pillars. A first bump is formed on the first metal pillar and a second bump is formed on the second metal pillar, and the distance from the center of the first bump to the center of the second bump is 10 μm to 250 μm.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a first metal pillar and a second metal pillar as above. A metal line connects the top surface of the first metal pillar to the top surface of the second metal pillar, and the material of the metal line includes gold.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a first metal pillar and a second metal pillar as above. The top surface of the first metal pillar is connected to the top surface of the second metal pillar by a metal line, and a polymer layer is placed over the metal line.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a metal pillar whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm.
A wire is formed by a wirebonding process and connected onto the metal pillar and its polymer layer.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a metal pillar whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm. A polymer layer is placed over the metal trace and covers the metal pillar, and a bump with a thickness of 10 μm to 150 μm is formed on the metal pillar.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a metal pillar whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm. A polymer layer is placed on the semiconductor substrate and covers the metal pillar, and a metal trace with a thickness of 1 μm to 15 μm is placed on the semiconductor substrate.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a metal pillar whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm, and a bump formed on the metal pillar includes a gold layer 10 μm to 30 μm thick.

In further variants with the same metal pillar, the bump formed on the metal pillar includes a gold layer over a titanium-containing layer, a gold layer over a chromium-containing layer, or a gold layer over a tantalum-containing layer.

To achieve the above object, the present invention also provides a process and structure in which a semiconductor substrate is provided with a metal pillar whose maximum width divided by its height is less than 4 and whose height is 20 μm to 300 μm. A first polymer layer is placed on the semiconductor substrate and coats the metal pillar, and a substrate is provided; a bump lies between the metal pillar and the substrate, and a second polymer layer lies between the substrate and the semiconductor substrate and covers the bump.

The objects, technical content, features, and effects of the present invention will be readily understood from the following detailed description of specific embodiments taken together with the accompanying drawings.

The present invention is a fabrication process for a circuit component structure, and the structure itself, in which many metal post structures are formed over a semiconductor substrate and the distance between adjacent metal posts is reduced to 250 μm or less. Several different embodiments are described below.

(First Embodiment) The fabrication process of the circuit structure of the first embodiment is illustrated in the accompanying figures.
First, a semiconductor substrate 30 is provided. The semiconductor substrate 30 may be a silicon substrate, a gallium arsenide (GaAs) substrate, a silicon-germanium substrate, or a silicon-on-insulator (SOI) substrate. In this embodiment the semiconductor substrate 30 is a circular semiconductor wafer with an active surface, and several electronic devices 32 are formed at the active surface of the semiconductor wafer 30 by implantation of pentavalent or trivalent ions (for example, phosphorus ions or boron ions). The electronic devices 32 are, for example, metal-oxide-semiconductor (MOS) devices such as p-channel MOS devices or n-channel MOS devices, BiCMOS devices, bipolar junction transistors (BJTs), diffusion regions, resistors, capacitors, CMOS devices, and the like.

Referring to the figures, a fine-line interconnection structure 34 is formed on the active surface of the semiconductor wafer 30. The fine-line interconnection structure 34 is composed of multiple thin-film insulating layers 36 with thicknesses of 3 μm or less and thin-line layers 38 with thicknesses of 3 μm or less. The thin-line layers 38 are made of copper or aluminum, and the thin-film insulating layers 36, also called dielectric layers, are usually formed by chemical vapor deposition. A thin-film insulating layer 36 may be silicon oxide, tetraethoxysilane (TEOS) oxide, SiwCxOyHz, a silicon nitride compound, or a silicon oxynitride compound, or may be a spin-on glass (SOG), fluorinated silicate glass (FSG), SiLK, Black Diamond, polyarylene ether, polybenzoxazole (PBO), or porous silicon oxide layer formed by spin coating.
Alternatively, the thin-film insulating layer 36 may be a material with a dielectric constant of 3 or less.

The thin-line layers 38 are formed during the processing of the semiconductor wafer 30. In a metal damascene process, a diffusion-barrier layer is first sputtered onto the bottom and sidewalls of an opening in a thin-film insulating layer 36 and onto the top surface of the insulating layer 36; a seed layer, for example of copper, is then sputtered onto the diffusion-barrier layer; a copper layer is electroplated on the seed layer; and the copper layer, seed layer, and diffusion-barrier layer outside the opening are removed by chemical mechanical polishing (CMP) until the top surface of the thin-film insulating layer 36 is exposed. In an alternative method, an aluminum layer or aluminum-alloy layer is first sputtered onto the thin-film insulating layer 36 and then patterned by lithography and etching.

The thin-line layers 38 pass through the vias 40 of the thin-film insulating layers 36 and are connected to one another or to the electronic devices 32. The typical thickness of the thin-line layers 38 is 0.1 μm to 0.5 μm, and in the lithographic process the thin-line layers 38 are defined using a 5× stepper or scanner or a better machine.

Next, a protective layer 42 is formed on the surface of the semiconductor substrate 30 by chemical vapor deposition (CVD), and multiple openings in the protective layer 42 expose the pads 44. The protective layer 42 protects the electronic devices 32 in the semiconductor substrate 30 from damage by moisture and foreign-ion contamination; that is, it prevents mobile ions (such as sodium ions), moisture, transition metals (such as gold, silver, and copper), and other impurities from penetrating and damaging the thin metal lines, transistors, polysilicon resistors, or polysilicon capacitors of the electronic devices 32 beneath it. To achieve this protection, the protective layer 42 is usually composed of silicon oxide, silicon-oxide compounds, phosphosilicate glass, silicon nitride, or silicon oxynitride.

A first method of forming the protective layer 42 first forms a silicon oxide layer 0.2 μm to 1.2 μm thick by chemical vapor deposition and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon oxide layer by chemical vapor deposition.

A second method first forms a silicon oxide layer 0.2 μm to 1.2 μm thick by chemical vapor deposition and then uses plasma-enhanced chemical vapor deposition to form a silicon oxynitride layer 0.05 μm to 0.15 μm thick on the silicon oxide layer.

A third method first forms a silicon oxynitride layer 0.05 μm to 0.15 μm thick by chemical vapor deposition, then forms a silicon oxide layer 0.2 μm to 1.2 μm thick on the silicon oxynitride layer by chemical vapor deposition, and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon oxide layer by chemical vapor deposition.

A fourth method first forms a silicon oxide layer 0.2 μm to 0.5 μm thick by chemical vapor deposition, then forms a silicon dioxide layer 0.5 μm to 1 μm thick on it by spin coating, and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon dioxide layer by chemical vapor deposition.

A fifth method first forms a silicon oxide layer 0.5 μm to 2 μm thick by high-density-plasma chemical vapor deposition (HDP-CVD) and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon oxide layer by chemical vapor deposition.

A sixth method first forms an undoped silicate glass layer 0.2 μm to 3 μm thick, then forms an insulating layer 0.5 μm to 3 μm thick, for example of tetraethoxysilane (TEOS) oxide, borophosphosilicate glass (BPSG), or phosphosilicate glass (PSG), on the undoped silicate glass, and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the insulating layer by chemical vapor deposition.

A seventh method optionally first forms a silicon oxynitride layer 0.05 μm to 0.15 μm thick by chemical vapor deposition and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick by chemical vapor deposition on the silicon oxynitride layer or on a silicon oxide layer; optionally, another silicon oxynitride layer 0.05 μm to 0.15 μm thick is then formed on the silicon nitride layer by chemical vapor deposition, and a silicon oxide layer 0.2 μm to 1.2 μm thick is formed by chemical vapor deposition on that silicon oxynitride layer or on the silicon nitride layer.

An eighth method first forms a silicon oxide layer 0.2 μm to 1.2 μm thick by plasma-enhanced chemical vapor deposition (PECVD), then forms a silicon dioxide layer 0.5 μm to 1 μm thick on it by spin coating, then forms a silicon oxide layer 0.2 μm to 1.2 μm thick on the spin-coated layer by chemical vapor deposition, and then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on that oxide layer by chemical vapor deposition; chemical vapor deposition is then used once more.
A silicon oxide layer 0.2 μm to 1.2 μm thick is formed on the silicon nitride layer.

A ninth method of forming the protective layer 42 first forms a silicon oxide layer 0.2 μm to 2 μm thick by high-density-plasma chemical vapor deposition (HDP-CVD), then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon oxide layer by chemical vapor deposition, and then forms a silicon dioxide layer 0.5 μm to 2 μm thick on the silicon nitride layer by HDP-CVD.

A tenth method first forms a silicon oxide layer 0.2 μm to 1.2 μm thick by chemical vapor deposition, then forms a silicon nitride layer 0.2 μm to 1.2 μm thick on the silicon oxide layer by chemical vapor deposition, and then forms a further silicon oxide layer 0.2 μm to 1.2 μm thick on the silicon nitride layer by chemical vapor deposition.

The thickness of the protective layer 42 is generally at least 0.35 μm, and under favorable conditions the thickness of its silicon nitride layer is generally at least 0.3 μm.

After the protective layer 42 is completed, as shown in FIG. 4a, a first polymer layer 46 with a thickness of 3 μm to 50 μm is formed on the protective layer 42; this first polymer layer 46 serves an insulating function. The material of the first polymer layer 46 is selected from thermoplastic plastics, thermosetting plastics, polyimide (PI), benzocyclobutene (BCB), polyurethane, epoxy resin, poly-p-xylylene polymers, solder-mask materials, elastomers, porous dielectric materials, and the like. The first polymer layer 46 may be applied by hot lamination of a dry film, by screen printing, or by spin coating.

Then, as shown in FIG. 4b, the first polymer layer 46 is patterned to form multiple openings 48 exposing the pads 44 on the semiconductor substrate 30. Note that if the first polymer layer 46 is photosensitive, it is patterned by a lithography process; if it is not photosensitive, it is patterned by a photolithography process plus etching.

After the first polymer layer 46 is patterned, it is cured by baking, microwave heating, or infrared heating at a temperature of 200 °C to 320 °C, or at a temperature of 320 °C to 450 °C. The volume of the first polymer layer 46 shrinks after curing, its water content is 1% or less, and its rate of weight change at temperatures of 425 °C to 450 °C is 1% or less.

As shown in FIG. 5, a first adhesion/barrier/seed layer 50 with a thickness of 400 Å to 7000 Å is sputtered onto the first polymer layer 46 and the pads. The adhesion/barrier portion of layer 50 uses at least one of titanium, titanium nitride, titanium-tungsten alloy, tantalum, chromium, chromium-copper alloy, or tantalum nitride, alone or in combination, and a seed layer is formed on the adhesion/barrier layer. Because this seed layer serves the deposition of the next metal line, its material varies with the material of that line; a seed layer is formed over the adhesion/barrier layer in each of the subsequent examples. For a copper line electroplated on the seed layer, the seed-layer material is preferably copper; for an electroplated silver line, preferably silver.
For an electroplated palladium line, the seed-layer material is preferably palladium; for platinum, preferably platinum; for rhodium, preferably rhodium; for ruthenium, preferably ruthenium; for rhenium, preferably rhenium; and for nickel, preferably nickel.

Next, as shown in FIG. 6a, a first patterned photoresist layer 54 is formed on the seed layer of the first adhesion/barrier layer 50; the first patterned photoresist layer 54 exposes portions of the seed layer through several openings 56, which are defined using a 1× stepper or scanner or a better machine. A first metal layer 58 with a thickness of 1 μm to 50 μm, and preferably 2 μm to 30 μm, is then electroplated on the seed layer exposed in the openings 56; the first metal layer 58 is connected to the fine-line interconnection structure 34, and its material is at least one of gold, copper, silver, palladium, platinum, rhodium, ruthenium, rhenium, or nickel, alone or in combination. Removing the first patterned photoresist layer 54 leaves a first redistribution-line (RDL) layer 60. A particular feature to note is that the first RDL layer 60 is not merely the first metal layer 58 formed over the openings 48; it also extends onto part of the first polymer layer 46 and serves the subsequent wiring.

Next, as shown in FIG. 6b, a second patterned photoresist layer 62 is formed over the first RDL layer 60 and the seed layer on the first adhesion/barrier layer 50, and the second patterned photoresist layer 62 exposes the first metal layer 58 of the first RDL layer 60 through several openings 64. Then, as shown in FIG. 6c, a second metal layer 66 with an electroplated thickness of 20 μm to 300 μm is formed in the openings 64; the maximum width of the second metal layer 66 is 3 μm to 50 μm, its material is at least one of gold, copper, silver, palladium, platinum, rhodium, ruthenium, rhenium, or nickel, alone or in combination, and its preferred thickness is between 30 μm and 100 μm. Preferably, the topmost layer of the first RDL layer 60 matches the material of the second metal layer 66: copper under a copper layer 66, silver under silver, palladium under palladium, platinum under platinum, rhodium under rhodium, ruthenium under ruthenium, rhenium under rhenium, and nickel under nickel.

Next, as shown in FIG. 6d, the second patterned photoresist layer 62 is removed, and the seed layer and the first adhesion/barrier layer 50 not covered by the first metal layer 58 are etched away, likewise using hydrogen peroxide; besides hydrogen peroxide, an iodine-containing etchant such as a potassium iodide solution may be used. As shown in FIG. 6e, the step of removing the exposed seed layer and first adhesion/barrier/seed layer 50 may be performed after removing the second patterned photoresist layer 62 or after removing the first patterned photoresist layer 54.

After the exposed first adhesion/barrier layer 50 is removed, as shown in FIGS. 7a and 7b, each second metal layer 66 defines a metal pillar 68 of the present invention: its maximum width Hw divided by its height Ht is less than 4, and this value may also be less than 3 or less than 2. The maximum lateral width of the metal pillar 68 is 3 μm to 50 μm. The metal pillar 68 is a small column, distinct from the metal layers and line layers described above, and the distance Hb between the centers of adjacent metal pillars 68 is between 10 μm and 250 μm; it may be reduced to better spacings of 10 μm to 200 μm, 10 μm to 175 μm, or 10 μm to 150 μm. FIG. 7b is a view from beneath the second metal layer 66 on which the metal pillar 68 stands; it can be clearly seen that the metal pillar 68 is not formed on the part of the RDL layer 60 directly above the opening 48 but on a region extending from the RDL layer 60.

As shown in FIG. 8a, a second polymer layer 70 covers the metal pillars 68 on the semiconductor substrate 30. The material of the second polymer layer 70 is selected from thermoplastic plastics, thermosetting plastics, polyimide (PI), benzocyclobutene (BCB), polyurethane, epoxy resin, poly-p-xylylene polymers, solder-mask materials, elastomers, porous dielectric materials, and the like, and it is applied by screen printing or spin coating. Referring to FIG. 8b, when the layer is applied by screen printing, multiple openings 72 are formed directly in the second polymer layer 70 and expose the tops of the metal pillars 68; when the second polymer layer 70 is applied by spin coating, the openings 72 are formed through a patterning step, by lithography and etching, and then expose the tops of the metal pillars 68. As shown in FIG. 8c, the metal pillars 68 can instead be exposed by polishing rather than through openings 72: before the polishing step, the second polymer layer 70 is first cured, and the second polymer layer 70 is then polished by chemical mechanical polishing (CMP) to expose the metal pillars 68. The curing step uses baking, microwave heating, or infrared heating.

It should be mentioned in advance that many embodiments build on the structures of FIGS. 8b and 8c. For the present invention, the multiple metal pillars 68 formed on the semiconductor substrate 30 in these two figures exhibit a fine pitch between adjacent pillars, with a spacing of 10 μm to 250 μm, and the maximum width Hw of each metal pillar 68 divided by its height Ht is less than 4. Many of the following examples modify this metal pillar 68; the first example is based on the structure of FIG. 8c.

As shown in FIG. 9, a third polymer layer 74 is formed on the second polymer layer 70 by a coating method, and the third polymer layer 74 is patterned to form multiple openings 76.
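The pillar geometry described above reduces to simple numeric constraints: the maximum width Hw of a metal pillar 68 divided by its height Ht must be less than 4 (optionally less than 3 or 2), the pillar height lies between 20 μm and 300 μm, and the center-to-center pitch Hb of adjacent pillars lies between 10 μm and 250 μm. As a purely illustrative aid, not part of the patented process and with hypothetical function names, these checks can be written as:

```python
# Illustrative only: numeric checks for the pillar-geometry constraints
# stated in the text. Function names are hypothetical, not from the patent.

def pillar_ok(max_width_um: float, height_um: float) -> bool:
    """Aspect constraint Hw/Ht < 4, with height between 20 um and 300 um."""
    return (max_width_um / height_um) < 4 and 20 <= height_um <= 300

def pitch_ok(center_to_center_um: float) -> bool:
    """Fine-pitch constraint: Hb between 10 um and 250 um."""
    return 10 <= center_to_center_um <= 250

# A 30 um wide, 100 um tall pillar at 150 um pitch satisfies both checks.
print(pillar_ok(30, 100), pitch_ok(150))
```

A stricter embodiment, with an aspect ratio below 3 or 2, is obtained simply by replacing the constant 4 in `pillar_ok`.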
The design step of the third polymer layer 74 is lithography or lithography etching. The third polymer layer 74 is formed into a second polymer layer by hot lamination on the second polymer layer 70 or by using a screen printing method. Form on 70. The material of this third polymer layer 74 is thermoplastic plastic, thermoset plastic, polyimide (polyimide, PI), benzocyclobutene (benzo-cyclo-butene, BCB), polyurethane (polyurethane), epoxy resin, poly p-xylenes It is selected from polymers, welding mask materials, elastic materials, porous dielectric materials and the like.As shown in FIG. 10a, a second adhesion / inhibition layer 78 having a thickness of 400 Å to 7000 Å is formed on the surface of the third polymer layer 74 and the top of the metal pillar 68 by sputtering, and the second adhesion The material of the / inhibition layer 78 is formed on the second adhesion / inhibition layer 78 when one or a combination of titanium metal, titanium nitride, titanium tungsten alloy, tantalum metal layer, chromium, chromium copper alloy or tantalum nitride is used. Use at least one kind of seed layer. Next, as shown in FIG. 10b, one third patterned hardened photoresist layer 82 is formed on the seed layer of the second adhesion / inhibition layer 78, and the third patterned hardened photoresist layer 82 is a positive photoresist type. That is, several openings 83 of the third design-hardened photoresist layer 82 are exposed to the seed layer of the second adhesion / inhibition layer 78 on the opening 76 and around the opening 76.Next, as shown in FIG. 10c, one third metal layer 84 is exposed in the opening 83 by electroplating and formed on the seed layer on the second adhesion / inhibition layer 78, and this third metal layer 84 is formed. In the case of one or a combination of gold, copper, silver, palladium, platinum, rhodium, ruthenium, rhenium or nickel, at least one of them is used. Next, as shown in FIG. 
10d, the second adhesion / inhibition layer 78 under the third metal layer 84 is etched away using hydrogen peroxide in the same manner, and an iodine-containing etchant other than hydrogen peroxide is used. For example, an etchant such as iodine potassium may be used. It should be noted that this third metal layer 84 is the difference in thickness formed by plating. Due to the difference in material and thickness of the third metal layer 84, different types and applications occur when the semiconductor base 30 is connected to an external circuit. That is, since the thickness and the width of the opening 83 of the third design-hardened photoresist layer 82 and the formation position of the opening 82 are changed according to different applications, the third metal layer 84 is electroplated to have different thicknesses and positions and It becomes a material. The above external circuit is a flexible substrate, a semiconductor chip, a printed wiring board, a ceramic substrate, a glass substrate or the like.In this embodiment, the type in which the third metal layer 84 is formed is a bump / pad, RDL or a solder. As shown in FIG. 10d above, the material of the third metal layer 84 is gold, copper, silver, palladium, platinum, rhodium, ruthenium, rhenium, and the thickness (Ha) of the formed third metal layer 84 is This third metal layer 84 is defined as a protrusion 86 as the better thickness is between 10 μm and 25 μm when it is between 5 μm and 30 μm. And it is also possible to reduce the distance between the centers of adjacent protrusions 86 from the center to the center by 250 μm and to reduce the distance to a better distance of 200 μm · 150 μm. Further, as shown in FIG. 11, the semiconductor base 30 is cut to form the semiconductor base 30 into a plurality of semiconductor units 88, and the projecting blocks 86 on each semiconductor unit 88 are connected to one external circuit by forming ACF. it can.As shown in FIGS. 
12a and 12b, when the material of the third metal layer 84 is one of solder, tin-lead alloy, tin-silver alloy, tin-silver-copper alloy or lead-free solder and the thickness (Ha) of the formed third metal layer 84 is between 20 μm and 150 μm, a better thickness is between 30 μm and 105 μm. Next, as shown in FIG. 12c, the semiconductor base 30 is heated; when the third metal layer 84 is heated it melts into a spherical shape and is defined as a tin ball 92. The center-to-center spacing distance between adjacent tin balls 92 can be made smaller than 250 μm and can be reduced to a better spacing distance of 200 μm or 150 μm. A third method of forming the third metal layer 84 is to form, in the openings 83 of the third patterned photoresist layer 82, one copper layer with a thickness of 1 μm to 100 μm by electroplating, followed by one nickel layer with a thickness of 1 μm to 10 μm electroplated on the copper layer, and finally one tin layer, tin-silver layer or tin-silver-copper alloy layer with a thickness of 20 μm to 150 μm electroplated on the nickel layer. The semiconductor base 30 is then cut as shown in FIG. 12d into a plurality of semiconductor units 88, and the tin balls 92 on each semiconductor unit 88 can be bonded onto an external substrate 94; the substrate 94 is a semiconductor chip, a printed wiring board, a ceramic substrate or a glass substrate. As shown in FIG. 12e, when the tin balls 92 on the semiconductor unit 88 are to be bonded onto the external substrate 94, a fourth polymer layer 96 is first formed on the substrate 94 before the semiconductor unit 88 is bonded to it. The material of the fourth polymer layer 96 is thermoplastic plastic, thermosetting plastic, polyimide (PI), benzo-cyclo-butene (BCB), polyurethane, epoxy resin, poly-p-xylene polymers, solder mask materials, elastic materials, porous dielectric materials or the like. The fourth polymer layer 96 is formed by hot-laminating one patterned dry film onto the substrate 94; or by hot-laminating one photosensitive dry film onto the substrate 94 and then patterning it by lithography; or by screen printing the fourth polymer layer 96 onto the substrate 94; or by spin-coating one photosensitive thin film onto the substrate 94 and patterning it by lithography; or by forming one non-photosensitive thin film on the substrate 94 and patterning it by lithography plus etching. After the tin balls 92 on the semiconductor unit 88 are bonded to the substrate 94, the assembly is heated to cure the fourth polymer layer 96; this heating step may use baking, microwave heating or infrared heating. As shown in FIGS. 13a and 13b, when the material of the third metal layer 84 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium and the thickness (Ha) of the formed third metal layer 84 is between 1 μm and 15 μm, a better thickness is between 2 μm and 10 μm. This third metal layer 84 is defined as a pad 98; the center-to-center spacing distance between adjacent pads 98 can be made smaller than 250 μm and can be reduced to a better spacing distance of 200 μm or 150 μm.
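As a side note on the reflow step of FIG. 12c above, the diameter of the tin ball 92 formed when the plated solder melts can be estimated from volume conservation. The sketch below is illustrative only: the opening diameter and plated thickness are hypothetical values chosen for the example, not figures taken from this description.

```python
import math

def reflowed_ball_diameter_um(plated_thickness_um, opening_diameter_um):
    """Estimate the diameter of the solder ball formed on reflow, assuming the
    plated solder cylinder (opening diameter x plated thickness) conserves its
    volume when it melts into a sphere."""
    radius = opening_diameter_um / 2.0
    cylinder_volume = math.pi * radius ** 2 * plated_thickness_um
    # Sphere volume V = (pi/6) * d^3, so d = (6 * V / pi) ** (1/3)
    return (6.0 * cylinder_volume / math.pi) ** (1.0 / 3.0)

# Hypothetical dimensions: a 100 um thick solder plating in a 100 um wide
# opening reflows into a ball of roughly 114 um diameter.
print(round(reflowed_ball_diameter_um(100.0, 100.0), 1))
```

Such an estimate is one way to see why the plated thickness window (20 μm to 150 μm) directly constrains the achievable ball size and pitch.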
This pad 98 receives one wire in a later wire-bonding process and connects with an external circuit. As shown in FIGS. 14a and 14b, when the material of the third metal layer 84 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium and the thickness (Ha) of the formed third metal layer 84 is between 5 μm and 30 μm, a better thickness is between 10 μm and 25 μm. The third metal layer 84 is formed not only on the opening 76 of the third polymer layer 74 but also on the second adhesion/barrier layer 78 beside the opening 76, and this third metal layer 84 is defined as the RDL layer 100. The RDL layer 100 receives one wire in the wire-bonding process and connects with an external circuit. It is emphasized that the portion of the third metal layer 84 beside the opening 76 is similar in function to the pad 98; such an eccentric layout is needed when the dimensions of the pad 98 mentioned above are too small, to prevent the wire-bonding area from running short and making the wire-bonding process difficult. The applications of FIGS. 9 to 14b of this embodiment, such as bumps, pads, RDLs or solder, all extend from the structure of FIG. 8c. The structure of FIG. 9 is formed from the third polymer layer 74 added to the structure of FIG. 8c, the third polymer layer 74 having a large number of openings. The structure of FIG. 8b does not expose the metal pillars 68 by polishing; instead, the metal pillars 68 must be exposed through a plurality of openings patterned in the added third polymer layer 74. That is, when the third polymer layer 74 is added to the structure of FIG. 8b, the structure of FIG. 9 is obtained, so the applications of FIGS. 10a to 10d, 11, 12a to 12e, 13a, 13b, 14a and 14b, namely bumps, pads, RDL or solder, apply equally and their illustration will be omitted. Second Embodiment: This embodiment extends from FIG. 8c of the first embodiment. Referring to FIG. 15a, the top of the metal pillar 68 in this embodiment is a single gold layer 102 with a thickness of 1 μm to 30 μm; in the wire-bonding process, one wire 104 is formed on the gold layer 102 of the metal pillar 68 and connected to an external circuit. It should be noted here that the metal below the gold layer 102 may be a copper layer 104 and a nickel layer 106 (a copper-nickel-gold structure), the thickness of the copper layer 104 being 10 μm to 100 μm and the thickness of the nickel layer 106 being 1 μm to 30 μm; or, as shown in FIG. 15b, the gold layer 102 with a thickness of 1 μm to 30 μm may lie directly on the copper layer 104; or, as shown in FIG. 15c, the entire metal pillar 68 may be gold with a thickness of 10 μm to 100 μm. Third Embodiment: This embodiment extends from FIG. 8c of the first embodiment. Referring to FIG. 16a, one third adhesion/barrier layer 105 is formed on the second polymer layer 70, and a seed layer is formed on the third adhesion/barrier layer 105. As shown in FIG. 16b, one fourth patterned photoresist layer 110 is formed on the third adhesion/barrier layer 105, and a plurality of openings 112 are formed in the fourth patterned photoresist layer 110, at least one opening 112 being located above a metal pillar 68; a coil pattern is then electroplated in the openings 112. The material of the fourth metal layer 114 formed in the openings 112 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, and the thickness of the fourth metal layer 114 is 1 μm to 30 μm. When the fourth metal layer 114 is a composite metal layer, one copper layer of 1 μm to 30 μm thickness is formed by electroplating, followed by one nickel layer of 1 μm to 10 μm thickness electroplated on the copper layer, and finally one gold layer of 1 μm to 10 μm thickness electroplated on the nickel layer. As shown in FIG. 16d, the fourth patterned photoresist layer 110 is removed, and the third adhesion/barrier layer 105 not covered by the fourth metal layer 114 is likewise removed using hydrogen peroxide or an iodine-containing etchant. As shown in FIG. 16e, since the fourth metal layer 114 has a coil shape, the fourth metal layer 114 is defined as a first coil metal layer 116, and the first coil metal layer 116 is connected to the semiconductor base 30 through the metal pillar 68. As shown in FIG. 16f, besides connecting with the semiconductor base 30, it is also possible to connect with an external circuit through the wire-bonding process (not shown). Also, in order to protect the first coil metal layer 116 from damage and moisture infiltration, one protective layer 117 can be formed; the thickness of the protective layer 117 is 5 μm to 25 μm. The material of the protective layer 117 is an organic or inorganic compound, for example thermoplastic plastic, thermosetting plastic, polyimide (PI), benzo-cyclo-butene (BCB), polyurethane, epoxy resin, poly-p-xylene polymers, solder mask materials, elastic materials, porous dielectric materials, silicon oxide, silicon oxide compounds, silicon-phosphorus glass, silicon nitride, silicon oxy-nitride (SiON) and the like.
This first coil metal layer 116 can be applied in the area of passive devices such as inductors, capacitors and resistors. Here an application of the first coil metal layer 116 in a coil-type passive device is described. Referring to FIG. 16g, the first coil metal layer 116 is covered with one fifth polymer layer 118; the thickness of the fifth polymer layer 118 is between 20 μm and 300 μm, and the material of the fifth polymer layer 118 is polyimide (PI). A second coil metal layer 120 is formed over it and can be connected to an external circuit. When a change occurs in the current of the external circuit flowing through the second coil metal layer 120, an induced electromotive force is generated; the first coil metal layer 116 senses it, and the generated signal is transmitted to the semiconductor base 30. The description of the fabrication of this passive device is complete here. By using the electroplating method described above, it is also possible to form one capacitor device 121 on the second polymer layer 70. A low dielectric layer 121a is first formed, the material of the low dielectric layer 121a being titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like, and the low dielectric layer 121a being connected to a metal pillar 68. One high dielectric layer 121b is then coated on the low dielectric layer 121a, the material of the high dielectric layer 121b being a nitrogen oxide compound, a silicon oxide compound or polyimide (PI).
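Because the capacitor device 121 being built here is a metal / high-dielectric / metal stack, its value can be approximated with the parallel-plate formula. The sketch below is only an illustration: the plate area, dielectric thickness and relative permittivity are hypothetical example values, not figures taken from this description.

```python
# Vacuum permittivity expressed in F/um so that all dimensions stay in um.
EPS0_F_PER_UM = 8.854e-18

def parallel_plate_capacitance_f(area_um2, thickness_um, eps_r):
    """C = eps0 * eps_r * A / d for a parallel-plate capacitor."""
    return EPS0_F_PER_UM * eps_r * area_um2 / thickness_um

# Hypothetical example: 100 um x 100 um plates and a 1 um dielectric with
# eps_r = 7 (a typical value for silicon nitride) give about 0.6 pF.
c = parallel_plate_capacitance_f(100.0 * 100.0, 1.0, 7.0)
print(c)
```

The formula makes clear why a thin, high-permittivity layer 121b is used between the two metal layers: capacitance grows with eps_r and shrinks with dielectric thickness.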
One low resistance metal layer 121c is then formed by electroplating over the high dielectric layer 121b and connected to the adjacent metal pillar 68; the low resistance metal layer 121c can be formed by two kinds of methods. In one method, one adhesion/barrier layer having a thickness of 400 Å to 7500 Å is formed on the second polymer layer 70 and the high dielectric layer 121b, the material of the adhesion/barrier layer being titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like; one seed layer of 500 Å to 5000 Å thickness lies on top of the adhesion/barrier layer; then one copper layer with a thickness of 1 μm to 30 μm is electroplated on the seed layer, and one nickel layer with a thickness of 1 μm to 10 μm is subsequently electroplated on the copper layer. In the other method, one adhesion/barrier layer with a thickness of 400 Å to 7500 Å is formed on the second polymer layer 70 and the high dielectric layer 121b, followed by one gold seed layer with a thickness of 500 Å to 5000 Å on the adhesion/barrier layer, and finally a gold layer of 1 μm to 30 μm formed by electroplating on the gold seed layer. When a voltage is applied to the adjacent metal pillars 68, a large voltage difference is formed across the upper and lower sides of the high dielectric layer 121b, and this structure therefore functions as a capacitor. Finally, in order to protect the capacitor device 121 from damage, a protective layer 121d can be coated on the low resistance metal layer 121c and the second polymer layer 70. Fourth Embodiment: This embodiment extends from FIG. 8b of the first embodiment. As shown in FIG. 17a, one fourth adhesion/barrier layer 122 is formed on the second polymer layer 70.
The material of the fourth adhesion/barrier layer 122 is titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like, and the material of its seed layer is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium. As shown in FIG. 17b, one fifth patterned photoresist layer 126 is formed on the fourth adhesion/barrier layer 122, with multiple openings 128 in the fifth patterned photoresist layer 126, two of the openings 128 being located above metal pillars 68. As shown in FIG. 17c, one fifth metal layer 130 with a thickness of 1 μm to 30 μm is formed by electroplating on the fourth adhesion/barrier/seed layer 122 in the openings 128 of the fifth patterned photoresist layer 126; the fifth metal layer 130 is a low resistance material such as gold, silver or copper. Next, as shown in FIG. 17d, the fifth patterned photoresist layer 126 is removed and, likewise using hydrogen peroxide or an iodine-containing etchant, the fourth adhesion/barrier layer 122 other than beneath the fifth metal layer 130 is removed. The fifth metal layer 130 then connects the two metal pillars 68 and serves as a current path between them, and one protective layer 132 can be formed on the second polymer layer 70 and the fifth metal layer 130 to protect them from damage and moisture infiltration. The thickness of the fifth metal layer 130 is 1 μm to 30 μm. When the fifth metal layer 130 is a composite metal layer, one copper layer of 1 μm to 30 μm thickness is formed by electroplating, followed by one nickel layer of 1 μm to 10 μm thickness electroplated on the copper layer, and finally one gold layer of 1 μm to 10 μm thickness electroplated on the nickel layer. In addition to connecting metal pillars, the fifth metal layer 130 can also be extended into a multilayer line structure.
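The role of the fifth metal layer 130 as a low resistance current path can be quantified with the usual trace formula R = ρ·L / (W·t). The sketch below is illustrative only: the trace geometry and the resistivity value are hypothetical assumptions, not dimensions from this description.

```python
def trace_resistance_ohm(resistivity_ohm_m, length_um, width_um, thickness_um):
    """Resistance of a rectangular trace: R = rho * L / (W * t),
    with the micrometer dimensions converted to meters."""
    um = 1e-6
    return resistivity_ohm_m * (length_um * um) / ((width_um * um) * (thickness_um * um))

# Hypothetical 1000 um long, 10 um wide, 10 um thick copper path
# (rho ~ 1.7e-8 ohm*m): about 0.17 ohm, i.e. a fast, low-loss current path.
r_cu = trace_resistance_ohm(1.7e-8, 1000.0, 10.0, 10.0)
print(r_cu)
```

The same formula also shows why thick plated layers (up to 30 μm here) are preferred for power paths: resistance falls inversely with the cross-sectional area W·t.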
As shown in FIG. 17e, one sixth polymer layer 134 is formed on the second polymer layer 70 and the fifth metal layer 130; as shown in FIG. 17f, a plurality of openings are patterned in the sixth polymer layer 134 to expose the fifth metal layer 130. As shown in FIG. 17g, one fifth adhesion/barrier layer 136 is then sputtered in order, the material of the fifth adhesion/barrier layer 136 being titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like, and the material of its seed layer being gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium. As shown in FIG. 17h, a sixth patterned photoresist layer 140 is formed on top, the multiple openings of the sixth patterned photoresist layer 140 being aligned with the openings of the sixth polymer layer 134. As shown in FIG. 17i, one sixth metal layer 142 is then formed in the openings of the sixth patterned photoresist layer 140; the material of the sixth metal layer 142 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, and the thickness of the sixth metal layer 142 is 1 μm to 30 μm. When the sixth metal layer 142 is a composite metal layer, one copper layer of 1 μm to 30 μm thickness is formed by electroplating, followed by one nickel layer of 1 μm to 10 μm thickness electroplated on the copper layer, and finally one gold layer of 1 μm to 10 μm thickness electroplated on the nickel layer. As shown in FIG. 17j, the sixth patterned photoresist layer 140 is removed, and the fifth adhesion/barrier layer 136 and seed layer other than beneath the sixth metal layer 142 are removed; then a seventh polymer layer 144 is formed on the sixth polymer layer 134 and the sixth metal layer 142, the seventh polymer layer 144 having a thickness of 10 μm to 25 μm. The multiple openings of the seventh polymer layer 144 are patterned to expose the sixth metal layer 142, and, as shown in FIG. 17m, one wire is bonded to the exposed sixth metal layer 142 in the wire-bonding process and connected to an external circuit. Fifth Embodiment: The fifth embodiment extends from FIG. 8b of the first embodiment and is similar to the fourth embodiment, as shown in FIG. 18. The difference is that the fifth metal layer 130 of the fourth embodiment is a low resistance material, so that current can flow through it rapidly, whereas the seventh metal layer 146 of the fifth embodiment (see FIG. 18) is a high resistance material such as chromium-nickel alloy (Cr/Ni), titanium or tungsten, with a thickness of 1 μm to 3 μm; the seventh metal layer 146 is therefore used as a resistor device in this embodiment. Sixth Embodiment: The first to fifth embodiments described above extend from the structures of FIGS. 8b and 8c, while this embodiment extends from the structure of FIG. 8a. As shown in FIGS. 19a and 19b, this embodiment uses an etching method to remove a portion of the second polymer layer 70 until the metal pillars 68 are exposed to a height of 1 μm to 150 μm, this exposed height being the distance from the top surface of the metal pillar to the top surface of the second polymer layer 70. If the material of the metal pillar 68 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, a better exposed height of the metal pillar 68 is between 15 μm and 30 μm. The metal pillar 68 is then used as a protruding bump, and the same cutting step is performed as shown in FIG.
19c, and the semiconductor base 30 is cut into a plurality of semiconductor units 88; likewise, the protruding bumps 86 on each semiconductor unit 88 can be connected to an external circuit through an ACF. If the material of the metal pillar 68 is solder, tin-lead alloy, tin-silver alloy, tin-silver-copper alloy or lead-free solder, a better exposed height of the metal pillar 68 is between 50 μm and 100 μm. As shown in FIG. 19d, the exposed metal pillars 68 are melted into a ball shape (solder balls) in the same heating step; then, as shown in FIG. 19e, the same cutting step is performed, the base is cut into a plurality of semiconductor units 88, the ball-shaped bumps on each semiconductor unit 88 are bonded to the external substrate, and one eighth polymer layer 148 is formed between the semiconductor unit and the substrate to coat each ball-shaped bump. As shown in FIG. 19f, if the material of the metal pillar 68 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, a better exposed height of the metal pillar 68 is between 1 μm and 15 μm; the exposed metal pillar 68 is then used as a pad, which receives a wire in the wire-bonding process and is connected through the metal layer and the polymer layer. As shown in FIG. 19g, if the material of the exposed metal pillar 68 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium and the exposed height is between 5000 Å and 10 μm, one sixth adhesion/barrier layer 150 is formed on the exposed surfaces of the second polymer layer 70 and the metal pillars 68; the material of the sixth adhesion/barrier layer 150 is titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like.
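The exposed-height windows quoted above for the different uses of the metal pillar 68 can be collected into a small lookup table. The function name and dictionary layout below are illustrative; the numeric windows are the ones stated in the text (gold-type bump 15 μm to 30 μm, solder bump 50 μm to 100 μm, pad 1 μm to 15 μm, and 5000 Å, i.e. 0.5 μm, to 10 μm before forming the adhesion/barrier layer 150).

```python
# Exposed-height windows (in um) quoted in the text for each use of the
# exposed metal pillar 68; 5000 angstrom = 0.5 um.
EXPOSED_HEIGHT_WINDOWS_UM = {
    "bump_gold_type": (15.0, 30.0),   # Au/Cu/Ag/Pd/Pt/Rh/Ru/Re pillar as bump
    "bump_solder": (50.0, 100.0),     # solder pillar reflowed into a ball
    "pad": (1.0, 15.0),               # pillar used directly as a wire-bond pad
    "under_metal_line": (0.5, 10.0),  # before forming adhesion/barrier layer 150
}

def height_in_window(use, height_um):
    """True if the exposed pillar height falls inside the quoted window."""
    low, high = EXPOSED_HEIGHT_WINDOWS_UM[use]
    return low <= height_um <= high
```

A check like this mirrors how the etch-back depth must be retargeted depending on whether the pillar will serve as a bump, a ball, a pad or an interconnect base.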
The seed layer lies on top of the sixth adhesion/barrier layer 150, the material of the seed layer being gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, and the sixth adhesion/barrier layer 150 being 1000 Å to 7500 Å thick. As shown in FIG. 19h, one seventh patterned photoresist layer 152 is formed on the sixth adhesion/barrier layer 150, the multiple openings of the seventh patterned photoresist layer 152 exposing the sixth adhesion/barrier layer. As shown in FIG. 19i, one eighth metal layer 154 is formed in the openings of the seventh patterned photoresist layer 152; as shown in FIG. 19j, the photoresist layer 152 is removed, and the sixth adhesion/barrier/seed layer 150 other than beneath the eighth metal layer 154 is also removed. The eighth metal layer 154 forms metal lines and connects between the two metal pillars 68. The material of the eighth metal layer 154 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, and the thickness of the eighth metal layer 154 is 1 μm to 30 μm; when the eighth metal layer 154 is a composite metal layer, one copper layer of 1 μm to 30 μm thickness is formed by electroplating, followed by one nickel layer of 1 μm to 10 μm thickness electroplated on the copper layer, and finally one gold layer of 1 μm to 10 μm thickness electroplated on the nickel layer. Finally, as shown in FIG. 19k, the eighth metal layer 154 and the second polymer layer 70 are coated with a protective layer 154 to protect them from damage. The material of the protective layer 154 is selected from thermoplastic plastic, thermosetting plastic, polyimide (PI), benzo-cyclo-butene (BCB), polyurethane, epoxy resin, poly-p-xylene polymers, solder mask materials, elastic materials, porous dielectric materials, silicon oxide, silicon oxide compounds, silicon-phosphorus glass, silicon nitride, silicon oxy-nitride (SiON) and the like. The method of exposing the metal pillars 68 by etching is applied not only to the connection of bumps, pads and metal lines but also to coil structures, capacitor structures and resistor structures; the fabrication steps are similar to the above embodiments and will not be described again. Seventh Embodiment: The structure of this embodiment is similar to that of FIG. 8c, except that the manufacturing process of the metal pillar 68 and the second polymer layer 70 is different. As shown in FIG. 20a, after the first RDL layer 60 is formed on the semiconductor base 30, one ninth patterned polymer layer 158 is formed on the first RDL layer 60 and the first adhesion/barrier/seed layer 50; the multiple openings of the ninth patterned polymer layer 158 expose the first RDL layer 60, and the depth of the openings of the ninth patterned polymer layer 158 is between 20 μm and 300 μm. The material of the ninth patterned polymer layer 158 is selected from thermoplastic plastic, thermosetting plastic, polyimide (PI), benzo-cyclo-butene (BCB), polyurethane, epoxy resin, poly-p-xylene polymers, solder mask materials, elastic materials, porous dielectric materials and the like. In addition, the method of forming the ninth patterned polymer layer 158 hot-laminates one patterned dry film onto the semiconductor base 30, or hot-laminates one photosensitive dry film onto the semiconductor base 30.
After hot lamination, the photosensitive dry film is patterned by lithography; or one non-photosensitive thin film is hot-laminated on the semiconductor base 30 and patterned by lithography plus etching; or the ninth polymer layer 158 is formed on the semiconductor base 30 by screen printing; or one photosensitive thin film is formed on the semiconductor base 30 by spin coating and patterned by lithography; or one non-photosensitive thin film is formed on the semiconductor base 30 by spin coating and patterned by lithography plus etching. As shown in FIG. 20b, a seventh adhesion/barrier layer 160 having a thickness of 400 Å to 7000 Å is formed on the ninth patterned polymer layer 158 and on the first RDL layer 60 in the openings of the ninth patterned polymer layer 158. The material of the seventh adhesion/barrier layer 160 is titanium, titanium-tungsten alloy, tantalum, tantalum nitride or the like; the seed layer lies on the seventh adhesion/barrier layer 160, the material of the seed layer being gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, with a thickness of 1000 Å to 7500 Å. As shown in FIG. 20c, one ninth metal layer 162 is formed on the seventh adhesion/barrier/seed layer 160 by the damascene method, filling the openings of the ninth patterned polymer layer 158. The material of the ninth metal layer 162 is gold, copper, silver, palladium, platinum, rhodium, ruthenium or rhenium, and the thickness of the ninth metal layer 162 is 1 μm to 30 μm; when the ninth metal layer 162 is a composite metal layer, one copper layer of 1 μm to 30 μm thickness is formed by electroplating, followed by one nickel layer of 1 μm to 10 μm thickness electroplated on the copper layer, and finally one gold layer of 1 μm to 10 μm thickness electroplated on the nickel layer. As shown in FIG. 20d, the ninth metal layer 162 and the seventh adhesion/barrier layer 160 other than in the openings of the ninth patterned polymer layer 158 are removed in one polishing step, and the formation of the metal pillar 68 is complete. The maximum width Hw of the metal pillar 68 divided by its height Ht is smaller than 4, the maximum lateral width of the metal pillar 68 is 3 μm to 50 μm, and the distance Hb between adjacent metal pillars 68 is between 10 μm and 250 μm. Since the structure of the metal pillar 68 formed by the damascene method is very similar to the structure shown in FIG. 8c above, the fabrication schemes of the other devices on the metal pillar 68 follow the same steps. As shown in FIGS. 21a to 21d, these figures show the fabrication of the protruding bump, pad, tin ball and RDL layer on the patterned polymer layer 158 and the metal pillar 68; since these are part of the fabrication process already described, only the final finished structures are shown here and the process portion is omitted. As shown in FIGS. 22 to 25, these figures display the interconnection of a wire, a coil, a capacitor device and a resistor device on the patterned polymer layer 158 and the metal pillar 68.
Since that part of the manufacturing process has been described in the above embodiments, only the final finished structures are shown here, and the process portion is omitted. The present invention provides stress relief and a reduction of the distance between contact window structures; according to the present invention, it is possible to achieve a target pitch of 250 μm or less and a pinhole count of 400 or less. An improvement of the IC function is also obtained, and it is possible to significantly reduce the resistance and load of the IC metal connection lines of low power supply IC elements. The foregoing has described the features of the present invention by way of embodiments; its purpose is merely to enable those familiar with the art to understand the contents of the present invention and to implement it accordingly, and it is not intended to limit the scope of the present invention. Therefore, modifications or amendments that do not depart from the spirit disclosed by the present invention and that achieve the same efficacy should be included in the scope of the claims described below.
The drawings are briefly described as follows.
Cross-sectional explanatory drawing of a prior art.
Cross-sectional explanatory drawing of the semiconductor base according to the first embodiment of the present invention.
Cross-sectional explanatory drawing of the thin-film connection structure and protective layer formed on the semiconductor base according to the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first polymer layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first polymer layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first adhesion/barrier layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first RDL layer and metal pillar formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first RDL layer and metal pillar formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first RDL layer and metal pillar formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first RDL layer and metal pillar formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the first RDL layer and metal pillar formed in the first embodiment of the present invention.
Explanatory drawing of the physical properties of a gold pillar body formed in the first embodiment of the present invention.
Plan view of the physical properties of a gold pillar body formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the second polymer layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the second polymer layer opening formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the second polymer layer polished in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the third polymer layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the third metal layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the third metal layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the third metal layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the third metal layer formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the cutting of the semiconductor base in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the tin ball formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the tin ball formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the tin ball formed in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the cutting of the semiconductor base and its bonding to a substrate in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the cutting of the semiconductor base and its bonding to a substrate in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the wire-bonding process on the metal pillar in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the wire-bonding process on the metal pillar in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the RDL layer formed on the metal pillar in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the RDL layer formed on the metal pillar in the first embodiment of the present invention.
Cross-sectional explanatory drawing of the fabrication of the copper/nickel/gold or copper/gold metal pillar wire in the second embodiment of the present invention.
Cross-sectional explanatory drawing of the fabrication of the copper/nickel/gold or copper/gold metal pillar wire in the second embodiment of the present invention.
Cross-sectional explanatory drawing of the fabrication of the copper/nickel/gold or copper/gold metal pillar wire in the second embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the first coil metal layer formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the second coil metal layer formed in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the capacitor device formed on the metal pillar in the third embodiment of the present invention.
Cross-sectional explanatory drawing of the connection between the metal layer and the two metal pillars formed in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the connection between the metal layer and the two metal pillars formed in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the connection between the metal layer and the two metal pillars formed in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the connection between the metal layer and the two metal pillars formed in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing of the multilayer line layer formed on the metal pillar in the fourth embodiment of the present invention.
Cross-sectional explanatory drawing on the metal column in which
the multilayer line layer formed by 4th Example of this invention was.The resistance device formed by the 5th example of the present invention is a section explanatory view on a metal pillar.Sectional explanatory drawing which removed the 2nd polymer layer of one part by the etching system utilized by 6th Example of this invention.Sectional explanatory drawing which removed the 2nd polymer layer of one part by the etching system utilized by 6th Example of this invention.Cross-sectional explanatory drawing at the time of the semiconductor base cut of 6th Example of this invention.Cross-sectional explanatory drawing of the tin ball and cut step which were formed by 6th Example of this invention.Cross-sectional explanatory drawing of the tin ball and cut step which were formed by 6th Example of this invention.Cross-sectional explanatory drawing of the pad formed by 6th Example of this invention.Cross-sectional explanatory drawing at the time of a connection with the metal layer of two metal pillars formed by 6th Example of this invention.Cross-sectional explanatory drawing at the time of a connection with the metal layer of two metal pillars formed by 6th Example of this invention.Cross-sectional explanatory drawing at the time of a connection with the metal layer of two metal pillars formed by 6th Example of this invention.Cross-sectional explanatory drawing at the time of a connection with the metal layer of two metal pillars formed by 6th Example of this invention.Cross-sectional explanatory drawing at the time of a connection with the metal layer of two metal pillars formed by 6th Example of this invention.Fig. 
9 is a cross-sectional view of the semiconductor pattern formed on the semiconductor base according to the ninth embodiment of the present invention.Cross-sectional explanatory drawing of the metal pillar formed by the damascene system of 7th Example of this invention.Cross-sectional explanatory drawing of the metal pillar formed by the damascene system of 7th Example of this invention.Cross-sectional explanatory drawing of the metal pillar formed by the damascene system of 7th Example of this invention.Cross-sectional explanatory drawing of protrusion * pad * tin ball * RDL layer structure formed by the other Example of this invention.Cross-sectional explanatory drawing of protrusion * pad * tin ball * RDL layer structure formed by the other Example of this invention.Cross-sectional explanatory drawing of protrusion * pad * tin ball * RDL layer structure formed by the other Example of this invention.Cross-sectional explanatory drawing of protrusion * pad * tin ball * RDL layer structure formed by the other Example of this invention.FIG. 6 is a cross-sectional view of a freeway, coil, capacitor device, and resistor device structure formed in another embodiment of the present invention.FIG. 6 is a cross-sectional view of a freeway, coil, capacitor device, and resistor device structure formed in another embodiment of the present invention.FIG. 6 is a cross-sectional view of a freeway, coil, capacitor device, and resistor device structure formed in another embodiment of the present invention.FIG. 
6 is a cross-sectional view of a freeway, coil, capacitor device, and resistor device structure formed in another embodiment of the present invention.10: post passivation structure, 12: RDL layer, 14: polymer layer, 16: protective layer, 18: semiconductor IC chip, 20: pad, 22: bevel, 30: semiconductor base, 32: electronic element, 34: thin connection Structure, 36: thin film insulating layer, 38: thin line layer, 40: through hole, 42: protective layer, 44: pad, 46: first polymer layer, 48: opening, 50: first adhesion / inhibition layer, 54 1st design-hardened photoresist layer 56: opening 58: first metal layer 60: first RDL layer 62: 2 second design-hardened photoresist layer 64: opening 66: second metal layer 68: metal column, 70: second polymer layer, 72: opening, 74: third polymer layer, 76: opening, 78: second adhesive / inhibition layer, 82: third design hardened photoresist layer 83: Opening 84: Third metal layer 86 Protruding block, 88: semiconductor unit, 92: tin ball, 94: substrate, 96: fourth polymer layer, 98: pad, 100: RDL layer, 102: gold layer, 104: copper layer, 105: third adhesive / Inhibiting layer 106: nickel layer 110: FIG. 4 Hardened photoresist layer 112: aperture 114: fourth metal layer 116: first coil metal layer 117: protective layer 118: fifth polymer layer , 120: second coil metal layer, 121: capacitor device, 121a: low dielectric layer, 121b: insulating layer, 121c: low resistance metal layer, 121d: protective layer, 122: fourth adhesive / inhibitory layer, 126: fifth Hardened photoresist layer for design, 128: opening, 130: fifth metal layer, 132: protective layer, 134: sixth polymer layer, 136: fifth adhesion / inhibition layer, 140: sixth design for hardened photo resist Layer 142: sixth metal layer 144: seventh polymer layer 146: seventh metal layer 148: eighth polymer layer 150: sixth adhesion / inhibition layer 152: seventh photo-hardened photo Resist layer, 154: eighth metal layer, 156: protective layer, 158: FIG. 
9 designed polymer layer, 160: seventh adhesive / inhibitory layer, 162: ninth metal layer |
The invention relates to an adaptive position indicator. Methods, systems, computer-readable media, and apparatuses for determining a position indicator are presented. In some embodiments, position data indicating a position of a mobile device is obtained. A position indicator is determined based on at least one region of a map, the position of the mobile device being located within the at least one region. The position indicator indicates a map-feature-dependent region of the map. The position indicator is provided. |
1. A method for a user device, comprising: obtaining a location of the user device; obtaining a location accuracy privacy setting, the location accuracy privacy setting indicating a desired level of accuracy to provide to one or more applications, wherein the location accuracy privacy setting is defined by the user; determining a location indicator based on the location and the location accuracy privacy setting; and providing the location indicator to the one or more applications. 2. The method of claim 1, wherein the location indicator comprises a point, a region, or a geographic division. 3. The method of claim 1, wherein the location indicator is based on a user input setting, wherein the user input setting can indicate the size, the type, or both of the location indicator. 4. The method of claim 3, further comprising providing the user input setting using a switch control. 5. The method of claim 1, wherein the location indicator has a time validity duration, wherein the time validity duration is a period during which the location indicator is valid. 6. The method of claim 1, wherein the location indicator is based on at least one region of a map. 7. The method of claim 1, wherein the location indicator indicates a map-feature-dependent region of the map. 8. The method of claim 7, wherein the map-feature-dependent region of the map comprises a region defined by a map feature. 9. The method of claim 7, wherein the map-feature-dependent region comprises a building, an urban settlement, a lake, a parking area, a block, a city, or a route. 10. The method of claim 1, wherein the location accuracy privacy setting may include a minimum size and/or a maximum size of the location indicator. 11. An apparatus for determining a location indicator, comprising: a memory; and a processor coupled to the memory and configured to: obtain a location of the apparatus; obtain a location accuracy privacy setting, the location accuracy privacy setting indicating a desired level of accuracy to provide to one or more applications, wherein the location accuracy privacy setting is defined by the user; determine a location indicator based on the location and the location accuracy privacy setting; and provide the location indicator to the one or more applications. 12. The apparatus of claim 11, wherein the location indicator comprises a point, a region, or a geographic division. 13. The apparatus of claim 11, wherein the location indicator is based on a user input setting, wherein the user input setting may indicate the size, the type, or both of the location indicator. 14. The apparatus of claim 13, wherein the processor is further configured to provide the user input setting using a switch control. 15. The apparatus of claim 11, wherein the location indicator has a time validity duration, wherein the time validity duration is a period during which the location indicator is valid. 16. The apparatus of claim 11, wherein the location indicator is based on at least one region of a map. 17. The apparatus of claim 11, wherein the location indicator indicates a map-feature-dependent region of the map. 18. The apparatus of claim 17, wherein the map-feature-dependent region of the map comprises a region defined by a map feature. 19. The apparatus of claim 17, wherein the map-feature-dependent region comprises a building, an urban settlement, a lake, a parking area, a block, a city, or a route. 20. The apparatus of claim 11, wherein the location accuracy privacy setting may include a minimum size and/or a maximum size of the location indicator. |
Adaptive Position Indicator. Information about divisional application: This case is a divisional application. The parent case of this division is an invention patent application with an application date of January 12, 2016, application number 201680008213.6, and the invention title "Adaptive Position Indicator". Background: An aspect of the invention relates to displaying the location of a mobile device on a map. In some locations, the location of the mobile device may not be determinable, or the location may be determined only with low accuracy. For example, a portion of a wireless-local-area-network positioning environment that is not covered by access points may leave the server without sufficient data to determine the location of the mobile device with high accuracy. Similarly, when the mobile device does not receive sufficient data from Global Positioning System (GPS) satellites, the mobile device may not be able to determine its location. When the location of the device is temporarily undeterminable and the device subsequently regains the ability to determine its location, the point indicator displayed to indicate the location of the mobile device can jump from the previously indicated location to the current location. When the position of the mobile device can be determined only with low accuracy, the position of the point indicator can change rapidly over time. A point indicator that jumps from one place to another, or whose position changes rapidly over time, may confuse or frustrate the user. A point indicator and/or a precise coordinate indication of the location of the mobile device can indicate the location of the user's mobile device with higher accuracy than the user desires.
When a precise indication of the user's location is available to other users, service providers, and the like, the user's privacy and/or security may be compromised. Summary of the invention: Certain aspects describe determining location indicators. In one example, a method for determining a location indicator is disclosed. The method includes obtaining location data indicative of the location of a mobile device. A location indicator is determined based on at least one region of a map. The position of the mobile device is located within the at least one region. The location indicator indicates a map-feature-dependent region of the map. The location indicator is provided. In another example, a system is disclosed. The system includes a processor. The processor is configured to obtain location data indicative of the location of a mobile device. The processor determines a location indicator based on at least one region of a map. The position of the mobile device is located within the at least one region. The location indicator indicates a map-feature-dependent region of the map. The processor provides the location indicator. In another example, a non-transitory computer-readable storage medium including one or more programs is disclosed. The one or more programs are configured to be executed by a processor to perform the method of determining the location of a mobile device. The one or more programs include instructions for obtaining location data indicating the location of the mobile device. The one or more programs additionally include instructions for determining a location indicator based on at least one region of a map. The position of the mobile device is located within the at least one region. The location indicator indicates a map-feature-dependent region of the map. The one or more programs further include instructions for providing the location indicator. In another example, a mobile device is disclosed.
The mobile device includes means for obtaining location data indicating the location of the mobile device. The mobile device additionally includes means for determining a location indicator based on at least one region of a map. The position of the mobile device is located within the at least one region. The location indicator indicates a map-feature-dependent region of the map. The mobile device further includes means for providing the location indicator. Description of the drawings: Aspects of the present invention are illustrated by means of examples. In the drawings, similar reference numerals indicate similar elements. Figure 1 illustrates a terrestrial network and satellite positioning system that can be implemented to determine the location of a mobile device. Figure 2 illustrates a point indicator displayed on a map according to some embodiments. Figure 3 illustrates a series of locations determined for a mobile device over a time span according to some embodiments. Figure 4 illustrates an area indicator displayed on a map according to some embodiments. Figure 5 illustrates a geographic subregion indicator displayed on a map according to some embodiments. Figure 6 illustrates an area indicator determined based on AP coverage, according to some embodiments. Figure 7 illustrates a reduced-scale map for displaying location indicators according to some embodiments. Figure 8 is a flowchart illustrating an example process for determining a location indicator according to some embodiments. Figure 9 is a flowchart illustrating an example process for determining whether to display a point indicator or an area indicator according to some embodiments. Figure 10 is a flowchart illustrating an example process for determining a location indicator using settings on a mobile device according to some embodiments. Figure 11 illustrates an example of a computing system in which one or more embodiments may be implemented. Figure 12 illustrates an example of a mobile device according to some embodiments. Detailed description: Several illustrative embodiments will now be described with reference to the drawings. Although the following describes specific embodiments in which one or more aspects of the present invention may be implemented, other embodiments may be used and various modifications may be made without departing from the scope of the present invention or the spirit of the appended claims. A device can display the location of a mobile device using a location indicator, such as a point indicator, displayed on a map. In some embodiments, the available positioning data may not be sufficient to calculate the position with the desired accuracy. For example, it may not be possible to determine a location at all from the available positioning data. In another example, the accuracy with which the position can be determined may be insufficient; for example, the metric used to assess positioning accuracy may fail to meet a threshold (for example, a threshold set by the user). Determining the location indicator based on the determined positioning accuracy (for example, changing the manner in which the location indicator is displayed) may be beneficial in various respects. Displaying a location indicator that indicates a map-feature-dependent region of the map can provide an aesthetically pleasing location indicator, can indicate the general location of the user's device while protecting the user's privacy and/or security, and can allow the location of a mobile device to be tracked on a per-region basis rather than with more detailed location tracking, among other benefits. In other embodiments, the user may wish to limit the accuracy with which the mobile device's location is available to third parties. As used herein, "mobile device" can refer to any mobile electronic computing device.
The mobile device may be capable of receiving positioning data transmitted by an access point (AP), a global positioning system (GPS) satellite, a positioning server, and/or other positioning system components. Examples of mobile devices may include smartphones, laptop computers, portable gaming systems, wearable devices, devices installed in automobiles, robots, specialized electronic devices for positioning, and/or any other such electronic devices. Additional examples of mobile devices and computing devices are disclosed below with respect to FIGS. 10-11. As used herein, "location indicator" may refer to any image, icon, indicator, symbol, text, area, and/or other indicating means used to indicate a location. The location indicator can be a point, a region, and/or a volume. The location indicator may be displayed on a map and/or model to indicate the location of the mobile device. As used herein, "location data" can refer to any data received by the mobile device related to the location of the mobile device. The location data may include, for example: identification information for an access point (AP) that can be used by the user device; received signal strength indications (RSSI) from APs; a map of the RSSI of individual APs; round-trip signal propagation time (RTT); time of arrival (TOA) data; accuracy data, including data indicating the value of a positioning accuracy metric; GPS navigation messages and/or other positioning data received from satellites (SVs); location data used for landmark-based positioning systems; and information indicating the location of the mobile device, such as latitude and longitude values and/or other coordinates used to indicate the location. As described herein, the term "user" refers to any person who interacts with a network-based system capable of determining the location of a mobile device.
Such persons may have mobile devices associated with them that interact electronically with the network-based positioning system. Such persons can indicate information about areas on the map, and this information can be provided to the network-based positioning system. As used herein, an "access point" or AP refers to a device connected as part of a network accessible by a user's mobile device. This network can provide wireless access to a wider network using a particular wireless networking protocol such as an IEEE 802.11 protocol, Bluetooth, and/or any other wireless communication method. The embodiments described herein may be implemented using any positioning network, such as the terrestrial network system and/or the satellite network system described in relation to FIG. 1. Figure 1 illustrates a terrestrial network and satellite positioning system that can be implemented to determine the location of a mobile device according to some embodiments. The terrestrial network and satellite positioning system 100 may include a mobile device 102; a server 104; multiple access points (APs), such as AP 106, AP 108, and AP 110; multiple base stations, such as base station 112, base station 114, and base station 116; and multiple satellites (SVs), such as SV 118, SV 120, and SV 122. The server 104 may include one or more computing devices capable of processing location data and/or communicating with the mobile device 102 regarding location data. The server 104 can access data from a database (not shown). The database may be stored on one or more computing devices of the server 104, and/or may be stored on one or more devices remote from the server 104 and communicatively coupled to the server.
The server 104 may be located at the venue for which location data is being provided, and/or may be located away from this venue. The access points 106 to 110 may be communicatively coupled to the server 104 and any other available infrastructure computing devices by wired and/or wireless connections. The access points 106 to 110 may communicate with the mobile device 102 using network connectivity and/or other wireless connectivity (e.g., Wi-Fi, Bluetooth, and the like). In some embodiments, the terrestrial network system includes multiple base stations, such as base stations 112, 114, and 116. The terrestrial network can provide voice and/or data communication for several mobile devices, including the mobile device 102, via the base stations 112 to 116. In some embodiments, data communications received by the mobile device via base stations 112 to 116 may include location data. The communication between the mobile device 102 and the base stations 112 to 116 may occur via cellular networks such as CDMA, LTE, WiMAX, and the like. The terrestrial network system may be used to determine the location of the mobile device 102 using signals transmitted from one or more base stations 112 to 116 and/or APs 106 to 110. Environments in which the terrestrial network system can be used to determine the location of the mobile device 102 can include indoor environments, environments of walkable scale, and other environments with similar scales, such as shopping malls, airports, stadiums, educational campuses, commercial campuses, convention and exhibition centers, and the like. The mobile device 102 may receive and/or observe location data from base stations 112 to 116 and/or APs 106 to 110 that may be at known locations.
The location data received and/or observed by the mobile device 102 from the base stations 112 to 116 and/or the APs 106 to 110 may include, for example, RSSI, RTT, and TOA. The mobile device 102 can use the location data to estimate the distances between the mobile device 102 and the multiple base stations 112 to 116 and/or APs 106 to 110. The mobile device 102 can use the estimated distances and the known locations to perform trilateration or other location analysis techniques to estimate the location of the mobile device 102. In some embodiments, the mobile device 102 may provide the received and/or observed location data to the server 104. The server 104 can use the location data to estimate the distances between the mobile device 102 and the multiple base stations 112 to 116 and/or APs 106 to 110. The server 104 can use the estimated distances and the known locations to perform trilateration or other location analysis techniques to estimate the location of the mobile device 102. The server 104 may provide location data indicating the location of the mobile device 102 to the mobile device 102. For example, the server 104 may transmit the location coordinates of the mobile device 102 to the mobile device 102. In another example, the mobile device 102 or the server 104 may compare location data such as RSSI, RTT, and/or TOA for multiple APs 106 to 110 and/or base stations 112 to 116 with heat maps, which provide the expected signal strengths of the multiple APs 106 to 110 and/or base stations 112 to 116 at various locations (e.g., grid points) in the environment. The mobile device 102 or the server 104 may use pattern matching and/or another analysis technique to determine the location of the mobile device 102.
For example, pattern matching may include finding the location coordinates at which the RSSI values of multiple APs 106 to 110 and/or base stations 112 to 116, as determined from the heat map, most closely match the RSSI values observed by the mobile device 102. The heat map information corresponding to APs 106 to 110 and/or base stations 112 to 116 may be collected using different techniques. For example, a dedicated device can be used to measure the signal strength at certain locations and send the measured data to the location server. The server 104 may store heat map information corresponding to the environment of the mobile device 102 and provide the information to the mobile device. In some embodiments, crowdsourcing schemes can be used to generate heat map information. For example, multiple mobile devices 102 can participate in crowdsourcing. The participating mobile devices 102 can receive and/or observe location data from APs 106 to 110 and/or base stations 112 to 116. The participating mobile devices 102 can transmit the location data to the server 104. The server 104 may use the received location data to determine location data including heat maps, the locations of APs 106 to 110, RSSI information for APs 106 to 110 at various locations relative to the APs, and the like. The location data determined by the server 104 can be used to determine the location of the mobile device 102. A satellite network system that can be implemented for determining the location of a mobile device can include the mobile device 102 and a plurality of satellites (SVs), such as SV 118, SV 120, and SV 122. The satellite positioning system may include one or more satellite positioning systems, such as GPS, GNSS, BeiDou, GLONASS, and/or Galileo and the like. The mobile device 102 may receive signals from one or more of SVs 118 to 122. In one example, the mobile device 102 may receive and/or observe location data from one or more of SVs 118 to 122, such as one or more signals from SVs 118 to 122.
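The heat-map pattern matching described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the disclosure: the function name, data values, and the choice of Euclidean distance as the matching metric are assumptions. The grid point whose stored RSSI vector is closest to the observed RSSI vector is taken as the estimated location.

```python
import math

def match_fingerprint(observed, heat_map):
    """Return the grid point whose stored RSSI vector best matches `observed`.

    observed: dict mapping AP id -> observed RSSI in dBm, e.g. {"AP1": -52}
    heat_map: dict mapping (x, y) grid point -> dict of AP id -> expected RSSI
    """
    best_point, best_dist = None, float("inf")
    for point, expected in heat_map.items():
        # Compare only APs present in both the observation and the heat map.
        common = observed.keys() & expected.keys()
        if not common:
            continue
        dist = math.sqrt(sum((observed[ap] - expected[ap]) ** 2 for ap in common))
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point

# Hypothetical heat map: expected RSSI of three APs at three grid points.
heat_map = {
    (0, 0): {"AP1": -40, "AP2": -70, "AP3": -80},
    (0, 5): {"AP1": -60, "AP2": -50, "AP3": -75},
    (5, 5): {"AP1": -80, "AP2": -55, "AP3": -45},
}
observed = {"AP1": -62, "AP2": -52, "AP3": -73}
print(match_fingerprint(observed, heat_map))  # closest grid point: (0, 5)
```

In practice the server 104 would hold a much denser grid and might interpolate between the closest grid points rather than returning a single one.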
The mobile device 102 can use the location data from the SVs 118 to 122 to determine the location of the mobile device 102. Another example positioning technique that can be used by the mobile device 102 or the server 104 to determine the location of the mobile device 102 is landmark-based positioning. When the mobile device determines that the available positioning data is insufficient to calculate the position with the desired accuracy, the mobile device can indicate this situation by changing the manner in which the position indicator is displayed. Various methods used to assess accuracy are discussed below. In some embodiments, when the location of the mobile device 102 is determined by the server 104, the location data received by the mobile device 102 from the server 104 may include accuracy data associated with the determined location. When the mobile device 102 determines its own location, the mobile device 102 can determine the accuracy data associated with the determined location. The mobile device 102 and/or the server 104 may evaluate the accuracy data to determine the accuracy of the location determination for the mobile device 102. Based on the determined accuracy, the mobile device 102 and/or the server 104 can determine the type of location indicator to display, such as a point indicator and/or an area indicator, as discussed below. In some embodiments, accuracy data for multiple locations in a specific environment (for example, a heat map) may be stored by the mobile device 102 and/or the server 104. When the location of the mobile device 102 is determined, the heat map data at the determined location may be compared to one or more thresholds. For example, the accuracy data may include RSSI data received by the mobile device 102 from the APs 106 to 110. When the mobile device 102 receives RSSI data exceeding a threshold from a minimum number of APs (for example, 3 APs), the accuracy criterion can be met.
The threshold RSSI value may be, for example, in the range of -80 dB to -40 dB, such as in the range of -70 dB to -50 dB, for example -60 dB. The accuracy data may include horizontal dilution of precision (HDOP) values. When the HDOP value is less than a threshold, the accuracy criterion can be met. The mobile device 102 may receive location data that includes the HDOP value associated with the determined location of the mobile device 102. For example, a server that stores HDOP values associated with locations in the environment may determine the HDOP value associated with the determined location of the mobile device 102 and provide the HDOP value to the mobile device 102. In some embodiments, the mobile device 102 receives multiple HDOP values associated with locations in the environment. The mobile device 102 can determine the HDOP value associated with the determined location of the mobile device 102. The mobile device can compare the HDOP value with a threshold HDOP value. The threshold HDOP value may be a value in the range of 1 to 8, such as 1 to 4, for example 2. The accuracy data may include the maximum change in the determined position of the mobile device 102 within a time span (i.e., time period). When the total movement of the mobile device 102 in a time period (e.g., along a path such as the path 306), as determined from the location data, is less than a threshold distance, the accuracy criterion may be satisfied. The time period may be, for example, a period in the range of 1 second to 60 seconds, such as a period in the range of 5 seconds to 30 seconds, for example 20 seconds. The threshold distance may be a distance in the range of 5 feet to 50 feet, such as a distance in the range of 10 feet to 30 feet, for example 20 feet. The mobile device 102 may compare the location data determined for the mobile device 102 within a time period with the output of one or more sensors of the mobile device 102 during the time period.
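The RSSI and HDOP accuracy criteria just described can be expressed as simple threshold checks. The sketch below is illustrative only (the function names are not from the disclosure); it uses the example values given in the text: RSSI above -60 dB from at least 3 APs, and HDOP below 2.

```python
def rssi_criterion_met(rssi_by_ap, threshold_db=-60, min_aps=3):
    """RSSI criterion: at least `min_aps` APs observed above `threshold_db` dBm."""
    strong = [rssi for rssi in rssi_by_ap.values() if rssi > threshold_db]
    return len(strong) >= min_aps

def hdop_criterion_met(hdop, threshold=2.0):
    """HDOP criterion: horizontal dilution of precision below the threshold."""
    return hdop < threshold

def position_is_accurate(rssi_by_ap, hdop):
    """Both criteria must hold for the position to be treated as accurate."""
    return rssi_criterion_met(rssi_by_ap) and hdop_criterion_met(hdop)

print(position_is_accurate({"AP1": -45, "AP2": -55, "AP3": -58}, hdop=1.5))  # True
print(position_is_accurate({"AP1": -45, "AP2": -72, "AP3": -58}, hdop=1.5))  # False: only 2 strong APs
```

A device combining several criteria in this way could then choose between a point indicator and an area indicator based on the result.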
The sensors may include, for example, the accelerometer of the mobile device 102. For example, if the change in the position determined for the mobile device 102 over a time period exceeds the distance traveled by the mobile device in the same time period as determined by the accelerometer of the mobile device 102, the position may not have been determined accurately. When the difference between the total movement of the mobile device 102 in a time period, as determined from the position data, and the total movement of the mobile device 102 in the same time period, as determined by the sensors of the mobile device, is less than a threshold distance difference, the accuracy criterion can be met. The threshold distance difference (i.e., delta) between the movement as determined from the location data and as determined by the sensors of the mobile device may be a distance in the range of 1 to 20 feet, for example a distance in the range of 3 to 10 feet, such as 5 feet. In some embodiments, the one or more accuracy criteria are based on default or custom settings that indicate a desired level of accuracy. For example, the user may wish to share the location of the mobile device with a third party. The user may wish to limit the accuracy with which the third party can view the user's location. The user can define settings, such as a preference for displaying an area indicator and/or a point indicator, a preferred area type, a distance, or another indication of the accuracy with which a third party can view the location of the mobile device. The accuracy criteria indicating the desired level of accuracy of the location display may be stored by the mobile device 102 and/or the server 104, for example. Figure 2 illustrates a location indicator displayed on a map according to some embodiments. The location indicator may be a point indicator 202 displayed on the map 200. The location indicator may be displayed on the display of the mobile device 102 or another display.
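The sensor-consistency criterion described above, comparing movement derived from position fixes against movement reported by the accelerometer, can be sketched as follows. This is an illustrative sketch only (function names and data are assumptions); it uses the example 5-foot threshold delta from the text.

```python
import math

def path_length(fixes):
    """Total movement along a sequence of (x, y) position fixes, in feet."""
    return sum(math.dist(a, b) for a, b in zip(fixes, fixes[1:]))

def positions_consistent_with_sensor(fixes, sensor_distance_ft, max_delta_ft=5.0):
    """Accuracy criterion: the position-derived movement is within
    `max_delta_ft` of the distance the accelerometer says was traveled."""
    return abs(path_length(fixes) - sensor_distance_ft) <= max_delta_ft

# 5 ft + 6 ft = 11 ft of movement according to the position fixes.
fixes = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]
print(positions_consistent_with_sensor(fixes, sensor_distance_ft=9.0))   # True (delta 2 ft)
print(positions_consistent_with_sensor(fixes, sensor_distance_ft=30.0))  # False (delta 19 ft)
```

When the check fails, the device could fall back to displaying an area indicator rather than a point indicator.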
The point indicator 202 may be a small dot as shown, or another shape, image, or other indicator used to indicate the location of the mobile device 102. In some embodiments, the point indicator 202 may be displayed at the center of an uncertainty indicator 204 having a size (e.g., the radius of a circle) corresponding to the accuracy of the location determination of the mobile device 102. The uncertainty indicator 204 may be a circle as shown, or another shape, image, or other indicator used to indicate the degree of uncertainty of the determined position of the mobile device 102. The uncertainty indicator 204 may indicate a range of possible locations of the mobile device 102.

In the illustrative example of FIG. 2, the map 200 is a floor plan. Alternative maps may be used, such as road maps, route maps, neighborhood maps, three-dimensional models of multi-level structures, or any other maps used to provide location information. The map 200 may be received by the mobile device 102 from the server 104.

The map 200 may include multiple areas, such as area 206 and area 208. The areas (for example, 206, 208) can be map feature dependency areas. A map feature dependency area may be an area defined by map features, such as roads, walls, partitions, boundary lines, national borders, natural features, and so on. A map feature dependency area is distinguished from an arbitrary partition of the map, such as a rectangular grid cell. For example, a map feature dependency area of an indoor map may be an area delimited or partially delimited by walls, such as a room. An example of a map feature dependency area of an outdoor map may be a block delimited by multiple roads.
Additional examples of areas may include one or more floors of a multi-storey building, one or more buildings, one or more seating areas, one or more airport boarding gates, one or more urban settlements, one or more lakes, one or more parking areas, one or more blocks, one or more cities, one or more routes, one or more sections of the map 200, or any combination thereof.

The area 206 is a room in which the point indicator 202 is displayed to indicate the determined position of the mobile device 102 within the area 206. In the illustrative example of FIG. 2, area 206 is shown as being bounded by four walls, including the wall shown at 210. A boundary may be a structural boundary, such as a wall, partition, and/or street; a non-structural boundary, such as a land boundary, national border, and/or other division between areas; a boundary indication input by a user; or any combination thereof.

A region can include two or more subregions. For example, if the area is a floor of a building (e.g., the map 200 may be a floor of a building), the floor may include sub-areas (e.g., sub-areas 206, 208). Where regions are discussed herein, it will be recognized that subregions can be used.

In some embodiments, the user can define a map feature dependency area, such as area 206. The user may use the user interface module of a computer system such as the mobile device 102 to define the area information. The user may define the area information by drawing, selecting, or otherwise indicating the boundaries and/or partial boundaries of one or more areas 206 on the map 200. In another example, the user may define the area information by overlaying geometric objects on the map 200 to indicate an area corresponding to the geometric objects. In another example, the user may define area information by painting or otherwise indicating a region associated with an area such as area 206.
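Once map feature dependency areas are defined (by a user or a map analysis module), displaying an area indicator requires deciding which area contains the determined location. One conventional way to do this, assuming each area is stored as a polygon of vertices, is a point-in-polygon test; the patent does not prescribe any particular algorithm, so this ray-casting sketch and its names are illustrative only:

```python
# Minimal point-in-polygon lookup for map feature dependency areas.
# Assumes each area is a closed polygon of (x, y) vertices; illustrative only.
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]


def contains(poly: List[Point], p: Point) -> bool:
    """Even-odd ray-casting test: count crossings of a ray cast to the right."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def area_for(areas: Dict[str, List[Point]], p: Point) -> Optional[str]:
    """Return the name of the first area whose polygon contains p, if any."""
    return next((name for name, poly in areas.items() if contains(poly, p)),
                None)
```

For a rectangular room like area 206, a determined location inside the four walls maps to that room's indicator, while a location outside every defined area yields no match (and, e.g., a point indicator could be used instead).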
The user may additionally provide identification information, such as a room number, room name, floor number, floor name, building name, and/or other metadata associated with the indicated area.

In some embodiments, a map analysis module can be used to automatically determine the map feature dependency areas of the map 200. For example, the map analysis module may analyze visually available information, numerically available information, or other information available in the map 200. The map analysis module can use, for example, image analysis to locate boundaries, differently colored areas, or other indications of areas that can be included in the map. The map analysis module can also use image detection, image tracking, image recognition, and/or feature extraction techniques to determine the areas, boundaries, and the like in the map. The map analysis module can consider metadata information, such as building information, user trajectory data, and so on. In some embodiments, the map analysis module may use a threshold distance (e.g., a minimum distance between walls) to determine a boundary. For example, if there is no boundary within a threshold distance (for example, a distance exceeding 30 feet) in the map or in an area, a boundary can be established within the map or area, such as midway between existing boundaries or at a fixed distance from an existing boundary. The map analysis module may transmit the area information to the server 104 and/or store the area information at the server 104.

Figure 3 illustrates a series of positions determined for a mobile device in a time period according to some embodiments.
The point indicator 202 is shown at eight locations in the map 200, moving from the first location 302 to the eighth location 304 along the path indicated at 306.

When the position of the mobile device 102 cannot be accurately determined, the determined position may change greatly over time, even when the actual position of the mobile device 102 does not change. Low-accuracy location determination can be attributed to, for example, there being relatively few APs 106 to 110 in the area of the map 200 (for example, the area 208), and/or structural barriers in the line of sight between the mobile device 102 and the APs 106 to 110. In this case, a user viewing the map 200 displayed by the mobile device 102 can see the point indicator 202 jump in a short period of time (e.g., along the path 306) from one portion of the area 208 (e.g., as indicated at the location 302) to another portion of the area 208 (e.g., as indicated at the location 304). This may confuse or frustrate users. In such cases, in addition to or in place of the point indicator, it may be desirable to display, for example, an area indicator as described with respect to FIGS. 4 to 6.

Figure 4 illustrates an area indicator displayed on a map according to some embodiments. The area indicator 402 is a location indicator that can be displayed on the map 200 on the display of the mobile device 102. The area indicator 402 indicates that the determined location of the mobile device 102 is within the area 208. In some embodiments, the area indicator 402 may be displayed when the location data cannot be accurately determined. The location data may not be accurately determined when the location data is not available and/or when the location data does not meet the accuracy criteria.

The area indicator 402 may be displayed on the map 200 on the display of the mobile device 102, for example.
In some embodiments, the size of the area indicator 402 may be larger than the size of the point indicator 202. The area indicator 402 may have a shape visually associated with the area, such as a shape that matches or resembles the shape of the area. For example, the area indicator 402 has a shape similar to that of the area 208. The shape of the area indicator 402 can be customized or automatically determined as indicated above. The area indicator 402 may include text or other marks, such as text identifying the area. The area indicator 402 may include features to visually distinguish the area indicator 402 from the map 200. For example, the area indicator 402 may be highlighted, e.g., the area, text, border, and/or other elements of the area indicator 402 may be colored, patterned, bolded, flashed, etc. In one embodiment, the area indicator may be smaller than the indicated area or may be larger than the indicated area.

The point indicator 202 can be displayed when the location data can be accurately determined and/or when the user wishes to view the precise location of the mobile device. The area indicator 402 can be used when the location data cannot be accurately determined and/or when the user wants to show the location of the mobile device generally rather than indicating the precise location of the device. In some embodiments, the area indicator 402 may be displayed only when the point indicator 202 is not displayed. In other embodiments, the area indicator 402 and the point indicator 202 are displayed simultaneously.

The user can use a toggle control to indicate whether the area indicator 402 and/or the point indicator 202 will be displayed on the map 200. For example, the user may select the toggle function using a toggle button of a user interface that displays the map 200 and one or more of the area indicator 402 and the point indicator 202.
In some embodiments, the user may touch the area indicator 402 to switch to the point indicator 202, or the user may touch the point indicator 202 to switch to the area indicator 402.

Figure 5 illustrates a geographic subregion indicator displayed on a map according to some embodiments. In some embodiments, the area indicator 402 may be a geographic division area indicator, as shown at 502. Geographic divisions can be neighborhoods (for example, the Mission as indicated at 502), urban settlements, theme park divisions, shopping malls or other shopping district divisions, postal areas, cities, counties, states, and the like. The area indicator 502 is a location indicator that can be displayed on the map 500 on the display of the mobile device 102. The map 500 may be a block map, a road map, a city map, a postal map, and/or any other map. In some embodiments, the map 500 may be a map used for indoor and/or outdoor positioning, such as positioning performed using base stations 112 to 116 and/or SVs 118 to 122. In the illustrative example of FIG. 5, the area indicator 502 includes a highlighted area and bold text ("Mission").

In some embodiments, the user may share the location indicated by the point indicator 202 and/or the area indicator 402, for example by transmitting location information to a third party (for example, an information collection service or a device owned by another user). The user can choose to share the area indicator 402, for example, to increase privacy and security. The size and/or type of the area indicator to be displayed may be determined based on settings, such as settings input by the user.

Figure 6 illustrates an area indicator determined based on AP coverage, according to some embodiments. In many cases, it may not be possible to accurately determine the location, such as in an environmental area with relatively few APs or in an environmental area where the line of sight to an AP is blocked by a structural barrier.
In some embodiments, heat maps can be used to determine areas where positioning accuracy is expected to be low (e.g., below a threshold accuracy value). The low accuracy area 602 may be an area determined by a coverage evaluation module.

One or more low accuracy areas 602 of the map 200 may be determined based on the heat map or other location data associated with the map 200. For example, the low accuracy area 602 may be determined based on one or more grid points of the heat map at which RSSI values higher than (or, in some embodiments, lower than) a threshold value are not available from more than two APs. The threshold RSSI value may be a value in the range of -80 dBm to -40 dBm. For example, the threshold RSSI value may be a value in the range of -70 dBm to -50 dBm, such as -60 dBm.

In an illustrative example, the heat map may indicate that a grid point receives signals from three APs (e.g., APs 106 to 110). The signals received from APs 106, 108, and 110 are -65 dBm, -55 dBm, and -60 dBm, respectively. The RSSI values are compared to the threshold RSSI value of -60 dBm. The grid point receives a signal exceeding the threshold from AP 108. The grid point does not receive a signal exceeding the threshold from AP 106 or AP 110. Because the grid point does not receive RSSI values higher than the threshold value from more than two APs, the grid point is determined to be part of a low accuracy area (for example, the low accuracy area 602).

In some embodiments, an area indicator is used to display the location of the mobile device 102 when the determined location is within a low accuracy area. The shape of the low accuracy area 602 may be smoothed, rounded, and/or otherwise modified to produce a visually appealing shape. Sensors on the mobile device (via dead reckoning, etc.) and/or other positioning systems and/or technologies can be used to help shape the area indicator. The low accuracy area indicator may be an area indicator 402 having a shape corresponding to the low accuracy area 602.
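The grid-point test in the illustrative example above can be sketched directly: a grid point belongs to a low accuracy area unless more than two APs provide RSSI values above the threshold. The function name and the -60 dBm default mirror the example; they are illustrative, not claim language.

```python
# Sketch of the low-accuracy grid-point test described above (illustrative).
from typing import Dict


def is_low_accuracy(rssi_by_ap: Dict[str, float],
                    threshold_dbm: float = -60.0,
                    min_strong_aps: int = 2) -> bool:
    """True when RSSI values higher than the threshold are NOT available
    from more than `min_strong_aps` APs at this heat-map grid point."""
    strong = sum(1 for rssi in rssi_by_ap.values() if rssi > threshold_dbm)
    return strong <= min_strong_aps
```

Applying it to the example (-65, -55, and -60 dBm against a -60 dBm threshold), only one AP exceeds the threshold, so the grid point is classified as low accuracy; a point hearing three APs above the threshold would not be.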
When the location cannot be accurately determined, the low accuracy area indicator may be used to display the location of the mobile device 102.

Figure 7 illustrates a reduced-scale map for displaying location indicators according to some embodiments. The map 700 shown in FIG. 7 is the map 200 shown in FIG. 2, which has been scaled down (i.e., "zoomed out") to show a larger area. In some embodiments, when the location data cannot be accurately determined, the mobile device 102 may display the map 700 instead of the map 200. In this way, the unstable movement of a point indicator 702 (for example, a point indicator as described with reference to the point indicator 202) can be made less noticeable, because the point indicator 702 "jumps" a smaller distance relative to the scale of the displayed map 700.

In some embodiments, when the reduced-scale map 700 is displayed, the size of the point indicator 702 displayed on the reduced-scale map 700 is larger than the size of the point indicator 202 displayed on the map 200. In this way, movement of the location indicator over time due to low positioning accuracy can be made less noticeable to the user.

Figure 8 is a flowchart illustrating an example process for determining a location indicator according to some embodiments.

At operation 802, a receiver, such as the mobile device 102 and/or the server 104, obtains location data. For example, the location data may be obtained via the antenna 1218 of the mobile device 102 or via the communication subsystem 1012 of the computing device 1000. The location data may include, for example, data indicating the location of the mobile device 102 (e.g., a location determined by the server 104), and/or data that can be used by the mobile device 102 to determine its location.

At operation 804, the receiver determines a location indicator based on at least one area of the map.
In various embodiments, the mobile device 102, the server 104, and/or a computer receiving data from the server 104 may determine the location indicator. The location of the mobile device can be located in the at least one area. The location indicator indicates a map feature dependency area of the map. The receiver may determine the map feature dependency area of the map based on one or more features of the map and/or other criteria, as described above.

In some embodiments, the location indicator is determined based on at least one of a privacy setting, an accuracy level of the location data, a change in location within a time period, or any combination thereof. The accuracy level may include accuracy criteria and/or settings as discussed elsewhere herein.

At operation 806, the location indicator is provided. For example, the server 104 may provide the location indicator to the mobile device 102, and/or the mobile device 102 may provide the location indicator to the server 104.

FIG. 9 is a flowchart illustrating an example process for determining a location indicator using a map feature dependency area of a map according to some embodiments. Depending on the accuracy with which the mobile device 102 can determine the location data and/or on a setting based on a desired accuracy level of the indication (for example, to protect user privacy), the user may wish the mobile device 102 to switch between displaying the point indicator 202 and the area indicator 402.

At operation 902, a device such as the mobile device 102 or the server 104 may determine whether location data has been received by the mobile device 102. For example, the location data may be received via the antenna 1218 of the mobile device 102 or via the communication subsystem 1012 of the computing device 1000. In response to determining that the location data is not received, the flow may return to operation 902.
In response to determining that the mobile device 102 has received the location data, the flow may proceed to operation 904.

At operation 904, the processor 1104 of the mobile device 102 may determine whether the accuracy of the received location data meets one or more accuracy criteria. The mobile device 102 may determine whether the accuracy of the received location data meets the one or more accuracy criteria by comparing the location data with the accuracy criteria as discussed above. If the one or more accuracy criteria are met, the mobile device 102 can determine that the location data has sufficient accuracy.

If the location data lacks sufficient accuracy, the mobile device 102 may display an area indicator 402 associated with the area in which the determined location of the mobile device 102 is located, as indicated at operation 906. For example, the area indicator may be displayed by the display 1222 of the mobile device 102. In some embodiments, when the area indicator 402 is displayed, as indicated at operation 906, the map on which the area indicator 402 is displayed is displayed at a reduced scale (i.e., "zoomed out") to show a larger area (for example, as shown in FIG. 7).

If the location data has sufficient accuracy, the mobile device 102 may display a point indicator 202 at the determined location of the mobile device 102, as indicated at operation 908. For example, the point indicator may be displayed by the display 1222 of the mobile device 102.

In various embodiments, the mobile device 102 may determine whether to display the point indicator and/or the area indicator based on the size of the display, the size of the application interface on the display, the determined location, the uncertainty of the location, or any combination thereof.
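The decision flow of operations 902 to 908 can be condensed into a small selection function. This is a minimal sketch of the flowchart logic only; the string return values and the treatment of missing location data as "display nothing" are assumptions layered on the description.

```python
# Hedged sketch of the FIG. 9 flow: pick a display strategy from whether
# location data was received and whether it met the accuracy criteria.
from typing import Optional, Tuple


def choose_indicator(location: Optional[Tuple[float, float]],
                     accurate: bool) -> Optional[str]:
    """Return 'point', 'area', or None per the operations described above."""
    if location is None:   # operation 902: no location data received yet
        return None
    if accurate:           # operation 904 -> 908: sufficient accuracy
        return "point"
    return "area"          # operation 906: area indicator (possibly zoomed out)
```

In practice the `accurate` flag would come from a check like the accuracy-criteria comparison described earlier, and the returned strategy would drive which indicator the display renders.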
For example, some of these techniques can be used or combined to determine indicators that are aesthetically pleasing (e.g., to avoid displaying an area indicator that fills a significant portion of the displayed application interface).

It will be understood that one or more of the operations described with reference to FIG. 9 may be performed by the server 104 or another device.

Figure 10 is a flowchart illustrating an example process for determining a location indicator using settings on the mobile device 102 according to some embodiments. For applications in which area-level location data can be more useful than precise location data, the server 104 can use settings to protect user privacy. For example, a traveling salesman may want to share only which streets he has been to as part of his work as a salesman, and his boss may want to see which streets the traveling salesman has visited. Other examples of applications where area-level location data may be more desirable include employee tracking for, in particular, traveling salespeople, repair personnel, street cleaning, garbage collection, trucking, and the like.

At operation 1002, the server 104 accesses the settings of the mobile device 102. The setting can be, for example, a location accuracy setting. The settings may be default values, settings received by the mobile device from a third party, and/or settings received by the mobile device from the user (e.g., via user input received at the mobile device). The settings may include one or more values that indicate the level of accuracy available for the location indicator and/or the location data. For example, the location accuracy setting may indicate that the point indicator 202 will be displayed on the user's mobile device, and that the area indicator 402 will be displayed on devices belonging to the user's contacts. In some embodiments, the location accuracy setting may include a minimum and/or maximum size of the location indicator.
For example, the mobile device 102 may receive an input from the user indicating the minimum size of the map feature dependency area indicator 402 to be determined, transmitted, and/or displayed.

The settings can include a time validity duration. The time validity duration may indicate the amount of time during which the location indicator is valid, for example, from the time the location indicator is transmitted at operation 1010. When the time validity duration expires, the location indicator may no longer be used (for example, the mobile device 102 stops displaying the location indicator), the server 104 may repeat one or more of operations 1002 to 1010 to transmit a new location indicator to the mobile device 102, and the like. In some embodiments, the time validity duration may only be applied while the mobile device 102 is in motion (e.g., as determined by the accelerometer 1216 of the mobile device 102).

In an alternative embodiment, the server 104 may store settings such as the location accuracy setting. For example, the server 104 can access settings stored by the server 104, such as settings associated with the mobile device 102.

At operation 1004, the server 104 determines the location of the mobile device 102. For example, the server 104 may receive location data from the mobile device 102. In one embodiment, the server 104 may use heat map data and the received location data to determine the location of the mobile device 102.

At operation 1006, the server 104 accesses a map. For example, the server 104 may use the location determined at operation 1004 to determine the map to be accessed.

At operation 1008, the server 104 determines a location indicator. The server 104 may use the settings accessed at operation 1002, the location determined at operation 1004, and/or the map accessed at operation 1006 to determine the location indicator.
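The time validity duration setting above amounts to an expiry check on the transmitted indicator. A minimal sketch, with the motion-only variant modeled as a flag; the function name, parameters, and the interpretation that a stationary device keeps its indicator valid under the motion-only setting are assumptions drawn from the description.

```python
# Hedged sketch of the time validity duration setting described above.
def indicator_valid(sent_at_s: float, now_s: float, validity_s: float,
                    in_motion: bool = True, motion_only: bool = False) -> bool:
    """False once the validity window has elapsed. If `motion_only` is set,
    the indicator is treated as valid while the device is not in motion
    (e.g., per the accelerometer), mirroring the motion-only embodiment."""
    if motion_only and not in_motion:
        return True
    return (now_s - sent_at_s) <= validity_s
```

For example, an indicator transmitted with a 60-second validity remains usable 30 seconds later, expires after 90 seconds, but would still be treated as valid at 90 seconds if the motion-only variant applies and the device has not moved.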
In some embodiments, the location indicator may be an area indicator 402 indicating a map feature dependency area of the map.

At operation 1010, the server 104 transmits a location indicator, such as a map feature dependency location indicator, to the mobile device 102. The mobile device 102 can display the location indicator received from the server 104. In some embodiments, information such as setting information may be transmitted together with the location indicator.

It will be understood that one or more of the operations described with reference to FIG. 10 may be performed by the mobile device 102 or another device.

Figure 11 illustrates an example of a computing system in which one or more embodiments may be implemented. The computer system as illustrated in FIG. 11 may be incorporated as part of the computerized devices described previously. For example, the computer system 1100 may represent some of the components of the mobile device 102 and/or the server 104. The computer system 1100 may additionally represent any of the APs 106 to 110. The computer system 1100 may further store and/or execute the various modules described herein.

FIG. 11 provides a schematic illustration of one embodiment of a computer system 1100 that can perform the methods provided by various other embodiments as described herein, and/or can function as a host computer system, a remote kiosk/terminal, a point-of-sale device, a mobile device, and/or a computer system. FIG. 11 is intended only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. FIG. 11 therefore broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.

The computer system 1100 is shown as including hardware elements that can be electrically coupled via the bus 1102 (or that may otherwise be in communication, as appropriate).
The hardware elements may include: one or more processors 1104, including (but not limited to) one or more general-purpose processors and/or one or more special-purpose processors (for example, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 1106, which may include (but are not limited to) a mouse, a keyboard, and/or the like; and one or more output devices 1108, which may include (but are not limited to) a display device, a printer, and/or the like. In one embodiment, the one or more processors 1104 may be used to compare location data with at least one accuracy criterion. In one embodiment, the one or more processors 1104 may be used to determine a location indicator based on the comparison of the location data with the at least one accuracy criterion and a map.

The computer system 1100 may further include (and/or be in communication with) one or more non-transitory storage devices 1110, which may include (but are not limited to) local and/or network-accessible storage, and/or may include (but are not limited to) disk drives, drive arrays, optical storage devices, and solid-state storage devices such as random access memory ("RAM") and/or read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including (but not limited to) various file systems, database structures, and/or the like.

The computer system 1100 may also include a communications subsystem 1112, which may include (but is not limited to) a modem, a network card (wireless and/or wired), an infrared communication device, a wireless communication device, and/or a chipset (e.g., a Bluetooth™ device, an 802.11 device, a Wi-Fi device, a WiMax device, cellular communication facilities, etc.), and/or similar communication interfaces.
The computing system may include one or more antennas for wireless communication as part of the communications subsystem 1112 or as a separate component coupled to any part of the system. The communications subsystem 1112 may permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, and/or any other devices described herein. In many embodiments, the computer system 1100 will further include a non-transitory working memory 1114, which may include a RAM and/or ROM device, as described above. In one embodiment, the communications subsystem 1112 may be used to receive location data and/or to determine the location of the computing system.

The computer system 1100 may also include software elements, shown as being currently located within the working memory 1114, including an operating system 1116, device drivers, executable libraries, and/or other code, such as one or more application programs 1118, which may include computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures and/or modules described with respect to the methods discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, such code and/or instructions can then be used to configure and/or adapt a general-purpose computer (or other device) to perform one or more operations in accordance with the described methods.

A set of these instructions and/or code might be stored on a computer-readable storage medium, such as the storage device(s) 1110 described above. In some cases, the storage medium might be incorporated within a computer system, such as the computer system 1100.
In other embodiments, the storage medium might be separate from the computer system (e.g., a removable medium, such as a compact disc) and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general-purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 1100, and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 1100 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.

Substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Moreover, hardware and/or software components that provide certain functionality can comprise a dedicated system (having specialized components) or may be part of a more generic system. For example, a system configured to provide some or all of the features described herein can comprise hardware and/or software that is specialized (e.g., an application-specific integrated circuit (ASIC), a software method, etc.) or generic (e.g., the processor 1104, the applications 1118, etc.). Further, connection to other computing devices, such as network input/output devices, may be employed.

Some embodiments may employ a computer system (such as the computer system 1100) to perform methods in accordance with the present invention.
For example, some or all of the procedures of the described methods may be performed by the computer system 1100 in response to the processor 1104 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 1116 and/or other code, such as the application program 1118) contained in the working memory 1114. Such instructions may be read into the working memory 1114 from another computer-readable medium, such as one or more of the storage devices 1110. Merely by way of example, execution of the sequences of instructions contained in the working memory 1114 might cause the processor 1104 to perform one or more procedures of the methods described herein.

The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 1100, various computer-readable media might be involved in providing instructions/code to the processor 1104 for execution, and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including (but not limited to) non-volatile media and volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage devices 1110. Volatile media include (without limitation) dynamic memory, such as the working memory 1114.

In some embodiments, computer-readable media may include transmission media. Transmission media include (without limitation) coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 1102, as well as the various components of the communications subsystem 1112 (and/or the media by which the communications subsystem 1112 provides communication with other devices).
Hence, transmission media can also take the form of waves (including, without limitation, radio, acoustic, and/or light waves, such as those generated during radio-wave and infrared data communications).

Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, or any other optical medium, punch cards, paper tape, or any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a Flash-EPROM, or any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.

Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor 1104 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. The remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 1100. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments.

The communications subsystem 1112 (and/or components thereof) generally will receive the signals, and the bus 1102 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 1114, from which the processor 1104 retrieves and executes the instructions. The instructions received by the working memory 1114 may optionally be stored on the non-transitory storage device 1110 either before or after execution by the processor 1104.

Figure 12 illustrates an example of a mobile device according to some embodiments. The mobile device 102 includes a processor 1202 and a memory 1204.
The mobile device 102 may use the processor 1202, which is configured to execute instructions for performing operations on a number of components and can be, for example, a general-purpose processor or microprocessor suitable for implementation within a portable electronic device. The processor 1202 is communicatively coupled with a plurality of components within the mobile device 102. For example, the processor 1202 may communicate with the other illustrated components across a bus 1206. The bus 1206 can be any subsystem adapted to transfer data within the mobile device 102. The bus 1206 can be a plurality of computer buses and include additional circuitry to transfer data. In one embodiment, one or more processors 1202 may be used to compare position data with at least one accuracy criterion. In one embodiment, one or more processors 1202 may be used to determine a location indicator based on a map and a comparison of the position data with the at least one accuracy criterion. The memory 1204 may be coupled to the processor 1202. In some embodiments, the memory 1204 offers both short-term and long-term storage and may in fact be divided into several units. The memory 1204 may be volatile, such as static random access memory (SRAM) and/or dynamic random access memory (DRAM), and/or non-volatile, such as read-only memory (ROM), flash memory, and the like. Furthermore, the memory 1204 can include removable storage devices, such as secure digital (SD) cards. Thus, the memory 1204 provides storage of computer-readable instructions, data structures, program modules, and other data for the mobile device 102. In some embodiments, the memory 1204 may be distributed across different hardware modules. In some embodiments, the memory 1204 stores a plurality of application modules 1226 through 1228, which may be any number of applications. Application modules contain particular instructions to be executed by the processor 1202.
In alternative embodiments, other hardware modules 1210 may additionally execute certain applications or parts of the applications 1226 through 1228. In some embodiments, the memory 1204 may additionally include secure memory, which can include additional security controls to prevent copying or other unauthorized access to secure information. In some embodiments, the memory 1204 includes an operating system 1212. The operating system 1212 may be operable to initiate the execution of the instructions provided by the application modules 1226 through 1228 and/or manage the other hardware modules 1210, as well as interface with communication modules which may use the wireless transceiver 1214. The operating system 1212 can be adapted to perform other operations across the components of the mobile device 102, including threading, resource management, data storage control, and other similar functionality. In some embodiments, the mobile device 102 includes a plurality of other hardware modules 1210. Each of the other hardware modules 1210 is a physical module within the mobile device 102. However, while each of the hardware modules 1210 is permanently configured as a structure, a respective one of the hardware modules 1210 may be temporarily configured to perform specific functions or temporarily activated. A common example is an application module that may program a camera module (i.e., a hardware module) for shutter release and image capture. A respective one of the hardware modules 1210 can be, for example, an accelerometer 1216, a Wi-Fi transceiver, a satellite navigation system receiver (e.g., a GPS module), a pressure module, a temperature module, an audio output and/or input module (e.g., a microphone), a camera module, a proximity sensor, an alternate line service (ALS) module, a capacitive touch sensor, a near field communication (NFC) module, a Bluetooth transceiver, a cellular transceiver, a magnetometer, a gyroscope, an inertial sensor (e.g.
, a module combining an accelerometer and a gyroscope), an ambient light sensor, a relative humidity sensor, and/or any other similar module operable to provide sensory output and/or receive sensory input. In some embodiments, one or more functions of the hardware modules 1210 may be implemented in software. In one embodiment, the hardware modules 1210 may be used to receive position data and/or to determine the position of the computing system. The mobile device 102 may include a component such as a wireless communication module, which can integrate the antenna 1218 and the wireless transceiver 1214 with any other hardware, firmware, or software necessary for wireless communications. Such a wireless communication module can be configured to receive signals from various data sources via networks and access points, base stations, SVs, and the like, such as the APs 106 through 110, the base stations 112 through 116, the SVs 118 through 122, etc. In one embodiment, the wireless communication module may be used to receive position data and/or to determine the position of the computing system. In addition to the other hardware modules 1210 and the application modules 1226 through 1228, the mobile device 102 may have a display 1222 and a user input module 1224. The display 1222 graphically presents information from the mobile device 102 to the user. This information may be derived from one or more application modules 1226 through 1228, one or more hardware modules 1210, a combination thereof, and/or any other suitable means for resolving graphical content for the user (e.g., by the operating system 1212). The display 1222 may use liquid crystal display (LCD) technology, light-emitting polymer display (LPD) technology, and/or some other display technology. In some embodiments, the display 1222 is a capacitive and/or resistive touch screen and may be sensitive to haptic and/or tactile contact with the user.
In such embodiments, the display 1222 may comprise a multi-touch-sensitive display. The methods, systems, and devices discussed above are examples. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods described may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, technology evolves, and thus many of the elements are examples that do not limit the scope of the invention to those specific examples. Specific details are given in the description to provide a thorough understanding of the embodiments. However, the embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the described embodiments. This description provides example embodiments only and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the preceding description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention. Also, some embodiments are described as processes depicted as flow diagrams. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure.
Furthermore, embodiments of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the associated tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the associated tasks. While several embodiments have been described, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the invention. For example, the above elements may merely be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not limit the scope of the invention. |
Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache to indicate whether that portion of the cache is capable of operating at or below Vccmin levels. Other embodiments are also described and claimed. |
1. A processor comprising: a cache having a plurality of cache line sets; and replacement logic for detecting access to a set of cache lines in an ultra low power mode (ULPM), and for evicting, in the ULPM, the cache line of a first way based at least in part on one or more disable bits of the cache line corresponding to the first way of the set of cache lines, but not evicting, in the ULPM, the cache line of a second way based at least in part on one or more disable bits of the cache line corresponding to the second way of the set of cache lines, wherein the ULPM uses an ultra-low voltage level to access the cache lines at a voltage that is lower than the minimum voltage level at which all of the memory cells of the cache operate reliably. 2. The processor of claim 1, wherein said replacement logic is operative to treat a respective cache line in said ULPM as most recently used based at least in part on one or more disable bits of a cache line of said set of cache lines. 3. The processor of claim 1, further comprising power-on self-test (POST) logic for testing said set of cache lines to determine whether the cache line of said first way is operational at the ultra-low voltage level. 4. The processor of claim 1, wherein the one or more disable bits comprise one or more redundant bits. 5. The processor of claim 1, wherein an access to the set of cache lines results in a miss in response to the one or more disable bits, even if there is a hit on a tag of the cache line corresponding to the set of cache lines. 6. The processor of claim 1, wherein a given address is mapped to a different set of cache lines at different times. 7. The processor of claim 6, further comprising a counter for causing said given address to be mapped to a different set of cache lines. 8. The processor of claim 1, wherein the replacement logic is to evict the cache line of the second way of the set of cache lines when not in the ULPM, ignoring the one or more disable bits of the cache line of the
second way. 9. A computing system for low power operation, comprising: a memory for storing instructions; and a processor configured to execute the instructions, the processor comprising: a cache coupled to the memory when in operation, the cache having a plurality of cache line sets; and replacement logic for detecting access to a set of cache lines of the plurality of cache line sets in an ultra low power mode (ULPM), and for allowing, in the ULPM, access to the cache line of a first way based at least in part on one or more disable bits of the cache line corresponding to the first way of the set of cache lines, but not allowing, in the ULPM, access to the cache line of a second way based at least in part on one or more disable bits of the cache line corresponding to the second way of the set of cache lines, wherein the ULPM uses an ultra-low voltage level to access the cache lines at or below a minimum voltage level at which all of the memory cells of the cache can operate reliably. 10. The system of claim 9, further comprising power-on self-test (POST) logic for testing said set of cache lines to determine whether said cache line of said first way is operational at said ultra-low voltage level. 11. The system of claim 10, further comprising logic for updating said one or more disable bits in response to a test result generated by said POST logic. 12. The system of claim 9, wherein said one or more disable bits comprise one or more redundant bits. 13. The system of claim 9, wherein an access to the set of cache lines results in a miss in response to the one or more disable bits, even if there is a hit on a tag corresponding to a cache line of the set of cache lines. 14. The system of claim 9, wherein a given address is mapped to different sets of cache lines at different times. 15. The system of claim 14, further comprising a counter for causing said given address to be mapped to a different set of cache lines. 16. The system of claim 9,
wherein said cache comprises a level 1 cache, a mid-level cache, or a last level cache. 17. The system of claim 9, wherein said replacement logic is operative to allow access to said cache line of said second way of said set of cache lines when not in said ULPM, ignoring said one or more disable bits of the cache line of the second way. 18. The system of claim 9, wherein the replacement logic is to flush the cache line of the second way, when transitioning to the ULPM, based on the one or more disable bits of the cache line corresponding to the second way of the set of cache lines. 19. The system of claim 9, wherein the replacement logic is to evict, in the ULPM, the cache line of the first way based at least in part on the one or more disable bits of the cache line corresponding to the first way of the set of cache lines, but not to evict, in the ULPM, the cache line of the second way based at least in part on the one or more disable bits of the cache line corresponding to the second way of the set of cache lines. 20. The system of claim 19, wherein the replacement logic is to treat a respective cache line in the ULPM as most recently used based at least in part on one or more disable bits of a cache line of the set of cache lines. |
Disabling the cache portion during low voltage operation

This application is a divisional application of the invention patent application with application number 200910222700.4, filed on September 30, 2009, and entitled "Disable the cache portion during low voltage operation".

Technical field

The present disclosure generally relates to the field of electronics. More specifically, embodiments of the invention relate to disabling one or more cache portions during low voltage operation.

Background

Current high-volume silicon manufacturing is subject to many manufacturing-induced parameter variations. These variations can cause problems when manufacturing various types of memory cells, and they are responsible for the phenomenon known as Vccmin, which determines the minimum voltage at which these memory cells can operate reliably. Since a typical microprocessor includes many structures implemented with various types of memory cells, these structures typically determine the minimum voltage at which the microprocessor as a whole can operate reliably.
Since voltage scaling can be effectively used to reduce the power consumed by the microprocessor, Vccmin becomes an obstacle to operating a particular design at low voltages.

Summary of the invention

One aspect of the invention resides in a processor comprising: a cache having a plurality of cache line sets; and replacement logic for detecting access to a set of cache lines in an ultra low power mode (ULPM), and for evicting, in the ULPM, the cache line of a first way based at least in part on one or more disable bits of the cache line corresponding to the first way of the set of cache lines, but not evicting, in the ULPM, the cache line of a second way based at least in part on one or more disable bits of the cache line corresponding to the second way of the set of cache lines, wherein the ULPM uses an ultra-low voltage level to access the cache lines at a voltage that is lower than the minimum voltage level at which all of the memory cells of the cache operate reliably.

Another aspect of the invention resides in a computing system for low power operation, comprising: a memory for storing instructions; and a processor for executing the instructions, the processor comprising: a cache coupled to the memory when in operation, the cache having a plurality of cache line sets; and replacement logic for detecting access to a set of cache lines of the plurality of cache line sets in an ultra low power mode (ULPM), and for allowing, in the ULPM, access to the cache line of a first way based at least in part on one or more disable bits of the cache line corresponding to the first way of the set of cache lines, but not allowing, in the ULPM, access to the cache line of a second way based at least in part on one or more disable bits of the cache line corresponding to the second way of the set of cache lines, wherein the ULPM uses an ultra-low voltage level to access the cache lines at or below the minimum voltage level at which every memory cell of the cache
operates reliably.

Yet another aspect of the present invention resides in an apparatus for ultra-low voltage cache operation, comprising: a cache having a plurality of cache line sets; and a replacement logic circuit for detecting access to a set of lines of the cache and for determining, based on one or more bits corresponding to the set of lines of the cache, whether one or more cache lines of the set of lines of the cache are operational at an ultra-low voltage level, wherein the ultra-low voltage level is at or below a minimum voltage level, and the minimum voltage level corresponds to a voltage level at which all of the memory cells of the cache can operate reliably.

Yet another aspect of the present invention resides in a method for operating a cache at an ultra-low voltage, comprising: receiving a request to access a set of lines of a cache; determining whether the cache is to operate at an ultra-low voltage level, the ultra-low voltage level being at or below a minimum voltage level, wherein the minimum voltage level corresponds to a voltage at which all of the memory cells of the cache can operate reliably; and determining, based on one or more bits corresponding to the set of lines of the cache, whether one or more cache lines of the set of lines of the cache are operational at the ultra-low voltage level.

Yet another aspect of the present invention resides in a computing system for low power operation, comprising: a memory for storing instructions; and a processor core for executing the instructions, the processor core including a replacement circuit for detecting access to a set of lines of a cache and for determining, based on one or more bits corresponding to the set of lines of the cache, whether one or more cache lines of the set of lines of the cache are operational at an ultra-low voltage level, wherein the ultra-low voltage level is at or below a minimum voltage level, and the minimum voltage level corresponds to a voltage level at which all of the memory cells of the cache can operate reliably.

DRAWINGS

The
detailed description will be made with reference to the drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers are used in different figures to refer to similar or identical items.

Figures 1, 6, and 7 illustrate block diagrams of implementations of computing systems that can be used to implement the various embodiments discussed herein.

Figures 2A and 2B illustrate cache implementations in accordance with some embodiments.

Figures 3A and 3B illustrate voltage sorting state diagrams for disable-bit testing, in accordance with some embodiments.

Figure 4A illustrates a schematic diagram of a read operation in a cache, in accordance with an embodiment.

Figure 4B illustrates a block diagram of address remapping logic in accordance with an embodiment.

Figure 5 illustrates a flow chart of a method in accordance with an embodiment of the present invention.

Detailed description

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Furthermore, various aspects of embodiments of the invention may be implemented in various ways, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof.
Moreover, even though some embodiments discussed herein may refer to a set value or a clear value as logical 0 and 1, respectively, these terms may be interchanged, for example, depending on the implementation.

Some embodiments provide for disabling one or more cache portions (e.g., cache lines or sub-blocks of cache lines) during low voltage operation. Overcoming the Vccmin barrier (discussed above) may allow the memory device to operate at or below the Vccmin level, which reduces power consumption, for example, resulting in increased battery life in mobile computing devices. Moreover, in some embodiments, performance loss can be mitigated by keeping memory cells in the cache operational at a granularity finer than the cache line during low voltage operation. Furthermore, an embodiment of the present invention maintains the memory cell voltage at a level such that a cell reliably stores and retains its information, for example, under the conditions guaranteed by Intel® documented reliability standards. In general, memory cells are considered to operate reliably at a given voltage level when they pass a set of tests at that level. These tests evaluate the read, write, and retention capabilities of the memory cells. For example, only those cells for which no error is observed during the tests are considered reliable.

In one embodiment, one or more cache lines can be disabled during operation at an ultra-low operating voltage (ULOV), for example based on a determination (such as indicated by a bit value corresponding to the one or more cache lines) that the one or more cache lines are not operative (or cannot operate reliably) at that voltage. ULOV, for example approximately 150 mV, may be a level lower than current low voltage levels of approximately 750 mV (which may be referred to herein as the "minimum voltage level").
In one embodiment, in response to determining that one or more cache lines that are not capable of operating under ULOV have been flushed (e.g., invalidated and/or, if necessary, written back to, for example, main memory), the processor can switch to the ultra-low power mode (ULPM) (e.g., operating under ULOV).

In one embodiment, such as in a high-performance out-of-order processor, the performance loss due to the reduced cache size (a result of disabling cache lines) may be mitigated. For example, a moderate faulty-bit rate can be tolerated with relatively low performance cost, low complexity, and high performance predictability. These solutions are considered valid at or below the Vccmin operating level, while performance is not affected during high-Vcc operation. In one embodiment, for operation at or below Vccmin, fine-grained (e.g., 64-bit) faulty sub-blocks may be disabled in such a way that cache lines with one or a few failed sub-blocks remain usable, thereby reducing the performance overhead of a cache-line disable scheme. Furthermore, high performance predictability is achieved by rotating the addresses mapped to cache lines in such a way that a program whose performance depends on a few cache sets will potentially take a similar performance hit regardless of the location of the failed sub-blocks in the cache; this is key when binning chips. When operating at high Vcc, this technique is believed to have little or no performance penalty.

The techniques described herein may allow for improved performance in various computing devices, such as those described, for example, with reference to Figures 1-7. More specifically, Figure 1 illustrates a block diagram of a computing system 100 in accordance with an embodiment of the present invention. System 100 can include one or more processors 102-1 through 102-N (generally referred to herein as "processors 102" or "processor 102").
The processors 102 can communicate via an interconnection network or bus 104. Each processor may include various components, some of which are discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to processor 102-1.

In one embodiment, processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106," or more generally as "core 106"), a shared cache 108, and/or a router 110. The processor cores 106 can be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as cache 108), buses or interconnections (such as bus or interconnection network 112), memory controllers (such as those discussed with reference to Figures 6 and 7), or other components.

In one embodiment, router 110 can be used to communicate between various components of processor 102-1 and/or system 100. Moreover, processor 102-1 can include more than one router 110. Furthermore, the routers 110 can communicate to route data between various components inside or outside of processor 102-1.

The shared cache 108 can store data (e.g., including instructions) used by one or more components of processor 102-1, such as the cores 106. For example, the shared cache 108 can locally cache data stored in a memory 114 for faster access by components of processor 102. In one embodiment, cache 108 may include a mid-level cache (such as a level 2 (L2), level 3 (L3), level 4 (L4), or other level cache), a last level cache (LLC), and/or combinations thereof. Moreover, various components of processor 102-1 can communicate with the shared cache 108 directly, via a bus (e.g., bus 112), and/or via a memory controller or hub. As shown in FIG.
1, in some embodiments, one or more of the cores 106 can include a level 1 (L1) cache 116-1 (generally referred to herein as "L1 cache 116") and/or an L2 cache (not shown).

Figures 2A and 2B illustrate a cache in accordance with some embodiments. In some embodiments, the caches shown in Figures 2A and 2B can be used as the caches discussed with reference to other figures herein, such as Figures 1, 6, or 7. More specifically, in some embodiments, configurable caches can be used in computing devices. These configurable caches can trade capacity for low voltage operation.

In some embodiments, one or more of the following three parts can be used. First, an additional low power state (referred to herein as ULPM) is introduced that uses a voltage level called ULOV. In one embodiment, the ULOV is about 150 mV, which is less than the current value of Vccmin (assumed here to be about 750 mV). Second, a voltage sorting algorithm can be used to determine which cache lines are functional under ULOV. Third, each cache line is associated with a disable bit, or d-bit. The voltage sorting algorithm sets the d-bit for each cache line that is not fully functional at the ultra-low operating voltage.

In addition, ULPM can be considered an extension of existing power states. For example, when the microprocessor transitions to the ultra-low power mode, all cache lines that have the d-bit set will be flushed from the caches that are affected by the transition to a lower voltage. If we assume that the LLC, the DCU (L1 data cache), and the IFU (L1 instruction cache) will operate under ULOV after the transition, then all cache lines in the DCU and IFU that have the d-bit set will be flushed (e.g., invalidated, or written back to memory 114 if necessary). Next, the LLC is prepared for ULOV operation by flushing each cache line that has the d-bit set.
Once all cache lines with set d-bits have been flushed from the system, the corresponding processor can transition to ULPM.

In general, caches are organized into multiple sets, each of which consists of multiple ways. Each way corresponds to a single cache line, which is typically 32-64 bytes. A cache lookup is performed when the processor submits an address to the cache. The address can be broken down into three parts: line offset, set selection, and tag. Consider a cache design with 1024 sets, each consisting of 8 ways, each of which holds a single 64-byte line. The entire cache comprises 512 KB of storage (1024*8*64). If the cache is designed to handle a 50-bit address, then the cache is indexed as follows. Bits 0-5 are the line offset, which specifies the byte within the 64-byte line. In some embodiments, since multiple bytes may be accessed depending on the load/store instruction, bits 0-5 may specify a starting byte; for example, a single byte (or two bytes, etc.) can be read beginning at the indicated byte. Bits 6-15 are the set selection, which specifies the set in which the line is stored. The remaining bits (16-49) are stored as the tag. All cache lines with equal set selection bits compete for one of the 8 ways in the specified set.

In one embodiment, each cache line can be associated with a d-bit that specifies whether that cache line is functional at the lower voltage. As shown in Figures 2A and 2B, the d-bit has no effect unless the replacement logic 202 determines that the processor is either in ULPM or transitioning to ULPM. Thus, logic 202 can detect accesses to one or more cache portions, such as a cache line, and determine whether the cache portion is operational at or below Vccmin. When transitioning to ULPM, all cache lines with the d-bit set are flushed; this prevents data loss after switching to ULPM. During ULPM, the cache functions normally, except that only cache lines whose d-bit is set to zero are considered valid.
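As an illustration of the indexing scheme above, the following sketch (not from the patent; a hypothetical model of the 1024-set, 8-way, 64-byte-line example) decomposes a 50-bit address into tag, set selection, and line offset:

```python
LINE_BYTES = 64   # bits 0-5: byte offset within the line
NUM_SETS = 1024   # bits 6-15: set selection; bits 16-49 form the tag

OFFSET_BITS = LINE_BYTES.bit_length() - 1   # 6
SET_BITS = NUM_SETS.bit_length() - 1        # 10

def split_address(addr):
    """Decompose an address into (tag, set index, line offset)."""
    offset = addr & (LINE_BYTES - 1)
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_index, offset
```

All lines whose addresses produce the same set index compete for the 8 ways of that set, which is why the example cache totals 1024*8*64 = 512 KB.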
When searching a set for an address in ULPM, the d-bit prevents an erroneous match with a disabled line. Even though the embodiments discussed herein may refer to set values or clear values as 1 and 0, respectively, these terms may be interchanged depending on the implementation; for example, a cleared d-bit could instead indicate that the corresponding cache line is disabled.

In addition, when a cache miss occurs, the replacement logic 202 selects a cache line to evict from the cache. That cache line is then overwritten with new data fetched from memory. In ULPM, the replacement logic 202 (Figure 2B) considers the d-bits in order to prevent allocation of disabled cache lines. This is achieved by forcing the replacement process to treat disabled lines as MRU (most recently used). For example, an age-based vector replacement process can be applied to disable individual cache lines. In this process, a bit vector (one bit per cache line) is scanned, and the first line whose bit is 0 is identified as LRU (least recently used) and replaced. By forcing the bit associated with a cache line to 1, that line is always treated as MRU and is never selected for replacement.

As for defects in the d-bit itself: in ULPM, where the d-bit affects cache functionality, a d-bit defect can manifest itself in one of two ways. A d-bit value of 0 indicates a cache line that is functional at low voltage; conversely, a d-bit value of 1 indicates a cache line that does not function at low voltage. The first case is a d-bit stuck at 1, which disables its cache line. In this case, a cache line whose bits are all functional except for the broken d-bit is disabled; correct functionality is nonetheless ensured. The second case is a d-bit stuck at 0. Since such a damaged d-bit cannot correctly flag a faulty cache line, this is a problem if the line is defective. To ensure proper functionality, the implementation must ensure that no d-bit can be erroneously stuck at zero.
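The forced-MRU victim selection described above can be sketched as follows (a simplified software model, not the patent's circuit; the function name and list representation are illustrative):

```python
def select_victim(age_bits, d_bits, in_ulpm):
    """Pick a victim way: the first way whose age bit is 0 (LRU).
    In ULPM, disabled ways (d-bit = 1) have their age bit forced to 1,
    so they always look MRU and are never chosen for allocation."""
    effective = [1 if (in_ulpm and d) else a
                 for a, d in zip(age_bits, d_bits)]
    for way, bit in enumerate(effective):
        if bit == 0:
            return way
    return None  # every way looks MRU; caller resets the age vector
```

For example, with age bits [1, 0, 0, 1] and d-bits [0, 1, 0, 0], way 1 would be the LRU victim at normal voltage, but in ULPM it is skipped (disabled) and way 2 is chosen instead.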
One way to address this is to change the cell design so that a d-bit damaged in this way becomes less likely. A second method is to add one or more redundant d-bits. For example, three d-bits can be used, with all three bits written identically (all 1s or all 0s). When the d-bits are read, if any of the bits is set to 1, the cache line can be considered disabled. Only a cache line whose d-bits can be correctly read as three zeros is considered usable at the ultra-low operating voltage. In this case, since all three bits must fail for a d-bit failure to occur, such a failure is highly unlikely. Figures 3A and 3B respectively illustrate voltage-sorting state diagrams for d-bit testing during fabrication and during POST (power-on self-test), in accordance with some embodiments. More specifically, voltage sorting can occur in one of two ways. First, voltage sorting can be performed when the processor is fabricated, as shown in FIG. 3A. So that the d-bits remain valid even after a power cycle occurs, the d-bits are stored in a non-volatile memory such as fuses, BIOS (Basic Input Output System) memory, or on-package flash. An alternative is to store the d-bit in an additional bit included in the tag or status bits associated with the cache line (e.g., the Modified Exclusive Shared Invalid (MESI) bits). Storing the d-bits in this manner requires that each power-down be followed by a new voltage sort that regenerates the d-bits. This approach also requires that the processor be able to perform memory tests on its memory structures at low voltage. One way to achieve this is to use the POST shown in Figure 3B (thus setting the appropriate d-bits). More specifically, FIG. 3B shows how, when the d-bits are set by POST and must be regenerated after each power cycle, the processor can switch between four different states: HFM (High Frequency Mode), LFM (Low Frequency Mode), ULPM, and Off. 
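A minimal sketch of the triple-redundant d-bit read-out described above (illustrative only; the function name is invented):

```python
# The three d-bit copies are written identically; on read, the line is
# considered disabled if ANY copy is 1, so only a line whose copies all
# read 0 is used at the ultra-low operating voltage.

def line_usable_at_low_voltage(d_copies):
    return not any(d_copies)    # usable only if all three copies read 0
```

All three copies would have to be stuck at 0 for a defective line to be wrongly enabled, which is highly unlikely.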
In addition, POST is performed after each transition from the off state to one of the three on states. As discussed with respect to Figures 2A through 3B, the cache can be configured to have different capacities for different performance levels and different Vccmins for different power budgets. Moreover, some embodiments allow a single design to target market segments with different power needs, which saves cost by allowing fewer designs to cover a wider market. In one embodiment, the fault-free bits of cache entries are used instead of discarding entire entries. Moreover, in order to enable low-Vccmin operation of the cache, the moderate fault bit rate caused by the lower Vcc is tolerated. This approach can be extended to provide high performance predictability, ensuring that any two processors provide the same performance for a given program. Performance variability arises because different chip samples potentially have different fault locations and thus different effects on performance. FIG. 4A illustrates a schematic diagram of a read operation in a cache, in accordance with an embodiment. The illustrated cache is two-way set associative, with each cache line having four sub-blocks. In one embodiment, each cache line is extended with a small number of bits that can be stored with the cache tag (e.g., bits 1011 stored with tag 1 in FIG. 4A, or bits 0111 stored with tag 2). Each cache line can be logically divided into sub-blocks. Such sub-blocks can be sized to match the smallest portion of the line that has its own parity or ECC (Error Correcting Code) protection. For example, a DL0 cache whose contents are protected by a 64-bit-granularity ECC and whose cache lines have 8 sub-blocks uses 8 extra bits to indicate whether each sub-block is usable. Each extra bit is set unless its corresponding sub-block has more faulty bits than the protection allows. 
For example, under SECDED (Single Error Correction, Double Error Detection) protection, a sub-block with two faulty bits should have its corresponding extra bit reset. The cache of Figure 4A operates as follows. Whenever an access is performed, the tags 402 and 403 are read and data from all lines in the set 404 are fetched as needed. Note that the address offset indicates the required sub-block. Offset 406 is used to pick the bit corresponding to the required sub-block for each cache line in the set. The cache tags are compared to the requested address (e.g., by comparators 408 and 410). In some cases, there may be a tag hit 411 (output by OR gate 412 based on the outputs of AND gates 414 and 422), but the extra bit corresponding to the required sub-block may indicate that the sub-block is faulty. In this case we have a false hit 418 (e.g., via the output of OR gate 420 based on the outputs of AND gates 416 and 424). This situation can be handled as follows: (i) a "miss" is reported, because the data is not usable; (ii) for write-back caches, the cache line is evicted and its data is updated at the higher cache level, noting that only the valid sub-blocks need to be updated, while write-through caches need not write the evicted line back, since stores have already updated the higher cache level; (iii) the cache line is marked as the most recently used (MRU) line in the set, so that when the data is requested from the higher cache level it is allocated to a different cache line, which is likely to have a fault-free sub-block at the required location. If the selected cache line also has a faulty sub-block at the same location, the process repeats, so that if there is at least one cache line in the set with a fault-free sub-block at the desired location, it will be found. 
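The hit versus false-hit distinction above can be sketched as follows; this is a simplified software model of the Figure 4A logic, and the data structures are invented for illustration:

```python
# A tag match whose required sub-block's extra bit is set is a true
# hit; a tag match on a faulty sub-block is a "false hit" and is
# reported as a miss (case (i) above).

def lookup(req_tag, ways, sub_idx):
    """ways: list of (tag, extra_bits); extra_bits[i] == 1 -> sub-block ok."""
    for tag, extra_bits in ways:
        if tag == req_tag:
            return "hit" if extra_bits[sub_idx] else "miss"
    return "miss"

# The two ways of Figure 4A: tag 1 with bits 1011, tag 2 with bits 0111.
WAYS = [(1, [1, 0, 1, 1]), (2, [0, 1, 1, 1])]
```

In hardware the same outcome is produced combinationally by the comparators and the AND/OR gates of Figure 4A.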
Only an unacceptably high fault bit rate (e.g., relative to a given design threshold) would cause all sub-blocks at the same location within the cache lines of a given set to fail. Thus, in one embodiment, an access may hit in the cache tag yet be treated as a miss, because the extra bits identify the required portion of the cache line as defective. Note that there can be a mechanism, such as the d-bit discussed above, to disable any cache line; this mechanism can be used to prevent the use of cache lines with faulty tags, faulty valid bits, or faulty extra bits. In one embodiment, if an extra bit itself fails, the cache line is likewise marked as failed. Further, during high-Vcc operation, the additional mechanisms shown in Figure 4A (the extra bits, together with the associated comparison logic and AND/OR gates) can be bypassed, for example by setting all extra bits to "1" or simply ignoring those bits. FIG. 4B illustrates a block diagram of address remapping logic in accordance with an embodiment. To handle performance variability, dynamic address remapping (e.g., in a round-robin fashion) can be used to map a given address to a different cache set in different time intervals. Thus, for a given program and fault bit rate, performance hardly changes from one processor to another, regardless of where the faulty bits are located. As shown in FIG. 4B, an N-bit counter 452 can be used, where N can be any value between 1 and the number of bits required to index the cache sets. For example, a 32KB 8-way cache with 64 bytes/line has 64 sets, which can be indexed with 6 bits; thus a counter of 6 bits or fewer is sufficient. In the particular implementation shown, a 4-bit counter 452 is used. The counter is updated periodically or from time to time (e.g., every ten million cycles). The N bits of the counter are XORed, by XOR gate 454, with N of the bits indexing the set. 
Thus, in one embodiment, a given address can be mapped to a different cache set at different times. In addition, address remapping can be performed either at cache access time or at address calculation time. The added latency is low, since only a single level of XOR gates is added and half of the inputs (those from the counter) are set in advance. In one embodiment, whenever the counter is updated, the cache contents are flushed to prevent inconsistencies. However, the counter is updated rarely, so the performance impact is negligible. Moreover, the mechanism of Figure 4B can be deactivated for high-Vcc operation simply by preventing counter updates. FIG. 5 illustrates a flow diagram of a method 500 for disabling a portion of a cache during low voltage operation in accordance with an embodiment of the present invention. In some embodiments, the various components discussed with reference to Figures 1-4 and 6-7 can be used to perform one or more of the operations discussed with respect to Figure 5. Referring to Figures 1-5, at operation 502, it is determined (e.g., by logic 202 or the logic illustrated in Figure 4A) whether an access request to a cache portion is received or detected. If an access is received, then as described above (e.g., with reference to Figures 1-4B), operation 504 determines whether the cache portion is operational at Vccmin or below. If the determination of operation 504 is negative, a miss is returned (such as discussed with respect to Figures 1-4B). If the determination of operation 504 is affirmative, operation 508 returns a hit (such as discussed with respect to Figures 1-4B). FIG. 6 illustrates a block diagram of a computing system 600 in accordance with an embodiment of the present invention. Computing system 600 can include one or more central processing units (CPUs) 602 or processors that communicate via an interconnection network (or bus) 604. 
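The XOR-based set remapping of Figure 4B can be sketched as follows (illustrative only; the 6-bit set index and 4-bit counter follow the 32KB example above):

```python
SET_BITS = 6    # 64 sets in the 32KB, 8-way, 64B/line example
N = 4           # 4-bit counter, as in the illustrated implementation

def remap_set(set_idx, counter):
    """XOR the low N counter bits into the set index (XOR gate 454)."""
    return set_idx ^ (counter & ((1 << N) - 1))
```

Because XOR is its own inverse, remapping twice with the same counter value returns the original set index, and freezing the counter (as for high-Vcc operation) keeps the mapping fixed.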
Processor 602 can include a general purpose processor, a network processor (processing data communicated over computer network 603), or another type of processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, processor 602 can have a single-core or multi-core design. A processor 602 with a multi-core design can integrate different types of processor cores on the same integrated circuit (IC) die. Moreover, a processor 602 with a multi-core design can be implemented as a symmetric or asymmetric multiprocessor. In one embodiment, one or more processors 602 may be the same as or similar to processor 102 of FIG. 1. For example, one or more processors 602 can include one or more of the caches discussed with respect to Figures 1-5. Moreover, the operations discussed with respect to FIGS. 1-5 may be performed by one or more components of system 600. Chipset 606 can also communicate with the interconnection network 604. Chipset 606 can include a memory control hub (MCH) 608. MCH 608 can include a memory controller 610 that communicates with memory 612 (which can be the same as or similar to memory 114 of FIG. 1). Memory 612 can store data, including sequences of instructions, that can be executed by CPU 602 or any other device included in computing system 600. In one embodiment of the invention, memory 612 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory, such as a hard disk, can also be used. Additional devices, such as multiple CPUs and/or multiple system memories, may communicate via the interconnection network 604. The MCH 608 can also include a graphics interface 614 that communicates with the display device 616. 
In one embodiment of the invention, graphics interface 614 can communicate with display device 616 via an accelerated graphics port (AGP). In an embodiment of the invention, the display 616 (such as a flat panel display) can communicate with graphics interface 614 through, for example, a signal converter that translates a digital representation of an image stored in a storage device, such as video memory or system memory, into display signals that are interpreted and displayed by display 616. The display signals so generated can pass through various control devices before being interpreted by, and subsequently displayed on, display 616. Hub interface 618 may allow MCH 608 to communicate with an input/output control hub (ICH) 620. The ICH 620 can provide an interface to I/O devices that communicate with the computing system 600. The ICH 620 can communicate with the bus 622 through a peripheral bridge (or controller) 624, such as a Peripheral Component Interconnect (PCI) bridge, a Universal Serial Bus (USB) controller, or another type of peripheral bridge or controller. The bridge 624 can provide a data path between the CPU 602 and peripheral devices. Other types of topologies can be used. Moreover, multiple buses can communicate with the ICH 620, for example, through multiple bridges or controllers. Moreover, in various embodiments of the invention, other peripherals in communication with the ICH 620 may include integrated drive electronics (IDE) or small computer system interface (SCSI) hard drives, USB ports, a keyboard, a mouse, parallel ports, serial ports, floppy drives, digital output support (such as Digital Visual Interface (DVI)), or other devices. Bus 622 can communicate with an audio device 626, one or more disk drives 628, and a network interface device 630 (which communicates with computer network 603). Other devices can communicate via bus 622. 
Moreover, in some embodiments of the invention, various components, such as the network interface device 630, may communicate with MCH 608. Additionally, the processor 602 and other components illustrated in FIG. 6 (including but not limited to MCH 608 and one or more components of MCH 608) can be combined to form a single chip. Moreover, in other embodiments of the invention, a graphics accelerator may be included within the MCH 608. Moreover, computing system 600 can include volatile and/or nonvolatile memory (or storage). For example, the nonvolatile memory can include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a disk drive (e.g., 628), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or another type of non-transitory machine-readable medium capable of storing electronic data (e.g., including instructions). FIG. 7 illustrates a computing system 700 arranged in a point-to-point (PtP) configuration, in accordance with an embodiment of the present invention. In particular, Figure 7 illustrates a system in which processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figures 1-6 may be performed by one or more components of system 700. As illustrated in Figure 7, system 700 can include multiple processors, of which only two, processors 702 and 704, are shown for clarity. Processors 702 and 704 can each include a local memory control hub (MCH) 706 and 708 to enable communication with memories 710 and 712. Memories 710 and/or 712 can store various data, such as those discussed with reference to memory 612 of FIG. 6. In one embodiment, processors 702 and 704 may be among the processors 602 discussed with reference to FIG. 6, for example, including one or more of the caches discussed with reference to FIGS. 1-6. 
Using PtP interface circuits 716 and 718, respectively, processors 702 and 704 can exchange data via a point-to-point (PtP) interface 714. Moreover, using point-to-point interface circuits 726, 728, 730, and 732, processors 702 and 704 can each exchange data with chipset 720 via individual PtP interfaces 722 and 724. For example, using PtP interface circuitry 737, chipset 720 can further exchange data with graphics circuitry 734 via graphics interface 736. At least one embodiment of the present invention can be provided within processors 702 and 704. For example, one or more of the cores 106 of FIG. 1 may be located within processors 702 and 704. However, other embodiments of the invention may exist in other circuits, logic units, or devices within the system 700 of FIG. 7. Moreover, other embodiments of the invention may be distributed throughout the various circuits, logic units, or devices illustrated in FIG. 7. Chipset 720 can communicate with a bus 740 using PtP interface circuitry 741. Bus 740 can communicate with one or more devices, such as a bus bridge 742 and I/O devices 743. Via bus 744, bus bridge 742 can communicate with other devices such as a keyboard/mouse 745, communication devices 746 (such as a modem, a network interface device, or another communication device that can communicate with the computer network 603), an audio I/O device 747, and/or a data storage device 748. The data storage device 748 can store code 749 that may be executed by processors 702 and/or 704. In various embodiments of the invention, the operations discussed herein, for example, with reference to Figures 1-7, may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, for example, a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform the processes discussed herein. 
A machine-readable medium can include storage devices such as those discussed herein. Moreover, such a tangible computer-readable medium can be downloaded as a computer program product, wherein the program can be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals in a propagation medium via a communication link (e.g., a bus, modem, or network connection). Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not all refer to the same embodiment. In addition, the terms "coupled" and "connected," along with their derivatives, may be used in the specification and claims. In some embodiments of the invention, "connected" can be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other. Accordingly, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that the claimed subject matter is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter. 
PROBLEM TO BE SOLVED: To provide a MOS transistor that is unlikely to be damaged by ESD. SOLUTION: This MOS transistor has a region of higher resistivity than the other regions in its p-type well. When an ESD event occurs, this higher-resistivity region increases the current gain of the parasitic lateral npn bipolar transistor, thereby raising the current It2 at which thermal breakdown by destructive local heating begins. 
An integrated circuit manufactured in a semiconductor of a first conductivity type, having on its surface at least one lateral MOS transistor bounded on each side by an isolation region and bounded below the surface by a channel stop region, comprising: a source and a drain, each of which comprises two regions of the opposite conductivity type at said surface, one of said regions being shallow and extending to the gate of the transistor, and the other of said regions being deeper and recessed from said gate, the regions together constituting an active region of the transistor and having depletion regions when reverse biased; and a semiconductor region of the first conductivity type provided in the semiconductor, wherein the semiconductor region has a higher resistivity than the other portions of the semiconductor and extends laterally from near one of the recessed regions to near the other recessed region, and wherein the high-resistivity region extends vertically from a predetermined depth below the source and drain depletion regions to approximately the top of the channel stop region. A method of increasing the resistivity of a p-type semiconductor below the active region of a high-voltage NMOS transistor, the region extending laterally between two shallow trench isolation regions and vertically between a level below the depletion regions and the depth of the channel stop region, comprising: depositing a photoresist layer over the transistor and opening a window in the photoresist layer over the active region of the transistor; and implanting compensating n-type doping ions at high energy into the p-type semiconductor through the window, to form, spaced from the active region of the transistor, a deep region having a net p-type doping lower than the doping of the p-type semiconductor. 
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates generally to the technical field of electronic systems and semiconductor devices, and more particularly to a MOS transistor having, as compared with standard techniques, an additional buried region under the channel, and to a method of manufacturing the same. 2. Description of the Related Art Integrated circuits (ICs) may be severely damaged by electrostatic discharge (ESD) events. The main cause of ESD in an IC is discharge from a charged human body ("Human Body Model", HBM); discharge of the human body drives a peak current of several amperes through the IC for about 100 ns. A second cause of ESD is discharge from metal objects ("Machine Model", MM); metal objects can generate transients with significantly faster rise times than HBM ESD events. A third cause is described by the "Charged Device Model" (CDM), in which the IC itself charges, with polarity opposite to that of HBM and MM ESD events, and then discharges to ground. Details of ESD phenomena and protection methods in ICs can be found in: A. Amerasekera and C. Duvvury, "ESD in Silicon Integrated Circuits" (John Wiley and Sons, London, 1995); and C. Duvvury, "ESD: Design for IC Chip Quality and Reliability" (2000 International Symposium on Quality in Electronic Design, pp. 251-259, and the references cited therein). ESD phenomena in ICs are becoming ever more important as the demand for higher operating speeds, lower operating voltages, higher packing densities, and lower costs drives a reduction in the size of all devices. This generally means that insulating films become thinner, doping profiles become steeper, doping levels become higher, and electric field strengths become higher. 
All of this makes devices more susceptible to damage from ESD events. The most common protection method used in metal-oxide-semiconductor (MOS) circuits relies on the parasitic bipolar transistor associated with an NMOS device whose drain is connected to the pin to be protected and whose source is connected to ground. The protection level, or failure threshold, can be set by varying the drain-to-source width of the NMOS device under its gate oxide. Under stress, the dominant current-conduction path between the protected pin and ground runs through the parasitic bipolar transistor of the NMOS device. This parasitic bipolar transistor operates in the snapback region during positive pin-to-ground stress events. A major cause of the failures seen in NMOS protection devices operating as parasitic bipolar transistors under snapback conditions is the occurrence of second breakdown. Second breakdown is a phenomenon that induces thermal runaway in the device when the reduction of the impact-ionization current is offset by thermal generation of carriers. Second breakdown is initiated in a stressed device as a result of self-heating. It has been found that the peak device temperature at which second breakdown is triggered in an NMOS device increases with increasing stress current level. Many circuits for protecting ICs from ESD have been proposed and implemented. One method used to improve the protection of an IC from ESD is to bias the substrate of the ESD protection circuit on the IC. Biasing the substrate can be effective in improving the response of the multi-finger MOS transistor used to direct the ESD discharge to ground. However, substrate biasing shifts the threshold voltage of the device from its nominal value, which may affect the operation of the device. 
Further, biasing the substrate under steady-state conditions generates heat and increases power dissipation. The solutions provided by the known art require additional IC elements, silicon real estate, and/or process steps (particularly photomask alignment steps); consequently, these manufacturing methods are expensive. Examples of device structures and methods are described in U.S. Patent No. 5,539,233, issued July 23, 1996 to Amerasekera et al., entitled "Controlled Low Collector Breakdown Voltage Vertical Transistor for ESD Protection Circuits"; U.S. Patent No. 5,940,258, issued August 17, 1999, entitled "Semiconductor ESD Protection Circuit"; U.S. Patent Nos. 6,137,144 (issued October 24, 2000) and 6,143,594 (issued November 7, 2000), entitled "On-Chip ESD Protection in Dual Voltage CMOS"; and U.S. Patent Application No. 09/456,036, filed December 3, 1999, entitled "Electrostatic Discharge Device and Method". The effect of the substrate well profile on device ESD performance is described, for example, in the paper "Influence of Well Profile and Gate Length on the ESD Performance of a Fully Silicided 0.25 μm CMOS Technology" (K. Bock, C. Russ, G. Badenes, G. Groeseneken, and L. Deferm, 1997 EOS/ESD Symposium Proceedings, pp. 308-315). However, the known techniques recommend lower epitaxial doping or lower implant doses as the way to increase the resistivity of the p-type well. Attempts to reduce cost have been the driving force for minimizing the number of process steps, especially the number of photomask steps, and for applying standardized process conditions wherever possible. These constraints must be taken into account when proposing additional process steps or new process conditions intended to improve immunity to ESD without sacrificing desirable device characteristics. 
Accordingly, an urgent need has arisen for a consistent, low-cost method of increasing ESD immunity without the need for additional real-estate-consuming protection devices. The resulting device structures must further provide excellent electrical performance, mechanical stability, and high reliability. The manufacturing method must be simple and flexible enough for different semiconductor product families and for a wide range of design and process variations. Preferably, these innovations should be achievable with the installed equipment base, without lengthening the manufacturing cycle time and without investment in new manufacturing equipment. SUMMARY OF THE INVENTION A lateral NMOS transistor in a p-type well has, on each side, a lateral boundary defined by an isolation region and a vertical boundary defined by a channel stop region. It has an n-type source and an n-type drain, each of which includes a shallow region extending to the gate of the transistor and a deep region recessed from the gate. The transistor also has a region of higher resistivity in the p-well than the rest of the well. This region extends laterally from near one of the recessed regions to near the other, and extends vertically from a predetermined depth below the source and drain depletion regions to the top of the channel stop region. In accordance with the present invention, the region of higher p-type resistivity is formed by compensating n-type doping, for example by arsenic or phosphorus ion implantation, using the same photomask that is used for the ion implantations that form the p-type well and channel stop and adjust the threshold voltage. In the event of an ESD event, this region of higher resistivity increases the current gain of the lateral parasitic npn bipolar transistor, and thus increases the current It2 that initiates thermal breakdown due to destructive local heating. 
When the gate, source, and substrate terminals are at 0 V and the drain is at a positive potential, the current gain β of the lateral bipolar npn transistor during an ESD event is given by: β = (Id - Igen) / (Igen - Isub), where Id = drain current, Igen = Ib + Isub, Ib = base current, and Isub = the hole current flowing from the collector junction through the substrate to the backside contact. A feature of the present invention is that the higher-resistivity region provides a transistor substrate that lets the transistor function fully without affecting the operation of nearby active devices. Another feature of the present invention is that the higher-resistivity region improves the ESD protection of the transistor without degrading latch-up robustness, i.e., without increasing inadvertent substrate-current-induced body biasing of nearby transistors. Another feature of the present invention is that it is equally applicable to PMOS transistors, the conductivity type of the semiconductor and the type of the ion implants simply being reversed. A method of fabricating the higher-resistivity region below the active region of a high-voltage NMOS transistor having a gate comprises depositing a layer of photoresist over the transistor, opening a window in the photoresist layer over the active region of the transistor, and then implanting n-type doping ions at high energy into the p-type semiconductor substrate through the window, thereby forming, spaced from the active region of the transistor, a deep region with a net p-type doping lower than the doping of the p-type semiconductor. The preferred depth of this region is between 50 and 150 nm. 
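As a purely numeric illustration of the current-gain relation above (all current values below are invented for illustration, not measured data):

```python
# beta = (Id - Igen) / (Igen - Isub), with Igen = Ib + Isub; note that
# the denominator (Igen - Isub) equals the base current Ib.

def beta(i_d, i_b, i_sub):
    i_gen = i_b + i_sub
    return (i_d - i_gen) / (i_gen - i_sub)
```

For example, with hypothetical values Id = 1.0 A, Ib = 0.01 A, and Isub = 0.04 A, the relation gives beta = (1.0 - 0.05) / 0.01 = 95.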
If this region is too deep, the implant energy must be higher, which increases implant damage and hence junction leakage current, or risks junction isolation failure. An essential feature of the present invention is that this high-energy ion implantation is performed without the need for a new photomask step. This makes the additional high-energy ion implantation process of the present invention surprisingly inexpensive. The following detailed description of the preferred embodiments of the invention, considered together with the novel features set forth in the accompanying drawings and claims, will make clear not only the technical advantages of the invention but also its features. BEST MODE FOR CARRYING OUT THE INVENTION The present invention is related to U.S. Provisional Application No. 60/263,619, filed January 23, 2001 by the inventor Saling, entitled "Structure of MOS Transistor with Increased Substrate Resistance and Method of Manufacturing the Same". The advantages of the present invention are most easily understood by first clarifying the disadvantages of the known art. The schematic cross-sectional view of FIG. 1 shows an integrated circuit (IC) component 100 commonly used in ESD protection circuits: an NMOS transistor that operates in the mode of a lateral bipolar npn transistor during an ESD event and lowers the impedance of the current path to ground. This IC is formed in a semiconductor of a first conductivity type. In the example of FIG. 1, the first conductivity type is p-type, the MOS transistor is an NMOS transistor, and the lateral bipolar transistor is an npn transistor. In this manufacturing method, the semiconductor of the first conductivity type is formed with net doping provided by the substrate and the well. As defined herein, the term "substrate" means the starting semiconductor wafer. In the present manufacturing method, the substrate generally has p-type doping. 
For clarity, this case is also chosen as the basis for the following description; it is emphasized, however, that the invention and the entire description also cover the case where the substrate has n-type doping. In FIG. 1, the substrate is indicated as 101. An epitaxial layer 102 of the same conductivity type as the substrate has often been deposited on the substrate 101, but this is not necessarily the case. Where it is, the term "substrate" refers to the epitaxial layer 102 plus the starting semiconductor 101. For the conductivity types selected for FIG. 1, a p-type well 103 has been formed by local implantation and annealing of acceptor ions. The n+-type source region 104 (the emitter of the bipolar transistor) and the drain region 105 (the collector of the bipolar transistor) are formed by shallow ion implantation of a donor. The surface between the emitter 104 and the collector 105 is covered by a gate oxide film 106. Films 107, 108, 109 and 110 are the metal contacts to the gate, emitter, collector and backside of the wafer, respectively. FIG. 1 further shows that the emitter contact 108, the gate contact 107 and the wafer backside contact 110 are electrically connected to ground potential (0 V). A reverse bias is applied to the collector/base junction by a positive voltage spike at the collector, such as is caused by an ESD event. The base is the substrate 101 (the epitaxial layer 102 plus substrate 101 in some devices), and the depletion layer of the space-charge region is denoted 120. When the electric field in the depletion layer 120 exceeds the breakdown field, an avalanche effect occurs, forming electron/hole pairs. Electrons flow into the collector and holes flow into the p-type base. This hole current Isub flows from the collector junction through the substrate to the backside contact 110, developing a voltage across the resistors R-pwell and R-sub and thereby forward biasing the emitter/base junction.
The forward bias of this emitter is proportional to the effective "substrate resistance" equal to the sum of the resistance components in the current path; these components are schematically illustrated in FIG. 1 as R-pwell and R-sub. Electrons injected from the emitter into the base that reach the depletion layer of the collector take part in the avalanche mechanism, where the electron concentration is multiplied according to an avalanche multiplication factor determined by the electric field. The resulting reduction in device impedance appears as a snapback 201 in the current-voltage characteristic, which corresponds to the turning on of the bipolar transistor. FIG. 2 plots the collector (or drain) current I (logarithmic scale) as a function of the drain voltage V (linear scale). As shown in FIG. 2, the snapback 201 occurs at a collector/drain voltage Vt1 with an associated collector/drain current It1. The dependence of the avalanche multiplication factor on the electric field establishes a new stable current/voltage equilibrium 202. At high electron injection levels, modulation of the base conductivity also contributes to making the impedance of the device positive again. It should be noted that the lateral npn transistor also provides protection from negative ESD pulses: in that case the collector 105 (in FIG. 1) acts as an emitter, commutating the ESD current to the backside substrate contact 110 and to the reverse-biased emitter 104, which then works as the collector. The current-carrying capability of this device is limited by thermal effects in the depletion layer of the avalanching collector. Numerous effects contribute to the onset of the second (thermal) breakdown (203 in FIG. 2), such as an increased intrinsic carrier concentration, reduced carrier mobility, reduced thermal conductivity, and a lowered potential barrier to tunneling current.
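The text states that the avalanche multiplication factor is set by the electric field but gives no formula; a standard empirical description (not part of this disclosure) is the Miller approximation M = 1 / (1 − (V/BV)^n). The sketch below, with assumed values for the breakdown voltage BV and the exponent n, shows how M grows steeply as the drain voltage approaches breakdown, which is what triggers the snapback.

```python
def miller_multiplication(v, bv, n=4.0):
    """Empirical Miller approximation of the avalanche multiplication
    factor, M = 1 / (1 - (V/BV)^n), for a reverse-biased junction.

    v  : reverse voltage across the collector/drain junction, V
    bv : junction breakdown voltage, V (assumed value below)
    n  : empirical exponent (roughly 3-6 for silicon junctions)
    """
    if v >= bv:
        raise ValueError("Miller formula diverges at and above breakdown")
    return 1.0 / (1.0 - (v / bv) ** n)

# M rises steeply as V approaches the (assumed) BV of 10 V:
for v in (5.0, 8.0, 9.5):
    print(f"V={v} V -> M={miller_multiplication(v, bv=10.0):.2f}")
# M is about 1.07, 1.69 and 5.39 respectively
```

This is only a lumped first-order sketch; the actual multiplication depends on the local field profile in depletion layer 120.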
This second breakdown trigger current It2 is particularly sensitive to device design, especially to the doping profiles. As a result of the second breakdown, the junction melts or the leakage current increases irreversibly; these outcomes must be prevented for normal device operation. From the above description of FIGS. 1 and 2, the conclusion important for the present invention is that increasing the resistance R-pwell and/or R-sub turns the emitter on earlier and reduces the current contribution of the avalanche mechanism. This manifests itself as an increase in the second breakdown threshold current It2. As noted in the above-cited publication by K. Bock et al., the p-well resistance R-pwell and It2 can be modified through the doping of the p-well. However, the known techniques only recommend lower substrate (or epitaxial) doping and a lower implant dose as ways to increase the resistance of the p-well. According to the present invention, a compensating n-type implant is instead added to the p-type well, providing a lightly doped p- region below the depletion regions of the MOS transistor and above the channel stopper, and thereby improving the gain β of the bipolar current. As defined herein, the geometric and positional terms "vertical", "below", "above", "shallow", and "deep" are used with the active surface of the semiconductor as the reference line; by this definition, the surface has a "horizontal" orientation. An integrated circuit is manufactured in this active semiconductor surface. The schematic cross-sectional views of FIGS. 1 and 3 illustrate these positional relationships. FIGS. 3 to 6 show the modified p-type well doping and the structure of the p-type well resistor R-pwell according to the present invention, and FIGS. 7 to 12 show a flexible and economical way to meet the R-pwell specifications according to the present invention.
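To illustrate the "one order of magnitude" resistivity increase targeted for the compensated well, this first-order sketch evaluates ρ = 1/(q·μp·p) for an assumed boron background and an assumed 90%-compensated net doping. Holding the hole mobility constant is a deliberate simplification (an assumption, since mobility really depends on the total, not the net, doping).

```python
Q_E = 1.602e-19  # elementary charge, C

def p_resistivity(p_net_cm3, mu_p=450.0):
    """First-order resistivity (ohm*cm) of p-type silicon, rho = 1/(q*mu_p*p).

    p_net_cm3 : net hole (acceptor) concentration in cm^-3
    mu_p      : hole mobility in cm^2/(V*s), held constant here for simplicity
    """
    return 1.0 / (Q_E * mu_p * p_net_cm3)

rho_before = p_resistivity(5.0e17)  # assumed uncompensated p-well background
rho_after = p_resistivity(5.0e16)   # assumed net doping after 90% compensation
print(f"{rho_after / rho_before:.1f}")  # prints 10.0: one order of magnitude
```

In other words, a counter-doping implant that cancels nine tenths of the local acceptor concentration raises the local resistivity roughly tenfold, without touching the well doping outside the window.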
Although the example shown embodies the conditions for an NMOS transistor, similar considerations apply to a PMOS transistor. FIG. 3 is a simplified schematic (not to scale) of a small portion of an IC having a MOS transistor, generally designated 300, at its surface, at a given stage in the manufacturing process according to the present invention. The invention is applicable not only to NMOS transistors manufactured in semiconductor substrate material, but also to PMOS transistors. Here, the substrate comprises a p-doped semiconductor wafer 301 (on some devices, a p-doped epitaxial layer 302 is deposited over the wafer). For simplicity, the description of the present invention proceeds below for a p-type semiconductor; however, the present invention is applicable even when an n-type substrate is used. The semiconductor material can be silicon, silicon germanium, gallium arsenide, or any other semiconductor material used in IC manufacturing. The resistivity of the semiconductor substrate in which the MOS transistor is manufactured is in the range of about 1 to 50 Ωcm (this value also applies to the resistivity of the epitaxial layer). A well 303 of the first conductivity type has already been manufactured in this substrate. In FIG. 3, a window 330a in photoresist 330 is used to form the illustrated p-type well; in other circuit designs, the p-type well may extend further. For NMOS transistors, the first conductivity type means p-type, and for PMOS transistors it means n-type. Silicon dioxide isolation trenches 304 define the active area of the lateral transistor in the p-type well. For the gate 305 of the MOS transistor, polysilicon or another conductive material is usually selected; its thickness 305a is generally between 140 and 180 nm and its width 305b is between 0.2 and 1.0 μm. The gate insulator 306 (silicon dioxide, nitrided SiO2, or other material) has a physical thickness between 1 and 10 nm. FIG.
3 shows a deep source 310 and an extended source 311, and a deep drain 312 and an extended drain 313. The extended source and drain are formed by low-energy shallow ion implantation (depths typically between 25 and 40 nm), and the deep source and drain are formed by medium-energy ion implantation (depths generally between 100 and 140 nm) as part of the process flow shown in FIGS. 7 to 12. For these ion implantations, a window 330a in the photoresist layer 330 is used; this window 330a determines the lateral width of the active area of the MOS transistor. The same photoresist and window are used for further p-type ion implantations that form a medium-conductivity channel stopper layer 320 and a threshold adjustment implant directly below the gate (not shown in FIG. 3). For the higher-energy compensating n-type ion implantation of the present invention, the same window 330a is used again. This implantation changes the resistivity of the well within the opening of window 330a to an average value that is at least one order of magnitude greater than the resistivity of the semiconductor of the first conductivity type. In FIG. 3, the dotted line indicates the approximate extent of the high resistivity region 360. It should be noted that the thickness of the photoresist layer 330 is greater than would be required simply to block the lower-energy implants; the thickness of the photoresist layer is preferably between 1.5 and 2.0 μm. Where the high-energy implant is accompanied by a medium-level energy implant, a generally non-conductive sidewall 350 is present as part of the gate structure. FIG. 4 shows the position of the compensating implant region in more detail. Here, the compensating implant region is indicated by the numeral 401. A deep drain 312 and an extended drain 313 are shown, as well as a deep source 310 and an extended source 311.
As can be seen, both the deep source 310 and the deep drain 312 are recessed with respect to their respective extended portions 311 and 313. The compensating n-type implant (and thus higher p-type resistivity) region 401 extends laterally from near one of the recessed regions, indicated by reference numeral 402, to near the other recessed region, indicated by reference numeral 403. Furthermore, the high resistivity region 401 extends vertically from a predetermined depth immediately below the source depletion region 410 and the drain depletion regions 411a/411b to approximately the top of the channel stopper region 320 (a depth of about 300 nm from the surface). (FIG. 4 shows the shallow trench isolation (STI) 304 only to indicate the relative depth of the regions from the surface; the isolation is not laterally to scale with the other parts of the figure.) For NMOS transistors, the first conductivity type (p-type) semiconductor well and substrate (including any epitaxial layers) have a dopant species selected from the group consisting of boron, aluminum, gallium and indium; the sources, drains, their extensions and the higher resistivity regions in the semiconductor of the first conductivity type have a dopant species selected from the group consisting of arsenic, phosphorus, antimony and bismuth. For a PMOS transistor, the first conductivity type (n-type) semiconductor well has a dopant species selected from the group consisting of arsenic, phosphorus, antimony, and bismuth; the sources, drains, their extensions and the higher resistivity regions within the semiconductor of the first conductivity type have a dopant species selected from the group consisting of boron, aluminum, gallium, indium and lithium. As an example for an NMOS transistor, FIG. 5 shows the doping profile resulting from the high-energy n-type implantation of the present invention, simulated by a computer program.
This figure shows the profile of an arsenic implant into silicon substrate (p-well) material doped with boron. The horizontal axis indicates the doping concentration on a logarithmic scale, and the vertical axis indicates the penetration depth below the semiconductor surface in μm. In addition to the starting boron and implanted arsenic concentrations, the resulting net doping profile is shown. The arsenic implantation conditions are preferably a dose of 2 to 4E12 cm-2 and an energy of 125 to 150 keV; other successful examples of the counter-doping of the present invention use phosphorus or antimony. As can be seen from FIG. 5, the net doping (curve 503) is substantially reduced as a result of the high-energy arsenic counter-doping (curve 501) into the initial boron doping of the p-type substrate material (curve 502); the resulting resistivity therefore increases. In this example, the resulting resistivity is uniform to first order at depths of 0.1 to 0.5 μm. The correct choice of dose and energy for a successful compensating implantation depends on the background doping of the p-well and on the device operating conditions. For typical conditions, the preferred dose is in the range of 2.0 to 5.0E12 cm-2, and the preferred energy is in the range of 120 to 160 keV. The maximum β achieved is between 60 and 100. FIG. 6 schematically shows another embodiment of the invention, which is particularly important for MOS transistors having very short channel lengths (less than 0.2 μm). Additional p-type implants forming regions of enhanced p-type doping create halo or pocket regions 601 around the source 610 and similar regions 602 around the drain 611. The source 610 again comprises the deep source 310 and the extended source 311; the deep source 310 is recessed with respect to the extended source 311. Similarly, the deep drain 312 is recessed compared to the extended drain 313.
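The counter-doping profile of FIG. 5 can be approximated to first order by a Gaussian implant model. The parameters below (dose, projected range, straggle, boron background) are assumptions chosen to be consistent with the ranges quoted in the text, not values read from the figure.

```python
import math

def gaussian_implant(x_um, dose_cm2, rp_um, drp_um):
    """Gaussian approximation of an implanted dopant profile, in cm^-3.

    x_um    : depth below the surface, um
    dose_cm2: implant dose, cm^-2
    rp_um   : projected range, um
    drp_um  : straggle (standard deviation), um
    """
    drp_cm = drp_um * 1e-4  # straggle in cm, so the peak comes out in cm^-3
    peak = dose_cm2 / (math.sqrt(2.0 * math.pi) * drp_cm)
    return peak * math.exp(-((x_um - rp_um) ** 2) / (2.0 * drp_um ** 2))

# Assumed parameters: 3E12 cm^-2 arsenic, projected range 0.12 um,
# straggle 0.04 um, into a uniform 5E17 cm^-3 boron background.
N_BORON = 5.0e17

def net_p_doping(x_um):
    """Net p-type concentration after arsenic counter-doping (curve 503)."""
    n_as = gaussian_implant(x_um, dose_cm2=3.0e12, rp_um=0.12, drp_um=0.04)
    return N_BORON - n_as

# The net doping dips near the projected range, raising the local resistivity:
print(f"{net_p_doping(0.12):.3g}")  # well below the 5E17 background
```

With these assumed numbers the net doping at the projected range lands in the 1-6E17 cm-3 window quoted in item (23) below, while far from the range it recovers to the background, matching the qualitative shape of curve 503.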
The higher resistivity region 620 formed according to the present invention extends laterally from near one of the recessed regions to near the other, and vertically from a predetermined depth below the source and drain halo/pocket and depletion regions to approximately the top of the channel stop region. FIGS. 7 to 12, which are simplified schematic diagrams for the example of an NMOS transistor, outline the method and process flow for fabricating an IC MOS transistor having a high substrate resistance; similar process steps can be applied to the manufacture of a PMOS transistor. FIG. 7: A p-type semiconductor 701, which may be an epitaxial material, is selected as the substrate. FIG. 7: Non-conductive electrical isolation regions 704 are formed in the p-type semiconductor 701 to define the lateral boundaries of the active region of the NMOS transistor. FIG. 8: A first photoresist layer 801 is deposited, a window 801a is opened in this layer, and the surface of the region between the isolation regions is exposed. FIG. 8: Low-energy p-type doping ions are implanted into the exposed surface area to form a shallow layer 802 suitable for adjusting the threshold voltage. FIG. 8: High-energy p-type doping ions are implanted into the exposed surface region to form a p-type well 803. FIG. 8: Medium-energy p-type doping ions are implanted into the exposed surface region to form a deep layer 804 suitable as a channel stop. FIG. 8: High-energy compensating n-type doping ions are implanted into the exposed surface region, forming at a predetermined depth below the surface a region 805 whose net p-type doping is lower than the doping concentration of the p-type semiconductor away from the active region of the transistor. FIG. 8: The first photoresist layer is removed. FIG. 9: An insulating film, such as a silicon dioxide film suitable as a gate insulating film 901, is grown on the surface to cover the transistor region. FIG.
9: A layer of polysilicon or other conductive material is deposited on the insulating film. FIG. 9: A portion of the polysilicon is protected and the remaining portion is etched to define the transistor gate region 902. FIG. 10: A second photoresist layer is deposited, a window is opened in it, and the surface of the region between the isolation regions is exposed. FIG. 10: Low-energy n-type doping ions are implanted into the exposed surface area to form a shallow n-type doped layer below the surface, suitable as the extended source 1001 and drain 1002 of the transistor. FIG. 10: The second photoresist layer is removed. FIG. 11: A conformal layer of an insulator, for example silicon nitride or silicon dioxide, is deposited on the surface and etched with a directional plasma so that only the sidewalls 1101 remain around the polysilicon gate. FIG. 11: A third photoresist layer is deposited, a window is opened in it, and the surface of the region between the isolation regions is exposed. FIG. 11: Medium-energy n-type doping ions are implanted into the exposed surface region to form an n-type doped region extending to an intermediate depth below the surface, suitable as the deep source 1102 and drain 1103 of the transistor. FIG. 11: The third photoresist layer is removed. FIG. 12: Silicides 1201, 1202 and 1203 are formed; contacts are made; metallization is deposited. In FIG. 10, after forming the extended source and drain, p-type doping ions may additionally be implanted around the extended source and drain; the above method is thereby extended to form pockets/halos of enhanced p-type doping around the deep source and drain. It is also desirable to add a process step of annealing the high-energy implant at a high temperature.
Of course, the flow can be modified by implanting the high-energy n-type doping ions after the process step of implanting the medium-energy n-type doping ions. In accordance with the method of the present invention, the above process flow can be applied in the same manner, with the conductivity types reversed, to the manufacture of a PMOS transistor. The process flow of FIGS. 7 to 12 is summarized in the block diagram of FIG. 13. Step 1301 corresponds to the process steps of FIG. 7. Step 1302 corresponds to the process steps of FIG. 8. Step 1303 corresponds to the process steps of FIG. 9. Step 1304 corresponds to the process steps of FIG. 10. Step 1305 corresponds to the process steps of FIG. 11. Step 1306 corresponds to the process step of FIG. 12. Step 1307 forms the contacts. Step 1308 deposits the metallization. Although the present invention has been described with reference to the illustrated embodiments, this description is not intended to limit the invention. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will become apparent to those skilled in the art upon reading this description. It is therefore intended that the appended claims encompass any such modifications or embodiments. With respect to the above description, the following items are further disclosed. (1) Fabricated in a semiconductor of the first conductivity type and having at least one lateral MOS transistor at its surface, the lateral boundaries defined by isolation regions and the lower boundary below the surface defined by a channel stop region.
An integrated circuit comprising a source and a drain, each of which comprises two regions of opposite conductivity type at said surface, one of said regions being shallow and extending to the gate of the transistor, and the other of said regions being deeper and recessed from the gate, the regions together constituting the active region of the transistor and having depletion regions when a reverse bias is applied; and a region, provided in the semiconductor of the first conductivity type, having a higher resistivity than the rest of the semiconductor, wherein the high resistivity region extends laterally from near one of the recessed regions to near the other, and vertically from a predetermined depth below the source and drain depletion regions to approximately the top of the channel stop region. (2) The circuit according to item 1, wherein the semiconductor of the first conductivity type is a semiconductor epitaxial layer. (3) The circuit according to item 1, wherein said semiconductor material is selected from the group consisting of silicon, silicon germanium, gallium arsenide and any other semiconductor material used in integrated circuit fabrication. (4) The circuit according to item 1, wherein the higher resistivity region in the semiconductor of the first conductivity type has a resistivity at least one order of magnitude greater than that of the semiconductor of the first conductivity type. (5) The circuit according to item 1, wherein said high resistivity region extends vertically to about 50-150 nm below said surface. (6) The circuit according to item 1, wherein the semiconductor of the first conductivity type is p-type silicon having a resistivity in the range of about 1 to 50 Ωcm, and the source, drain and their extensions are n-type silicon.
(7) The circuit according to item 1, wherein the semiconductor of the first conductivity type has a dopant species selected from the group consisting of boron, aluminum, gallium and indium, and the higher resistivity region in the semiconductor of the first conductivity type has a dopant species selected from the group consisting of arsenic, phosphorus, antimony and bismuth. (8) The circuit according to item 1, wherein the semiconductor of the first conductivity type is n-type silicon having a resistivity in the range of about 1 to 50 Ωcm, and the source, drain and their extensions are p-type silicon. (9) The circuit according to item 1, wherein the semiconductor of the first conductivity type has a dopant species selected from the group consisting of arsenic, phosphorus, antimony, bismuth and lithium, and the source, the drain, their extensions and the higher resistivity region in the semiconductor of the first conductivity type have a dopant species selected from the group consisting of boron, aluminum, gallium, indium, and lithium. (10) The circuit according to item 1, wherein said gate has a narrow dimension of about 0.2-1.0 μm. (11) The circuit according to item 1, wherein each of said source and drain is surrounded by a zone of enhanced doping of the first conductivity type. (12) The circuit according to item 1, wherein the region of higher resistivity serves as the substrate of the transistor, allowing the transistor to function fully without affecting the operation of nearby active devices. (13) The circuit according to item 11, wherein the region of higher resistivity further improves the ESD protection without reducing latch-up robustness, i.e., without increasing the body bias induced by the substrate currents of nearby transistors. (14) Below the active region of a high-voltage NMOS transistor, in a region extending laterally between two shallow trench isolation regions and vertically from below the depletion regions to the depth of the channel stop region.
A method of increasing the resistivity of a p-type semiconductor, comprising: depositing a photoresist layer over the transistor and opening a window in the photoresist layer over the active region of the transistor; and implanting compensating n-type doping ions at high energy into the p-type semiconductor through the window to form a deep region having a net p-type doping lower than the doping of the p-type semiconductor away from the active region of the transistor. (15) A method of manufacturing, at the surface of an integrated circuit, an NMOS transistor having a high substrate resistance in a predetermined p-type semiconductor region of the integrated circuit, comprising: forming non-conductive electrical isolation regions in the p-type semiconductor to define the lateral boundaries of the active region of the NMOS transistor; depositing a first photoresist layer, opening a window in the layer, and exposing the surface of the region between the isolation regions; implanting low-energy p-type doping ions into the exposed surface region to form a shallow layer suitable for adjusting the threshold voltage; implanting high-energy p-type doping ions into the exposed surface region to form a p-type well; implanting medium-energy p-type doping ions into the exposed surface region to form a deep layer suitable as a channel stop; implanting high-energy compensating n-type doping ions into the exposed surface region to form, at a predetermined depth below the surface, a region having a net p-type doping lower than the doping of the p-type semiconductor away from the active region of the transistor; removing the first photoresist layer;
Growing an insulating film, such as a silicon dioxide film suitable as a gate insulating film, on the surface to cover the transistor region; depositing a layer of polysilicon or other conductive material on the insulating film; protecting a portion of the polysilicon and etching the remaining portion to define the gate region of the transistor; depositing a second photoresist layer, opening a window therein, and exposing the surface of the region between the isolation regions; implanting low-energy n-type doping ions into the exposed surface region to form a shallow n-type doped layer below the surface, suitable as the extended source and drain of the transistor; removing the second photoresist layer; depositing a conformal insulating layer of silicon nitride or silicon dioxide on said surface and directionally plasma-etching said insulating layer so that only the sidewalls around the polysilicon gate remain; depositing a third photoresist layer, opening a window therein, and exposing the surface of said region between said isolation regions; implanting medium-energy n-type doping ions into the exposed surface region to form an n-type doped region extending to a medium depth below the surface, suitable as the deep source and drain of the transistor; and removing the third photoresist layer. (16) The method according to item 15, further comprising, after forming the extended source and drain, implanting p-type doping ions around the extended source and drain to form pockets of enhanced p-type doping around the deep source and drain. (17) The method according to item 15, wherein the thickness of the second photoresist layer is greater than required merely to block the low-energy ion implantation. (18) The method according to item 15, further comprising the step of annealing the high-energy implant at a high temperature. (19)
The method according to item 15, wherein the process step of implanting said n-type doping ions at high energy is instead performed after the process step of implanting said n-type doping ions at medium energy. (20) The method according to item 15, wherein the p-type semiconductor well has a peak doping concentration of 4E17 to 1E18 cm-3. (21) The method according to item 15, wherein said implantation of low-energy ions comprises ions having an energy suitable for forming a junction at a depth of 10 to 50 nm and a peak concentration of about 5E17 to 5E20 cm-3. (22) The method according to item 15, wherein said implantation of medium-energy ions comprises ions having an energy suitable for forming a junction at a depth of 50 to 200 nm and a peak concentration of about 5E19 to 5E20 cm-3. (23) The method according to item 15, wherein said implantation of high-energy ions comprises ions, preferably arsenic ions, having an energy in the range of about 120 to 180 keV and a dose of about 1E12 to 5E12 cm-2, such that a concentration of about 1E17 to 6E17 cm-3 is obtained at a depth greater than 50 nm. (24) The method according to item 15, wherein the lowered net p-type doping has a peak concentration of about 1-6E17 cm-3 below the pn junctions of the deep source and drain regions of said transistor. (25) Below the active region of a high-voltage PMOS transistor, in a region extending laterally between two shallow trench isolation regions and vertically from below the depletion regions to the depth of the channel stop region.
A method of increasing the resistivity of an n-type semiconductor, comprising: depositing a photoresist layer over the transistor and opening a window in the photoresist layer over the active region of the transistor; and implanting compensating p-type doping ions at high energy into the n-type semiconductor through the window to form a deep region having a net n-type doping lower than the doping of the n-type semiconductor away from the active region of the transistor. (26) A method of manufacturing, at the surface of an integrated circuit, a PMOS transistor having a high substrate resistance in a predetermined n-type semiconductor region of the integrated circuit, comprising: forming non-conductive electrical isolation regions in the n-type semiconductor to define the lateral boundaries of the active region of the PMOS transistor; depositing a first photoresist layer, opening a window in the layer, and exposing the surface of the region between the isolation regions; implanting low-energy n-type doping ions into the exposed surface region to form a shallow layer suitable for adjusting the threshold voltage; implanting high-energy n-type doping ions into the exposed surface region to form an n-type well; implanting medium-energy n-type doping ions into the exposed surface region to form a deep layer suitable as a channel stop; implanting high-energy compensating p-type doping ions into the exposed surface region to form, at a predetermined depth below the surface, a region having a net n-type doping lower than the doping of the n-type semiconductor away from the active region of the transistor; removing the first photoresist layer;
Growing an insulating film, such as a silicon dioxide film suitable as a gate insulating film, on the surface to cover the transistor region; depositing a layer of polysilicon or other conductive material on the insulating film; protecting a portion of the polysilicon and etching the remaining portion to define the gate region of the transistor; depositing a second photoresist layer, opening a window therein, and exposing the surface of the region between the isolation regions; implanting low-energy p-type doping ions into the exposed surface region to form a shallow p-type doped layer below the surface, suitable as the extended source and drain of the transistor; removing the second photoresist layer; depositing a conformal insulating layer of silicon nitride or silicon dioxide on said surface and directionally plasma-etching said insulating layer so that only the sidewalls remain around the polysilicon gate; depositing a third photoresist layer, opening a window therein, and exposing the surface of said region between said isolation regions; implanting medium-energy p-type doping ions into the exposed surface region to form a p-type doped region extending to a medium depth below the surface, suitable as the deep source and drain of the transistor; and removing the third photoresist layer. (27) The method according to item 26, wherein the process step of implanting said p-type doping ions at high energy is instead performed after the process step of implanting said p-type doping ions at medium energy. (28) The method according to item 26, wherein the n-type semiconductor well has a peak doping concentration of 4E17 to 1E18 cm-3. (29) The method according to item 26, wherein the implantation of low-energy ions comprises ions having an energy suitable for forming a junction at a depth of 10-50 nm and a peak concentration of about 5E17-5E20 cm-3.
(30) The method according to item 26, wherein said implantation of medium-energy ions comprises ions having an energy suitable for forming a junction at a depth of 50 to 200 nm and a peak concentration of about 5E19 to 5E20 cm-3. (31) The method according to item 26, wherein said implantation of high-energy ions comprises ions having an energy in the range of about 400 to 550 keV and a dose of about 5E12 to 2E13 cm-2, such that a concentration of about 1E17 to 6E17 cm-3 is obtained at a depth greater than 50 nm. (32) The method according to item 26, wherein the lowered net n-type doping has a peak concentration of about 1-6E17 cm-3 below the pn junctions of the deep source and drain regions of the transistor. (33) A lateral NMOS transistor 300 in a p-type well 303, whose lateral boundaries are defined by isolation regions 304 and whose vertical boundary is defined by the channel stop region, has an n-type source 310 and an n-type drain 312, each of which includes a shallow region extending to the gate of the transistor and a deep region recessed from the gate. The transistor also has a region 360 in its p-type well that has a higher resistivity than the rest of the well. This region extends laterally from near one of the recessed regions to near the other, and vertically from a predetermined depth below the source and drain depletion regions to the top of the channel stop region. In accordance with the present invention, the region of higher p-type resistivity is formed by compensating counter-doping, for example by arsenic or phosphorus ion implantation, using the same photomask already used for the threshold adjustment, p-type well, and channel stop implants.
If an ESD event occurs, this region of higher resistivity increases the current gain of the parasitic lateral npn bipolar transistor and thus increases the current It2 at which thermal breakdown due to destructive local heating begins.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 is a simplified schematic cross-sectional view through a lateral MOS transistor showing the flow of current when an electrostatic discharge event occurs.
FIG. 2 is a schematic plot of the drain (collector) current, shown on a logarithmic scale, as a function of the drain voltage, shown on a linear scale, illustrating the onset of the second breakdown phenomenon.
FIG. 3 schematically shows a cross section of a lateral MOS transistor with a photoresist window opened for high-energy ion implantation according to the invention.
FIG. 4 shows a more detailed schematic cross section of the region of the compensating ion implantation according to the invention.
FIG. 5 shows an example of a doping profile employed by the present invention.
FIG. 6 shows a schematic cross-sectional view of a MOS transistor illustrating another embodiment of the present invention.
FIGS. 7 through 12 are simplified schematic cross-sectional views of a MOS transistor, each showing one of the individual process steps in the manufacturing flow according to the present invention.
FIG. 13 is a schematic process-flow block diagram corresponding to the process steps shown in FIGS. 7 through 12.

Explanation of reference numerals
300 lateral NMOS transistor
303 p-type well
304 isolation region
310 n-type source
312 n-type drain
360 high-resistivity region |
An example wearable device includes a haptic actuator to produce an output haptic vibration in response to a target input signal waveform, a haptic effect sensor located in proximity to the haptic actuator to measure a haptic vibration corresponding to the output haptic vibration and to output a measured haptic vibration waveform, and a feedback circuit to modify the target input signal waveform to reduce a difference between the output haptic vibration and the measured haptic vibration waveform. |
1. A wearable device comprising: a haptic actuator for generating an output haptic vibration in response to a target input signal waveform; a haptic effect sensor located adjacent to the haptic actuator, the haptic effect sensor for measuring a haptic vibration corresponding to the output haptic vibration and outputting a measured haptic vibration waveform; and a feedback circuit for modifying the target input signal waveform to reduce a difference between the output haptic vibration and the measured haptic vibration waveform.
2. The wearable device of claim 1, wherein the haptic effect sensor is disposed adjacent to the haptic actuator.
3. The wearable device of claim 1, wherein the haptic effect sensor is for sensing an amplitude and a frequency of the haptic vibration.
4. The wearable device of claim 3, wherein the feedback circuit comprises a processor for converting the measured haptic vibration waveform to a frequency domain.
5. The wearable device of claim 3, wherein the feedback circuit is operative to convert the measured haptic vibration waveform to a frequency domain using a transform to identify a range of the sensed frequencies.
6. The wearable device of claim 1, wherein the wearable device comprises at least one of: clothing, footwear, headwear, glasses, wrist ornaments, vests, belts, treatment devices, orthopedic devices, medical devices, watches, or soft exoskeletons.
7. The wearable device of claim 1, wherein the feedback circuit includes a processor for determining a difference between the target input signal waveform and the measured haptic vibration waveform.
8. The wearable device of claim 1, further comprising an indicator for generating a pilot signal based on a difference between the target input signal waveform and the measured haptic vibration waveform to guide a user to modify the location of the wearable device.
9. The wearable device of claim 1, further comprising a second sensor and an indicator for generating a pilot signal based on a difference between the target input signal waveform and the measured haptic vibration waveform to guide the user to modify the position of the second sensor.
10. The wearable device of claim 9, wherein the second sensor is at least one of a biometric sensor, a biosensor, a temperature sensor, a pressure sensor, a heart rate sensor, or a cardiac potential waveform sensor.
11. The wearable device of claim 1, further comprising a position actuator for moving at least a portion of the wearable device from a first position to a second position in response to a difference between the target input signal waveform and the measured haptic vibration waveform.
12. The wearable device of claim 11, wherein the position actuator is for moving the at least a portion of the wearable device to modify a fit of the wearable device.
13. A method comprising: storing a target input signal waveform for a haptic actuator; storing a measured haptic vibration waveform corresponding to the vibration sensed by a haptic effect sensor in the vicinity of the haptic actuator; and modifying, using a feedback circuit, the target input signal waveform to reduce a difference between the target input signal waveform and the measured haptic vibration waveform.
14. The method of claim 13, further comprising: using the feedback circuit to modify the target input signal waveform to reduce a difference in at least one of amplitude or frequency between the target input signal waveform and the measured haptic vibration waveform.
15. The method of claim 13, further comprising: using the feedback circuit to modify the target input signal waveform to reduce both a difference in amplitude and a difference in frequency between the target input signal waveform and the measured haptic vibration waveform.
16. The method of claim 14, further comprising: using a processor of the feedback circuit to determine a difference between the target input signal waveform and the measured haptic vibration waveform.
17. The method of claim 14, further
comprising: adjusting the fit of the wearable device via the position actuator in response to a difference between the target input signal waveform and the measured haptic vibration waveform, wherein the wearable device comprises at least one of: clothing, footwear, headwear, glasses, wrist ornaments, vests, belts, treatment devices, orthopedic devices, medical devices, watches, or soft exoskeletons.
18. A device comprising: actuating means for generating an output haptic vibration in response to a target input signal waveform; sensing means located adjacent the actuating means, the sensing means for sensing a measured tactile vibration and outputting a measured tactile vibration waveform; and feedback circuit means for modifying the target input signal waveform to reduce a difference between the target input signal waveform and the measured haptic vibration waveform.
19. The device of claim 18, wherein the device comprises at least one of: clothing, footwear, headwear, glasses, wristwear, vests, belts, treatment devices, orthopedic devices, medical devices, watches, or soft exoskeletons.
20. The device of claim 18, further comprising: actuating means for modifying a position of the device from a first position to a second position in response to the feedback circuit means.
21. The device of claim 18, wherein the sensing means is operative to sense at least one of an amplitude of the measured haptic vibration waveform or a frequency of the measured haptic vibration waveform.
22. The device of claim 21, wherein said feedback circuit means comprises processing means for converting said measured haptic vibration waveform to a frequency domain.
23. The device of claim 18, wherein said feedback circuit means is further for outputting at least one of a visual indication, a tactile indication, or an audible indication of a difference between said target input signal waveform and said measured haptic vibration waveform to indicator means to guide the user to adjust the device.
24. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to: store a target input signal waveform corresponding to an output haptic vibration generated by a haptic actuator; store a measured haptic vibration waveform corresponding to the measured haptic vibration sensed by a haptic effect sensor; compare the target input signal waveform to the measured haptic vibration waveform to determine one or more differences between the target input signal waveform and the measured haptic vibration waveform; and modify the target input signal waveform for the haptic actuator in response to the one or more differences between the target input signal waveform and the measured haptic vibration waveform.
25. A device comprising: a haptic actuator for generating an output haptic vibration in response to a target input signal waveform; a haptic effect sensor located adjacent to the haptic actuator, the haptic effect sensor for measuring a haptic vibration corresponding to the output haptic vibration and outputting a measured haptic vibration waveform; and hardware processing logic for: storing the target input signal waveform, storing a measured haptic vibration waveform corresponding to the measured haptic vibration sensed by the haptic effect sensor, comparing the target input signal waveform to the measured haptic vibration waveform to determine one or more differences between the target input signal waveform and the measured haptic vibration waveform, and modifying the target input signal waveform for the haptic actuator in response to the one or more differences between the target input signal waveform and the measured haptic vibration waveform. |
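Claims 4 and 5 recite converting the measured haptic vibration waveform to the frequency domain using a transform to identify the range of sensed frequencies. A minimal sketch of that step, using an FFT on a synthetic waveform (the sample rate, tone frequency, and threshold are assumptions for illustration, not values from the patent):

```python
import numpy as np

def dominant_frequencies(measured, fs, threshold_ratio=0.5):
    """Transform a measured vibration waveform to the frequency domain and
    return the frequencies whose magnitude exceeds a fraction of the peak."""
    spectrum = np.abs(np.fft.rfft(measured))
    spectrum[0] = 0.0  # ignore any DC offset
    freqs = np.fft.rfftfreq(len(measured), d=1.0 / fs)
    return freqs[spectrum >= threshold_ratio * spectrum.max()]

# Synthetic "measured haptic vibration": a 175 Hz tone sampled at 1 kHz for 1 s.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
measured = 0.8 * np.sin(2 * np.pi * 175 * t)
```

Here `dominant_frequencies(measured, fs)` identifies the single 175 Hz component; a feedback circuit could compare such an identified frequency range against the target waveform's spectrum.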
Apparatus and method for modifying a tactile output of a haptic device

TECHNICAL FIELD

The present disclosure relates to haptic devices and, more particularly, to an apparatus and method for modifying a haptic output of a haptic device.

BACKGROUND

In recent years, haptic devices including haptic actuators have been implemented in smart phones and smart watches to output haptic vibrations that provide haptic effects conveying information to users of the device. The haptic actuators output vibrations that stimulate the nerves in the user's skin and create a sensation that can be used to convey information. For example, a haptic actuator of a cellular phone vibrates in a first mode when a call comes in and in a second mode when a text message arrives. As another example, a haptic actuator of a cellular telephone can be configured to vibrate in a first mode to indicate that a first contact in the contact list is calling and in a second mode to indicate that a second contact in the contact list is calling. In yet another example, the haptic actuator of a smart watch vibrates in a first mode to indicate an alarm and in a second mode to indicate an incoming email.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts an example user wearing an example wearable device including an example haptic device.
FIG. 2 is a block diagram of an example implementation of a haptic device.
FIG. 3 is a block diagram of an example implementation of a feedback circuit.
FIGS. 4A-4B are block diagrams of other example implementations of haptic devices.
FIG. 5 presents a diagram of the example haptic device of FIG. 1 showing acceleration along a first axis measured by an example haptic effect sensor for a first state (loose) and a second state (tight).
FIG. 6 presents a diagram of the example haptic device of FIG. 1 showing acceleration along a second axis measured by an example haptic effect sensor for a first state (loose) and a second state (tight).
FIG. 7 presents a diagram of the example haptic device of FIG.
1 showing an example haptic effect sensor's measurements of acceleration along the first axis, the second axis, and a third axis for both the first state (loose) and the second state (tight), and for a transition from the first state to the second state.
FIG. 8 is a flow diagram representative of example machine readable instructions that can be executed to generate manual adjustments to a haptic device.
FIG. 9 is a flow diagram representative of other example machine readable instructions that can be executed to produce automatic adjustments to a haptic device.
FIG. 10 is a block diagram showing an example implementation of an example processor platform that can execute the example instructions of FIGS. 8-9 to implement the example haptic device of FIGS. 1-4B.

While the present disclosure is susceptible to various modifications and alternative forms, specific examples are shown and described herein. It is to be understood, however, that the disclosure is not to be construed as limited to the examples shown.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. However, it will be understood by those skilled in the art that the various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits are not shown or described in detail.

A haptic actuator applies a force (eg, vibration) to the user's skin to create a tactile sensation that depends on the characteristics of the haptic actuator, the position of contact between the haptic actuator and the user, and the degree of coupling between the haptic actuator and the user (eg, the normal force that biases the haptic actuator toward the user's skin). Thus, for the same haptic actuator output (eg, the same input excitation signal waveform), the tactile sensation or haptic effect perceived by the user may vary based on how the wearable haptic device including the haptic actuator is worn (eg, tight against the skin, loose, etc.)
or on how securely a hand-held haptic device including the haptic actuator is held. In some cases, variability in the positioning of the wearable haptic device, or variability in how the handheld haptic device is held or carried, may cause the desired haptic effect from one or more haptic actuators not to be perceived at all, or not to be felt by the user in the expected way. For example, a vibrating phone in a user's jacket may not be felt. In other examples, variability in the positioning of the haptic device causes the desired haptic effect from one or more haptic actuators to be accompanied by harmonic frequencies that cause annoying tactile sensations and undesired acoustic noise. Ultimately, specific haptic effects from one or more haptic actuators are intended to be felt by the user.

As taught herein, some example haptic devices include: a haptic actuator for generating an output haptic vibration in response to a target input signal waveform (eg, a desired haptic vibration waveform, etc.); a haptic effect sensor located adjacent to the haptic actuator for measuring haptic vibration at the position of the haptic effect sensor (eg, haptic vibration associated with the output haptic vibration from the haptic actuator); and a feedback circuit for modifying the target input signal waveform for the haptic actuator to reduce a difference between the output haptic vibration and the measured haptic vibration. The difference between the output haptic vibration and the measured haptic vibration can include any difference. For example, the difference can include any one or more differences that can be defined to cause the feedback circuit to modify the haptic actuator output in a direction toward a desired level, output, and/or target.
In some examples, the feedback circuit is operative to modify the target input signal waveform for the haptic actuator, reducing the difference between the output haptic vibration and the measured haptic vibration, until an equilibrium point is reached. In some examples, the feedback circuit is used to modify the target input signal waveform for the haptic actuator until the desired target signal, as sensed by the haptic effect sensor (eg, in terms of amplitude), is reached.

Using one or more haptic effect sensors in conjunction with a feedback circuit enables the haptic device taught herein to respond dynamically to changes in usage variables (eg, changes in the position of the haptic device, changes in the orientation of the haptic device relative to the user, the user's physical activity, etc.) to automatically change the relationship between the haptic device and the user (eg, via a position actuator or via a changed input signal waveform for the haptic actuator, etc.) or to facilitate manual change of the relationship between the haptic device and the user (eg, by providing the user with instructions to adjust the fit of the haptic device, etc.), thereby enhancing delivery of the desired haptic effect.

In some examples, an example haptic device uses an example haptic effect sensor (eg, a 3-axis accelerometer, a piezoelectric device, an electroactive polymer, etc.) in the vicinity of an example haptic actuator to sense and provide feedback on the vibration generated by the example haptic actuator. The intensity and/or frequency of the haptic effect generated by the example haptic actuator is then modified or adjusted in real time using the example haptic effect sensor measurements in the feedback loop. To illustrate, an example haptic device can be used to sense the degree of tightness or looseness with which a user wears the haptic device, and can also be used to guide the user in adjusting the fit of the haptic device to provide a tactile sensation of the desired fidelity. FIG.
1 illustrates an example haptic device 100 in the form of a wristwatch (eg, a watch) disposed about a wrist 110 of a user. The example haptic device 100 includes an example strap 120, an example housing 125 attached to the example strap 120, an example indicator 130, an example battery 140 carried in the example housing 125, an example feedback circuit 145, and an example processor 150. One or more example haptic actuators 160 are disposed in or on the example strap 120, and the example haptic actuator 160 can be controlled to generate one or more output haptic vibrations 165. An example haptic effect sensor 170 is disposed adjacent to the example haptic actuator 160 to sense an output haptic vibration 165 from the example haptic actuator 160. In some examples, the output haptic vibration 165 is a pre-set or user-set output haptic vibration corresponding to a particular event (eg, text message, phone call, alert, etc.), so that the haptic effect corresponding to the output haptic vibration 165 conveys to the user the occurrence of that particular event. In some examples, a plurality of different pre-set and/or user-set output haptic vibrations 165 are used to represent a plurality of different events.

In the example of FIG. 1, example strap 120, example housing 125, example indicator 130, and example battery 140 are conventional and are not discussed in detail herein. The example indicator 130 is used to present information, such as by using lights, images, text, audio content, and/or video content. Example indicator 130 can include, for example, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible OLED display, a thin film transistor (TFT) display, a liquid crystal display (LCD), one or more discrete LEDs of one or more colors, or a speaker.
In some examples, example haptic device 100 includes a user input device such as a button, touch button, touch pad, or touch screen to receive input from a user. In some examples, the user input device is integrated with one or more example indicators 130. The example battery 140 includes, in some examples, a rechargeable battery, a watch battery (eg, a silver oxide battery, a silver zinc battery, etc.), a lithium ion (Li-ion) battery, a bendable Li-ion battery, or a lithium polymer (LiPo) battery.

In some examples, example haptic device 100 is a wearable haptic device such as, but not limited to, apparel, footwear, headwear, eyewear (eg, glasses, goggles, etc.), wrist ornaments, watches, vests, belts, therapeutic devices (eg, wearable drug delivery or infusion devices, wearable monitoring devices, wearable respiratory therapy devices, etc.), orthopedic devices (eg, upper limb orthoses, lower limb orthoses, etc.), medical devices (eg, biosensors, pulse oximeters, etc.), or a soft exoskeleton (for example, a wearable robotic device).

In some examples, a robot or robotic device includes an example haptic device 100, such as an example haptic device 100 included in a robotic end effector (eg, a robotic hand, finger, fingertip, gripper, etc.) that is driven by one or more robotic end effector actuators. An example haptic device 100 (eg, a haptic actuator and a haptic effect sensor, etc.)
disposed in at least a portion of the robotic end effector can be used to provide feedback to a controller of one or more robotic end effector actuators (eg, outputting a signal to change the output of the actuator in response to the sensed vibration, where, for example, the firmness of the grip of the robotic end effector on the workpiece or object it holds is associated with the level of vibration sensed by one or more haptic effect sensors of the haptic device, etc.). Selection of the type and location of the example indicator 130 and the example battery 140 is within ordinary skill in the art and depends on the application and form factor of the particular haptic device 100. In other examples, the example haptic device 100 is a handheld electronic device. In some examples, example haptic device 100 is a wearable device, such as a virtual reality device or an augmented reality device.

The example feedback circuit 145 is used to modify the target input signal waveform for the example haptic actuator 160 to change the output haptic vibration of the example haptic actuator 160 and thereby reduce the difference between the output haptic vibration and the measured haptic vibration. In some examples, the example feedback circuit 145 is used to modify the target input signal waveform to the example haptic actuator 160 to reduce the difference between the output haptic vibration and the measured haptic vibration, such as a difference in amplitude and/or frequency. The difference between the output haptic vibration and the measured haptic vibration may, for example, include: a difference in root mean square (RMS) level, a difference in pulse pulsation (eg, a difference between a peak level and an RMS level), a difference in acceleration, a difference in velocity, a difference in displacement, a difference in symmetry, or the presence or absence of certain specified frequencies.
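The difference metrics listed above (RMS level, peak-to-RMS "pulse pulsation", etc.) are straightforward to compute from sampled waveforms. A minimal sketch under assumed values (the 60% attenuation standing in for a loosely worn device is hypothetical, not from the patent):

```python
import math

def rms(samples):
    """Root mean square level of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def peak_to_rms(samples):
    """Ratio of peak level to RMS level (the "pulse pulsation" metric)."""
    return max(abs(s) for s in samples) / rms(samples)

def rms_difference(target, measured):
    """One scalar difference a feedback circuit could act on."""
    return abs(rms(target) - rms(measured))

# Target waveform: a 175 Hz unit-amplitude sine sampled at 1 kHz (assumed).
target = [math.sin(2 * math.pi * 175 * n / 1000) for n in range(1000)]
# Hypothetical measurement from a loosely worn device: 60% attenuation.
measured = [0.4 * s for s in target]
diff = rms_difference(target, measured)
```

A nonzero `diff` would drive the feedback circuit to raise the actuator drive level (or prompt the user to tighten the fit) until the measured RMS approaches the target RMS.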
To illustrate, in some examples, a threshold difference between the measured haptic vibration and the output haptic vibration may be selected via the example feedback circuit 145 to reflect one or more desired states of a particular application of the example haptic device 100. For example, the example haptic device 100 includes a first state in which the haptic actuator 160 is in firm contact with the user (eg, a default tight fit), a second state in which the haptic actuator 160 contacts the user loosely or intermittently (eg, a loose fit), and a third state in which the haptic actuator 160 is not in contact with the user (eg, a poor fit, a removed state, etc.).

In some examples, the example feedback circuit 145 is calibrated empirically to correspond to a particular example haptic actuator 160 and/or use and/or user. To calibrate the example feedback circuit 145 empirically, the example haptic actuator 160 is placed in the first state (eg, a default tight fit) and the haptic vibration sensed by the example haptic effect sensor 170 is set to correspond to the first state of the example haptic device 100. In some examples, the example haptic actuator 160 is placed at various different locations corresponding to the first state to obtain a plurality of measured haptic vibrations corresponding to the first state. The example haptic actuator 160 is placed in the second state (eg, a loose fit) and the measured haptic vibration sensed by the example haptic effect sensor 170 is set to correspond to the second state of the example haptic device 100. In some examples, the example haptic actuator 160 is placed at various different locations corresponding to the second state to obtain a plurality of measured haptic vibrations corresponding to the second state. The example haptic actuator 160 is placed in the third state (eg, poor fit, removed state, etc.)
and the measured haptic vibration sensed by the example haptic effect sensor 170 is set to correspond to the third state of the example haptic device 100. In some examples, various different locations corresponding to the third state are used to obtain a plurality of measured haptic vibrations corresponding to the third state. In some examples, the calibration described above is performed for a single output haptic vibration. In some examples, the calibration described above is performed for a plurality of output haptic vibrations. Empirical calibration is one example of adjusting the example haptic actuator 160 to achieve the desired effect of one or more output haptic vibrations.

In some examples, the example feedback circuit 145 includes an example processor 150 to control the example haptic actuator 160 to output a selected one of a plurality of available output haptic vibrations. The processor 150 of the illustrated example can be implemented, for example, by one or more integrated circuits, logic circuits, microprocessors, or controllers from any desired series or manufacturer.

The example haptic actuator 160 is disposed in or on the example strap 120 of the example haptic device 100 of FIG. 1. In some examples, the example haptic actuator 160 is a mechanical vibrator such as a piezoelectric device (eg, a piezoelectric film, a single layer disc, a multilayer strip, etc.) or an electroactive polymer (EAP). In some examples, the example haptic actuator 160 includes an eccentric rotating mass (ERM), a linear resonant actuator (LRA), and/or another transducer or vibrator to create a tactile sensation. In some examples, example haptic actuator 160 is driven by a haptic driver (eg, a piezoelectric haptic driver, a motor haptic driver, etc.), which may include a library of output haptic vibrations 165 of the haptic actuator 160 corresponding to different haptic effects.
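The empirical calibration above yields reference measurements for each wear state. One simple way a feedback circuit could use them at run time is a nearest-neighbor comparison of a measured vibration level against the calibrated references. The state names and RMS values below are illustrative placeholders, not data from the patent:

```python
def classify_fit(measured_rms, calibration):
    """Return the calibrated wear state whose reference RMS level is closest
    to the measured RMS level (nearest neighbor over the calibration table)."""
    return min(calibration, key=lambda state: abs(calibration[state] - measured_rms))

# Hypothetical calibration table: reference RMS acceleration recorded for the
# same output haptic vibration in each of the three wear states.
calibration = {"tight": 0.9, "loose": 0.5, "removed": 0.1}

state = classify_fit(0.55, calibration)  # closest reference is "loose"
```

A pilot signal of the kind recited in claim 8 could then instruct the user to tighten the strap whenever the classified state differs from the desired one.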
In some examples, output haptic vibration 165 is a preset or user-set output haptic vibration corresponding to a particular event (eg, text message, phone call, alert, etc.), so that the haptic effect corresponding to the output haptic vibration conveys to the user the occurrence of that particular event. During the testing of the example haptic device 100 discussed below, a Jinlong C1020B217F flat vibration motor manufactured by Jinlong Machinery & Electronics, Co. Ltd. of Zhejiang Province, China was used as the example haptic actuator 160.

In the example haptic device 100 of FIG. 1, an example haptic effect sensor 170 is disposed adjacent to the example haptic actuator 160 to sense output haptic vibration 165 from the example haptic actuator 160. The haptic vibration sensed by the example haptic effect sensor 170 provides feedback (eg, waveform, amplitude, frequency, etc.) to the example feedback circuit 145, which determines the difference between the output haptic vibration produced by the example haptic actuator 160 and the haptic vibration sensed by the example haptic effect sensor 170. In some examples, example haptic effect sensor 170 includes a load cell, an infrared sensor, an optical sensor, a capacitive sensor, an accelerometer, a temperature sensor, a piezoelectric device, a strain gauge, a gyroscope, an electroactive polymer, and/or another transducer to sense or measure displacement, force, and/or acceleration. As described above, the example haptic effect sensor 170 is for sensing haptic vibration corresponding to the output haptic vibration 165 of the haptic actuator 160. In some examples, the example haptic effect sensor 170 is used to sense or measure the amplitude and frequency of the haptic vibration corresponding to the output haptic vibration 165. During the testing of the example haptic device 100 discussed below, an InvenSense MPU 9150 accelerometer manufactured by InvenSense, Inc.
of San Jose, Calif., was used as the example haptic effect sensor 170. In another example, the example haptic effect sensor 170 includes an InvenSense MPU 9250 accelerometer.

The spacing between the example haptic effect sensor 170 and the example haptic actuator 160 can be any distance (eg, continuous, contiguous, etc.) within which the measured haptic vibrations sensed by the example haptic effect sensor 170 include actionable data indicative of the performance of the example haptic actuator 160. In some examples, the distance may change between different example haptic devices 100 (eg, watches, apparel, etc.) and/or between similar applications of similar example haptic devices 100 (eg, a dress watch, a sports watch, etc.). To illustrate, the spacing between the example haptic effect sensor 170 and the example haptic actuator 160 in a watch may be different than the spacing between the example haptic effect sensor 170 and the example haptic actuator 160 in a garment. In some examples, the example haptic device 100 includes another sensor, such as a biometric sensor, a biosensor, a temperature sensor, a pressure sensor, a heart rate sensor, or a cardiac potential waveform sensor. In some examples, the example haptic effect sensor 170 and feedback circuit 145 are implemented to facilitate proper positioning of such sensors relative to the user (eg, to ensure that the heart rate sensor is properly positioned, etc.).

In some examples, the number of example haptic actuators 160 is different than the number of example sensors 170. For example, the example haptic actuator 160 can have two example sensors 170 disposed adjacent to it (eg, on opposite sides of the example haptic actuator 160, etc.) to sense the output haptic vibration 165 from the example haptic actuator 160.
As another example, two example haptic actuators 160 are disposed adjacent to the example haptic effect sensor 170 (eg, on opposite sides of the example haptic effect sensor 170, etc.), the example haptic effect sensor 170 being positioned to sense haptic vibrations corresponding to the output haptic vibrations 165 of each of the example haptic actuators 160. In general, example haptic device 100 can include one or more example haptic actuators 160 of one or more types and one or more example sensors 170 of one or more types.

In some examples, example haptic device 100 includes a haptic effect sensor (eg, a haptic effect sensor 170 that includes at least one of a load sensor, an infrared sensor, an optical sensor, a capacitive sensor, an accelerometer, a piezoelectric sensor, a strain gauge, or a transducer to sense or measure at least one of displacement, force, or acceleration) and a second sensor, which may be a haptic effect sensor or another type of sensor (eg, a biometric sensor, a biosensor, a temperature sensor, a pressure sensor, a heart rate sensor, or a cardiac potential waveform sensor), and the example indicator 130 is for generating a pilot signal or instruction based on a difference between the output haptic vibration 165 and the haptic vibration sensed or measured by the haptic effect sensor 170 to guide the user to modify the position of the second sensor.

In some examples, the example haptic device 100 is a wearable haptic device and includes a first haptic actuator 160 disposed on a first portion of the haptic device, a first haptic effect sensor 170 disposed proximate the first haptic actuator 160, a second haptic actuator 160 disposed on a second portion of the haptic device, and a second haptic effect sensor 170 disposed adjacent the second haptic actuator 160.
In some examples, the example feedback circuit 145 is configured to modify the target input signal waveform of the second haptic actuator 160 based on a difference between the second output haptic vibration 165 and the haptic vibration sensed by the first haptic effect sensor 170 and/or the second haptic effect sensor 170.

FIG. 2 is a block diagram of an example haptic device 100 in which an adjustable input signal waveform circuit 208 provides a target (e.g., desired) input signal waveform 210 to an example haptic actuator 160, which generates an output haptic vibration 165 in response to the target input signal waveform 210. The target input signal waveform 210 can include any desired type of waveform (e.g., sine wave, square wave, triangle wave, sawtooth wave, etc.), wavelet, or wave packet to generate a desired output haptic vibration 165 from the example haptic actuator 160. The target input signal waveform 210 can be determined empirically. The output haptic vibration 165 is transmitted along an example path 202 through the substrate material on or in which the example haptic actuator 160 and the example haptic effect sensor 170 are disposed. The measured haptic vibration 240 may differ from the output haptic vibration 165 due to, for example, modification of the output haptic vibration 165 in the substrate material in or on which the example haptic actuator 160 and the example haptic effect sensor 170 are disposed (e.g., a change in one or more characteristics of the vibration, such as amplitude, frequency, or phase). The measured haptic vibration 240 may also differ from the output haptic vibration 165 due to a material or constraint that imposes boundary conditions external to the example haptic device 100, such as the fit of the wearable haptic device 100 with the user.

The haptic effect sensor 170 outputs a measured haptic vibration waveform 241 to the feedback circuit 145. The example feedback circuit 145 of FIG.
2 includes a memory 246 (e.g., a look-up table (LUT), etc.) that includes waveform information that can be compared to the measured haptic vibration waveform 241 or a derivative thereof. In some examples, the memory 246 includes a library of input signal waveforms, or portions thereof, that correspond to a plurality of haptic effects to be output by the haptic actuator 160. In some examples, the memory 246 includes user-defined measured haptic vibration waveforms 241 or portions thereof (e.g., an acceptable measured haptic vibration waveform 241 and an unacceptable measured haptic vibration waveform 241, etc.). In some examples, responsive to a difference between the target input signal waveform 210 and the measured haptic vibration waveform 241 (which corresponds to a difference between the output haptic vibration 165 and the measured haptic vibration 240 from the haptic effect sensor 170), the example feedback circuit 145 outputs instructions to the adjustable input signal waveform circuit 208 for modifying the target input signal waveform 210 provided to the example haptic actuator 160, thereby changing the output haptic vibration 165 generated by the example haptic actuator 160. In some examples, the change to the output haptic vibration 165 produced via modification of the target input signal waveform 210 of the example haptic actuator 160 may include a change in amplitude, frequency, phase, and/or wavelength of the output haptic vibration 165, or a change of the output haptic vibration 165 from a first vibration to a second vibration. In some examples, one or more modifications to the target input signal waveform 210 are determined empirically.

FIG. 3 is a block diagram of an example feedback circuit 145. The example feedback circuit 145 includes an example memory 246, an example target input signal waveform analyzer 302, an example measured haptic vibration waveform analyzer 304, an example waveform comparator 306, and an example feedback analyzer 308.
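As a concrete illustration of the waveform library described above for the memory 246, the sketch below models it as a simple lookup table keyed by haptic effect. The effect names, sample counts, and amplitudes are hypothetical placeholders, not values taken from this description:

```python
import math

# Hypothetical sampled waveform generators for a target input signal
# waveform library, as might be stored in memory 246. Each entry holds
# one sampled period of the drive waveform for a named haptic effect.
def sine_period(samples=64, amplitude=1.0):
    return [amplitude * math.sin(2 * math.pi * n / samples)
            for n in range(samples)]

def square_period(samples=64, amplitude=1.0):
    return [amplitude if n < samples // 2 else -amplitude
            for n in range(samples)]

WAVEFORM_LIBRARY = {
    "gentle_alert": sine_period(amplitude=0.5),   # hypothetical effect
    "strong_alert": square_period(amplitude=1.0),  # hypothetical effect
}

def target_waveform(effect_name):
    """Fetch the stored target input signal waveform for a haptic effect."""
    return WAVEFORM_LIBRARY[effect_name]
```

In a real device the table would hold empirically determined waveforms (or their derivatives, such as spectra) rather than ideal analytic shapes.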
In some examples, the example target input signal waveform analyzer 302, the example measured haptic vibration waveform analyzer 304, the example waveform comparator 306, and the example feedback analyzer 308 are operatively associated with the one or more processors 150.

Each of the example target input signal waveform analyzer 302 and the example measured haptic vibration waveform analyzer 304 prepares a respective one of the target input signal waveform 210 and the measured haptic vibration waveform 241 for comparison by the example waveform comparator 306. For example, the example target input signal waveform analyzer 302 and/or the example measured haptic vibration waveform analyzer 304 applies scaling, offset, gain, or other transformations as needed to enable a direct comparison between the target input signal waveform 210 and the measured haptic vibration waveform 241. The example target input signal waveform analyzer 302 parses the target input signal waveform 210 into a form suitable for comparison with the measured haptic vibration waveform 241 (e.g., via a transform, such as a fast Fourier transform or a wavelet transform, to convert the target input signal waveform 210 from the time domain to the frequency domain). The example measured haptic vibration waveform analyzer 304 parses the measured haptic vibration waveform 241 into a form suitable for comparison with the target input signal waveform 210 (e.g., via a transform, such as a fast Fourier transform or a wavelet transform, to convert the measured haptic vibration waveform 241 from the time domain to the frequency domain).

The example waveform comparator 306 applies conventional waveform analysis techniques (e.g., frequency domain analysis, analysis of spectral components, etc.) to the target input signal waveform 210 and the measured haptic vibration waveform 241 to identify differences between the waveforms.
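One way the time-to-frequency-domain conversion and difference identification described above could look in practice is sketched below. This is an illustrative sketch only, not the patented implementation; the sample rate and test frequencies are invented for the example:

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Return the strongest frequency component of a waveform, as a
    waveform analyzer might after an FFT from time to frequency domain."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

def frequency_difference(target, measured, sample_rate_hz):
    """Difference between the dominant frequencies of the measured haptic
    vibration waveform and the target input signal waveform."""
    return (dominant_frequency(measured, sample_rate_hz)
            - dominant_frequency(target, sample_rate_hz))
```

A comparator stage could then test such a difference against a stored threshold to decide whether corrective action is warranted.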
The example waveform comparator 306 then compares the identified difference between the target input signal waveform 210 and the measured haptic vibration waveform 241 with a predetermined threshold, such as a lookup table established and stored in the memory 246 during calibration of the example haptic device 100, to determine whether the measured haptic vibration 240 sensed by the example haptic effect sensor 170 indicates a physical state of the example haptic device 100 that may benefit from adjustment. For example, during calibration of the example haptic device 100 in a device training session, the user positions the example haptic device 100 at one or more acceptable positions of the example haptic device 100 (i.e., one or more positions at which the user receives the desired haptic effect of a given output haptic vibration 165) and at one or more unacceptable positions of the example haptic device 100 (i.e., one or more positions at which the user does not receive the desired haptic effect of the given output haptic vibration 165). The output haptic vibration 165 and the measured haptic vibration 240 corresponding to these acceptable and unacceptable positions are resolved into representative waveforms via transforms, and the representative waveforms (e.g., a mapping between the target input signal waveform 210 and the measured haptic vibration waveform 241 for various positions of the haptic device 100, etc.), or data derived therefrom or related thereto (e.g., one or more specific frequencies, one or more specific amplitudes, etc.), are stored in a data structure (e.g., a lookup table) in the physical memory device 246 as representative normal and off-normal waveforms or derivatives thereof (e.g., amplitude, frequency, etc.).
In another example, the example haptic device 100 includes a factory-set lookup table having data corresponding to, or derived from, waveforms for acceptable and unacceptable positions and the associated haptic effects determined during product development.

The example feedback analyzer 308 determines the corrective action applied to the example haptic device 100 in response to any identified abnormal waveform or derivative thereof. In some examples, the corrective action is included in a lookup table of corrective actions established during calibration of the example haptic device 100. For example, during calibration of the example haptic device 100 in a device training session, the user enters one or more desired corrective actions for one or more positions of the example haptic device 100 that provide an unacceptable haptic effect to the user. To illustrate, for a particular output haptic vibration 165, the user may define a first corrective action for a first threshold difference representing a first abnormal condition of the waveform output by the example waveform comparator 306, a second corrective action for a second threshold difference representing a second abnormal condition of the waveform output by the example waveform comparator 306, and a third corrective action for a third threshold difference representing a third abnormal condition of the waveform output by the example waveform comparator 306.
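The tiered mapping from threshold differences to corrective actions described above might be sketched as follows. The numeric thresholds and action names here are hypothetical placeholders for illustration, not values from this description:

```python
# Hypothetical tiers: (threshold, corrective action), in ascending order.
# A larger waveform difference triggers a stronger corrective action.
CORRECTIVE_ACTIONS = [
    (0.1, "first: illuminate LED indicator"),
    (0.3, "second: display instruction and sound tone"),
    (0.6, "third: modify actuator input signal waveform"),
]

def select_corrective_action(waveform_difference):
    """Return the corrective action for the highest tier whose threshold
    the measured waveform difference meets or exceeds, else None."""
    chosen = None
    for threshold, action in CORRECTIVE_ACTIONS:
        if waveform_difference >= threshold:
            chosen = action
    return chosen
```

In the device described, such a table would be populated during the training session or set at the factory, rather than hard-coded.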
In some examples, the first corrective action includes an output to one or more of the indicators 130 for alerting the user of the first threshold difference (e.g., an illuminated LED indicator, a tone output from the speaker, etc.), the second corrective action includes an output to a plurality of the indicators 130 for alerting the user of the second threshold difference (e.g., an instruction for resolving the second threshold difference displayed on the display device together with a tone output from the speaker, etc.), and the third corrective action includes modification of the output of the example haptic actuator 160 via an output signal to the adjustable input signal waveform circuit 208 (e.g., modification of the frequency of the output haptic vibration, modification of the magnitude or intensity of the output haptic vibration, use of a different output haptic vibration, etc.). In some examples, the parameters modified by the example feedback analyzer 308 are determined empirically, or the user of the example haptic device 100 is prompted to modify the parameters.

When a haptic effect that changes over time (e.g., a sinusoidal effect, etc.) is to be generated, the output of the example haptic effect sensor 170 (e.g., a time-domain signal of the measured haptic vibration 240, etc.) to the example measured haptic vibration waveform analyzer 304 may be sampled and transformed via the example waveform comparator 306 according to the instantaneous haptic amplitude to be generated. Thus, a dynamic haptic effect can be detected by the example waveform comparator 306 and adjusted in real time via the example feedback analyzer 308.

FIGS. 4A-4B are block diagrams of other example haptic devices 100. FIG. 4A shows a variation of the example of FIGS. 2-3, where the example feedback circuit 145 outputs a signal to the example indicator 130 to convey to the user a difference between the output haptic vibration 165 and the measured haptic vibration 240.
Such a signal can take the form of light, an image, text, audio content, and/or video content. To illustrate, if the example feedback circuit 145 determines that there is a difference between the target input signal waveform 210 and the measured haptic vibration waveform 241, and such a difference is associated with an unacceptable configuration of the example haptic device 100 (e.g., the aforementioned third state of the haptic device 100), a signal is output to the example indicator 130 to prompt the user to perform a corrective action to reconfigure the example haptic device 100 into an acceptable configuration. As an example, if a wearable haptic device 100, such as that shown in FIG. 1, is in a loose state associated with an unacceptable configuration (e.g., as set by the user during an experiential "training" operation, etc.), the example feedback circuit 145 causes an example indicator 130 to output an audible alert through a speaker of the wearable haptic device 100 (e.g., the leftmost example indicator 130 in FIG. 1) and/or causes an example display of the wearable haptic device 100 (e.g., the rightmost example indicator 130 in FIG. 1) to output a visual alert. In some examples, the visual alert may include a particular instruction for the user that corresponds to the particular difference indicated by the example feedback circuit 145. In some examples, different audible alerts may be used to convey different degrees or kinds of difference indicated by the example feedback circuit 145.

FIG. 4B shows an example feedback circuit 145 that outputs a control signal, representative of the difference between the target input signal waveform 210 and the measured haptic vibration waveform 241 determined by the example feedback circuit 145, to an example position actuator 400, so that the position actuator 400 moves a first portion of the example haptic device 100 (e.g., a wearable haptic device, etc.) from a first position to a second position.
For example, the position actuator 400 may move the first portion of the wearable haptic device to modify the fit of the wearable haptic device. In some examples, the example position actuator 400 includes a mechanical actuator, an electric actuator, a pneumatic actuator, or a hydraulic actuator that is configured to connect two portions of the example haptic device 100 and to move the two portions of the example haptic device 100 relative to each other. In some examples, the example position actuator 400 is a linear actuator, a micro linear actuator, a rotary actuator, a voice coil actuator, an ultrasonic piezoelectric actuator, an artificial muscle, a pneumatic artificial muscle, or an electroactive polymer. If the example feedback circuit 145 detects a difference between the target input signal waveform 210 and the measured haptic vibration waveform 241 corresponding to an unacceptable configuration of the example haptic device 100 (e.g., the third state of the haptic device 100 described above, wherein a poor fit of the haptic device yields an unfavorable measured haptic vibration 240 as compared to the output haptic vibration 165, etc.), the example feedback circuit 145 outputs a signal to the example position actuator 400 to actuate the example position actuator 400 and cause the first portion of the example haptic device 100 to move relative to the second portion of the example haptic device 100 (e.g., closer to, farther from, etc. the second portion). This movement of the first portion of the example haptic device 100 relative to the second portion of the example haptic device 100 causes a corresponding change in the position of the example haptic device 100, such as a change in the fit of the wearable haptic device 100 (e.g., tightening a loose band 120, etc.).

FIG. 5 presents graphs 510, 520 for an example wrist-worn haptic device 100 similar to the haptic device of FIG.
1, wherein graph 510 illustrates the acceleration measured by the example haptic effect sensor 170 along a first axis (i.e., the Y-axis) for a first state (tight), and graph 520 shows the acceleration measured by the example haptic effect sensor 170 along the first axis (i.e., the Y-axis) for a second state (loose). FIG. 6 presents graphs 610, 620 for the same example wrist-worn haptic device 100, wherein graph 610 shows the acceleration measured by the example haptic effect sensor 170 along a second axis (i.e., the Z-axis of FIG. 6) for the first state (tight), and graph 620 shows the acceleration along the second axis measured by the example haptic effect sensor 170 for the second state (loose).

In the example wrist-worn device of FIGS. 5-6, the example measured haptic vibration 240 at an example haptic effect sensor 170 spaced about 2 mm from the haptic actuator 160 was observed to correlate with the looseness or tightness with which the wrist-worn device is worn. As described above, the example haptic effect sensor 170 for these tests was the InvenSense MPU 9150. The graphs 510-520 and 610-620 of FIGS. 5-6 show the acceleration measured by the example haptic effect sensor 170 (at a fixed sampling rate, in units of milli-g) for a tightly worn and a loosely worn example wrist-worn haptic device 100. The measured haptic vibrations, represented as acceleration profiles 510-520 and 610-620, show a decrease in amplitude and frequency for the tight band, or conversely, an increase in amplitude and frequency for the loose band. The differences in FIGS. 5-6 may indicate that more haptic energy is transmitted to the wrist when the wrist-worn haptic device 100 is held tightly around the user's wrist.

In FIG. 5, the tight-fit Y-direction acceleration of the example wrist-worn device shown in graph 510 varies, during the test duration (251 milliseconds) shown along the X-axis, about an origin of about 750 milli-g.
Near that origin, the amplitude is between about +/- 500 milli-g (Y-axis). In contrast, the amplitude of the loose-fit Y-direction acceleration of the example wrist-worn device shown in graph 520 is between about +/- 1000 milli-g about an origin of about 750 milli-g.

In FIG. 6, the amplitude of the tight-fit Z-direction acceleration of the example wrist-worn device shown in graph 610 is between about +/- 50 milli-g near an origin of about 600 milli-g. In contrast, the amplitude of the loose-fit Z-direction acceleration of the example wrist-worn device shown in graph 620 is between about +/- 400 milli-g near an origin of about 600 milli-g.

These differences between the measured haptic vibration 240 sensed at the example haptic effect sensor 170 (e.g., loose versus tight) for a fixed output haptic vibration 165 from the example haptic actuator 160 are operationally significant and allow the example feedback circuit 145 to distinguish different positions of the example haptic device 100. Based on the measured haptic vibration 240 (e.g., plots 510-520 and/or 610-620), the example feedback circuit 145 outputs respective instructions and/or control signals to, for example, the example indicator 130, the example adjustable input signal waveform circuit 208, or the example position actuator 400.

FIG. 7 presents graphs for an example wrist-worn haptic device 100 similar to that shown in FIG. 1. In particular, FIG. 7 illustrates the acceleration along the X-axis (top graph), Y-axis (middle graph), and Z-axis (bottom graph) measured by the example haptic effect sensor 170 for a first state (worn loosely in a first orientation over a period of 0-256 milliseconds) and a second state (worn loosely in a second orientation over a period of 256-408 milliseconds).
The transition from the first state to the second state, reflecting the change in orientation of the example haptic device 100 relative to the user's wrist, is represented by a transition 720 in the top graph, a transition 750 in the middle graph, and a transition 780 in the bottom graph. In the graph of acceleration along the X-axis (top graph of FIG. 7), the characteristic amplitude and frequency of the acceleration in the first portion 710 of the graph, before the transition 720, differ from the characteristic amplitude and frequency of the acceleration in the second portion 730 of the graph, after the transition 720. Similarly, in the graph of acceleration along the Y-axis (middle graph of FIG. 7), the characteristic amplitude and frequency of the acceleration in the first portion 740, before the transition 750, differ from the characteristic amplitude and frequency of the acceleration in the second portion 760 of the graph, after the transition 750. Additionally, in the graph of acceleration along the Z-axis (bottom graph of FIG. 7), the characteristic amplitude and frequency of the acceleration in the first portion 770, before the transition 780, differ from the characteristic amplitude and frequency of the acceleration in the second portion 790 of the graph, after the transition 780. Regarding the above-described example comparison between the measured haptic vibration 240 sensed at the example haptic effect sensor 170 and the output haptic vibration 165 of the example haptic actuator 160, FIG.
7 illustrates that, independent of the output haptic vibration 165 of the example haptic actuator 160, the relative change in the measured haptic vibration 240 sensed by the example haptic effect sensor 170 may itself be used to cause the feedback circuit 145 to output an instruction and/or control signal to, for example, the example indicator 130, the example adjustable input signal waveform circuit 208, or the example position actuator 400.

Flowcharts representative of example machine readable instructions for implementing adjustments to the example haptic device 100, such as illustrated by way of example in FIGS. 1-4, are shown in FIGS. 8-9. In the examples of FIGS. 8-9, the machine readable instructions constitute a program for execution by a processor, such as the processor 150 shown in the example processor platform 1000 discussed below in connection with FIG. 10. The program may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard disk drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 150 or the example processor platform 1000, but the entire program and/or portions thereof may alternatively be executed by a device other than the processor 150 and/or embodied in firmware or dedicated hardware. Additionally, although the example program is described with reference to the flowcharts illustrated in FIGS. 8-9, many other methods of implementing example instructions for adjusting the example haptic device 100 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.

As described above, the example processes of FIGS.
8-9 for implementing adjustments to the example haptic device 100, or other processes disclosed herein, may be implemented using coded instructions (e.g., computer and/or machine readable instructions), the coded instructions being stored on a tangible computer readable storage medium such as a hard drive, flash memory, read only memory (ROM), compact disc (CD), digital versatile disc (DVD), cache, random access memory (RAM), and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, "tangible computer readable storage medium" and "tangible machine readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS. 8-9 for implementing adjustments to the example haptic device 100, or other processes disclosed herein, may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium (such as a hard drive, flash memory, read only memory, optical disk, digital versatile disk, cache, random access memory, and/or any other storage device or storage disk) in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
As used herein, when the phrase "at least" is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term "comprising" is open-ended.

FIG. 8 shows a flowchart representative of example machine readable instructions that can be executed to produce a manual adjustment of a haptic device. In block 810, the haptic actuator 160 produces an output haptic vibration 165 in response to the target input signal waveform 210 (see FIG. 2) for a desired or target haptic vibration generated by the adjustable input signal waveform circuit 208. The target input signal waveform 210 can include, without limitation, any desired waveform, wavelet, or wave packet (e.g., sine wave, square wave, triangle wave, sawtooth wave, etc.). In some examples, block 810 includes storing, in the physical memory 246, waveforms or derivatives thereof that correspond to desired output haptic vibrations 165 (e.g., haptic effects) to be generated by the haptic actuator 160 in response to the target input signal waveform 210. For example, an eccentric rotating mass (ERM) actuator produces an output haptic vibration 165 in response to a target input signal waveform (e.g., an applied voltage).

In block 820, the haptic vibration 240 is measured by an example haptic effect sensor 170 located near an example haptic actuator 160, such as shown in FIGS. 2-3 and 4A-4B. To illustrate, an example haptic effect sensor 170 including an accelerometer is used to sense haptic vibration having an amplitude and frequency associated with the amplitude and frequency of the output haptic vibration 165. In some examples, block 820 includes storing, in the physical memory 246, a waveform or a derivative thereof that corresponds to the measured haptic vibration 240 sensed by the example haptic effect sensor 170.

In block 830, the measured haptic vibration waveform 241 corresponding to the measured haptic vibration 240 is compared to the target input signal waveform 210.
For example, as shown in the example of FIG. 2, the measured haptic vibration waveform 241 is output by the haptic effect sensor 170 to the feedback circuit 145, and the measured haptic vibration waveform 241 can then be compared at the feedback circuit 145 with the target input signal waveform 210 retrieved from the memory 246. In some examples, the comparison of the measured haptic vibration waveform 241 with the target input signal waveform 210 for the haptic actuator 160 includes a comparison between the amplitude and/or frequency of the target input signal waveform 210 and the amplitude and/or frequency of the measured haptic vibration waveform 241.

The comparison of the measured haptic vibration waveform 241 with the target input signal waveform 210 can include any conventional technique for comparing waveforms or signals, including but not limited to: a comparison of root mean square (RMS) levels, a comparison of the pulsed character of the vibrations (e.g., the difference between the peak level and the RMS level), a comparison between accelerations, a comparison between velocities, a comparison between displacements, or symmetry. In some examples, converting the measured haptic vibration 240 into the measured haptic vibration waveform 241 for comparison with the target input signal waveform 210 can include converting the measured haptic vibration 240 to the frequency domain using a transform, e.g., to identify a range of the sensed frequencies or to identify the dominant frequency within that range of sensed frequencies.
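A minimal sketch of two of the conventional comparisons mentioned above, the RMS level and the peak-to-RMS ratio (crest factor) that characterizes the pulsed character of a vibration, might look as follows, assuming both waveforms are available as NumPy arrays (an illustration, not the patented implementation):

```python
import numpy as np

def rms(waveform):
    """Root mean square level of a vibration waveform."""
    return float(np.sqrt(np.mean(np.square(waveform))))

def crest_factor(waveform):
    """Ratio of peak level to RMS level of a vibration waveform."""
    return float(np.max(np.abs(waveform)) / rms(waveform))

def rms_difference(target, measured):
    """Positive when the measured vibration is stronger than the target,
    negative when it is weaker."""
    return rms(measured) - rms(target)
```

The sign and magnitude of such a difference are the kind of values block 840 could test against a stored threshold.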
A comparison between the target input signal waveform 210 that causes the output haptic vibration 165 and the measured haptic vibration waveform 241 corresponding to the measured haptic vibration 240, or a comparison of a first portion of the measured haptic vibration waveform 241 with a second portion of the measured haptic vibration waveform 241, can use any conventional technique for vibration, waveform, signal, or functional analysis. In some examples, the transform may include a Fourier transform (e.g., a fast Fourier transform (FFT), an indirect Fourier transform, a Fourier sine transform, a fractional Fourier transform, etc.) or a wavelet transform (e.g., a continuous wavelet transform, a discrete wavelet transform, a complex wavelet transform, etc.).

After comparing the measured haptic vibration waveform 241 and the target input signal waveform 210 in block 830, block 840 determines whether there is a threshold difference between the measured haptic vibration waveform 241 and the target input signal waveform 210. If so, instructions are output to the user of the example haptic device 100 in block 850, thereby instructing the user to adjust the fit of the example haptic device 100. The value of the comparison can also inform the instruction to be output. For example, the direction of adjustment of the fit of the example haptic device (e.g., tightening, loosening, etc.) can be indicated by the sign of the difference (e.g., negative, positive). If there is no threshold difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 in block 840 (block 840 = "No"), then control passes back to block 810.

In some examples, the complexity of the instructions varies depending on the type of indicator 130 (e.g., LED, display device, speaker, etc.)
selected to provide feedback to the user, and may include: illumination of a light (e.g., continuous, pulsed, etc.), generation of an audible tone, alarm, or message from the speaker, and/or textual instructions displayed on the display device. For example, if the haptic device 100 is a watch as shown in FIG. 1 and the watch is loose, the indicator 130 (e.g., a display device) can display a prompt instructing the user to tighten the watch to increase the fidelity of the desired haptic effect. Thus, if the measured haptic vibration waveform 241 corresponding to the measured haptic vibration 240 indicates that the example haptic device 100 is worn too loosely, the user may be directed to tighten the example haptic device 100; conversely, if the measured haptic vibration waveform 241 corresponding to the measured haptic vibration 240 indicates that the example haptic device 100 is worn too tightly, the user may be directed to loosen the example haptic device 100. In another example, while the user is putting on the wearable haptic device 100, or while the wearable haptic device 100 is being worn, the feedback circuit 145 guides the user (e.g., via an output signal or instruction from an indicator 130 such as a display device or speaker) to adjust the tightness of one or more components of the wearable haptic device 100 so as to position the wearable haptic device to deliver a desired haptic effect from the one or more haptic actuators 160.

In some examples, the example feedback circuit 145 determines from the measured haptic vibration waveform 241 whether the measured haptic vibration 240 includes an undesired or annoying audible frequency or tone.
In such an example, in block 850, the example feedback circuit 145 can output an instruction to the user of the example haptic device 100 instructing the user to adjust the fit of the example haptic device 100 to eliminate such undesired frequencies or tones.

As noted above, in some examples, the threshold difference is empirically determined by the user and stored in the memory device 246. The user of the example haptic device 100 is enabled to calibrate the example feedback circuit 145 to display, for example via the example indicator 130, a threshold setting or a suggested correction to the position or fit of the example haptic device 100. In some examples, the user of the example haptic device 100 can be enabled to calibrate the example feedback circuit 145 to respond to any threshold difference by adjusting the target input signal waveform 210 for the haptic actuator 160 and/or controlling the example position actuator 400 to adjust the position of one or more portions of the example haptic device 100, so as to produce an output haptic vibration 165 that provides the desired haptic effect to the user. In some examples, the haptic effect sensor 170 includes an optical sensor integrated into the illustrated haptic feedback loop to facilitate adjustment of the fit of the example haptic device 100. In some examples, the example feedback circuit 145 is factory calibrated to the form factor and function of the example haptic device 100, with the threshold differences set to values, determined during development of the example haptic actuator 160, that provide the desired haptic effect to a sample population.

In some example use cases, users wear wearable devices having skin-facing sensors (e.g., biometric sensors, optical heart rate sensors, pressure sensors, temperature sensors, image sensors, etc.) too loosely or too tightly for the skin-facing sensor to function properly.
Such use cases may advantageously include an example haptic device 100 that includes a haptic feedback loop, via example feedback circuit 145, for providing an independent measurement of the fit or positioning of the wearable device at any given time. If the fit or position of the wearable device is incorrect (eg, the wearable device slides out of position during user movement, etc.), the user may be guided via a guide signal or instruction to adjust the tightness and/or position of the wearable device to improve the performance of the skin-facing sensors. Accordingly, the indicator 130 can be used to generate a guide signal based on the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 to guide the user in modifying the position of the wearable haptic device 100, such as by guiding the user to tighten or relax the wearable haptic device relative to a body part of the user (eg, wrist, arm, leg, head, torso, etc.). FIG. 9 shows a flow diagram representative of other example machine readable instructions that can be executed to automatically adjust a haptic device. In block 910, an output haptic vibration 165 is generated in response to a target input signal waveform 210 applied to the example haptic actuator 160. In some examples, block 910 includes storing, in physical memory 246, a waveform, or a derivative thereof, corresponding to the output haptic vibration 165, or a portion thereof, produced by haptic actuator 160 in response to the target input signal waveform 210. In block 920, haptic vibration 240 is measured by example haptic effect sensor 170 located near example haptic actuator 160, such as shown in FIGS. 2-3 and 4A-4B.
In some examples, block 920 includes storing, in physical memory 246, a waveform, or a derivative thereof, corresponding to the measured haptic vibration 240, or a portion thereof, sensed by example haptic effect sensor 170. In block 930, the measured haptic vibration waveform 241 is compared to the target input signal waveform 210. For example, the measured haptic vibration 240 sensed by the example haptic effect sensor 170 is resolved into the measured haptic vibration waveform 241, which is compared to the target input signal waveform 210 used by the example haptic actuator 160 to generate the output haptic vibration 165. In some examples, the comparison of the measured haptic vibration waveform 241 with the target input signal waveform 210 includes a comparison between the amplitude and/or frequency of the target input signal waveform 210 and the amplitude and/or frequency of the measured haptic vibration waveform 241. This comparison between the target input signal waveform 210 and the measured haptic vibration waveform 241 may use any conventional technique for comparing waveforms or signals, including but not limited to: comparisons of root mean square (RMS) levels, comparisons of peak levels (eg, the difference between the peak level and the RMS level), comparisons of accelerations, comparisons of velocities, comparisons of displacements, or comparisons of symmetry. In some examples, comparing the measured haptic vibration waveform 241 with the target input signal waveform 210 can include converting the measured haptic vibration 240 to the frequency domain using a transform to identify a range of sensed frequencies, or to identify a dominant frequency within that range.
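As a rough illustration of the scalar comparisons listed above (RMS levels, peak versus RMS levels, etc.), the following sketch computes simple differences between a target waveform and a measured waveform. The function names and the particular metrics chosen (including the peak-to-RMS crest factor used to stand in for the peak-level comparison) are hypothetical, not taken from the patent.

```python
import numpy as np

def rms(w):
    """Root-mean-square level of a sampled waveform."""
    return float(np.sqrt(np.mean(np.square(w))))

def crest_factor(w):
    """Ratio of peak level to RMS level, standing in for the peak/RMS
    comparison named in the text."""
    return float(np.max(np.abs(w)) / rms(w))

def compare_waveforms(target, measured):
    """Return simple scalar differences between the target input signal
    waveform and the measured haptic vibration waveform."""
    target = np.asarray(target, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return {
        "rms_diff": abs(rms(target) - rms(measured)),
        "crest_diff": abs(crest_factor(target) - crest_factor(measured)),
        "peak_diff": abs(float(np.max(np.abs(target)))
                         - float(np.max(np.abs(measured)))),
    }

t = np.linspace(0, 0.1, 2000)
target = np.sin(2 * np.pi * 175 * t)
measured = 0.6 * np.sin(2 * np.pi * 175 * t)   # attenuated copy of the target
d = compare_waveforms(target, measured)
print(round(d["rms_diff"], 2))
```

Note that the crest factor is scale-invariant, so a purely attenuated waveform shows an RMS difference but essentially no crest-factor difference; distinguishing the two metrics helps separate amplitude loss from waveform-shape distortion.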
A comparison between the measured haptic vibration waveform 241 and the target input signal waveform 210, or a comparison of a first portion of the measured haptic vibration waveform 241 with a second portion of the measured haptic vibration waveform 241, may use any conventional technique for vibration, waveform, signal, or functional analysis. In some examples, the transform may include a Fourier transform (eg, a fast Fourier transform (FFT), an indirect Fourier transform, a Fourier sine transform, a fractional Fourier transform, etc.) or a wavelet transform (eg, a continuous wavelet transform, a discrete wavelet transform, a complex wavelet transform, etc.). In some examples, block 930 includes using a processor to identify an audible frequency, and block 940 includes using example feedback circuit 145 (eg, using the example waveform comparator 306 and example feedback analyzer 308) to modify the target input signal waveform 210 for the example haptic actuator 160 so as to eliminate audible frequency components (eg, components audible to the user, etc.) determined from analysis of the measured haptic vibration waveform 241. After comparing the measured haptic vibration waveform 241 and the target input signal waveform 210 in block 930, block 940 determines whether there is a threshold difference between the measured haptic vibration waveform 241 and the target input signal waveform 210. If the output of block 940 is "YES", feedback circuit 145 outputs an instruction to adjust the target input signal waveform 210 for haptic actuator 160 and/or to adjust the position of haptic actuator 160 (eg, via the position actuator 400, etc.), effecting an adjustment of the output haptic vibration 165 that reduces the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210, such that the difference between the output haptic vibration 165 and the haptic vibration measured by the haptic effect sensor 170 is correspondingly reduced.
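Blocks 930 and 940 can be illustrated with a minimal sketch: a frequency-domain conversion (here an FFT) to identify the dominant sensed frequency, and a threshold comparison that yields an adjustment to the drive level when the difference is too large. The sample rate, the 10% threshold, the gain-based adjustment, and the saturation limits are all assumptions for illustration, not values from the patent.

```python
import numpy as np

FS = 8000  # sample rate in Hz (assumed for this sketch)

def dominant_frequency(waveform, fs=FS):
    """Block 930, frequency-domain path: convert the measured vibration
    to the frequency domain and return the dominant sensed frequency."""
    w = np.asarray(waveform, dtype=float)
    spectrum = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

def feedback_step(target, measured, threshold=0.1):
    """Blocks 930-940: compare waveform levels; if the RMS difference
    exceeds the threshold, return a scale factor to apply to the target
    input signal waveform so the next output vibration moves toward the
    target. The clip models the saturation (max/min excitation) condition
    mentioned later in the text."""
    t_rms = np.sqrt(np.mean(np.square(target)))
    m_rms = np.sqrt(np.mean(np.square(measured)))
    if abs(t_rms - m_rms) <= threshold * t_rms:
        return 1.0                              # "NO" branch: no adjustment
    gain = t_rms / m_rms                        # "YES" branch: adjust drive level
    return float(np.clip(gain, 0.25, 4.0))      # saturation limits (assumed)

t = np.arange(2000) / FS                        # 0.25 s of samples
target = np.sin(2 * np.pi * 200 * t)            # 200 Hz target vibration
measured = 0.5 * target                         # half the intended level reached the sensor
print(round(dominant_frequency(measured)))      # -> 200
print(feedback_step(target, measured))
```

With the measured level at half the target, the sketch asks for roughly double the drive; the clip keeps the correction inside the assumed safe excitation range.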
For example, the example feedback circuit 145 is used to modify the target input signal waveform 210 for the haptic actuator 160 to reduce the difference in amplitude and/or frequency between the output haptic vibration 165 and the measured haptic vibration 240, or, in other words, to reduce the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 and, accordingly, the difference between the output haptic vibration 165 and the haptic vibration measured by the haptic effect sensor 170. Example changes for adjusting the output haptic vibration 165 of the example haptic actuator 160 include, but are not limited to, one or more of the following: a change in amplitude or intensity of the target input signal waveform 210, a change in frequency of the target input signal waveform 210, a change in the voltage applied to the example haptic actuator 160, and/or a change of the target input signal waveform 210 from a first waveform (eg, a first vibration mode, etc.) to a second waveform (eg, a second vibration mode, etc.). In another example, a change in the output haptic vibration 165 of the example haptic actuator 160 can include stopping the target input signal waveform 210 in favor of another output, such as an output to the example indicator 130 warning the user that there is at least a threshold difference. For example, where the example feedback circuit 145 determines in block 940 that the user is wearing the example haptic device 100 too loosely for the example haptic actuator 160 to convey the expected haptic effect via the output haptic vibration 165, the example feedback circuit 145 can use one or more example indicators 130 (eg, sound, light, etc.)
to attract the user's attention. In some examples, the change to the output haptic vibration 165 of the example haptic actuator 160 includes controlling the example position actuator 400 to adjust the position of one or more portions of the example haptic actuator 160, thereby changing the fit of the example haptic device 100 to the user and affecting the interaction between the output haptic vibration 165 and the user. For example, in block 940, in response to a difference between the measured haptic vibration waveform 241 and the target input signal waveform 210, the example feedback circuit 145 modifies, via the example position actuator 400, the fit of the wearable device of the example haptic device 100, which may include clothing, footwear, headwear, glasses, wristwear, a vest, a belt, therapeutic equipment, an orthopedic device, a medical device, a watch, or a soft exoskeleton. With respect to the adjustment of the position of the output haptic vibration 165 in block 950, an example haptic device 100, such as the example of FIG. 1, includes a plurality of example haptic actuators 160 spaced along example band 120. If the fit of the example haptic device 100 is such that the output haptic vibration 165 of a first example haptic actuator 160 of the plurality of example haptic actuators produces a difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 that exceeds the threshold difference, the example feedback circuit 145 may then, for example, disable the first example haptic actuator 160 and cause a second example haptic actuator 160 of the plurality of example haptic actuators to produce the output haptic vibration 165.
Since the second example haptic actuator 160 is at a different position than the first example haptic actuator 160, the degree of contact between the second example haptic actuator 160 and the user may be sufficient to convey the desired haptic effect via the output haptic vibration 165, with the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 falling below the threshold difference. Continuing with the above example, if the fit of the example haptic device 100 causes the output haptic vibration 165 of the second example haptic actuator 160 to also produce a difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 that exceeds the threshold difference, the example feedback circuit 145 may then disable the second example haptic actuator 160 and cause a third example haptic actuator 160 of the plurality of example haptic actuators to produce the output haptic vibration 165. Since the third example haptic actuator 160 is at a different position than the second example haptic actuator 160, the degree of contact between the third example haptic actuator 160 and the user may be sufficient to convey the desired haptic effect via the output haptic vibration 165, with the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 falling below the threshold difference. If the difference between the measured haptic vibration waveform 241 and the target input signal waveform 210 is less than the threshold difference, the output of block 940 is "NO" and control passes back to block 910. FIG. 10 is a block diagram of an example processor platform that can execute the example instructions of FIGS. 8-9 to implement the example feedback circuit 145 of the example haptic device of FIGS. 1-4B.
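The actuator-fallback behavior described above (disable an actuator whose measured vibration deviates from the target by more than the threshold difference, then try the next actuator along the band) can be sketched as a simple selection loop. The function names, the RMS-based deviation measure, and the 10% threshold are hypothetical illustrations.

```python
import numpy as np

def rms(w):
    """Root-mean-square level of a sampled waveform."""
    return float(np.sqrt(np.mean(np.square(w))))

def select_actuator(target, actuator_outputs, threshold=0.1):
    """Try each haptic actuator in turn, as with the plurality of
    actuators spaced along the band of FIG. 1: keep the first whose
    measured vibration is within the threshold difference of the target,
    implicitly disabling those that are not.

    `actuator_outputs` maps actuator id -> measured vibration waveform;
    returns the selected id, or None if no actuator is within threshold.
    """
    t_rms = rms(target)
    for actuator_id, measured in actuator_outputs.items():
        if abs(t_rms - rms(measured)) <= threshold * t_rms:
            return actuator_id
    return None

t = np.linspace(0, 0.02, 400)
target = np.sin(2 * np.pi * 175 * t)
outputs = {
    1: 0.3 * target,   # first actuator: poor contact, large difference
    2: 0.5 * target,   # second actuator: still outside threshold
    3: 0.97 * target,  # third actuator: within threshold -> selected
}
print(select_actuator(target, outputs))  # -> 3
```

Returning `None` corresponds to the case where no actuator position conveys the desired effect, at which point the feedback circuit would fall back to the indicator-based prompts described earlier.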
In some examples, a machine readable medium includes a plurality of instructions that, in response to execution on a computing device, cause the computing device to: store, in memory 246, a first waveform (eg, target input signal waveform 210) corresponding to the output haptic vibration 165 generated by haptic actuator 160; store, in memory 246, a second waveform (eg, measured haptic vibration waveform 241) corresponding to the measured haptic vibration 240 sensed by haptic effect sensor 170; compare the first waveform with the second waveform to determine one or more differences between the first waveform and the second waveform; and modify the target input signal waveform for the haptic actuator 160 in response to the one or more differences between the first waveform and the second waveform. In some examples, the plurality of instructions, in response to execution on the computing device, cause the computing device to indicate one or more differences between the first waveform and the second waveform via the indicator 130. In some examples, the determination of a deviation from the desired haptic effect (ie, the computation of the difference between the output vibration 165 from the haptic actuator 160 and the measured haptic vibration 240) may be performed continuously while the haptic actuator 160 is active, or only in particular situations, such as when triggered by a detected change in the physical state of the user (eg, motion, etc.) or by a change in the manner in which the device is worn (eg, a mechanical sensor, capacitive sensor, or optical sensor detecting the donning and/or removal of the haptic device). In some examples, feedback circuit 145 imposes a saturation (maximum/minimum excitation) condition on the target input signal waveform 210 to be applied to haptic actuator 160 to avoid a haptic output that may exceed a threshold, which may be detrimental to the device.
Such an output may likewise be harmful, irritating, or annoying to the user of the device (eg, too strong a stimulus) or may not be sensed at all (eg, too weak a stimulus). In various examples, the processor platform 1000 can be, for example, a server, a desktop computer, a laptop computer, a mobile device (eg, a cellular telephone, a smart phone, or a tablet computer such as an iPad™), or any other type of computing device. The processor platform 1000 of the illustrated example includes a processor 150. The processor 150 of the illustrated example is hardware. For example, processor 150 can be implemented by one or more integrated circuits, logic circuits, microprocessors, or controllers from any desired family or manufacturer. The processor 150 of the illustrated example includes a local memory 1013 (eg, a cache). Processor 150 executes instructions to implement the example memory 246 of FIG. 3, example target input signal waveform analyzer 302, example measured haptic vibration waveform analyzer 304, example waveform comparator 306, and example feedback analyzer 308. Processor 150 of the illustrated example communicates with main memory, including volatile memory 1014 and non-volatile memory 1016, via bus 1018. Volatile memory 1014 may be implemented by synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory device. Non-volatile memory 1016 may be implemented by flash memory and/or any other desired type of memory device. Access to main memory 1014, 1016 is controlled by a memory controller. The processor platform 1000 of the illustrated example also includes an interface circuit 1020 for connecting an external system to the example haptic device 100.
Interface circuit 1020 can be implemented by any type of wired or wireless interface standard, such as, but not limited to, an Ethernet interface, a universal serial bus (USB), and/or a PCI Express interface. In the illustrated example, one or more input devices 1022 are connected to interface circuit 1020 via bus 1018. The one or more input devices 1022 allow a user to input data and commands into the processor 150. The one or more input devices 1022 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touch screen or touch pad, a track pad, a trackball, an isopoint device, and/or a voice recognition system. In the illustrated example of FIG. 1, example input device 1022 can include a touch screen (eg, a touch screen display device) provided in combination with indicator 130. One or more output devices 1024 are also coupled to interface circuit 1020 of the illustrated example. The output devices 1024 can be implemented, for example, by a display device (eg, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a liquid crystal display, or a cathode ray tube (CRT) display), a touch screen, a tactile output device, a printer, a speaker, etc. In some examples, interface circuit 1020 includes a graphics driver card, a graphics driver chip, or a graphics driver processor. The interface circuit 1020 of the illustrated example also includes communication devices such as transmitters, receivers, transceivers, modems, and/or network interface cards to facilitate communication via a network 1026 (eg, an Ethernet connection, a digital subscriber line (DSL), a telephone line, a coaxial cable, a cellular telephone system, etc.)
and the exchange of data with external machines (eg, any kind of computing device). In some examples, example feedback circuit 145, example memory 246, example target input signal waveform analyzer 302, example measured haptic vibration waveform analyzer 304, example waveform comparator 306, and/or example feedback analyzer 308 include hardware processing logic, such as, but not limited to, field programmable gate arrays (FPGAs) and/or application specific integrated circuits (ASICs), wherein the corresponding functions described herein are hardwired in an FPGA, ASIC, or the like. For example, in some examples, an apparatus includes a haptic actuator for generating an output haptic vibration in response to a target input signal waveform, and a haptic effect sensor positioned adjacent the haptic actuator for measuring a haptic vibration and outputting a measured haptic vibration waveform. The apparatus also includes hardware processing logic for: storing the target input signal waveform, storing the measured haptic vibration waveform corresponding to the measured haptic vibration sensed by the haptic effect sensor, comparing the target input signal waveform with the measured haptic vibration waveform to determine one or more differences between the target input signal waveform and the measured haptic vibration waveform, and modifying the target input signal waveform for the haptic actuator in response to the one or more differences between the target input signal waveform and the measured haptic vibration waveform.
Example 1 is a wearable device including: a haptic actuator for generating an output haptic vibration in response to a target input signal waveform, a haptic effect sensor located near the haptic actuator for measuring a haptic vibration and outputting a measured haptic vibration waveform, and a feedback circuit for modifying the target input signal waveform to reduce a difference
between the output haptic vibration and the measured haptic vibration waveform.
Example 2 includes the wearable device as defined in example 1, wherein the haptic effect sensor is disposed adjacent to the haptic actuator.
Example 3 includes the wearable device as defined in any one of examples 1-2, wherein the haptic effect sensor is for sensing the amplitude and frequency of the haptic vibration.
Example 4 includes the wearable device as defined in any one of examples 1-3, wherein the feedback circuit comprises a processor for converting the measured haptic vibration waveform to the frequency domain.
Example 5 includes the wearable device as defined in any one of examples 1-4, wherein the feedback circuit is operative to convert the measured haptic vibration waveform to the frequency domain using a transform to identify a range of sensed frequencies.
Example 6 includes the wearable device as defined in any one of examples 1-5, wherein the feedback circuit is operative to identify a dominant frequency in the range of sensed frequencies.
Example 7 includes the wearable device as defined in any one of examples 1-6, wherein the feedback circuit is operative to identify an audible frequency and to modify the target input signal waveform to eliminate the audible frequency from the output haptic vibration.
Example 8 includes the wearable device as defined in any one of examples 1-7, wherein the wearable device comprises at least one of: clothing, footwear, headwear, glasses, wristwear, a vest, a belt, therapeutic equipment, an orthopedic device, a medical device, a watch, or a soft exoskeleton.
Example 9 includes the wearable device as defined in any one of examples 1-8, wherein the feedback circuit comprises a processor for determining a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 10 includes the wearable device as defined in any one of examples 1-9, and further comprising an indicator for
generating a guide signal, based on a difference between the target input signal waveform and the measured haptic vibration waveform, to guide the user to modify the position of the wearable device.
Example 11 includes the wearable device as defined in any one of examples 1-10, wherein the guide signal is for guiding the user to perform at least one of: tightening the wearable device relative to a body part of the user, or relaxing the wearable device relative to the body part.
Example 12 includes the wearable device as defined in any one of examples 1-11, and further comprising a second sensor and an indicator for generating a guide signal, based on a difference between the target input signal waveform and the measured haptic vibration waveform, to guide the user to modify the position of the second sensor.
Example 13 includes the wearable device as defined in any one of examples 1-12, wherein the second sensor is at least one of a biometric sensor, a biosensor, a temperature sensor, a pressure sensor, a heart rate sensor, or a cardiac potential waveform sensor.
Example 14 includes the wearable device as defined in any one of examples 1-13, wherein the haptic effect sensor comprises at least one of a load sensor, an infrared sensor, an optical sensor, a capacitive sensor, an accelerometer, a piezoelectric sensor, a strain gauge, or a transducer to sense at least one of displacement, force, or acceleration.
Example 15 includes the wearable device as defined in any one of examples 1-14, wherein the wearable device is at least one of a virtual reality device or an augmented reality device.
Example 16 includes the wearable device as defined in any one of examples 1-15, wherein the haptic actuator is a first haptic actuator disposed on a first portion of the device, the haptic effect sensor comprises a first sensor disposed in the vicinity of the first haptic actuator, and the wearable device further comprises a second haptic actuator disposed on the
second portion of the device and a second sensor disposed adjacent the second haptic actuator, the feedback circuit to modify an input to the second haptic actuator based on a difference between a second input signal waveform and a second measured haptic vibration waveform sensed by the second sensor.
Example 17 includes the wearable device as defined in any one of examples 1-16, wherein the haptic actuator comprises a piezoelectric device or an electroactive polymer.
Example 18 includes the wearable device as defined in any one of examples 1-17, and further comprising a position actuator for moving at least a portion of the wearable device from a first position to a second position in response to a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 19 includes the wearable device as defined in any one of examples 1-18, wherein the position actuator is for moving at least a portion of the wearable device to modify the fit of the wearable device.
Example 20 is a method comprising: storing a target input signal waveform for a haptic actuator; storing a measured haptic vibration waveform corresponding to a vibration sensed by a haptic effect sensor near the haptic actuator; and modifying, using a feedback circuit, the target input signal waveform to reduce a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 21 includes the method as defined in example 20, and further comprising: modifying, using a feedback circuit, the target input signal waveform to reduce a difference in at least one of amplitude or frequency between the target input signal waveform and the measured haptic vibration waveform.
Example 22 includes the method as defined in example 20 or example 21, and further comprising: modifying the target input signal waveform to reduce a difference in amplitude and a difference in frequency between the target input signal waveform and the measured
haptic vibration waveform using a feedback circuit.
Example 23 includes the method as defined in any one of examples 21-22, and further comprising: converting the measured haptic vibration waveform to the frequency domain using a processor.
Example 24 includes the method as defined in any one of examples 21-23, and further comprising: converting, via the processor, the measured haptic vibration waveform to the frequency domain to identify a range of sensed frequencies.
Example 25 includes the method as defined in any one of examples 21-24, and further comprising: using a processor to identify a dominant frequency in the range of sensed frequencies.
Example 26 includes the method as defined in any one of examples 21-25, and further comprising: using a processor to identify an audible frequency, and using a feedback circuit to modify the target input signal waveform to eliminate the audible frequency from the haptic vibration output from the haptic actuator.
Example 27 includes the method as defined in any one of examples 21-26, and further comprising: using a processor of the feedback circuit to determine a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 28 includes the method as defined in any one of examples 21-27, wherein the haptic actuator, the haptic effect sensor, and the feedback circuit are included in a wearable device, the wearable device comprising at least one of the following: clothing, footwear, headwear, glasses, wristwear, a vest, a belt, therapeutic equipment, an orthopedic device, a medical device, a watch, or a soft exoskeleton.
Example 29 includes the method as defined in any one of examples 21-28, and further comprising: displaying, via an indicator, a guide signal based on a difference between the target input signal waveform and the measured haptic vibration waveform to guide the user to modify the position of the wearable device.
Example 30 includes the method as defined in any one of examples
21-29, and further comprising: generating an audible guide signal based on a difference between the target input signal waveform and the measured haptic vibration waveform to guide the user to modify the position of the wearable device.
Example 31 includes the method as defined in any one of examples 21-30, wherein the haptic device comprises a biometric sensor, a biosensor, a temperature sensor, a pressure sensor, a heart rate sensor, or a cardiac potential waveform sensor.
Example 32 includes the method as defined in any one of examples 21-31, wherein the haptic effect sensor comprises at least one of a load sensor, an infrared sensor, an optical sensor, a capacitive sensor, an accelerometer, a piezoelectric sensor, a strain gauge, or a transducer to sense at least one of displacement, force, or acceleration.
Example 33 includes the method as defined in any one of examples 21-32, wherein the haptic actuator comprises a piezoelectric device or an electroactive polymer.
Example 34 includes the method as defined in any one of examples 21-33, and further comprising: driving the haptic actuator using a haptic driver.
Example 35 includes the method as defined in any one of examples 21-34, and further comprising: modifying, using a feedback circuit, a target input signal waveform for a second haptic actuator in response to a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 36 includes the method as defined in any one of examples 21-35, and further comprising: modifying the fit of the wearable device via a position actuator in response to a difference between the target input signal waveform and the measured haptic vibration waveform, wherein the wearable device comprises at least one of the following: clothing, footwear, headwear, glasses, wristwear, a vest, a belt, therapeutic equipment, an orthopedic device, a medical device, a watch, or a soft exoskeleton.
Example 37 is an apparatus comprising: an actuation device for generating
an output haptic vibration in response to a target input signal waveform, a sensing device positioned adjacent to the actuation device for sensing a measured haptic vibration and outputting a measured haptic vibration waveform, and feedback circuit means for modifying the target input signal waveform to reduce a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 38 includes the apparatus as defined in example 37, wherein the sensing device is disposed adjacent the actuation device.
Example 39 includes the apparatus as defined in example 37 or example 38, wherein the apparatus comprises at least one of: clothing, footwear, headwear, glasses, wristwear, a vest, a belt, therapeutic equipment, an orthopedic device, a medical device, a watch, or a soft exoskeleton.
Example 40 includes the apparatus as defined in any one of examples 37-39, and further comprising: an actuating device for modifying the position of the apparatus from a first position to a second position in response to the feedback circuit means.
Example 41 includes the apparatus as defined in any one of examples 37-40, wherein the sensing device is for sensing at least one of an amplitude of the measured haptic vibration waveform or a frequency of the measured haptic vibration waveform.
Example 42 includes the apparatus as defined in any one of examples 37-41, wherein the feedback circuit means comprises processing means for converting the measured haptic vibration waveform to the frequency domain.
Example 43 includes the apparatus as defined in any one of examples 37-42, wherein the processing means is for converting the measured haptic vibration waveform to the frequency domain using a transform to identify a range of sensed frequencies.
Example 44 includes the apparatus as defined in any one of examples 37-43, wherein the processing means is operative to identify a dominant frequency in the range of sensed frequencies.
Example 45 includes the apparatus as
defined in any one of examples 37-44, wherein the processing means is operative to identify an audible frequency, and the feedback circuit means is operative to modify the target input signal waveform to eliminate the audible frequency from the output haptic vibration.
Example 46 includes the apparatus as defined in any one of examples 37-45, wherein the feedback circuit means is further for outputting, to an indicator device, at least one of a visual indication, a tactile indication, or an audible indication of a difference between the target input signal waveform and the measured haptic vibration waveform to guide the user to adjust the apparatus.
Example 47 includes the apparatus as defined in any one of examples 37-46, and further comprising a second actuating device disposed on a second portion of the apparatus and a second sensing device disposed adjacent the second actuating device, wherein the feedback circuit means is further configured to modify an input to the second actuating device in response to a difference between the target input signal waveform and the measured haptic vibration waveform.
Example 48 includes the apparatus as defined in any one of examples 37-47, the apparatus comprising a robotic end effector, with the actuation device and the sensing device disposed in at least a portion of the robotic end effector.
Example 49 is at least one machine readable medium comprising a plurality of instructions that, in response to execution on a computing device, cause the computing device to: store a target input signal waveform corresponding to an output haptic vibration generated by a haptic actuator; store a measured haptic vibration waveform corresponding to a measured haptic vibration sensed by a haptic effect sensor; compare the target input signal waveform with the measured haptic vibration waveform to determine one or more differences between the target input signal waveform and the measured haptic vibration waveform; and modify the
target input signal waveform for the haptic actuator in response to the one or more differences between the target input signal waveform and the measured haptic vibration waveform. Example 50 includes the at least one machine-readable medium as defined in Example 49, the plurality of instructions, in response to execution on a computing device, causing the computing device to indicate, via an indicator, the one or more differences between the target input signal waveform and the measured haptic vibration waveform. Example 51 includes an apparatus comprising: a haptic actuator for generating an output haptic vibration in response to a target input signal waveform; a haptic effect sensor positioned adjacent to the haptic actuator, the haptic effect sensor for measuring the haptic vibration corresponding to the output haptic vibration and outputting a measured haptic vibration waveform; and hardware processing logic for storing the target input signal waveform, storing the measured haptic vibration waveform corresponding to the measured haptic vibration sensed by the haptic effect sensor, comparing the target input signal waveform to the measured haptic vibration waveform to determine one or more differences between the target input signal waveform and the measured haptic vibration waveform, and, in response to the one or more differences between the target input signal waveform and the measured haptic vibration waveform, modifying the target input signal waveform for the haptic actuator. Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto.
On the contrary, this patent covers all methods, apparatus, and articles of manufacture falling within the scope of the claims. From at least the foregoing, it will be appreciated that the example methods, apparatus, and articles of manufacture disclosed herein provide example feedback circuits for haptic devices, such as wearable haptic devices. The example feedback circuits adjust one or more characteristics of the haptic actuator (e.g., amplitude or intensity, frequency, position, etc.) and/or one or more characteristics of the example haptic device (e.g., the fit of the haptic device on the user, the location of one or more components of the haptic device, etc.) in response to one or more sensor measurements associated with the desired haptic effect, to provide the desired haptic effect. This contrasts with traditional LRA driver chips, which drive an LRA at its resonant frequency to maximize the amplitude of vibration, and traditional ERM driver chips, which measure the motor's back EMF for automatic overdrive and braking. Such conventional techniques cannot measure the haptic effect itself, and accordingly the sensed haptic effect cannot be used as a control input.
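As a rough illustration of the closed-loop correction described above, the following Python sketch nudges a stored drive waveform toward the target using the measured response. The function name, the list-of-samples representation, and the simple proportional update are assumptions for illustration only, not the patent's implementation.

```python
def adjust_target_waveform(target, measured, gain=0.5):
    """Shift the drive waveform toward the desired haptic output.

    target, measured: equal-length sequences of waveform samples.
    gain: how aggressively each correction step is applied.
    (Illustrative sketch; a real feedback circuit might also work in
    the frequency domain, as the examples above describe.)
    """
    # Per-sample error between desired and sensed vibration, applied
    # proportionally to the stored target waveform.
    return [t + gain * (t - m) for t, m in zip(target, measured)]

adjusted = adjust_target_waveform([0.0, 1.0, 0.5, -1.0],
                                  [0.0, 0.8, 0.4, -0.7])
```

In this toy run the measured vibration undershoots the target, so the adjusted drive waveform is boosted at the undershooting samples.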
Embodiments of a system and method for generating an image configured to program a parallel machine from source code are disclosed. One such parallel machine includes a plurality of state machine elements (SMEs) grouped into pairs, such that SMEs in a pair have a common output. One such method includes converting source code into an automaton comprising a plurality of interconnected states, and converting the automaton into a netlist comprising instances corresponding to states in the automaton, wherein converting includes pairing states corresponding to pairs of SMEs based on the fact that SMEs in a pair have a common output. The netlist can be converted into the image and published. |
CLAIMS What is claimed is: 1. A computer-implemented method for generating an image configured to program a parallel machine from source code, the method comprising: converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image. 2. The method of claim 1, wherein the instances include a state machine element (SME) instance corresponding to an SME hardware element and an SME group instance corresponding to a hardware element comprising a group of SMEs, and wherein grouping includes grouping states into an SME group instance. 3. The method of claim 2, wherein the physical design includes a physical design of the hardware element comprising a group of SMEs. 4. The method of claim 3, wherein the physical design includes one of an input or output limitation on the SMEs in the hardware element comprising a group of SMEs. 5. The method of claim 4, wherein the physical design includes a limitation that the SMEs in the hardware element comprising a group of SMEs share an output. 6. The method of claim 2, wherein an SME group instance includes a group of two (GOT) instance containing two SME instances, and wherein the physical design includes that the SMEs in each GOT are coupled to a common output. 7. The method of claim 6, wherein converting the automaton into a netlist comprises: determining which of the states can be grouped together in a GOT instance; and pairing the states based on the determination. 8.
The method of claim 7, wherein a first and a second state can be paired together in a GOT instance when neither the first nor the second state is a final state of the automaton, and one of the first and the second state does not drive any states other than the first or the second states. 9. The method of claim 7, wherein a first and a second state can be paired together in a GOT instance when neither the first nor the second state is a final state of the automaton, and both the first and the second state drive the same external states. 10. The method of claim 7, wherein a first and a second state can be paired together in a GOT instance when one of the first and the second state is a final state of the automaton, and the other of the first and the second states does not drive any external states. 11. The method of claim 7, wherein a first and a second state can be paired together in a GOT instance when both the first and second state are final states of the automaton and both the first and the second state drive the same external states. 12. The method of claim 7, wherein determining which of the states can be grouped together in a GOT instance comprises determining which of the states can be grouped together in a GOT instance using graph theory. 13. The method of claim 12, wherein determining which of the states can be grouped together in a GOT instance using graph theory comprises determining which of the states can be grouped together in a GOT instance using graph theory to identify a maximum matching. 14. The method of claim 1, further comprising: publishing the image. 15. The method of claim 1, wherein the instances comprise general purpose instances and special purpose instances, wherein the general purpose instances correspond to general purpose states of the automaton and the special purpose instances correspond to special purpose states of the automaton. 16.
The method of claim 15, wherein the hardware elements corresponding to the general purpose instances include a state machine element (SME) and a group of two (GOT), and wherein the hardware elements corresponding to the special purpose instances include counters and logic elements. 17. A computer-readable medium including instructions, which when executed by the computer, cause the computer to perform operations comprising: converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image. 18. The computer-readable medium of claim 17, wherein the automaton is a homogeneous automaton. 19. The computer-readable medium of claim 17, wherein converting the automaton into a netlist comprises mapping each of the states of the automaton to an instance corresponding to the hardware elements and determining the connectivity between the instances. 20. The computer-readable medium of claim 17, wherein the netlist further comprises a plurality of connections between the instances representing conductors between the hardware elements. 21. The computer-readable medium of claim 17, wherein converting the automaton into a netlist comprises converting the automaton into a netlist comprising instances corresponding to states of the automaton except for a starting state. 22. The computer-readable medium of claim 17, wherein the instructions cause the computer to perform operations comprising: determining the location in the parallel machine of the hardware elements corresponding to the instances of the netlist. 23.
The computer-readable medium of claim 22, wherein grouping states together includes grouping states together based on a physical design of a hardware element comprising a group of general purpose elements. 24. The computer-readable medium of claim 22, wherein the instructions cause the computer to perform operations comprising: determining which conductors of the parallel machine will be used to connect the hardware elements; and determining settings for programmable switches of the parallel machine, wherein the programmable switches are configured to selectively couple together the hardware elements. 25. A computer comprising: a memory having software stored thereon; and a processor communicatively coupled to the memory, wherein the software, when executed by the processor, causes the processor to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein the instances include a plurality of first instances and a group instance containing two or more first instances, wherein convert the automaton into a netlist includes group states together in a group instance based on a number of unused first instances; and convert the netlist into the image. 26. The computer of claim 25, wherein the group instance includes a group of two (GOT) instance and wherein group states includes pair states as a function of which states the paired states drive. 27. 
The computer of claim 26, wherein group states in a group instance based on a number of unused first instances includes: determine whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and second state does not drive any states other than the first or second states; neither the first state nor the second state is a final state in the automaton, and both the first state and the second state drive the same external states; either the first state or the second state is a final state, and the first state or second state that is not a final state does not drive any states except the first state or second state; and both the first state and the second state are final states and both the first state and the second state drive the same external states. 28. The computer of claim 25, wherein convert the automaton into a netlist includes: model the states as a graph wherein vertices of the graph correspond to states and edges of the graph correspond to possible pairings of the states; determine matching vertices for the graph; and pair states corresponding to the matching vertices. 29. The computer of claim 28, wherein convert the automaton into a netlist includes: determine a maximum matching for the graph. 30. The computer of claim 29, wherein convert the automaton into a netlist includes: pair each set of states corresponding to matched vertices; and map each state that corresponds to an unmatched vertex to a GOT instance wherein one SME instance in the GOT instance is to be unused. 31.
A system comprising: a computer configured to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein the instances include a plurality of first instances and a group instance containing two or more first instances, wherein convert the automaton into a netlist includes group states together in a group instance based on a number of unused first instances; and convert the netlist into the image; and a device configured to load the image onto a parallel machine. 32. The system of claim 31, wherein group states together includes: pair states as a function of which states the paired states drive. 33. The system of claim 31, wherein group states together in a group instance based on a number of unused first instances includes: determine whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and second state does not drive any states other than the first or second states; neither the first state nor the second state is a final state in the automaton, and both the first state and the second state drive the same external states; either the first state or the second state is a final state, and the first state or second state that is not a final state does not drive any states except the first state or second state; and both the first state and the second state are final states and both the first state and the second state drive the same external states. 34.
The system of claim 31, wherein group states together in a group instance based on a number of unused first instances includes: model the states as a graph wherein vertices of the graph correspond to states and edges of the graph correspond to possible pairings of the states; determine matching vertices for the graph; and pair states corresponding to the matching vertices. 35. The system of claim 34, wherein group states together in a group instance based on a number of unused first instances includes: determine a maximum matching for the graph. 36. The system of claim 35, wherein group states together in a group instance based on a number of unused first instances includes: pair each set of states corresponding to matched vertices; and map each state that corresponds to an unmatched vertex to a GOT instance wherein one SME instance in the GOT instance is to be unused. 37. The system of claim 31, wherein the device is configured to implement each pair of states as a group of two (GOT) hardware element in the parallel machine. 38. A parallel machine programmed by an image produced by the process of claim 1.
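The four pairing conditions recited in the claims above, together with the matching step, can be sketched in Python. This is a minimal illustration with assumed names and data structures; it uses a greedy maximal matching for brevity, whereas the claims contemplate a maximum matching (e.g., via a blossom-style algorithm) to minimise unused SME instances.

```python
def can_pair(a, b, final, drives):
    """Return True if states a and b may share a GOT (common output).

    final: set of final states of the automaton.
    drives: maps each state to the set of states it drives.
    "External" states are states the pair drives outside {a, b}.
    Names and representation are illustrative, not from the patent.
    """
    ext_a = drives[a] - {a, b}
    ext_b = drives[b] - {a, b}
    a_fin, b_fin = a in final, b in final
    if not a_fin and not b_fin:
        # one drives nothing external, or both drive the same externals
        return not ext_a or not ext_b or ext_a == ext_b
    if a_fin != b_fin:
        # the non-final state must drive nothing outside the pair
        return not (ext_b if a_fin else ext_a)
    # both final: must drive the same external states
    return ext_a == ext_b

def greedy_pairing(states, final, drives):
    """Greedily match compatible states into GOT pairs.

    This yields a maximal matching only; a real compiler would compute
    a maximum matching so that fewer GOTs hold a single, unused SME.
    """
    paired, used = [], set()
    for a in states:
        if a in used:
            continue
        for b in states:
            if b != a and b not in used and can_pair(a, b, final, drives):
                paired.append((a, b))
                used |= {a, b}
                break
    singles = [s for s in states if s not in used]  # one SME unused each
    return paired, singles
```

For example, with states a → b → c and c final, a and b are pairable (a drives nothing outside the pair), while c maps to a GOT with one unused SME instance.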
STATE GROUPING FOR ELEMENT UTILIZATION CLAIM OF PRIORITY [0001] This patent application claims the benefit of priority to U.S. Provisional Patent Application Serial Number 61/436,075, titled "STATE GROUPING FOR ELEMENT UTILIZATION," filed on January 25, 2011, which is hereby incorporated by reference herein in its entirety. BACKGROUND [0002] A compiler for a parallel machine converts source code into machine code (e.g., an image) for configuring (e.g., programming) the parallel machine. The machine code can implement a finite state machine on the parallel machine. One stage of the process of converting the source code into machine code includes forming a netlist. A netlist describes the connectivity between instances of the hardware elements of the parallel machine. The netlist can describe connections between the hardware elements such that the hardware elements implement the functionality of the source code. BRIEF DESCRIPTION OF THE DRAWINGS [0003] FIG. 1 illustrates an example of a parallel machine, according to various embodiments of the invention. [0004] FIG. 2 illustrates an example of the parallel machine of FIG. 1 implemented as a finite state machine engine, according to various embodiments of the invention. [0005] FIG. 3 illustrates an example of a block of the finite state machine engine of FIG. 2, according to various embodiments of the invention. [0006] FIG. 4 illustrates an example of a row of the block of FIG. 3, according to various embodiments of the invention. [0007] FIG. 5 illustrates an example of a group of two of the row of FIG. 4, according to various embodiments of the invention. [0008] FIG. 6 illustrates an example of a method for a compiler to convert source code into an image configured to program the parallel machine of FIG. 1 , according to various embodiments of the invention. [0009] FIGs. 7A and 7B illustrate example automatons according to various embodiments of the invention. [0010] FIGs. 
8A and 8B illustrate example netlists according to various embodiments of the invention. [0011] FIG. 9 illustrates an example computer for executing the compiler of FIG. 6 according to various embodiments of the invention. DETAILED DESCRIPTION [0012] The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims. [0013] This document describes, among other things, a compiler that generates a netlist based on a physical design of the parallel machine. In an example, the physical design of the parallel machine can include connectivity limitations between state machine elements of the parallel machine. For example, the state machine elements in the parallel machine can be grouped into pairs that share a common output. Accordingly, the compiler can generate a netlist based on a physical design where pairs of SMEs share a common output. [0014] FIG. 1 illustrates an example parallel machine 100. The parallel machine 100 can receive input data and provide an output based on the input data. The parallel machine 100 can include a data input port 110 for receiving input data and an output port 114 for providing an output to another device. The data input port 110 provides an interface for data to be input to the parallel machine 100. [0015] The parallel machine 100 includes a plurality of programmable elements including general purpose elements 102 and special purpose elements 112. A general purpose element 102 can include one or more inputs 104 and one or more outputs 106. A general purpose element 102 can be programmed into one of a plurality of states.
The state of the general purpose element 102 determines what output(s) the general purpose element 102 will provide based on a given input(s). That is, the state of the general purpose element 102 determines how the programmable element will react based on a given input. Data input to the data input port 110 can be provided to the plurality of general purpose elements 102 to cause the general purpose elements 102 to take action thereon. Examples of a general purpose element 102 can include a state machine element (SME) discussed in detail below, and a configurable logic block. In an example, a SME can be set in a given state to provide a certain output (e.g., a high or "1" signal) when a given input is received at the data input port 110. When an input other than the given input is received at the data input port 110, the SME can provide a different output (e.g., a low or "0" signal). In an example, a configurable logic block can be set to perform a Boolean logic function (e.g., AND, OR, NOR, etc.) based on input received at the data input port 110. [0016] The parallel machine 100 can also include a programming interface 111 for loading a program (e.g., an image) onto the parallel machine 100. The image can program (e.g., set) the state of the general purpose elements 102. That is, the image can configure the general purpose elements 102 to react in a certain way to a given input. For example, a general purpose element 102 can be set to output a high signal when the character 'a' is received at the data input port 110. In some examples, the parallel machine 100 can use a clock signal for controlling the timing of operation of the general purpose elements 102. In certain examples, the parallel machine 100 can include special purpose elements 112 (e.g., RAM, logic gates, counters, look-up tables, etc.) for interacting with the general purpose elements 102, and for performing special purpose functions.
In some embodiments, the data received at the data input port 110 can include a fixed set of data received over time or all at once, or a stream of data received over time. The data may be received from, or generated by, any source, such as databases, sensors, networks, etc., coupled to the parallel machine 100. [0017] The parallel machine 100 also includes a plurality of programmable switches 108 for selectively coupling together different elements (e.g., general purpose element 102, data input port 110, output port 114, programming interface 111, and special purpose elements 112) of the parallel machine 100. Accordingly, the parallel machine 100 comprises a programmable matrix formed among the elements. In an example, a programmable switch 108 can selectively couple two or more elements to one another such that an input 104 of a general purpose element 102, the data input port 110, a programming interface 111, or special purpose element 112 can be coupled through one or more programmable switches 108 to an output 106 of a general purpose element 102, the output port 114, a programming interface 111, or special purpose element 112. Thus, the routing of signals between the elements can be controlled by setting the programmable switches 108. Although FIG. 1 illustrates a certain number of conductors (e.g., wires) between a given element and a programmable switch 108, it should be understood that in other examples, a different number of conductors can be used. Also, although FIG. 1 illustrates each general purpose element 102 individually coupled to a programmable switch 108, in other examples, multiple general purpose elements 102 can be coupled as a group (e.g., a block 802, as illustrated in FIG. 8) to a programmable switch 108. In an example, the data input port 110, the data output port 114, and/or the programming interface 111 can be implemented as registers such that writing to the registers provides data to or from the respective elements.
[0018] In an example, a single parallel machine 100 is implemented on a physical device; however, in other examples, two or more parallel machines 100 can be implemented on a single physical device (e.g., a physical chip). In an example, each of multiple parallel machines 100 can include a distinct data input port 110, a distinct output port 114, a distinct programming interface 111, and a distinct set of general purpose elements 102. Moreover, each set of general purpose elements 102 can react (e.g., output a high or low signal) to data at their corresponding input data port 110. For example, a first set of general purpose elements 102 corresponding to a first parallel machine 100 can react to the data at a first data input port 110 corresponding to the first parallel machine 100. A second set of general purpose elements 102 corresponding to a second parallel machine 100 can react to a second data input port 110 corresponding to the second parallel machine 100. Accordingly, each parallel machine 100 includes a set of general purpose elements 102, wherein different sets of general purpose elements 102 can react to different input data. Similarly, each parallel machine 100, and each corresponding set of general purpose elements 102, can provide a distinct output. In some examples, an output port 114 from a first parallel machine 100 can be coupled to an input port 110 of a second parallel machine 100, such that input data for the second parallel machine 100 can include the output data from the first parallel machine 100. [0019] In an example, an image for loading onto the parallel machine 100 comprises a plurality of bits of information for setting the state of the general purpose elements 102, programming the programmable switches 108, and configuring the special purpose elements 112 within the parallel machine 100. In an example, the image can be loaded onto the parallel machine 100 to program the parallel machine 100 to provide a desired output based on certain inputs.
The output port 114 can provide outputs from the parallel machine 100 based on the reaction of the general purpose elements 102 to data at the data input port 110. An output from the output port 114 can include a single bit indicating a match of a given pattern, a word comprising a plurality of bits indicating matches and non-matches to a plurality of patterns, and a state vector corresponding to the state of all or certain general purpose elements 102 at a given moment. [0020] Example uses for the parallel machine 100 include pattern recognition (e.g., speech recognition, image recognition, etc.), signal processing, imaging, computer vision, cryptography, and others. In certain examples, the parallel machine 100 can comprise a finite state machine (FSM) engine, a field programmable gate array (FPGA), and variations thereof. Moreover, the parallel machine 100 may be a component in a larger device such as a computer, pager, cellular phone, personal organizer, portable audio player, network device (e.g., router, firewall, switch, or any combination thereof), control circuit, camera, etc. [0021] FIGs. 2-5 illustrate another parallel machine implemented as a finite state machine (FSM) engine 200. In an example, the FSM engine 200 comprises a hardware implementation of a finite state machine. Accordingly, the FSM engine 200 implements a plurality of selectively coupleable hardware elements (e.g., programmable elements) that correspond to a plurality of states in a FSM. Similar to a state in a FSM, a hardware element can analyze an input stream and activate a downstream hardware element based on the input stream. [0022] The FSM engine 200 includes a plurality of programmable elements including general purpose elements and special purpose elements. The general purpose elements can be programmed to implement many different functions. These general purpose elements include SMEs 204, 205 (shown in FIG. 5) that are hierarchically organized into rows 206 (shown in FIGs.
3 and 4) and blocks 202 (shown in FIGs. 2 and 3). To route signals between the hierarchically organized SMEs 204, 205, a hierarchy of programmable switches is used including inter-block switches 203 (shown in FIGs. 2 and 3), intra-block switches 208 (shown in FIGs. 3 and 4) and intra-row switches 212 (shown in FIG. 4). A SME 204, 205 can correspond to a state of a FSM implemented by the FSM engine 200. The SMEs 204, 205 can be coupled together by using the programmable switches as described below. Accordingly, a FSM can be implemented on the FSM engine 200 by programming the SMEs 204, 205 to correspond to the functions of states and by selectively coupling together the SMEs 204, 205 to correspond to the transitions between states in the FSM. [0023] FIG. 2 illustrates an overall view of an example FSM engine 200. The FSM engine 200 includes a plurality of blocks 202 that can be selectively coupled together with programmable inter-block switches 203. Additionally, the blocks 202 can be selectively coupled to an input block 209 (e.g., a data input port) for receiving signals (e.g., data) and providing the data to the blocks 202. The blocks 202 can also be selectively coupled to an output block 213 (e.g., an output port) for providing signals from the blocks 202 to an external device (e.g., another FSM engine 200). The FSM engine 200 can also include a programming interface 211 to load a program (e.g., an image) onto the FSM engine 200. The image can program (e.g., set) the state of the SMEs 204, 205. That is, the image can configure the SMEs 204, 205 to react in a certain way to a given input at the input block 209. For example, a SME 204 can be set to output a high signal when the character 'a' is received at the input block 209. [0024] In an example, the input block 209, the output block 213, and/or the programming interface 211 can be implemented as registers such that writing to the registers provides data to or from the respective elements. 
Accordingly, bits from the image stored in the registers corresponding to the programming interface 211 can be loaded on the SMEs 204, 205. Although FIG. 2 illustrates a certain number of conductors (e.g., wire, trace) between a block 202, input block 209, output block 213, and an inter-block switch 203, it should be understood that in other examples, fewer or more conductors can be used. [0025] FIG. 3 illustrates an example of a block 202. A block 202 can include a plurality of rows 206 that can be selectively coupled together with programmable intra-block switches 208. Additionally, a row 206 can be selectively coupled to another row 206 within another block 202 with the inter-block switches 203. In an example, buffers 201 are included to control the timing of signals to/from the inter-block switches 203. A row 206 includes a plurality of SMEs 204, 205 organized into pairs of elements that are referred to herein as groups of two (GOTs) 210. In an example, a block 202 comprises sixteen (16) rows 206. [0026] FIG. 4 illustrates an example of a row 206. A GOT 210 can be selectively coupled to other GOTs 210 and any other elements 224 within the row 206 by programmable intra-row switches 212. A GOT 210 can also be coupled to other GOTs 210 in other rows 206 with the intra-block switch 208, or other GOTs 210 in other blocks 202 with an inter-block switch 203. In an example, a GOT 210 has a first and second input 214, 216, and an output 218. The first input 214 is coupled to a first SME 204 of the GOT 210 and the second input 216 is coupled to a second SME 205 of the GOT 210. [0027] In an example, the row 206 includes a first and second plurality of row interconnection conductors 220, 222. In an example, an input 214, 216 of a GOT 210 can be coupled to one or more row interconnection conductors 220, 222, and an output 218 can be coupled to one row interconnection conductor 220, 222.
In an example, a first plurality of the row interconnection conductors 220 can be coupled to each SME 204 of each GOT 210 within the row 206. A second plurality of the row interconnection conductors 222 can be coupled to one SME 204 of each GOT 210 within the row 206, but cannot be coupled to the other SME 205 of the GOT 210. In an example, a first half of the second plurality of row interconnection conductors 222 can couple to a first half of the SMEs 204, 205 within a row 206 (one SME 204 from each GOT 210) and a second half of the second plurality of row interconnection conductors 222 can couple to a second half of the SMEs 204, 205 within a row 206 (the other SME 205 from each GOT 210). The limited connectivity between the second plurality of row interconnection conductors 222 and the SMEs 204, 205 is referred to herein as "parity". [0028] In an example, the row 206 can also include a special purpose element 224 such as a counter, a programmable Boolean logic element, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a programmable processor (e.g., a microprocessor), and other elements. Additionally, in an example, the special purpose element 224 is different in different rows 206. For example, four of the rows 206 in a block 202 can include Boolean logic as the special purpose element 224, and the other eight rows 206 in a block 202 can include a counter as the special purpose element 224. [0029] In an example, the special purpose element 224 includes a counter (also referred to herein as counter 224). In an example, the counter 224 comprises a 12-bit programmable down counter. The 12-bit programmable counter 224 has a counting input, a reset input, and a zero-count output. The counting input, when asserted, decrements the value of the counter 224 by one. The reset input, when asserted, causes the counter 224 to load an initial value from an associated register.
For the 12-bit counter 224, up to a 12-bit number can be loaded in as the initial value. When the value of the counter 224 is decremented to zero (0), the zero-count output is asserted. The counter 224 also has at least two modes, pulse and hold. When the counter 224 is set to pulse mode, the zero-count output is asserted during the first clock cycle when the counter 224 decrements to zero, and at the following clock cycles the zero-count output is no longer asserted even if the counting input is asserted. This state continues until the counter 224 is reset by the reset input being asserted. When the counter 224 is set to hold mode, the zero-count output is asserted during the first clock cycle when the counter 224 decrements to zero, and stays asserted when the counting input is asserted until the counter 224 is reset by the reset input being asserted. [0030] FIG. 5 illustrates an example of a GOT 210. The GOT 210 includes a first SME 204 and a second SME 205 having inputs 214, 216 and having their outputs 226, 228 coupled to an OR gate 230 and a 3-to-1 multiplexer 242. The 3-to-1 multiplexer 242 can be set to couple the output 218 of the GOT 210 to either the first SME 204, the second SME 205, or the OR gate 230. The OR gate 230 can be used to couple together both outputs 226, 228 to form the common output 218 of the GOT 210. In an example, the first and second SME 204, 205 exhibit parity, as discussed above, where the input 214 of the first SME 204 can be coupled to some of the row interconnect conductors 222 and the input 216 of the second SME 205 can be coupled to other row interconnect conductors 222. In an example, the two SMEs 204, 205 within a GOT 210 can be cascaded and/or looped back to themselves by setting either or both of switches 240. The SMEs 204, 205 can be cascaded by coupling the output 226, 228 of the SMEs 204, 205 to the input 214, 216 of the other SME 204, 205.
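The pulse and hold behavior of the counter 224 described above can be sketched as a small behavioral model. This is an illustrative sketch only; the class and attribute names are assumptions, not the patent's implementation, and the hold mode here keeps the output asserted on every cycle at zero rather than only on cycles where the counting input is asserted.

```python
class DownCounter:
    """Behavioral sketch of the 12-bit programmable down counter 224."""

    def __init__(self, initial, mode="pulse"):
        assert 0 <= initial < 2 ** 12  # up to a 12-bit initial value
        self.initial = initial         # value held in the associated register
        self.mode = mode               # "pulse" or "hold"
        self.value = initial
        self.zero_count = False
        self._pulsed = False

    def reset(self):
        # Reset input asserted: reload the initial value from the register.
        self.value = self.initial
        self.zero_count = False
        self._pulsed = False

    def clock(self, counting_input):
        # One clock cycle; the counting input decrements the value by one.
        if counting_input and self.value > 0:
            self.value -= 1
        if self.value == 0:
            if self.mode == "hold":
                self.zero_count = True          # stays asserted until reset
            else:
                self.zero_count = not self._pulsed  # only the first zero cycle
                self._pulsed = True
        else:
            self.zero_count = False
        return self.zero_count
```

In pulse mode the zero-count output fires for one cycle and then stays low until reset; in hold mode it stays high until reset, matching the two modes described for counter 224.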
The SMEs 204, 205 can be looped back to themselves by coupling the output 226, 228 to their own input 214, 216. Accordingly, the output 226 of the first SME 204 can be coupled to neither, one, or both of the input 214 of the first SME 204 and the input 216 of the second SME 205. [0031] In an example, a state machine element 204, 205 comprises a plurality of memory cells 232, such as those often used in dynamic random access memory (DRAM), coupled in parallel to a detect line 234. One such memory cell 232 comprises a memory cell that can be set to a data state, such as one that corresponds to either a high or a low value (e.g., a 1 or 0). The output of the memory cell 232 is coupled to the detect line 234 and the input to the memory cell 232 receives signals based on data on the data stream line 236. In an example, an input on the data stream line 236 is decoded to select one of the memory cells 232. The selected memory cell 232 provides its stored data state as an output onto the detect line 234. For example, the data received at the data input port 209 can be provided to a decoder (not shown) and the decoder can select one of the data stream lines 236. In an example, the decoder can convert an ASCII character to 1 of 256 bits. [0032] A memory cell 232, therefore, outputs a high signal to the detect line 234 when the memory cell 232 is set to a high value and the data on the data stream line 236 corresponds to the memory cell 232. When the data on the data stream line 236 corresponds to the memory cell 232 and the memory cell 232 is set to a low value, the memory cell 232 outputs a low signal to the detect line 234. The outputs from the memory cells 232 on the detect line 234 are sensed by a detect circuit 238. In an example, the signal on an input line 214, 216 sets the respective detect circuit 238 to either an active or inactive state.
When set to the inactive state, the detect circuit 238 outputs a low signal on the respective output 226, 228 regardless of the signal on the respective detect line 234. When set to an active state, the detect circuit 238 outputs a high signal on the respective output line 226, 228 when a high signal is detected from one of the memory cells 232 of the respective SME 204, 205. When in the active state, the detect circuit 238 outputs a low signal on the respective output line 226, 228 when the signals from all of the memory cells 232 of the respective SME 204, 205 are low. [0033] In an example, an SME 204, 205 includes 256 memory cells 232 and each memory cell 232 is coupled to a different data stream line 236. Thus, an SME 204, 205 can be programmed to output a high signal when a selected one or more of the data stream lines 236 have a high signal thereon. For example, the SME 204 can have a first memory cell 232 (e.g., bit 0) set high and all other memory cells 232 (e.g., bits 1-255) set low. When the respective detect circuit 238 is in the active state, the SME 204 outputs a high signal on the output 226 when the data stream line 236 corresponding to bit 0 has a high signal thereon. In other examples, the SME 204 can be set to output a high signal when one of multiple data stream lines 236 have a high signal thereon by setting the appropriate memory cells 232 to a high value. [0034] In an example, a memory cell 232 can be set to a high or low value by reading bits from an associated register. Accordingly, the SMEs 204 can be programmed by storing an image created by the compiler into the registers and loading the bits in the registers into associated memory cells 232. In an example, the image created by the compiler includes a binary image of high and low (e.g., 1 and 0) bits. The image can program the FSM engine 200 to operate as a FSM by cascading the SMEs 204, 205.
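The memory-cell and detect-circuit behavior described above can be sketched as follows. This is an illustrative model under stated assumptions; the class name, the list-of-booleans representation of the 256 memory cells, and the `active` flag standing in for the detect circuit 238 are all hypothetical, not the patent's circuit.

```python
class SME:
    """Sketch of a state machine element: 256 memory cells on a detect line."""

    def __init__(self, high_bytes):
        # One memory cell per byte value 0-255; cells listed in high_bytes
        # are set high, all others low.
        selected = set(high_bytes)
        self.cells = [b in selected for b in range(256)]
        self.active = False  # stands in for the detect circuit 238 state

    def step(self, symbol):
        # The input symbol is decoded to select one memory cell; the SME
        # outputs high only when its detect circuit is active and the
        # selected cell is set high.
        return self.active and self.cells[symbol]
```

For instance, an SME with only bit 0 set high outputs a high signal exactly when it is active and the decoded input symbol is byte value 0.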
For example, a first SME 204 can be set to an active state by setting the detect circuit 238 to the active state. The first SME 204 can be set to output a high signal when the data stream line 236 corresponding to bit 0 has a high signal thereon. The second SME 205 can be initially set to an inactive state, but can be set to, when active, output a high signal when the data stream line 236 corresponding to bit 1 has a high signal thereon. The first SME 204 and the second SME 205 can be cascaded by setting the output 226 of the first SME 204 to couple to the input 216 of the second SME 205. Thus, when a high signal is sensed on the data stream line 236 corresponding to bit 0, the first SME 204 outputs a high signal on the output 226 and sets the detect circuit 238 of the second SME 205 to an active state. When a high signal is sensed on the data stream line 236 corresponding to bit 1, the second SME 205 outputs a high signal on the output 228 to activate another SME 205 or for output from the FSM engine 200. [0035] FIG. 6 illustrates an example of a method 600 for a compiler to convert source code into an image configured to program a parallel machine. Method 600 includes parsing the source code into a syntax tree (block 602), converting the syntax tree into an automaton (block 604), optimizing the automaton (block 606), converting the automaton into a netlist (block 608), placing the netlist on hardware (block 610), routing the netlist (block 612), and publishing the resulting image (block 614). [0036] In an example, the compiler includes an application programming interface (API) that allows software developers to create images for implementing FSMs on the FSM engine 200. The compiler provides methods to convert an input set of regular expressions in the source code into an image that is configured to program the FSM engine 200. The compiler can be implemented by instructions for a computer having a Von Neumann architecture.
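The ordering of the compiler stages in method 600 can be sketched as a pipeline skeleton. The stage functions here are trivial stand-ins that merely record which block ran; the real blocks 602-614 perform the parsing, conversion, placement, and routing work described below.

```python
# Trivial stand-in stages: each tags the artifact with the block it models.
def stage(name):
    return lambda artifact: artifact + [name]

parse, to_automaton, optimize = stage("602"), stage("604"), stage("606")
to_netlist, place, route, publish = (
    stage("608"), stage("610"), stage("612"), stage("614"))

def compile_regexes(source_code):
    """Sketch of method 600: run blocks 602-614 in order on the source."""
    artifact = [source_code]
    for block in (parse, to_automaton, optimize,
                  to_netlist, place, route, publish):
        artifact = block(artifact)
    return artifact
```

The point of the sketch is only the fixed stage order: every later stage (blocks 604-614) consumes the output of the stage before it, so parsing into a common syntax tree lets the rest of the pipeline be source-language independent.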
These instructions can cause a processor on the computer to implement the functions of the compiler. For example, the instructions, when executed by the processor, can cause the processor to perform actions as described in blocks 602, 604, 606, 608, 610, 612, and 614 on source code that is accessible to the processor. An example computer having a Von Neumann architecture is shown in FIG. 9 and described below. [0037] In an example, the source code describes search strings for identifying patterns of symbols within a group of symbols. To describe the search strings, the source code can include a plurality of regular expressions (regexes). A regex can be a string for describing a symbol search pattern. Regexes are widely used in various computer domains, such as programming languages, text editors, network security, and others. In an example, the regular expressions supported by the compiler include search criteria for the search of unstructured data. Unstructured data can include data that is free form and has no indexing applied to words within the data. Words can include any combination of bytes, printable and non-printable, within the data. In an example, the compiler can support multiple different source code languages for implementing regexes, including Perl (e.g., Perl compatible regular expressions (PCRE)), PHP, Java, and .NET languages. [0038] Referring back to FIG. 6, at block 602 the compiler can parse the source code to form an arrangement of relationally connected operators, where different types of operators correspond to different functions implemented by the source code (e.g., different functions implemented by regexes in the source code). Parsing source code can create a generic representation of the source code. In an example, the generic representation comprises an encoded representation of the regexes in the source code in the form of a tree graph known as a syntax tree.
The examples described herein refer to the arrangement as a syntax tree (also known as an "abstract syntax tree"). In other examples, however, a concrete syntax tree or other arrangement can be used. [0039] Since, as mentioned above, the compiler can support multiple languages of source code, parsing converts the source code, regardless of the language, into a non-language specific representation, e.g., a syntax tree. Thus, further processing (blocks 604, 606, 608, 610) by the compiler can work from a common input structure regardless of the language of the source code. [0040] As noted above, the syntax tree includes a plurality of operators that are relationally connected. A syntax tree can include multiple different types of operators. That is, different operators can correspond to different functions implemented by the regexes in the source code. [0041] At block 604, the syntax tree is converted into an automaton. An automaton (also referred to as a finite-state automaton, finite state machine (FSM), or simply a state machine) is a representation of states, transitions between states and actions, and can be classified as deterministic or non-deterministic. A deterministic automaton has a single path of execution at a given time, while a non-deterministic automaton has multiple concurrent paths of execution. The automaton comprises a plurality of states. In order to convert the syntax tree into an automaton, the operators and relationships between the operators in the syntax tree are converted into states with transitions between the states. In an example, the automaton can be converted based partly on the hardware of the FSM engine 200. [0042] In an example, input symbols for the automaton include the symbols of the alphabet, the numerals 0-9, and other printable characters. In an example, the input symbols are represented by the byte values 0 through 255 inclusive.
In an example, an automaton can be represented as a directed graph where the nodes of the graph correspond to the set of states. In an example, a transition from state p to state q on an input symbol a, i.e., δ(p, a), is shown by a directed connection from node p to node q. In an example, the language accepted (e.g., matched) by an automaton is the set of all possible character strings which when input sequentially into the automaton will reach a final state. Each string in the language accepted by the automaton traces a path from the start state to one or more final states. [0043] In an example, special transition symbols outside the input symbol range may be used in the automaton. These special transition symbols can be used, for example, to enable use of special purpose elements 224. Moreover, special transition symbols can be used to provide transitions that occur on something other than an input symbol. For example, a special transition symbol may indicate that a first state is to be enabled (e.g., transitioned to) when both a second state and a third state are enabled. Accordingly, the first state is activated when both the second state and the third state are activated, and the transition to the first state is not directly dependent on an input symbol. Notably, a special transition symbol that indicates that a first state is to be enabled when both a second state and a third state are enabled can be used to represent a Boolean AND function performed, for example, by Boolean logic as the special purpose element 224. In an example, a special transition symbol can be used to indicate a counter state has reached zero, and thus transitions to a downstream state. [0044] In an example, the automaton comprises general purpose states as well as special purpose states. The general purpose states and special purpose states correspond to general purpose elements and special purpose elements supported by a target device for which the compiler is generating machine code.
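The acceptance condition described above (a string is in the language if it traces a path from the start state to a final state) can be sketched directly from the transition function δ. The dictionary-of-sets encoding and the three-state example automaton below are illustrative assumptions, not taken from the figures.

```python
def accepts(delta, start, finals, string):
    """Does `string` trace a path from the start state to a final state?

    `delta` maps (state, symbol) -> set of next states, so both
    deterministic and non-deterministic automata fit this sketch.
    """
    current = {start}
    for symbol in string:
        # Follow every transition available from every currently active state.
        current = set().union(
            *(delta.get((q, symbol), set()) for q in current))
        if not current:
            return False  # no path remains
    return bool(current & finals)

# Hypothetical three-state automaton: p --a--> q, q --b--> {q, r}.
delta = {("p", "a"): {"q"}, ("q", "b"): {"q", "r"}}
```

With final state r, the strings "ab", "abb", "abbb", ... are accepted, while "a" alone is not, because it stops short of a final state.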
Different types of target devices can support different types of general purpose elements as well as one or more different types of special purpose elements. A general purpose element can typically be used to implement a broad range of functions, while a special purpose element can typically be used to implement a narrower range of functions. In an example, however, a special purpose element can achieve, for example, greater efficiency within its narrow range of function. Accordingly, a special purpose element can be used to, for example, reduce the machine cycles or the machine resources required to implement certain functions in the target device. In some examples, the target device supports solely special purpose elements, wherein multiple different types of special purpose elements are supported. [0045] In an example where the compiler is generating machine code for the FSM engine 200, the general purpose states can correspond to SMEs 204, 205 and the general purpose states are accordingly referred to herein as "SME states". Moreover, when the compiler is generating machine code for the FSM engine 200, one example of a special purpose state can correspond to counters 224 and is accordingly referred to herein as a "counter state". Another example of a special purpose state can correspond to a logic element (e.g., programmable logic, Boolean logic) and is accordingly referred to herein as a "logic state". In an example, the SME states in the automaton map 1:1 to SMEs (e.g., SME 204, 205) in the FSM engine 200, with the exception of the starting state of the automaton, which does not map to an SME. The special purpose elements 224 may, or may not, map 1:1 to special purpose states. [0046] In an example, an automaton can be constructed using one of the standard techniques such as Glushkov's method. In an example, the automaton can be an ε-free homogeneous automaton. A homogeneous automaton is a restriction on the general automaton definition.
The restriction requires that all transitions entering a state must occur on the same input symbol(s). The homogeneous automaton satisfies the following condition: For any two states, q1 and q2, if r ∈ δ(q1) ∩ δ(q2), denote S1 = {a | a ∈ Σ, r ∈ δ(q1, a)} and S2 = {a | a ∈ Σ, r ∈ δ(q2, a)}. S1 is the set of symbols that allows q1 to transition to r, and S2 is the set of symbols that allows q2 to transition to r. Here, S1 = S2, i.e., if state q1 and state q2 both transition to state r, then the homogeneous restriction is that the transitions must occur on the same symbol(s). [0047] FIGs. 7A and 7B illustrate example automata created from the syntax tree. FIG. 7A illustrates a homogenous automaton 700 and FIG. 7B illustrates a non-homogenous automaton 702. [0048] The homogenous automaton 700 begins at starting state 704 which transitions to state 706 on the input symbol "a". State 706 transitions to state 708 on the input symbol "b" and state 708 transitions to state 710 on the input symbol "b". State 710 transitions to state 712 on the input symbol "c". State 712 transitions to state 710 on the input symbol "b" and transitions to state 714 on the input symbol "d". State 714 is a final state and is identified as such by the double circle. In an example, final states can be significant since activation of a final state indicates a match of a regex corresponding to the automaton. The automaton 700 is a homogeneous automaton since all in-transitions (e.g., a transition into the state) for a given state occur on the same symbol(s). Notably, state 710 has two in-transitions (from state 708 and state 712) and both in-transitions occur on the same symbol "b". [0049] The non-homogeneous automaton 702 includes the same states 704, 706, 708, 710, 712, and 714 as the homogenous automaton 700; however, the state 712 transitions to state 710 on the input symbol "e".
Accordingly, the automaton 702 is non-homogeneous since the state 710 has in-transitions on two different symbols: symbol "b" from state 708 and symbol "e" from state 712. [0050] At block 606, after the automaton is constructed, the automaton is optimized to, among other things, reduce its complexity and size. The automaton can be optimized by combining redundant states. [0051] At block 608, the automaton is converted into a netlist. Converting the automaton into a netlist maps the states of the automaton to instances of a hardware element (e.g., SMEs 204, 205, GOT 210, special purpose element 224) of the FSM engine 200, and determines the connections between the instances. In an example, the netlist comprises a plurality of instances, each instance corresponding to (e.g., representing) a hardware element of the FSM engine 200. Each instance can have one or more connection points (also referred to herein as a "port") for connection to another instance. The netlist also comprises a plurality of connections between the ports of the instances which correspond to (e.g., represent) conductors to couple the hardware elements corresponding to the instances. In an example, the netlist comprises different types of instances corresponding to different types of hardware elements. For example, the netlist can include a general purpose instance corresponding to a general purpose hardware element and a special purpose instance corresponding to a special purpose hardware element. As an example, general purpose states can be converted into general purpose instances and special purpose states can be converted into special purpose instances. In an example, the general purpose instances can include an SME instance for an SME 204, 205 and an SME group instance for a hardware element comprising a group of SMEs.
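The homogeneity restriction can be checked mechanically against the transition sets of automata 700 and 702. The function below is a sketch under the same dictionary-of-sets encoding of δ used earlier; the helper name and data layout are assumptions for illustration.

```python
def is_homogeneous(delta):
    """Check the homogeneity restriction: for any two states driving a
    common state r, the symbol sets on those transitions must be equal.
    `delta` maps (state, symbol) -> set of next states."""
    per_source = {}  # target -> {source: set of symbols driving target}
    for (state, symbol), targets in delta.items():
        for target in targets:
            per_source.setdefault(target, {}).setdefault(state, set()).add(symbol)
    # Homogeneous iff, for each target, every source uses the same symbol set.
    return all(len({frozenset(s) for s in sources.values()}) == 1
               for sources in per_source.values())

# Transitions of automaton 700 (FIG. 7A) as described in the text:
delta_700 = {(704, "a"): {706}, (706, "b"): {708}, (708, "b"): {710},
             (710, "c"): {712}, (712, "b"): {710}, (712, "d"): {714}}

# Automaton 702 (FIG. 7B) differs only in the 712 -> 710 transition symbol:
delta_702 = dict(delta_700)
del delta_702[(712, "b")]
delta_702[(712, "e")] = {710}
```

State 710 of automaton 700 is entered on "b" from both 708 and 712, so the check passes; in automaton 702 it is entered on "b" from 708 but "e" from 712, so the check fails, matching the text.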
In an example, the SME group instance includes a GOT instance corresponding to a GOT 210; in other examples, however, the SME group instance can correspond to a hardware element comprising a group of three or more SMEs. The special purpose instances can include a counter instance for a counter 224, and a logic instance for logic elements 224. Since a GOT 210 includes two SMEs 204, 205, a GOT instance contains two SME instances. [0052] To create the netlist, states in the automaton are converted into instances in the netlist, except the starting state does not have a corresponding instance. SME states are converted into GOT instances and counter states are converted into counter instances. Additionally, a corresponding connection from a first instance to a second instance is created for a transition from a state corresponding to the first instance to a state corresponding to the second instance. Since the SMEs 204, 205 in the FSM engine 200 are grouped in pairs referred to as GOTs 210, the compiler can group SME states into pairs in a GOT instance. Due to the physical design of a GOT 210, not all SME instances can be paired together to form a GOT 210. Accordingly, the compiler determines which SME states can be mapped together in a GOT 210, and then pairs the SME states into GOT instances based on the determination. [0053] As shown in FIG. 5, a GOT 210 has output limitations on the SMEs 204, 205. In particular, the GOT 210 has a single output 218 shared by the two SMEs 204, 205. Accordingly, each SME 204, 205 in a GOT 210 cannot independently drive the output 218. This output limitation restricts which SME states can be paired together in a GOT instance. Notably, two SME states that drive (e.g., transition to, activate) different sets of external SME states (e.g., SME states corresponding to SMEs outside of the GOT instance) cannot be paired together in a GOT instance.
This limitation, however, does not restrict whether the two SME states drive each other or self-loop, since a GOT 210 can internally provide this functionality with the switches 240. Although the FSM engine 200 is described as having a certain physical design corresponding to the SMEs 204, 205, in other examples, the SMEs 204, 205 may have other physical designs. For example, the SMEs 204, 205 may be grouped together into three or more sets of SMEs 204, 205. Additionally, in some examples, there may be limitations on the inputs 214, 216 to the SMEs 204, 205, with or without limitations on the outputs 226, 228 from the SMEs 204, 205. [0054] In any case, however, the compiler determines which SME states can be grouped together based on the physical design of the FSM engine 200. Accordingly, for a GOT instance, the compiler determines which SME states can be paired together based on the output limitations for the SMEs 204, 205 in a GOT 210. In an example, there are five situations in which two SME states can be paired together to form a GOT 210 based on the physical design of the GOT 210. [0055] The first situation when a first and a second SME state can be paired together in a GOT 210 occurs when neither the first nor second SME state is a final state, and when one of the first and second SME states does not drive any states other than the first or second SME states. As an example, a first state is considered to drive a second state when the first state transitions to the second state. When this first situation occurs, at most one of the first and second SME states is driving an external state(s). Accordingly, the first and second SME states can be paired together without being affected by the output limitations of the GOT 210. Due to the ability of the GOT 210 to couple the SMEs 204, 205 to one another internally, however, the first and second SME states are allowed to drive each other and self-loop to drive themselves.
In automaton terms, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) can be paired together when neither q1 nor q2 is a final state, and δ(q1) - {q1, q2} is empty, or when δ(q2) - {q1, q2} is empty. [0056] The second situation when a first and a second SME state can be paired together in a GOT 210 occurs when neither the first nor second SME state is a final state in the automaton, and when both the first and the second SME state drive the same external states. As used herein, external states correspond to states outside of the GOT instance, for example, notwithstanding whether first and second SME states in a GOT instance drive each other or self-loop. Here again, the output limitations of a GOT 210 do not affect the first and second SME states, since the first and second SME states drive the same external states. Also, due to the ability of the GOT 210 to couple the SMEs 204, 205 to one another internally, the restriction on driving the same states does not include whether the first and second states drive each other or self-loop. Using automaton terms, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) can be paired together when neither q1 nor q2 is a final state, and δ(q1) - {q1, q2} = δ(q2) - {q1, q2}. [0057] The third and fourth situations in which a first and a second SME state can be paired together in a GOT 210 occur when one of the first and second SME states is a final state and the other of the first and second SME states does not drive any external state. That is, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) can be paired together when q1 is a final state and δ(q2) - {q1, q2} is empty, or when q2 is a final state and δ(q1) - {q1, q2} is empty.
Since a final state outputs an indication of a match to a regex, an SME state corresponding to a final state should have independent use of the output 218 of the GOT 210 in order to indicate the match. Accordingly, the other SME state in the GOT 210 is not allowed to use the output 218. [0058] The fifth situation when a first and a second SME state can be paired together in a GOT 210 occurs when both the first and second SME states correspond to final states in an automaton and both the first and the second SME states drive the same external states. Using automaton terms, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) can be paired together when both q1 and q2 are final states, and δ(q1) - {q1, q2} = δ(q2) - {q1, q2}. [0059] Once the compiler determines whether one or more SME states can be paired together, the compiler pairs the SME states into GOT instances. In an example, the compiler pairs SME states into GOT instances in the order they are determined to be capable of being paired to form a GOT instance. That is, once two particular SME states are determined to be capable of being paired together, these two SME states can be paired into a GOT instance. Once two SME states have been paired to form a GOT instance, these paired SME states are not available for pairing with other SME states. This process can continue until there are no longer any SME states left to be paired. [0060] In an example, the compiler uses graph theory to determine which SMEs to pair together into a GOT instance. Since only certain SMEs can be paired together, some SME pairings can result in other SMEs having to be implemented in their own GOT instance, with the other SME location in the GOT instance unused and hence wasted. Graph theory can be used to optimize SME utilization (e.g., reduce the number of unused SMEs) in the GOTs 210 by reducing the number of unused SME instances in the GOT instances of the netlist.
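The five pairing situations reduce to a compact predicate over each state's external drive set δ(q) - {q1, q2}. The function below is a sketch of that predicate; the function name and the dictionary encoding of δ are assumptions for illustration, not the patent's representation.

```python
def can_pair(q1, q2, drives, finals):
    """Sketch of the five GOT-pairing situations.

    `drives[q]` is the set of states q drives, i.e. delta(q);
    `finals` is the set of final states of the automaton."""
    # External states each candidate drives (self-loops and driving the
    # partner are handled internally by switches 240, so they are excluded).
    ext1 = drives.get(q1, set()) - {q1, q2}
    ext2 = drives.get(q2, set()) - {q1, q2}
    if q1 not in finals and q2 not in finals:
        # Situations 1 and 2: one drives nothing external, or both drive
        # the same external states.
        return not ext1 or not ext2 or ext1 == ext2
    if q1 in finals and q2 in finals:
        # Situation 5: both final, same external states.
        return ext1 == ext2
    # Situations 3 and 4: the non-final state must drive no external state,
    # leaving the shared output 218 free for the final state's match signal.
    return not (ext2 if q1 in finals else ext1)
```

For example, two non-final states where one only drives its partner are pairable (situation 1), while two non-final states driving different external states are not.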
To use graph theory, the compiler first determines all possible pairings between the SME states according to the physical design of the FSM engine 200 discussed above. The compiler then creates a graph where the vertices of the graph correspond to SME states and the edges of the graph correspond to possible pairings of the SME states. That is, if two SME states are determined to be capable of being paired together in a GOT instance, the two corresponding vertices are connected with an edge. Thus, the graph contains all the possible pairings of SME states. [0061] The compiler can then find matching vertices for the graph to identify which SME states to pair together in a GOT 210. That is, the compiler identifies edges (and therefore pairs of vertices) such that no two of the identified edges share a common vertex. In an example, the compiler can find a maximal matching for the graph. In another example, the compiler can find a maximum matching for the graph. A maximum matching is a matching that contains the largest possible number of edges. There may be many maximum matchings. The problem of finding a maximum matching of a general graph can be solved in polynomial time. [0062] Once all the matching vertices have been identified (e.g., as a maximum matching), each pair of SME states corresponding to matching vertices is mapped to a GOT instance. SME states corresponding to vertices that are un-matched are mapped to their own GOT instance. That is, an SME state corresponding to an un-matched vertex is mapped into one SME location in a GOT instance and the other SME location in the GOT instance is unused. Accordingly, given the netlist N and its corresponding matching M, the number of GOT instances of N used equals |Q| - 1 - |M|, where Q is the set of states of the automaton; the 1 is subtracted because in this example the starting state of the automaton does not correspond to an SME state.
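The matching step and the |Q| - 1 - |M| count can be sketched as follows. The greedy routine below produces a maximal (not necessarily maximum) matching; a true maximum matching needs a more involved polynomial-time algorithm, as the text notes. The edge list for automaton 700 is hypothetical, chosen only to reproduce the three-GOT outcome of netlist 802 described below.

```python
def maximal_matching(edges):
    """Greedy maximal matching: take each edge whose endpoints are both
    still unmatched. A maximum matching may pair more vertices."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Hypothetical pairable edges among the SME states of automaton 700
# (states 706-714; the starting state 704 has no SME instance):
edges = [(706, 708), (710, 712)]
m = maximal_matching(edges)

# GOT instances used = |Q| - 1 - |M|, with |Q| = 6 states including 704.
gots_used = 6 - 1 - len(m)
```

With the two pairs matched, state 714 occupies a GOT instance alone, giving 6 - 1 - 2 = 3 GOT instances and one unused SME location, consistent with the optimal netlist 802.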
[0063] In an example, the netlist N constructed from the maximum matching M of G uses the least number of GOT instances. This can be proved as follows: if there were another netlist N' that uses a smaller number of GOT instances, denote its corresponding matching as M'. Since the number of GOT instances of N' equals |Q| - 1 - |M'|, we would have |M| < |M'|. This conflicts with the fact that M is a maximum matching. Therefore, netlist N uses the least number of GOT instances. [0064] Once the SME states are paired into GOT instances, the GOT instances, counter instances, and logic instances are connected according to the transitions between the states in the automaton. Since each GOT 210 has a single output, each GOT instance in the netlist has a single output port to connect to other instances. Accordingly, if either SME state in a first GOT instance drives an SME state in a second GOT instance, the output port of the first GOT instance is coupled to an input of the second GOT instance. [0065] FIGs. 8A and 8B illustrate example netlists 800, 802 created from the homogeneous automaton 700 of FIG. 7A. The SME instances 806, 808, 810, 812, and 814 correspond to states 706, 708, 710, 712, and 714 in the automaton 700. The starting state 704 of the automaton does not correspond to an instance, as discussed above. [0066] The netlist 800 is an example of a non-optimal netlist. The netlist 800 uses four GOT instances 816 while leaving three SME instances 818 unused. The netlist 802, however, is an example of an optimal netlist created using graph theory to identify a maximum matching. The netlist 802 uses three GOT instances 816 and has a single unused SME instance 818. In the netlist 802, the instance 810 can be connected to instance 812 with connections internal to the GOT instance (e.g., via switch 240).
[0067] At block 610, once the netlist has been generated, the netlist is placed to select a specific hardware element of the target device (e.g., SMEs 204, 205, other elements 224) for each instance of the netlist. According to an embodiment of the present invention, placing selects the hardware elements based on general input and output constraints for the hardware elements. [0068] At block 612, the globally placed netlist is routed to determine the settings for the programmable switches (e.g., inter-block switches 203, intra-block switches 208, and intra-row switches 212) in order to couple the selected hardware elements together to achieve the connections described by the netlist. In an example, the settings for the programmable switches are determined by selecting the specific conductors of the FSM engine 200 that will be used to connect the selected hardware elements, along with the corresponding settings for the programmable switches. Routing may adjust the specific hardware elements selected during placement for some of the netlist instances, such as in order to couple hardware elements given the physical design of the conductors and/or switches on the FSM engine 200. [0069] Once the netlist is placed and routed, the placed and routed netlist can be converted into a plurality of bits for programming of a FSM engine 200. The plurality of bits are referred to herein as an image. [0070] At block 614, an image is published by the compiler. The image comprises a plurality of bits for programming specific hardware elements and/or programmable switches of the FSM engine 200. In embodiments where the image comprises a plurality of bits (e.g., 0 and 1), the image can be referred to as a binary image. The bits can be loaded onto the FSM engine 200 to program the state of SMEs 204, 205, the special purpose elements 224, and the programmable switches such that the programmed FSM engine 200 implements a FSM having the functionality described by the source code.
Placement (block 610) and routing (block 612) can map specific hardware elements at specific locations in the FSM engine 200 to specific states in the automaton. Accordingly, the bits in the image can program the specific hardware elements and/or programmable switches to implement the desired function(s). In an example, the image can be published by saving the machine code to a computer readable medium. In another example, the image can be published by displaying the image on a display device. In still another example, the image can be published by sending the image to another device, such as a programming device for loading the image onto the FSM engine 200. In yet another example, the image can be published by loading the image onto a parallel machine (e.g., the FSM engine 200). [0071] In an example, an image can be loaded onto the FSM engine 200 by either directly loading the bit values from the image to the SMEs 204, 205 and other hardware elements 224 or by loading the image into one or more registers and then writing the bit values from the registers to the SMEs 204, 205 and other hardware elements 224. In an example, the state of the programmable switches (e.g., inter-block switches 203, intra-block switches 208, and intra-row switches 212) can be loaded in a similar manner. In an example, the hardware elements (e.g., SMEs 204, 205, other elements 224, programmable switches 203, 208, 212) of the FSM engine 200 are memory mapped such that a programming device and/or computer can load the image onto the FSM engine 200 by writing the image to one or more memory addresses. [0072] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples.
An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code may be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. These computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like. [0073] FIG. 9 illustrates generally an example of a computer 900 having a Von Neumann architecture. Upon reading and comprehending the content of this disclosure, one of ordinary skill in the art will understand the manner in which a software program can be launched from a computer-readable medium in a computer-based system to execute the functions defined in the software program. One of ordinary skill in the art will further understand the various programming languages that can be employed to create one or more software programs designed to implement and perform the methods disclosed herein. The programs can be structured in an object-oriented format using an object-oriented language, such as Java, C++, or one or more other languages. Alternatively, the programs can be structured in a procedure-oriented format using a procedural language, such as assembly, C, etc. The software components can communicate using any of a number of mechanisms well known to those of ordinary skill in the art, such as application program interfaces or interprocess communication techniques, including remote procedure calls or others. The teachings of various embodiments are not limited to any particular programming language or environment.
[0074] Thus, other embodiments can be realized. For example, an article of manufacture, such as a computer, a memory system, a magnetic or optical disk, some other storage device, or any type of electronic device or system can include one or more processors 902 coupled to a computer-readable medium 922 such as a memory (e.g., removable storage media, as well as any memory including an electrical, optical, or electromagnetic conductor) having instructions 924 stored thereon (e.g., computer program instructions), which when executed by the one or more processors 902 result in performing any of the actions described with respect to the methods above. [0075] The computer 900 can take the form of a computer system having a processor 902 coupled to a number of components directly, and/or using a bus 908. Such components can include main memory 904, static or non-volatile memory 906, and mass storage 916. Other components coupled to the processor 902 can include an output device 910, such as a video display, an input device 912, such as a keyboard, and a cursor control device 914, such as a mouse. A network interface device 920 to couple the processor 902 and other components to a network 926 can also be coupled to the bus 908. The instructions 924 can further be transmitted or received over the network 926 via the network interface device 920 utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Any of these elements coupled to the bus 908 can be absent, present singly, or present in plural numbers, depending on the specific embodiment to be realized. [0076] In an example, one or more of the processor 902, the memories 904, 906, or the storage device 916 can each include instructions 924 that, when executed, can cause the computer 900 to perform any one or more of the methods described herein. In alternative embodiments, the computer 900 operates as a standalone device or can be connected (e.g., networked) to other devices. 
In a networked environment, the computer 900 can operate in the capacity of a server or a client device in a server-client network environment, or as a peer device in a peer-to-peer (or distributed) network environment. The computer 900 can include a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer 900 is illustrated, the term "computer" shall also be taken to include any collection of devices that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. [0077] The computer 900 can also include an output controller 928 for communicating with peripheral devices using one or more communication protocols (e.g., universal serial bus (USB), IEEE 1394, etc.). The output controller 928 can, for example, provide an image to a programming device 930 that is communicatively coupled to the computer 900. The programming device 930 can be configured to program a parallel machine (e.g., parallel machine 100, FSM engine 200). In other examples, the programming device 930 can be integrated with the computer 900 and coupled to the bus 908 or can communicate with the computer 900 via the network interface device 920 or another device. [0078] While the computer-readable medium 922 is shown as a single medium, the term "computer-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers, and/or a variety of storage media, such as the processor 902 registers, memories 904, 906, and the storage device 916) that store the one or more sets of instructions 924.
The term "computer-readable medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer and that causes the computer to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term "computer-readable medium" shall accordingly be taken to include, but not be limited to, tangible media, such as solid-state memories, optical media, and magnetic media. [0079] The Abstract is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
Example Embodiments
[0080] Example 1 includes a computer-implemented method for generating an image configured to program a parallel machine from source code. The method includes converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image. [0081] Example 2 includes a computer-readable medium including instructions, which when executed by the computer, cause the computer to perform operations.
The operations include converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image. [0082] Example 3 includes a computer including a memory having software stored thereon, and a processor communicatively coupled to the memory, wherein the software, when executed by the processor, causes the processor to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein the instances include a plurality of first instances and a group instance containing two or more first instances, wherein convert the automaton into a netlist includes group states together in a group instance based on a number of unused first instances; and convert the netlist into the image. [0083] Example 4 includes a system including a computer configured to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, wherein the instances include a plurality of first instances and a group instance containing two or more first instances, wherein convert the automaton into a netlist includes group states together in a group instance based on a number of unused first instances; and convert the netlist into the image. The system also includes a device configured to load the image onto a parallel machine.
[0084] In Example 5, the subject matter of any of Examples 1-4 can optionally include wherein the instances include a state machine element (SME) instance corresponding to a SME hardware element and a SME group instance corresponding to a hardware element comprising a group of SMEs, and wherein grouping includes grouping states into a SME group instance. [0085] In Example 6, the subject matter of any of Examples 1-5 can optionally include wherein the physical design includes a physical design of the hardware element comprising a group of SMEs. [0086] In Example 7, the subject matter of any of Examples 1-6 can optionally include wherein the physical design includes one of an input or output limitation on the SMEs in the hardware element comprising a group of SMEs. [0087] In Example 8, the subject matter of any of Examples 1-7 can optionally include wherein the physical design includes a limitation that the SMEs in the hardware element comprising a group of SMEs share an output. [0088] In Example 9, the subject matter of any of Examples 1-8 can optionally include wherein a SME group instance includes a group of two (GOT) instance containing two SME instances, and wherein the physical design includes that the SMEs in each GOT are coupled to a common output. [0089] In Example 10, the subject matter of any of Examples 1-9 can optionally include wherein converting the automaton into a netlist comprises: determining which of the states can be grouped together in a GOT instance; and pairing the states based on the determination. [0090] In Example 11, the subject matter of any of Examples 1-10 can optionally include wherein a first and a second state can be paired together in a GOT instance when neither the first nor the second state is a final state of the automaton, and one of the first and the second state does not drive any states other than the first or the second state.
[0091] In Example 12, the subject matter of any of Examples 1-11 can optionally include wherein a first and a second state can be paired together in a GOT instance when neither the first nor the second state is a final state of the automaton, and both the first and the second state drive the same external states. [0092] In Example 13, the subject matter of any of Examples 1-12 can optionally include wherein a first and a second state can be paired together in a GOT instance when one of the first and the second state is a final state of the automaton, and the other of the first and the second states does not drive any external states. [0093] In Example 14, the subject matter of any of Examples 1-13 can optionally include wherein a first and a second state can be paired together in a GOT instance when both the first and second states are final states of the automaton and both the first and the second state drive the same external states. [0094] In Example 15, the subject matter of any of Examples 1-14 can optionally include wherein determining which of the states can be grouped together in a GOT instance comprises determining which of the states can be grouped together in a GOT instance using graph theory. [0095] In Example 16, the subject matter of any of Examples 1-15 can optionally include wherein determining which of the states can be grouped together in a GOT instance using graph theory comprises determining which of the states can be grouped together in a GOT instance using graph theory to identify a maximum matching. [0096] In Example 17, the subject matter of any of Examples 1-16 can optionally include publishing the image.
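The four pairing conditions of Examples 11-14 can be collected into a single predicate. The sketch below is an illustration under assumed data structures (a set of final states and a per-state drive set), not the claimed implementation:

```python
def can_pair(s1, s2, finals, drives):
    """Check the four GOT pairing conditions of Examples 11-14.

    `finals` is the set of final states; `drives[s]` is the set of states
    driven by s.  The "external" states of s relative to the pair are
    drives[s] minus {s1, s2}.
    """
    ext1 = drives.get(s1, set()) - {s1, s2}
    ext2 = drives.get(s2, set()) - {s1, s2}
    f1, f2 = s1 in finals, s2 in finals
    if not f1 and not f2:
        # Example 11: one state drives nothing outside the pair, or
        # Example 12: both drive the same external states.
        return not ext1 or not ext2 or ext1 == ext2
    if f1 != f2:
        # Example 13: the non-final state drives no external states.
        return not (ext2 if f1 else ext1)
    # Example 14: both final, with identical external drive sets.
    return ext1 == ext2

# Hypothetical fragment: A drives only B; B is final (Example 13 applies).
finals = {"B"}
drives = {"A": {"B"}, "B": set()}
assert can_pair("A", "B", finals, drives)
```

Any edge in the matching graph of Examples 29-31 would then correspond to a pair of states for which `can_pair` holds.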
[0097] In Example 18, the subject matter of any of Examples 1-17 can optionally include wherein the instances comprise general purpose instances and special purpose instances, wherein the general purpose instances correspond to general purpose states of the automaton and the special purpose instances correspond to special purpose states of the automaton. [0098] In Example 19, the subject matter of any of Examples 1-18 can optionally include wherein the hardware elements corresponding to the general purpose instances include a state machine element (SME) and a group of two (GOT) and wherein the hardware elements corresponding to the special purpose instances include counters and logic elements. [0099] In Example 20, the subject matter of any of Examples 1-19 can optionally include wherein the automaton is a homogeneous automaton. [00100] In Example 21, the subject matter of any of Examples 1-20 can optionally include wherein converting the automaton into a netlist comprises mapping each of the states of the automaton to an instance corresponding to the hardware elements and determining the connectivity between the instances. [00101] In Example 22, the subject matter of any of Examples 1-21 can optionally include wherein the netlist further comprises a plurality of connections between the instances representing conductors between the hardware elements. [00102] In Example 23, the subject matter of any of Examples 1-22 can optionally include wherein converting the automaton into a netlist comprises converting the automaton into a netlist comprising instances corresponding to states of the automaton except for a starting state. [00103] In Example 24, the subject matter of any of Examples 1-23 can optionally include determining the location in the parallel machine of the hardware elements corresponding to the instances of the netlist.
[00104] In Example 25, the subject matter of any of Examples 1-24 can optionally include wherein grouping states together includes grouping states together based on a physical design of a hardware element comprising a group of general purpose elements. [00105] In Example 26, the subject matter of any of Examples 1-25 can optionally include determining which conductors of the parallel machine will be used to connect the hardware elements; and determining settings for programmable switches of the parallel machine, wherein the programmable switches are configured to selectively couple together the hardware elements. [00106] In Example 27, the subject matter of any of Examples 1-26 can optionally include wherein the group instance includes a group of two (GOT) instance and wherein group states includes pair states as a function of which states the paired states drive. [00107] In Example 28, the subject matter of any of Examples 1-27 can optionally include wherein group states in a group instance based on a number of unused first instances includes: determine whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and second state does not drive any states other than the first or second state; neither the first state nor the second state is a final state in the automaton, and both the first state and the second state drive the same external states; either the first state or the second state is a final state and the one of the first state and second state that is not a final state does not drive any states except the first state or second state; and both the first state and the second state are final states and both the first state and the second state drive the same external states.
[00108] In Example 29, the subject matter of any of Examples 1-28 can optionally include wherein convert the automaton into a netlist includes: model the states as a graph wherein vertices of the graph correspond to states and edges of the graph correspond to possible pairings of the states; determine matching vertices for the graph; and pair states corresponding to the matching vertices. [00109] In Example 30, the subject matter of any of Examples 1-29 can optionally include wherein convert the automaton into a netlist includes: determine a maximum matching for the graph. [00110] In Example 31, the subject matter of any of Examples 1-30 can optionally include wherein convert the automaton into a netlist includes: pair each set of states corresponding to matched vertices; and map each state that corresponds to an unmatched vertex to a GOT instance wherein one SME instance in the GOT instance is to be unused. [00111] In Example 32, the subject matter of any of Examples 1-31 can optionally include wherein group states together includes: pair states as a function of which states the paired states drive.
[00112] In Example 33, the subject matter of any of Examples 1-32 can optionally include wherein group states together in a group instance based on a number of unused first instances includes: determine whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and second state does not drive any states other than the first or second state; neither the first state nor the second state is a final state in the automaton, and both the first state and the second state drive the same external states; either the first state or the second state is a final state and the one of the first state and second state that is not a final state does not drive any states except the first state or second state; and both the first state and the second state are final states and both the first state and the second state drive the same external states. [00113] In Example 34, the subject matter of any of Examples 1-33 can optionally include wherein group states together in a group instance based on a number of unused first instances includes: model the states as a graph wherein vertices of the graph correspond to states and edges of the graph correspond to possible pairings of the states; determine matching vertices for the graph; and pair states corresponding to the matching vertices. [00114] In Example 35, the subject matter of any of Examples 1-34 can optionally include wherein group states together in a group instance based on a number of unused first instances includes: determine a maximum matching for the graph.
[00115] In Example 36, the subject matter of any one of Examples 1-35 can optionally include wherein group states together in a group instance based on a number of unused first instances includes: pair each set of states corresponding to matched vertices; and map each state that corresponds to an unmatched vertex to a GOT instance wherein one SME instance in the GOT instance is to be unused. [00116] In Example 37, the subject matter of any of Examples 1-36 can optionally include wherein the device is configured to implement each pair of states as a group of two (GOT) hardware element in the parallel machine. [00117] Example 38 includes a parallel machine programmed by an image produced by the process of any of Examples 1-37.
A power management method and mechanism for dynamically determining which of a plurality of blocks of an electrical device may be powered on or off. A device includes one or more power manageable groups (260A-260N). An associated power management unit (202) is configured to detect instructions which are scheduled for execution, identify particular power group(s) which may be required for execution of the instruction, and convey an indication which prevents the particular power group(s) from entering a powered off state, in response to detecting said instruction. If the power management unit does not detect an incoming or pending instruction which requires a particular power group for execution, the power management unit may convey an indication (570A-570N) which causes or permits the corresponding power group(s) to enter a powered off state. Power groups may automatically decay to a powered off or reduced power state. Instructions may be encoded to identify required power groups.
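The last sentence of the abstract — instructions encoded to identify required power groups — might look like the following purely illustrative sketch; the bit assignments, mnemonics, and group names are all invented and do not come from the disclosure:

```python
# Hypothetical power code: one bit per power-manageable group.
FPU, MMX, SSE = 0b001, 0b010, 0b100

# A mapping mechanism from instruction mnemonic to power code.
POWER_CODE = {"fadd": FPU, "pmaddwd": MMX | SSE, "mov": 0}

def required_groups(mnemonic):
    """Decode an instruction's power code into the set of power groups
    that must be prevented from entering a powered off state."""
    code = POWER_CODE.get(mnemonic, 0)
    names = {FPU: "fpu", MMX: "mmx", SSE: "sse"}
    return {name for bit, name in names.items() if code & bit}

assert required_groups("fadd") == {"fpu"}
```

A power management unit could consult such a table (or bits embedded directly in the opcode) for each incoming or pending instruction to decide which groups to keep powered.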
1. A device comprising: one or more power manageable groups (260A-260N); and a power management unit (202) configured to: detect an instruction which is scheduled for execution; identify at least one power group of the one or more power manageable groups which may be required to execute the instruction; and convey an indication (570A-570N) which prevents the at least one power group from entering a powered off state, in response to detecting said instruction.
2. The device of claim 1, wherein detecting an instruction which is scheduled for execution comprises detecting an incoming and/or pending instruction.
3. The device of claim 1, wherein the power management unit is further configured to convey an indication which causes the at least one power group to enter a powered off state.
4. The device of claim 3, wherein the indication is further configured to disable clocking of the at least one power group.
5. The device of claim 3, wherein the at least one power group includes a first portion of circuitry which may be powered off and a second portion of circuitry which may not be powered off.
6. The device of claim 5, wherein in response to receiving an indication that the at least one power group may be powered off, the first portion is operable to enter a powered off state and the second portion is operable to enter a reduced power state which is not a powered off state.
7. The device of claim 3, wherein in response to determining that no instruction corresponding to the at least one power group has been scheduled for execution during a predetermined period of time, the power management unit is configured to convey an indication which permits the at least one power group to enter a powered off state.
8. The device of claim 7, wherein the power management unit is configured to: maintain a plurality of counts (520A-520N), each of the plurality of counts corresponding to one of the power manageable groups; decrement each of the plurality of counts for each cycle of a received clock signal; and in response to detecting that an instruction corresponding to a power group is scheduled for execution, reset a first count of the plurality of counts to a maximum value, the first count corresponding to the power group.
9. The device of claim 3, wherein the at least one power group is configured to convey a status indication to the power management unit which, when asserted, prevents the at least one power group from entering a powered off state.
10. The device of claim 1, wherein the at least one power group is configured to automatically enter a powered off state in the absence of said indication.
11. A method for managing power in an electronic device, the method comprising: detecting an instruction which is scheduled for execution; identifying at least one power group of one or more power manageable groups (260A-260N) which may be required to execute the instruction; and conveying an indication which prevents the at least one power group from entering a powered off state, in response to detecting said instruction.
12. The method of claim 11, further comprising conveying an indication which causes the at least one power group to enter a powered off state.
13. The method of claim 12, wherein the at least one power group includes a first portion of circuitry which may be powered off and a second portion of circuitry which may not be powered off, and wherein in response to receiving an indication that the at least one power group may be powered off, the method further comprises causing the first portion to enter a powered off state and causing the second portion to enter a reduced power state which is not a powered off state.
14. The method of claim 12, wherein an indication which permits the at least one power group to enter a powered off state is conveyed in response to determining that no instruction corresponding to the at least one power group has been scheduled for execution during a predetermined period of time.
Dynamic Self-Decay Device Architecture
Technical Field
The present invention relates to the field of processors and computer systems, and more particularly to power management in processors and other devices.
Background
While the processing performance of computing and other devices has received much attention, the issue of power consumption has become increasingly important. Generally speaking, consumers have come to expect that their computing devices will be smaller and more mobile. Whether the device is a portable computer, a mobile phone, a personal digital assistant (PDA), or another device, portable power sources such as batteries have become commonplace. Given the limited nature of such power supplies, it is extremely important to use the available power in an efficient manner. Consequently, power management techniques in such devices have become more widespread. In addition, as gate sizes in processors and other computing devices become smaller, static power consumption may quickly come to rival dynamic power consumption. Therefore, static power consumption has become an important design consideration in processor and device architectures. In view of the importance of managing power in these devices, effective power management methods and mechanisms are desired.
Summary of the Invention
Methods and mechanisms for managing power in computing devices are contemplated. A method and mechanism are contemplated in which only the relevant logic blocks of a device are active. Unneeded blocks may be powered off, and clocking of the unneeded blocks may be stopped. The method and mechanism dynamically determine how and when the various logic units are permitted to operate or are shut down. A device is contemplated which includes one or more power manageable groups.
A power management unit incorporated in the device is configured to detect instructions which are scheduled for execution, identify particular power group(s) which may be required to execute an instruction, and convey an indication which prevents the particular power group(s) from entering a powered off state in response to detecting the instruction. If the power management unit does not detect an incoming or pending instruction which requires a particular power group for execution during a predetermined period of time, the power management unit may convey an indication which causes or permits the corresponding power group to enter a powered off state. In addition to disabling power for a given power group, clocking may be disabled as well.
A device is also contemplated in which a power manageable group may be divided into a portion which may be powered off and a portion which may not be powered off. In such an embodiment, when it is determined that a given power group may be powered off, only the portion of the power group which may be powered off is powered off; the remainder may be kept powered. In an alternative embodiment, the portion of a power group which may not be powered off may enter a reduced power state which allows that portion to maintain its state.
Also contemplated is a power management unit which is configured to maintain a count for each power manageable group. The count may be decremented for each cycle of a received clock. In the event a count reaches zero, a signal may be conveyed which indicates the corresponding power group is to be placed in a powered off state. If an instruction is detected which may require a particular power group for execution, the count for that power group may be reset to a non-zero value. In one embodiment, the count may naturally "decay" in the absence of a reset signal. In this manner, power groups may automatically decay to a powered off or reduced power state.
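The per-group decay counter described above might be sketched as follows. The maximum count value, the group names, and the zero threshold triggering power-off are assumptions made for the illustration:

```python
class PowerGroupCounters:
    """Toy model of the per-power-group decay counters."""
    MAX = 8  # assumed number of idle cycles before a group may power off

    def __init__(self, groups):
        self.count = {g: self.MAX for g in groups}

    def clock(self):
        """One cycle of the received clock: decrement every non-zero
        count and return the set of groups whose count has reached zero
        (i.e., groups that may be placed in a powered off state)."""
        for g in self.count:
            if self.count[g] > 0:
                self.count[g] -= 1
        return {g for g, c in self.count.items() if c == 0}

    def instruction_scheduled(self, group):
        """An instruction requiring `group` was detected: reset its
        count to the maximum, preventing the group from powering off."""
        self.count[group] = self.MAX

pmu = PowerGroupCounters(["fpu", "mmx"])
for _ in range(8):          # eight idle cycles with no relevant instructions
    off = pmu.clock()
# Both counts have decayed to zero, so both groups may now be powered off.
```

Resetting a count via `instruction_scheduled` models the "reset to a non-zero value" behavior; in its absence, every group naturally decays toward the powered off state.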
In various embodiments, a power group may be configured to convey a status indication which prevents the power group from entering a powered off state. In one embodiment, instructions incorporate a power code which indicates which power groups may be required for execution. Instruction opcodes may be encoded to identify one or more power groups which may be required for execution. Alternatively, instructions may be mapped to power codes by a mapping mechanism.
Brief Description of the Drawings
For the detailed description which follows, reference is made to the accompanying drawings, which are briefly described below.
FIG. 1 is a block diagram of one embodiment of a processor;
FIG. 2 is a block diagram of one embodiment of a portion of the processor shown in FIG. 1;
FIG. 3 depicts a portion of a power management mechanism;
FIG. 4 depicts a portion of a power management mechanism;
FIG. 5 depicts one embodiment of a dynamic power control mechanism and power groups;
FIG. 6 illustrates one embodiment of device instruction and power code encodings; and
FIG. 7 is a block diagram of an embodiment of a computer system including the processor shown in FIG. 1.
While the invention may be modified and take alternative forms, specific embodiments thereof are shown by way of example in the drawings and are described in detail herein. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
Detailed Description
Processor Overview
Turning now to FIG. 1, a block diagram of one embodiment of a processor 10 is shown. Other embodiments are possible and contemplated. As shown in FIG.
1, the processor 10 includes a prefetch/predecode unit 12, a branch prediction unit 14, an instruction cache 16, an instruction alignment unit 18, a plurality of decoding units 20A to 20C, a plurality of reservation stations 22A to 22C, a plurality of functional units 24A to 24C, a load/store unit 26, a data cache 28, a register file 30, a rearrangement buffer 32, a microcode read-only memory (MROM) unit 34, and a bus interface unit 37. Elements referred to herein by a particular reference number followed by a letter will be referred to collectively by the reference number alone. For example, the decoding units 20A to 20C will be collectively referred to as the decoding unit 20.

The prefetch/predecode unit 12 is coupled to receive instructions from the bus interface unit 37, and is further coupled to the instruction cache 16 and the branch prediction unit 14. Likewise, the branch prediction unit 14 is coupled to the instruction cache 16. Furthermore, the branch prediction unit 14 is coupled to the decoding unit 20 and the functional unit 24. The instruction cache 16 is further coupled to the MROM unit 34 and the instruction alignment unit 18. The instruction alignment unit 18 is in turn coupled to the decoding unit 20. Each decoding unit 20A to 20C is coupled to the load/store unit 26 and to a respective reservation station 22A to 22C. The reservation stations 22A to 22C are further coupled to respective functional units 24A to 24C. In addition, the decoding unit 20 and the reservation station 22 are both coupled to the register file 30 and the rearrangement buffer 32. The functional unit 24 is coupled to the load/store unit 26, the register file 30, and the rearrangement buffer 32. The data cache 28 is coupled to the load/store unit 26 and to the bus interface unit 37. The bus interface unit 37 is further coupled to an L2 interface to an L2 cache, and to a bus.
Finally, the MROM unit 34 is coupled to the decoding unit 20.

The instruction cache 16 is a cache memory provided to store instructions. Instructions are fetched from the instruction cache 16 and dispatched to the decoding unit 20. In one embodiment, the instruction cache 16 is configured to store up to 64 kilobytes of instructions in a two-way set associative structure having 64-byte lines (one byte comprising 8 binary bits). Alternatively, any other desired configuration and size may be used. For example, it is noted that the instruction cache 16 may be implemented as a fully associative, set associative, or direct-mapped configuration.

Instructions are stored into the instruction cache 16 by the prefetch/predecode unit 12. Instructions may be prefetched according to a prefetch scheme before they are requested from the instruction cache 16. A variety of prefetch schemes may be employed by the prefetch/predecode unit 12. As the prefetch/predecode unit 12 transfers instructions to the instruction cache 16, the prefetch/predecode unit 12 may generate predecode data corresponding to the instructions. For example, in one embodiment, the prefetch/predecode unit 12 generates three predecode bits for each byte of the instructions: a start bit, an end bit, and a functional bit. The predecode bits form tags indicative of the boundaries of each instruction. The predecode tags may also convey additional information, such as whether a given instruction can be decoded directly by the decoding unit 20 or whether the instruction is executed by invoking a microcode procedure controlled by the MROM unit 34. Furthermore, the prefetch/predecode unit 12 may be configured to detect branch instructions and to store branch prediction information corresponding to the branch instructions into the branch prediction unit 14.
Other embodiments may employ any suitable predecode scheme, or no predecode, as desired.

One encoding of the predecode tags for an embodiment of the processor 10 employing a variable byte length instruction set will be described next. A variable byte length instruction set is an instruction set in which different instructions may occupy differing numbers of bytes. An exemplary variable byte length instruction set employed by one embodiment of the processor 10 is the x86 instruction set.

In the exemplary encoding, if a given byte is the first byte of an instruction, the start bit for that byte is set. If the byte is the last byte of the instruction, the end bit for that byte is set. Instructions which may be directly decoded by the decoding unit 20 are referred to as "fast path" instructions. The remaining x86 instructions are referred to as MROM instructions, according to one embodiment. For fast path instructions, the functional bit is set for each prefix byte included in the instruction, and cleared for other bytes. Alternatively, for MROM instructions, the functional bit is cleared for each prefix byte and set for other bytes. The type of instruction may thus be determined by examining the functional bit corresponding to the end byte: if that functional bit is clear, the instruction is a fast path instruction; conversely, if that functional bit is set, the instruction is an MROM instruction. The opcode of an instruction which may be directly decoded by the decoding unit 20 is thereby located as the byte associated with the first clear functional bit in the instruction. For example, a fast path instruction including two prefix bytes, a Mod R/M byte, and an immediate byte would have start, end, and functional bits as follows:

Start bits      10000
End bits        00001
Functional bits 11000

MROM instructions are instructions which are determined to be too complex for decode by the decoding unit 20. MROM instructions are executed by invoking the MROM unit 34.
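The five-byte example above can be checked mechanically. The following is a software sketch for illustration only; the `classify` helper is an assumed name, and the bit vectors are simply the start, end, and functional bits from the example.

```python
def classify(end_byte_functional_bit):
    # Per the encoding above: the functional bit of the end byte is clear
    # for a fast path instruction and set for an MROM instruction.
    return "MROM" if end_byte_functional_bit else "fast path"

# The example from the text: two prefix bytes, an opcode byte,
# a Mod R/M byte, and an immediate byte (five bytes total).
start_bits      = [1, 0, 0, 0, 0]
end_bits        = [0, 0, 0, 0, 1]
functional_bits = [1, 1, 0, 0, 0]

end_index = end_bits.index(1)                            # end byte is byte 4
assert classify(functional_bits[end_index]) == "fast path"
# The opcode lies at the byte with the first clear functional bit.
assert functional_bits.index(0) == 2
```

As the assertions show, the clear functional bit on the end byte marks the example as a fast path instruction, and the first clear functional bit locates the opcode immediately after the two prefix bytes.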
More specifically, when an MROM instruction is encountered, the MROM unit 34 parses and issues the instruction into a subset of defined fast path instructions to effectuate the desired operation. The MROM unit 34 dispatches the subset of fast path instructions to the decoding unit 20.

The processor 10 employs branch prediction in order to speculatively fetch instructions subsequent to conditional branch instructions. The branch prediction unit 14 is included to perform branch prediction operations. In one embodiment, the branch prediction unit 14 employs a branch target buffer which caches up to two branch target addresses, and corresponding taken/not-taken predictions, per 16-byte portion of a cache line in the instruction cache 16. The branch target buffer may, for example, comprise 2048 entries or any other suitable number of entries. The prefetch/predecode unit 12 determines initial branch targets when a particular line is predecoded. Subsequent updates to the branch targets corresponding to a cache line may occur due to the execution of instructions within the cache line. The instruction cache 16 provides an indication of the instruction address being fetched, so that the branch prediction unit 14 may determine which branch target addresses to select in forming a branch prediction. The decoding unit 20 and the functional unit 24 provide update information to the branch prediction unit 14. The decoding unit 20 detects branch instructions which were not predicted by the branch prediction unit 14. The functional unit 24 executes the branch instructions and determines whether the predicted branch direction was incorrect. The branch direction may be "taken", in which case subsequent instructions are fetched from the target address of the branch instruction. Conversely, the branch direction may be "not taken", in which case subsequent instructions are fetched from memory locations consecutive to the branch instruction.
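The branch target buffer organization described above (up to two targets with taken/not-taken predictions per 16-byte portion) may be modeled in software as follows, purely for illustration. The class name, the dictionary layout, and the oldest-entry replacement policy are assumptions made for the sketch and are not specified by the embodiment.

```python
from collections import defaultdict

class BranchTargetBuffer:
    """Illustrative model: two target slots per 16-byte fetch portion."""

    def __init__(self):
        self.entries = defaultdict(list)  # key: 16-byte-aligned region

    def update(self, branch_addr, target, taken):
        key = branch_addr // 16
        slots = self.entries[key]
        for slot in slots:
            if slot["branch"] == branch_addr:     # existing entry: update it
                slot.update(target=target, taken=taken)
                return
        if len(slots) == 2:                       # at most two targets per portion
            slots.pop(0)                          # assumed policy: evict oldest
        slots.append({"branch": branch_addr, "target": target, "taken": taken})

    def predict(self, fetch_addr):
        """Return a predicted-taken target for this fetch portion, if any."""
        for slot in self.entries[fetch_addr // 16]:
            if slot["taken"]:
                return slot["target"]
        return None                               # predict not-taken / no branch

btb = BranchTargetBuffer()
btb.update(0x1004, 0x2000, taken=True)
assert btb.predict(0x1000) == 0x2000      # same 16-byte portion as 0x1004
btb.update(0x1004, 0x2000, taken=False)   # execution feedback: not taken
assert btb.predict(0x1008) is None
```

The update path corresponds to the feedback the decoding unit 20 and functional unit 24 provide after predecode-time targets are established.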
When a mispredicted branch instruction is detected, instructions subsequent to the mispredicted branch are discarded by the various units of the processor 10. In an alternative configuration, the branch prediction unit 14 may be coupled to the rearrangement buffer 32 instead of the decoding unit 20 and the functional unit 24, and may receive branch misprediction information from the rearrangement buffer 32. A variety of suitable branch prediction algorithms may be employed by the branch prediction unit 14.

Instructions fetched from the instruction cache 16 are conveyed to the instruction alignment unit 18. As instructions are fetched from the instruction cache 16, the corresponding predecode data is scanned to provide information regarding the fetched instructions to the instruction alignment unit 18 (and to the MROM unit 34). The instruction alignment unit 18 utilizes the scanned data to align an instruction to each decoding unit 20. In one embodiment, the instruction alignment unit 18 aligns instructions from three sets of eight instruction bytes to the decoding units 20. The decoding unit 20A receives an instruction which is prior (in program order) to instructions concurrently received by the decoding units 20B and 20C. Similarly, the decoding unit 20B receives an instruction which is prior in program order to the instruction concurrently received by the decoding unit 20C. In some embodiments, such as embodiments employing fixed-length instruction sets, the instruction alignment unit 18 may be eliminated.

The decoding unit 20 is configured to decode instructions received from the instruction alignment unit 18. Register operand information is detected and routed to the register file 30 and the rearrangement buffer 32. Additionally, if an instruction requires one or more memory operations to be performed, the decoding unit 20 dispatches the memory operations to the load/store unit 26.
Each instruction is decoded into a set of control values for the functional unit 24, and these control values are dispatched to the reservation station 22 along with operand address information and displacement or immediate data which may be included with the instruction. In one particular embodiment, each instruction is decoded into up to two operations which may be separately executed by the functional units 24A to 24C.

The processor 10 supports out-of-order execution, and thus employs the rearrangement buffer 32 to keep track of the original program order for register read and write operations, to implement register renaming, to allow for speculative instruction execution and branch misprediction recovery, and to facilitate precise exceptions. A temporary storage location within the rearrangement buffer 32 is reserved upon decode of an instruction that involves the update of a register, to thereby store speculative register states. If a branch prediction is incorrect, the results of speculatively executed instructions along the mispredicted path may be invalidated in the buffer before they are written to the register file 30. Similarly, if a particular instruction causes an exception, instructions subsequent to the particular instruction may be discarded. In this manner, exceptions are "precise" (i.e., instructions subsequent to the particular instruction causing the exception are not completed prior to the exception). It is noted that a particular instruction is speculatively executed if it is executed prior to instructions which precede the particular instruction in program order. Preceding instructions may be a branch instruction or an exception-causing instruction.
In such cases, the speculative results may be discarded by the rearrangement buffer 32.

The decoded instructions provided at the outputs of the decoding unit 20 are routed directly to the respective reservation stations 22. In one embodiment, each reservation station 22 is capable of holding instruction information (e.g., decoded instructions as well as operand values, operand tags, and/or immediate data) for up to six pending instructions awaiting issue to the corresponding functional unit. It is noted that, for the embodiment of FIG. 1, each reservation station 22 is associated with a dedicated functional unit 24. Accordingly, three dedicated "issue positions" are formed by the reservation stations 22 and the functional units 24. In other words, issue position 0 is formed by the reservation station 22A and the functional unit 24A; instructions aligned and dispatched to the reservation station 22A are executed by the functional unit 24A. Similarly, issue position 1 is formed by the reservation station 22B and the functional unit 24B, and issue position 2 is formed by the reservation station 22C and the functional unit 24C.

Upon decode of a particular instruction, if a required operand is a register location, register address information is routed to the rearrangement buffer 32 and the register file 30 simultaneously. The register file 30 comprises storage locations for each of the architected registers included in the instruction set executed by the processor 10. Additional storage locations may be included within the register file 30 for use by the MROM unit 34. The rearrangement buffer 32 contains temporary storage locations for results which change the contents of these registers, to thereby allow out-of-order execution. A temporary storage location of the rearrangement buffer 32 is reserved for each instruction which, upon decode, is determined to modify the contents of one of the real registers.
Therefore, at various points during execution of a particular program, the rearrangement buffer 32 may have one or more locations which contain the speculatively executed contents of a given register. If, following decode of a given instruction, it is determined that the rearrangement buffer 32 has a previous location or locations assigned to a register used as an operand in the given instruction, the rearrangement buffer 32 forwards to the corresponding reservation station either: 1) the value in the most recently assigned location, or 2) a tag for the most recently assigned location if the value has not yet been produced by the functional unit that will eventually execute the previous instruction. If the rearrangement buffer 32 has a location reserved for a given register, the operand value (or rearrangement buffer tag) is provided from the rearrangement buffer 32 rather than from the register file 30. If there is no location reserved for a required register in the rearrangement buffer 32, the value is taken directly from the register file 30. If the operand corresponds to a memory location, the operand value is provided to the reservation station through the load/store unit 26.

In one particular embodiment, the rearrangement buffer 32 is configured to store and manipulate concurrently decoded instructions as a unit. This configuration is referred to herein as "line-oriented". By manipulating several instructions together, the hardware employed within the rearrangement buffer 32 may be simplified. For example, whenever one or more instructions are dispatched by the decoding unit 20, a line-oriented rearrangement buffer included in the present embodiment allocates storage sufficient for instruction information pertaining to three instructions (one from each decoding unit 20). By contrast, a variable amount of storage is allocated in conventional rearrangement buffers, dependent upon the number of instructions actually dispatched.
A comparatively larger number of logic gates may be required to allocate the variable amount of storage. When each of the concurrently decoded instructions has executed, the instruction results are stored into the register file 30 simultaneously. The storage is then freed for allocation to another set of concurrently decoded instructions. Additionally, the amount of control logic circuitry employed per instruction is reduced because the control logic is amortized over several concurrently decoded instructions. A rearrangement buffer tag identifying a particular instruction may be divided into two fields: a line tag and an offset tag. The line tag identifies the set of concurrently decoded instructions including the particular instruction, and the offset tag identifies which instruction within the set corresponds to the particular instruction. It is noted that storing instruction results into the register file 30 and freeing the corresponding storage is referred to as "retiring" the instructions. It is further noted that any rearrangement buffer configuration may be employed in various embodiments of the processor 10.

As noted earlier, the reservation stations 22 store instructions until the instructions are executed by the corresponding functional unit 24. An instruction is selected for execution if: (i) the operands of the instruction have been provided, and (ii) the operands have not yet been provided for instructions which are within the same reservation station 22A to 22C and which are prior to that instruction in program order. It is noted that when an instruction is executed by one of the functional units 24, the result of that instruction is passed directly to any reservation stations 22 that are waiting for that result at the same time the result is passed to update the rearrangement buffer 32 (this technique is commonly referred to as "result forwarding").
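The line tag/offset tag division described above amounts to a simple arithmetic split of the rearrangement buffer tag. The following sketch is illustrative only; the three-instructions-per-line width comes from the embodiment above, while the specific arithmetic encoding is an assumption (a real implementation would typically use bit fields).

```python
INSTRUCTIONS_PER_LINE = 3  # one instruction from each decoding unit 20A-20C

def split_tag(rob_tag):
    """Divide a rearrangement buffer tag into (line tag, offset tag)."""
    return rob_tag // INSTRUCTIONS_PER_LINE, rob_tag % INSTRUCTIONS_PER_LINE

def make_tag(line_tag, offset_tag):
    """Recombine the two fields into a single tag."""
    return line_tag * INSTRUCTIONS_PER_LINE + offset_tag

assert split_tag(7) == (2, 1)   # third line of instructions, second slot
assert make_tag(2, 1) == 7
```

Retiring a line then frees all tags sharing one line tag at once, which is the hardware simplification the line-oriented configuration is intended to provide.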
An instruction may be selected for execution and passed to a functional unit 24A to 24C during the clock cycle in which the associated result is forwarded; in such a case, the reservation station 22 routes the forwarded result to the functional unit 24. In embodiments in which instructions may be decoded into multiple operations to be executed by the functional units 24, the operations may be scheduled separately from each other.

In one embodiment, each of the functional units 24 is configured to perform integer arithmetic operations of addition and subtraction, as well as shifts, rotates, logical operations, and branch operations. The operations are performed in response to the control values decoded for a particular instruction by the decoding unit 20. It is noted that a floating-point unit (not shown) may also be employed to accommodate floating-point operations. The floating-point unit may be operated as a coprocessor, receiving instructions from the MROM unit 34 or the rearrangement buffer 32 and subsequently communicating with the rearrangement buffer 32 to complete the instructions. Additionally, the functional units 24 may be configured to perform address generation for load and store memory operations performed by the load/store unit 26. In one particular embodiment, each functional unit 24 may comprise an address generation unit for generating addresses and an execute unit for performing the remaining functions. The two units may operate independently upon different instructions or operations during a clock cycle.

Each of the functional units 24 also provides information regarding the execution of conditional branch instructions to the branch prediction unit 14. If a branch prediction was incorrect, the branch prediction unit 14 flushes instructions subsequent to the mispredicted branch that have entered the instruction processing pipeline, and causes a fetch of the required instructions from the instruction cache 16 or main memory.
It is noted that in such situations, results of instructions in the original program sequence which occur after the mispredicted branch instruction are discarded, including those which were speculatively executed and temporarily stored in the load/store unit 26 and the rearrangement buffer 32. It is further noted that branch execution results may be provided by the functional units 24 to the rearrangement buffer 32, which may indicate branch mispredictions to the functional units 24.

Results produced by the functional units 24 are sent to the rearrangement buffer 32 if a register value is being updated, and to the load/store unit 26 if the contents of a memory location are changed. If the result is to be stored in a register, the rearrangement buffer 32 stores the result in the location reserved for the value of the register when the instruction was decoded. A plurality of result buses 38 are included for forwarding results from the functional units 24 and the load/store unit 26. The result buses 38 convey the results generated, as well as the rearrangement buffer tags identifying the instructions being executed.

The load/store unit 26 provides an interface between the functional units 24 and the data cache 28. In one embodiment, the load/store unit 26 is configured with a first load/store buffer having storage locations for data and address information corresponding to pending loads or stores which have not yet accessed the data cache 28, and a second load/store buffer having storage locations for data and address information corresponding to loads and stores which have accessed the data cache 28. For example, the first buffer may comprise 12 locations and the second buffer may comprise 32 locations. The decoding unit 20 arbitrates for access to the load/store unit 26. When the first buffer is full, the decoding unit must wait until the load/store unit 26 has room for the pending load or store request information.
The load/store unit 26 also performs dependency checking of load memory operations against pending store memory operations to ensure that data coherency is maintained. Memory operations are transfers of data between the processor 10 and the main memory subsystem (although a transfer may be accomplished within the data cache 28). Memory operations may be the result of an instruction which utilizes an operand stored in memory, or may be the result of a load/store instruction which causes the data transfer but no other operation.

The data cache 28 is a cache memory provided to temporarily store data being transferred between the load/store unit 26 and the main memory subsystem. In one embodiment, the data cache 28 has a capacity of storing up to 64 kilobytes of data in a two-way set associative structure. It is understood that the data cache 28 may be implemented in a variety of specific memory configurations, including set associative configurations, fully associative configurations, direct-mapped configurations, and any other configuration of any suitable size.

The bus interface unit 37 is configured to communicate between the processor 10 and other components in a computer system via a bus. For example, the bus may be compatible with the EV-6 bus developed by Digital Equipment Corporation. Alternatively, any suitable interconnect structure may be used, including packet-based, unidirectional or bidirectional links, etc. An optional level-two (L2) cache interface may also be employed for interfacing to a level-two cache.

It is noted that while the embodiment of FIG. 1 is a superscalar implementation, other embodiments may employ scalar implementations. Furthermore, the number of functional units may vary from embodiment to embodiment. Any execution circuitry for executing fast path and microcode (e.g., MROM) instructions may be used.
Other embodiments may employ a centralized reservation station in place of the individual reservation stations shown in FIG. 1. Still further, other embodiments may employ a central scheduler in place of the reservation stations and rearrangement buffer shown in FIG. 1.

Turning now to FIG. 2, one embodiment of a power management mechanism which may be used in a device such as the processor described above is illustrated. For purposes of discussion, a microinstruction-based method and mechanism is described below; however, the methods and mechanisms described herein may be employed in non-microinstruction-based systems as well.

In the example shown, a microcode unit 200 and a microinstruction control unit 202 are depicted. It is noted that the microcode unit 200 and the microinstruction control unit 202 may generally correspond to the MROM unit 34 described above. However, in other embodiments, the units (200, 202) may comprise circuitry separate from the MROM unit 34. The microcode unit 200 includes a memory 210 comprising a plurality of entries, each of which may be configured to store a microinstruction. The memory 210 may comprise, for example, a read-only memory (ROM) or any other suitable storage device. The control unit 202 shown includes a rearrangement buffer 220 and a dynamic power control unit 240. In addition to the microcode unit 200 and the control unit 202, power groups 260A to 260N are also depicted. In one embodiment, a power group may generally correspond to a collection of power-manageable blocks of logic units or circuitry, where power-manageable means that the power (Vdd) supply and/or the clock may be dynamically turned off or on.
For example, the power group 260A may correspond to an address generation unit and/or a load/store unit, the power group 260B may correspond to an arithmetic logic unit, the power group 260C may correspond to a shifter, the power group 260N may correspond to part of a floating-point unit, and so on.

In one embodiment, microinstructions may be coded to indicate which one or more of the power groups 260 are needed for their execution. If a microinstruction which requires a particular power group is detected within the pipeline, or is otherwise scheduled to use the particular power group, the control unit 202 may cause the corresponding power group to remain active or to become active. If no such microinstruction is detected, a given power group may be allowed to enter a low-power state. As used herein, a low-power state may generally include an off or unpowered state, unless otherwise indicated.

In one embodiment, a particular power group 260 may automatically enter a low-power state in the absence of some indication which prevents the particular power group 260 from entering the low-power state. In such an embodiment, the power group may automatically "decay" to a lower power state over time. If it is determined or believed that a given power group may be needed, an indication may be conveyed to the power group which in effect "refreshes" the power group to a relatively undecayed state. For example, if the power group 260 is not in use and no refresh signal is received within a given period of time, the power group 260 may be configured to automatically enter a low-power state. If a refresh indication is detected, the predetermined period of time may be "restarted".

In the illustrated embodiment, the memory 210 generally comprises a table with multiple rows, a given row storing data corresponding to one (or possibly more) microinstructions. As used herein, the terms "microinstruction" and "instruction" are used interchangeably.
Generally speaking, the data corresponding to a given instruction may include one or more fields identifying a particular operation, register, memory location, or otherwise. In addition, the data for a given instruction may further include a power code field 212 which may be used to indicate one or more power groups which may be required for execution of the corresponding instruction. In alternative embodiments, the data within the power code field 212 may be generated during instruction decode or elsewhere within the processing mechanism. In embodiments in which the microcode unit 200 and/or the control unit 202 are part of the MROM unit 34, the microcode unit 200 may be configured to convey one or more of its stored instructions to the control unit 202 in response to an indication from the instruction cache 16. As described above, certain instructions within the processor (e.g., MROM instructions) may be deemed too complex for decode by the decoding unit 20. In such a case, the MROM instruction is executed by invoking the MROM unit 34, which then issues two or more other instructions to effectuate the desired operation.

In one embodiment, the rearrangement buffer 220 comprises a plurality of entries, each configured to store data corresponding to an instruction received from the microcode unit 200. In the example shown, instructions and other data may be conveyed to the rearrangement buffer 220 via a bus 230, while the corresponding power codes are conveyed to the rearrangement buffer 220 and the dynamic power control unit 240 via a bus 232. The dynamic power control unit 240 includes a unit 280 configured to store an indication for each of the one or more power groups 260. Each indication within the unit 280 is configured to indicate whether the corresponding power group may enter a low-power state. For example, the unit 280 may comprise entries 0 to N, each entry corresponding to a power group 260.
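One possible form for the power code field 212 described above is a bit vector with one bit per power group 260A to 260N. This is purely an illustrative assumption; the actual encoding, width, and group assignments are implementation choices, and the group names below merely echo the examples given earlier.

```python
# Assumed bit assignments, one per power group (cf. 260A-260N examples).
POWER_GROUPS = ["AGU/LS", "ALU", "shifter", "FPU-part"]

def groups_needed(power_code):
    """Return the power groups whose bit is set in the power code field."""
    return [name for i, name in enumerate(POWER_GROUPS)
            if power_code & (1 << i)]

assert groups_needed(0b0101) == ["AGU/LS", "shifter"]
assert groups_needed(0) == []          # instruction needs no managed group
```

Under this assumption, decoding the power code reduces to masking bits, and the dynamic power control unit 240 can update one entry of the unit 280 per set bit.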
In addition, each power group 260 may be coupled to receive a corresponding indication DPC[0] to DPC[N] which indicates a power state for that power group. Also included in the dynamic power control unit 240 is circuitry (290, 292, and 270) which may be used to update the contents of the unit 280. In one embodiment, the circuitry 290 may receive indications via the bus 232 corresponding to "incoming" instructions, while a bus 250 may be used to convey indications to the circuitry 292 corresponding to "pending" instructions. In addition, the power groups 260 may be further coupled via a bus 252 to convey status indications to the dynamic power control unit 240. As used herein, an instruction may be deemed scheduled for execution when any indication of an incoming and/or pending instruction is detected.

Generally speaking, the indications received via the buses 232 or 250 may identify one or more power groups 260 which may be required for execution of the corresponding instructions. If an indication corresponding to a particular power group 260A to 260N is received, the unit 280 may be updated to indicate that the corresponding power group 260 is to remain powered on and/or clocked (or, alternatively, is not permitted to enter a low-power state). For example, in one embodiment, the unit 280 may comprise a global counter in which each entry of the unit 280 corresponds to at least one of the power groups 260 and includes a count which is decremented each clock cycle. In response to detecting an incoming 232 and/or pending 250 indication corresponding to a power group, the control unit 270 may reset the count for the corresponding power group to a non-zero value. In addition, the control unit 270 may prevent the count from being decremented while an instruction corresponding to a given power group is pending (e.g., within the rearrangement buffer 220). When the count reaches zero, the power group 260 may enter (or be allowed to enter) a low-power state.
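The reset/hold/decrement behavior of the control unit 270 just described can be sketched as a simple next-state function, shown below for illustration only. The two-bit count width is an assumption here (one of the possibilities discussed for the unit 280), as are the function and parameter names.

```python
COUNT_MAX = 0b11  # assumed two-bit count: both bits set on detection

def next_count(count, detected, pending):
    """One clock cycle of an assumed per-power-group count update."""
    if detected:
        return COUNT_MAX            # incoming/pending power code seen: reset
    if pending or count == 0:
        return count                # hold while an instruction is pending
    return count - 1                # otherwise decrement toward power-off

count = next_count(0, detected=True, pending=False)
assert count == 3
for _ in range(3):                  # three idle cycles
    count = next_count(count, detected=False, pending=False)
assert count == 0                   # group may enter a low-power state
assert next_count(2, detected=False, pending=True) == 2   # held while pending
```

The same structure works equally well incrementing toward a maximum instead of decrementing toward zero, as the text notes.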
A count within the unit 280 reaching zero generally indicates that no power code corresponding to an incoming or pending instruction has been detected for a predetermined period of time. For example, if each count within the unit 280 comprises two bits, and detection of an incoming or pending instruction power code causes both bits to be set to a binary "1" (i.e., a count of 3), then decrementing the count three times brings the count to zero, which may be used to cause (or otherwise allow) the corresponding power group to enter a low-power state. Of course, the method and mechanism could increment rather than decrement, with the low-power state being caused or allowed in response to detecting that the corresponding count equals or exceeds a predetermined maximum value; in such an example, the control unit 270 may reset the count to zero.

FIG. 3 illustrates one embodiment of a power group 320 and a corresponding power state indication 310. The power group 320 may generally correspond to one of the power groups 260 depicted in FIG. 2, and the power state indication 310 may generally correspond to the indications conveyed from the dynamic power control unit 240 shown in FIG. 2. In one embodiment, a power group may be logically partitioned into two or more groups, each with different power management requirements. In the example shown, the power group 320 is partitioned into a first group 330 which is power-manageable and a second group 332 which is not power-manageable. In one embodiment, the first group 330 may include registers, combinational logic, and/or sequential logic which may be power-managed, while the other group 332 includes registers and/or logic which may not be power-managed. Logic which may not be powered off generally includes logic which is required to retain some type of state.

As shown in the example of FIG.
3, a voltage supply 302 and a ground supply 306 are provided. In addition, a clock source (gclk) 304 is provided. For example, the clock source gclk 304 may comprise a synchronous global clock. In one embodiment, the voltage supply is coupled to the first group 330 through a first gate 312 and is also coupled to the second group 332. Similarly, gclk 304 is coupled to the first group through a gate 314 and to the second group 332. The ground supply 306 is shown coupled to both groups (330, 332). A power state indication (DPC[i]) 310 is coupled to each of the gates 312 and 314. The gates 312 and 314 may comprise tri-state gates, or other circuits, which may be used to control whether the voltage supply 302 and/or gclk 304 is provided to the first group 330. For example, the power state indication 310 may include an enable signal that is used to enable or disable the output of each gate. Enabling the output of the gate 312 will power the first group 330, and enabling the output of the gate 314 will clock the first group 330. In one embodiment, the power state signal 310 may simply comprise the count of the corresponding power group as described above in FIG. 2. If any bit of the count is non-zero, the corresponding gate (312, 314) is enabled; otherwise, the output of the corresponding gate (312, 314) may be disabled. Of course, many variations on how the signal 310 controls the circuits 312 and 314 are possible and contemplated.

In addition to the above, the power group 320 is also shown conveying a status indication 340. The status indication is generally conveyed to the dynamic power control unit 240 of FIG. 2 and may be used to indicate when the power group 320 may be powered off. For example, while the power group 320 is performing an operation, the status signal 340 may indicate that it requires power and/or clocking. As mentioned above, some portions of a power group may be power-managed while others are not.
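The gating just described can be sketched as a small truth function. This is an illustrative sketch only (the helper name and return layout are invented): the power state indication enables the gates 312 and 314 whenever any bit of the corresponding count is non-zero, supplying Vdd and gclk to the power-managed group 330, while the state-holding group 332 is always supplied.

```python
def gate_outputs(count: int) -> dict:
    enable = count != 0          # "if any bit of the count is non-zero"
    return {
        "group_330_powered": enable,  # gate 312 passes Vdd when enabled
        "group_330_clocked": enable,  # gate 314 passes gclk when enabled
        "group_332_powered": True,    # state-holding group is never gated
        "group_332_clocked": True,
    }

assert gate_outputs(0b10)["group_330_powered"] is True
assert gate_outputs(0)["group_330_clocked"] is False
assert gate_outputs(0)["group_332_powered"] is True
```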
In the example of FIG. 3, the second group 332 within the power group 320 is always supplied power and/or clocked. This may be required because the group 332 must retain state. In alternative embodiments, a portion of a power group that does not have its power and/or clock removed may nonetheless reduce its power state. In this manner, the circuit can be configured to reduce its power usage and/or leakage while still retaining its state.

FIG. 4 illustrates an embodiment of a circuit similar to that of FIG. 3. Like items in FIG. 4 are numbered the same as like items in FIG. 3. In the embodiment of FIG. 4, an additional circuit 400 is coupled between the supply voltage 302, gclk 304, and the group 332 of the power group 320. In general, the circuit 400 can be configured to supply two or more power levels to the group 332, while maintaining at least the minimum power level needed to retain the state of the circuits within the group 332. In one embodiment, the circuit 400 may be coupled to receive more than one power supply. For example, the circuit 400 may be coupled to receive and convey a power supply Vdd 302 representing a high supply voltage (e.g., 1 volt (V)) and a power supply Vdd2 410 representing a low supply voltage (e.g., 250 millivolts (mV)). In this manner, the circuit 400 may select a power supply from two or more power supplies and convey the selected power supply to the group 332. Alternatively, more than one power level can be conveyed from a single power supply using known techniques. In such an embodiment, the circuit 400 may be coupled to receive only a single power supply (e.g., Vdd 302) and convey two or more power levels to the group 332.

Referring now to FIG. 5, an embodiment of a power management mechanism is illustrated. In the embodiment shown, a plurality of power groups 560A to 560N are coupled to receive a power supply (Vdd) and a clock signal (gclk).
The depicted power supply Vdd and clock signal gclk may (or may not) represent a common power supply Vdd and/or clock signal gclk. Circuitry that may generally correspond to a dynamic power control unit (such as the unit 240 of FIG. 2) is shown in block 500. However, in other embodiments, the logic and circuits depicted in the various portions of FIG. 5 may be placed in a variety of locations within a device or system. In the example shown, a storage device (for example, a register) 522 that stores a power state indication for each power group 560 is depicted. For example, entries 520A to 520N may correspond to the power groups 560A to 560N, respectively. The storage device 522 may (though need not) generally correspond to the storage device 280 of FIG. 2. In an embodiment in which the storage device 522 includes a plurality of counters, an update signal 521 may be generated that is configured to decrement the counts as described above. Also shown for each of the power groups 560A to 560N are gating logic 532A to 532N and indications 530A to 530N.

In the embodiment shown, the unit 500 is coupled to receive an incoming indication 502A and a waiting indication 504A corresponding to at least the power group 560A. The incoming indication 502A may be conveyed by a device such as the microcode unit 200 of FIG. 2, and the waiting indication 504A may be conveyed from a device such as the reorder buffer 220 of FIG. 2. Alternatively, in non-microcode-based devices, such indications (502A, 504A) may be received from any suitable instruction scheduling mechanism or otherwise. The unit 500 may generally also be configured to receive incoming and/or waiting indications for the power groups 560B to 560N. Each gating circuit 532A to 532N is configured to control whether the corresponding power group 560A to 560N is supplied power and/or clocked, for example via tri-state gates such as the gates (312 and 314) described in FIG. 4.
In such an embodiment, each circuit 532 may then receive an enable/disable signal to control the gating function. For example, the signal 524A may represent an enable/disable signal for the power group 560A. In one embodiment, the signal 524A may correspond to the signal 310 of FIG. 4.

In general, when an instruction that requires the power group 560A for execution is detected (e.g., early in the pipeline, such as during decode), the incoming indication 502A can be asserted. The waiting indication 504A can be asserted while an instruction requiring the power group 560A is waiting (for example, in the reorder buffer of FIG. 2). In response to the signals 502A and 504A, the circuit 506 conveys an indication 510A of whether the power group 560A may enter a reduced power state. For example, in one embodiment, the circuit 506 may perform a logical OR of the values of the received signals 502A and 504A. If either of the signals 502A or 504A is asserted, the signal 510A is also asserted, which may indicate that the power group 560A may not enter a reduced power state. Similar indications 510B to 510N can be generated for each power group 560B to 560N, respectively.

In one embodiment, the signal 510A may directly indicate whether the corresponding power group 560A may enter a reduced power state. In such an embodiment, if the signal 510A is not asserted (for example, no incoming or waiting indication corresponding to the power group 560A is detected), the signal 524A may be de-asserted and the output of Vdd and/or gclk from the gating logic 532A disabled. In effect, the entry 520A may then store the enabled/disabled state for the corresponding power group 560A. In alternative embodiments, the power state may decay over time as described above. In one embodiment, each entry 520A to 520N can store a count for a corresponding power group 560A to 560N.
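The per-group enable logic around circuit 506 can be sketched as follows. This is a hedged illustration (the list layout and three-group example are invented; the signal names mirror the reference numerals above): the incoming (502) and waiting (504) indications for each power group are ORed to produce 510, which drives the enable/disable signal 524 for the corresponding gating logic 532.

```python
incoming = [True, False, False]   # 502A..502C, one flag per power group
waiting  = [False, False, True]   # 504A..504C

signal_510 = [i or w for i, w in zip(incoming, waiting)]
signal_524 = signal_510           # asserted -> group must stay powered

# Groups A and C must remain powered; group B may enter a reduced power state.
assert signal_524 == [True, False, True]
```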
In response to the update signal 521, or any other suitable signal, each count 520 may be decremented each cycle of a clock signal (e.g., gclk, or any other suitable clock signal). When the corresponding count reaches a predetermined value, such as zero, the corresponding power group may enter a low power state. For example, if the count in entry 520A is equal to zero, the enable/disable signal 524A may indicate that the corresponding power group 560A may enter a low power state.

In one embodiment, a separate status indication may be maintained for each of the power groups 560. For example, as shown in FIG. 5, both a count 520 and a status indication 530 can be maintained for each power group (as shown in block 523). In such an embodiment, the corresponding status 530 may be made to depend on the corresponding count 520 value. Therefore, if the count 520A is non-zero, the status 530A may indicate that the power group 560A is powered. On the other hand, if the count 520A reaches zero, the status 530A can be set to indicate that the power group may enter a low power state. In response to receiving an incoming or waiting indication, the count of the corresponding power group can be reset.

Also depicted in FIG. 5 are status indications 570A to 570N conveyed by each power group 560A to 560N. A status indication 570 may indicate that the corresponding power group 560 cannot enter a reduced power state and/or have its clock generation disabled. For example, if the status indication 570A indicates that the power group 560A cannot enter a reduced power state, the signal 524A can be prevented from indicating a low power state regardless of the values of the entries 520A and/or 530A. Alternatively, the indication 570A may be provided directly to the gating logic 532A or otherwise.
Numerous such alternative embodiments are possible and contemplated.

As mentioned above, instructions may include indications of one or more power groups that may be needed for their execution. Such an indication can be encoded directly as part of the instruction encoding, or a power code indication can be determined by a mapping from an instruction operation code or otherwise. A variety of techniques for associating power code indications with instructions or operations are possible and contemplated. FIG. 6 illustrates an embodiment of a power code and its association with various instructions. A first table 600 includes a power group column 602 and a corresponding power code ID column 604. Each row of the table 600 identifies one or more power-manageable groups of logic/circuitry and a corresponding power code ID. In the example shown, the power code ID includes eight bits; however, those skilled in the art will appreciate that other encodings are possible, and all such alternatives are contemplated. The first entry indicates that the address generation unit (AGU) and the load/store unit have a power code ID of "00000001". The second entry indicates that the arithmetic logic unit (ALU) has a power code ID of "00000010". Similar entries are included for the shifter, integer multiplier, floating-point (FP) scheduler, FP adder, FP multiplier, and FP divide/square-root unit. It should be noted that the specific power groups within a given device will depend on the type and nature of the device, design decisions, and so on. Therefore, different power groups may exist in a given general-purpose microprocessor versus an application-specific device.

The second table 601 of FIG. 6 depicts sample instructions 612 having corresponding sample encodings including a power code 614 and other bits 616. The first entry depicts an addop instruction, which may be configured to, for example, add two operands. In the example shown, addop has a register operand and a memory operand. The power code of this instruction is "00000011".
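The encoding of table 600, and the OR-combining of waiting-instruction power codes discussed below, can be sketched as follows. The bit positions here are assumed from the order in which the entries are listed above; the exact assignments in the patent's FIG. 6 may differ.

```python
POWER_GROUPS = {          # bit -> power group (assumed bit positions)
    0: "AGU + load/store",
    1: "ALU",
    2: "shifter",
    3: "integer multiplier",
    4: "FP scheduler",
    5: "FP adder",
    6: "FP multiplier",
    7: "FP divide/sqrt",
}

def needed_groups(power_codes):
    """OR together the power codes of all waiting instructions and decode
    the result into the set of power groups that must stay powered."""
    combined = 0
    for code in power_codes:
        combined |= code
    return combined, [name for bit, name in POWER_GROUPS.items()
                      if combined & (1 << bit)]

# The three sample instructions of table 601: addop, moveop, fpaddop.
codes = [0b00000011, 0b00000010, 0b00110001]
combined, groups = needed_groups(codes)
assert combined == 0b00110011   # the combined power code of the waiting set
```

Any group whose bit is set in the combined code may be needed and would not be powered down or clock-gated.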
According to the association illustrated by the table 600, the power code "00000011" indicates that the AGU/load-store power group and the ALU power group may both be required. The second entry depicts a moveop instruction, which may be configured to move data from one location to another. In the example shown, the moveop instruction has two register operands. The power code of the moveop instruction is "00000010", which corresponds to the ALU power group. Finally, an fpaddop instruction is depicted in the table 601. Such an instruction may correspond to an add operation in a floating-point arithmetic unit. In the example shown, the fpaddop instruction includes a floating-point register operand (fpregister) and a memory operand. The power code of the fpaddop instruction is "00110001", which corresponds to the AGU/load-store, FP scheduler, and FP adder power groups. The other bits 616 corresponding to each illustrated instruction may represent any other suitable encoded bits of the instruction.

As an example of identifying the power groups that may be needed by waiting instructions, assume for purposes of discussion that the table 601 represents a portion of the data stored in the reorder buffer 220 of FIG. 2. The instructions in the table 601 then generally represent waiting instructions. Considering the three instructions depicted (generally there may be more than three), a logical OR operation can be performed on the power codes of the waiting instructions, with the result illustrated in block 603:

00000011 OR 00000010 OR 00110001 = 00110011

Therefore, the power code "00110011" represents the waiting instructions as a whole. This power code corresponds to the AGU/load-store, ALU, FP scheduler, and FP adder power groups. Each of these power groups may be needed and may not be powered down or clock-gated.

Referring now to FIG.
7, which is a block diagram illustrating an embodiment of a computer system 700 including the processor 10 coupled to various system elements through a bus bridge 702. Other embodiments are possible and contemplated. In the depicted system, a main memory 704 is coupled to the bus bridge 702 through a memory bus 706, and a graphics controller 708 is coupled to the bus bridge 702 through an AGP bus 710. Finally, a plurality of PCI devices 712A to 712B are coupled to the bus bridge 702 through a PCI bus 714. A second bus bridge 716 may further be provided to accommodate an electrical interface to one or more EISA or ISA devices 718 through an EISA/ISA bus 720. The processor 10 is coupled to the bus bridge 702 through a CPU bus 724 and to an optional L2 cache 728. Together, the CPU bus 724 and the interface to the L2 cache 728 may comprise an external interface to which the external interface unit 18 may be coupled.

The bus bridge 702 provides an interface between the processor 10, the main memory 704, the graphics controller 708, and the devices attached to the PCI bus 714. When an operation is received from one of the devices connected to the bus bridge 702, the bus bridge 702 identifies the target of the operation (e.g., a particular device or, in the case of the PCI bus 714, that the target is on the PCI bus 714). The bus bridge 702 routes the operation to the target device. The bus bridge 702 generally translates an operation from the communication protocol used by the source device or bus to the communication protocol used by the target device or bus.

In addition to providing an interface from the PCI bus 714 to the ISA/EISA bus, the second bus bridge 716 may further incorporate additional functionality, as desired.
An input/output controller (not shown) may also be included within the computer system 700, external to or integrated with the second bus bridge 716, to provide operational support for a keyboard and mouse 722 and for various serial and parallel ports, as desired. In other embodiments, an external cache unit (not shown) may further be coupled to the CPU bus 724 between the processor 10 and the bus bridge 702. Alternatively, the external cache may be coupled to the bus bridge 702 and cache control logic for the external cache may be integrated into the bus bridge 702. The L2 cache 728 is further shown in a backside configuration with respect to the processor 10. It should be noted that the L2 cache 728 may be separate from the processor 10, integrated into a cartridge (e.g., slot 1 or slot A) with the processor 10, or even integrated onto a semiconductor substrate with the processor 10. The L2 cache 728 may be protected by error correction coding (ECC) data, and ECC errors in the L2 cache 728 may be corrected using microcode routines (as described above) or in hardware, as desired.

The main memory 704 is a memory in which application programs are stored and from which the processor 10 primarily executes. A suitable main memory 704 comprises dynamic random access memory (DRAM). For example, a plurality of banks of synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM) may be suitable.

The PCI devices 712A to 712B are illustrative of a variety of peripheral devices. The peripheral devices may include devices for communicating with another computer system to which the devices may be coupled (e.g., network interface cards, modems, etc.). Additionally, peripheral devices may include other devices, such as, for example, video accelerators, audio cards, hard or floppy disk drives or drive controllers, Small Computer Systems Interface (SCSI) adapters, and telephony cards.
Similarly, the ISA device 718 is illustrative of various types of peripheral devices, such as modems, sound cards, and various data acquisition cards, such as General Purpose Interface Bus (GPIB) or field bus interface cards.

The graphics controller 708 is provided to control the rendering of text and images on a display 726. The graphics controller 708 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures that can be effectively shifted into and out of the main memory 704. The graphics controller 708 may therefore be a master of the AGP bus 710 in that it can request and receive access to a target interface within the bus bridge 702 to thereby obtain access to the main memory 704. A dedicated graphics bus accommodates rapid retrieval of data from the main memory 704. For certain operations, the graphics controller 708 may further be configured to generate PCI protocol transactions on the AGP bus 710. The AGP interface of the bus bridge 702 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. The display 726 is any electronic display upon which an image or text can be presented. A suitable display 726 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc.

It should be noted that, although the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It should further be noted that the computer system 700 may be a multiprocessing computer system including an additional processor (e.g., the processor 10a shown as an optional component of the computer system 700). The processor 10a may be similar to the processor 10. More particularly, the processor 10a may be an identical copy of the processor 10.
The processor 10a may be connected to the bus bridge 702 through an independent bus (as shown in FIG. 7) or may share the CPU bus 724 with the processor 10. Furthermore, the processor 10a may be coupled to an optional L2 cache 728a similar to the L2 cache 728.

Various embodiments may further include receiving, sending, or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-readable medium. Generally speaking, a computer-readable medium may include storage media or memory media such as magnetic or optical media (e.g., disk, DVD, CD-ROM) and volatile or non-volatile media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.) and ROM. In addition, it should be noted that the various embodiments described above may be used separately or in combination with one or more of the other embodiments, as desired. Further, an embodiment combining the operation of all of the above embodiments is contemplated.

Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. Although the above description generally describes the methods and mechanisms within the context of a general-purpose microprocessor, the methods and mechanisms may be applied in any device where power management may be desired, such as routers, switches, graphics devices, bridge chips, and portable devices. It is intended that the following claims be interpreted to embrace all such variations and modifications.
An apparatus comprises a first PFET (M1) (210) including a first intrinsic body diode (215); an electrostatic discharge (ESD) subcircuit (ESD1) (222) coupled to a source of the first PFET (210); a reverse bias voltage element, such as a zener diode (240), an anode of which is coupled to a gate (VG) of the first PFET (210); a second PFET (M2) (250) having a source coupled to a cathode of the zener diode (240); a capacitor (C1) (270) coupled to a gate of the second PFET (250); and a first resistor (R1) (260) coupled to the gate of the second PFET (250). The apparatus can protect against both positive and negative electrostatic discharge transient events.
CLAIMS What is claimed is: 1. An apparatus, comprising: a first p-type field effect transistor (PFET) including a first intrinsic body diode; an electrostatic discharge (ESD) subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET having a source coupled to a cathode of the reverse bias voltage element; a capacitor coupled to a gate of the second PFET; and a first resistor coupled to the gate of the second PFET. 2. The apparatus of Claim 1, wherein a drain of the first PFET and an anode of the first body diode are coupled to an input node. 3. The apparatus of Claim 1, wherein the ESD subcircuit is coupled between an output node of the apparatus and a ground. 4. The apparatus of Claim 3, wherein a second resistor is coupled between the gate of the first PFET and the ground. 5. The apparatus of Claim 3, wherein the first resistor is coupled to the ground. 6. The apparatus of Claim 3, wherein the capacitor is coupled in parallel between a drain of the second PFET and the gate of the second PFET. 7. The apparatus of Claim 1, wherein the apparatus is embodied within a single integrated circuit. 8. The apparatus of Claim 1, wherein the reverse bias voltage element is a zener diode. 9. An apparatus, comprising: a first PFET having a first body diode; an ESD subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET, having a second body diode, having a source coupled to a cathode of the reverse bias voltage element; a first resistor coupled to a gate of the second PFET; a second resistor coupled between the gate of the first PFET and ground; and a capacitor coupled to the gate of the second PFET; wherein the capacitor is coupled in parallel between a drain of the second PFET and the gate of the second PFET; and wherein the first resistor is also coupled to the ground. 10.
The apparatus of Claim 9, wherein the apparatus has an output node at the source of the first PFET. 11. The apparatus of Claim 9, wherein a spike of positive current passes at least in part through the body diode of the first PFET and through the ESD subcircuit to the ground. 12. The apparatus of Claim 9, wherein a spike of negative current passes through the ESD subcircuit and then through the first PFET to a node coupled to the drain of the first PFET. 13. The apparatus of Claim 9, further comprising a voltage drop of at least a reverse bias voltage of a zener diode between the drain and the source of the first PFET. 14. The apparatus of Claim 9, wherein the apparatus is embodied within a single integrated circuit. 15. The apparatus of Claim 9, wherein the second body diode conveys a voltage drop to the cathode of the reverse bias voltage element. 16. The apparatus of Claim 9, wherein the first resistor and the capacitor are configured to create a time constant to turn off the second PFET after an elapse of time following a negative voltage spike occurring on an input node. 17. The apparatus of Claim 9, wherein the reverse bias voltage element is a zener diode. 18. An apparatus, comprising: a first PFET having a first body diode; an ESD subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET, having a second body diode, having a source coupled to a cathode of the reverse bias voltage element; a first resistor coupled to a gate of the second PFET; a second resistor coupled between the gate of the first PFET and ground; and a capacitor coupled to the gate of the second PFET; wherein the capacitor is coupled in parallel between a drain of the second PFET and the gate of the second PFET; wherein the first resistor is also coupled to the ground; and wherein a first node is coupled to a drain of the first PFET, a drain of the second PFET, and the capacitor.
POWER SUPPLY WITH ELECTROSTATIC DISCHARGE (ESD) PROTECTION [0001] This application is directed, in general, to electrostatic discharge (ESD) protection and, more specifically, to ESD protection that protects from both positive and negative current spikes, wherein the ESD protection integrates three features into one circuit: (1) allowing normal DC operation and providing low impedance when a positive power supply is applied; (2) blocking negative DC voltage when a negative voltage is applied; and (3) providing a current path for both positive and negative ESD events. BACKGROUND [0002] During the normal course of use of many systems, a source of power will be removed and reconnected over time. Each time the power is reconnected, there may be an opportunity to connect the power improperly. For example, in battery-powered applications, a battery may be inserted backwards. In rechargeable systems, a battery charger may be connected incorrectly, or a non-compatible battery charger may be connected. In other systems, a power supply component may be connected to the system incorrectly. A reverse battery, battery charger, or power supply connection is dangerous because the parasitic diodes of the internal circuits, and even of the ESD (electrostatic discharge) circuits, can be forward biased and draw a large current. These large currents may damage the ESD structures and internal circuits. [0003] Therefore, there is a need in the art to address issues associated with conventional power supply circuits.
SUMMARY [0004] A first aspect provides an apparatus, comprising: a first p-type field effect transistor (PFET) including a first parasitic body diode; an electrostatic discharge (ESD) subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET having a source coupled to a cathode of the reverse bias voltage element; a capacitor coupled to a gate of the second PFET; and a first resistive element coupled to the gate of the second PFET. [0005] A second aspect provides an apparatus, comprising: a first PFET having a first parasitic body diode; an ESD subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET, having a second parasitic body diode, having a source coupled to a cathode of the reverse bias voltage element; a first resistive element coupled to a gate of the second PFET; a second resistive element coupled between the gate of the first PFET and ground; and a capacitor coupled to the gate of the second PFET, wherein the capacitor is coupled in parallel between a drain of the second PFET and the gate of the second PFET, and wherein the first resistive element is also coupled to the ground.
[0006] A third aspect provides an apparatus, comprising: a first PFET having a first parasitic body diode; an ESD subcircuit coupled to a source of the first PFET; a reverse bias voltage element, an anode of which is coupled to a gate of the first PFET; a second PFET, having a second parasitic body diode, having a source coupled to a cathode of the reverse bias voltage element; a first resistive element coupled to a gate of the second PFET; a second resistive element coupled between the gate of the first PFET and ground; and a capacitor coupled to the gate of the second PFET, wherein the capacitor is coupled in parallel between a drain of the second PFET and the gate of the second PFET, wherein the first resistive element is also coupled to the ground, and wherein a first node is coupled to a drain of the first PFET, a drain of the second PFET, and the capacitor. BRIEF DESCRIPTION OF THE DRAWINGS [0007] FIG. 1 illustrates a prior art electrostatic discharge (ESD) protection circuit; [0008] FIG. 2 illustrates an embodiment of an ESD protection circuit constructed according to the principles of the present disclosure; [0009] FIG. 3 is an illustration of a comparison of circuit footprint between the circuitry of FIG. 1 and the circuit of FIG. 2; [0010] FIG. 4 is a voltage vs. time simulation of an ESD strike of positive polarity between the VDDPIN and GND nodes of the ESD protection circuit of FIG. 2; [0011] FIG. 5 is a voltage vs. time simulation of an ESD strike of negative polarity between the VDDPIN and GND nodes of the ESD protection circuit of FIG. 2; [0012] FIG. 6 is a voltage vs. time simulation of the response to a non-ESD high-voltage positive and negative DC input to the ESD protection circuit of FIG. 2; and [0013] FIG. 7 is a voltage vs. time simulation of the response to a non-ESD low-voltage positive and negative DC input to the ESD protection circuit of FIG. 2. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0014] FIG.
1 illustrates one example of a prior art voltage protection circuit, as also discussed in U.S. Patent Application Serial Number 12/702,699, entitled "Reverse Voltage Protection Circuit," filed February 9, 2010, to Weibiao Zhang ("'699"), which is hereby incorporated by reference in its entirety. [0015] Generally, in FIG. 1, when a positive DC voltage above a threshold voltage of a zener diode ZD is applied between a VDDPIN input node 102 and a ground (GND) of an ESD circuitry 100, a p-type field effect transistor (PFET) M1 is on. Current flows between the VDDPIN 102 and a VDDINT node 104, which is coupled to a functional circuitry 130, wherein the functional circuitry 130 is that which is to be protected. The PFET M1 is on (a "short") because the reverse bias voltage of the zener diode ZD is applied between the drain and the gate of the PFET M1, which keeps the PFET M1 turned on. The majority of the remaining voltage drop between VDDPIN 102 and GND then occurs across R2. [0016] In the ESD circuitry 100, there are typically two cases of operation: (1) when the VDDPIN voltage is lower than the threshold voltage (or breakdown voltage) of the zener diode ZD, VG will be pulled to zero voltage (GND voltage) by R2; because the drain of M1 is at the VDDPIN voltage, the source of M1 will be very close to the drain voltage due to the parasitic diode, and M1 will be on. (2) When the VDDPIN voltage is higher than the breakdown voltage of the zener diode ZD, the VG voltage will still be lower than that of VDDPIN. The voltage difference between the drain and the gate (VG) of M1 will be higher than the threshold voltage of the PFET, which helps guarantee that M1 remains on.
[0017] Moreover, regarding an additional reverse voltage tolerant ESD circuit 120 of the ESD circuitry 100, the PFETs M4, M5, and M6 are always off when VDDPIN is positive, as the gate and the source of each of these PFETs are always coupled, and therefore the gate-source voltage is less than the threshold voltage VT of the corresponding PFET, and the drain of each PFET is at zero voltage or lower than its source and gate voltages; i.e., for M4 and M6 the drain nodes are coupled to ground, and for M5 the drain node voltage is lower than the source and gate node voltages, so the PFETs M4, M5, and M6 are off. Moreover, C1 blocks the DC component of the positive voltage of VDDPIN 102. N5 is slightly lower than the positive power supply. [0018] Furthermore, the PFET M4 blocks any current through the PFET M3 from VDDPIN 102 for a positive voltage, so even though the voltage difference between the drain of the PFET M3 and its gate N3 can be significantly higher than zero, this leg of the ESD circuit 100 is also off. Therefore, under positive DC conditions, there is no current flowing from VDDPIN 102 to GND through the branch consisting of M3 and M4 or the branch consisting of M2, R4, M5, and M6. The voltage drop between VDDPIN 102 and GND occurs substantially between the drain and the source of the PFET M4 for the branch consisting of M3 and M4. For the branch consisting of M2, R4, M5, and M6, the voltage drop is shared between M5 and M6. [0019] If a large positive voltage spike/transient strikes VDDPIN 102 (i.e., a large voltage transient occurs), M1 would still stay on, due to the continuing reverse bias of the zener diode ZD discussed above, and convey a positive current from the VDDPIN 102, and an electrostatic discharge subcircuit (ESD1) 122 would convey a positive current pulse to ground through its own protection circuitry, thereby protecting the functional circuitry 130. For more information on the subcircuit ESD1 122, please see '699. Additionally, a large positive voltage spike may strike VDDPIN 102.
The parasitic body diode of Ml is forward biased and can shunt current to ESDI 122. [0020] However, if a negative DC voltage were applied between VDDPIN 105 and GND, the circuit 100 could function as follows. Dl would block a current flow from GND to VDDPIN 105. Therefore, VG would be at GND voltage. Drain voltage of Ml would be that of VDDPIN 105, which is negative. Source voltage at Ml would be very close to zero, as derived from the subcircuit ESDI 122, so therefore Ml is "open", blocking current flowing from the ESDI 122 circuit and the functional circuitry 130 to VDDPIN 102. [0021] It should be noted that PFETs have an intrinsic, parasitic "body diode" as part of their internal configuration. For more information regarding body diodes, please see "Analysis and Design of Analog Integrated Circuits, 3rd edition" by Paul R. Gray/ Robert G. Meyer, page 171- 172 and 174, hereby incorporated by reference in its entirety, wherein it discusses how parasitic body diodes are formed by the PN junctions of the MOS transistors. Moreover, please see the "Design of Analog CMOS Integrated Circuits" by Behzad Razavi, Chapter 2: Basic MOS Device Physics", page 12, also incorporated by reference in its entirety, wherein it discusses a junction diode from a drain node to a body node, wherein the cathode node of the junction diode is shorted to the source node.[0022] Regarding the additional reverse voltage tolerant ESDI subcircuit 122, for a negative DC voltage, N2 is two body diode voltage drops from GND, as these are the body drops of M5 and M6, and there would be no current through R3. Therefore, the drain of M2 is less than the gate of M2, and the source of M2 are two body diode voltage drops down from GND (the body diodes of M6 and M5), so therefore, M2 is off. Therefore, the gate at N3 of M3 is also two voltage drops from zero, which is higher than the M3 drain voltage. However, M3 is unable to conduct because M3 is also turned off. 
[0023] However, if there is a negative ESD transient, the additional reverse voltage tolerant ESD circuit 120 can work as follows. The capacitor C1 is pulled down with the transient charge, and therefore the gate of PFET M2, node N2, is also pulled down. However, the voltage at the source of PFET M2, node N3, is still close to two body diode voltages below zero. Therefore, PFET M2 is turned on and shorted, and N3 is at the VDDPIN 102 negative transient voltage; therefore, PFET M3 is turned on and shorted, and a reverse current flows from GND to VDDPIN 102 through PFET M4 and PFET M3. In the circuit 120, the resistor R4 helps to ensure that the reverse current through M6, M5 and M2 is kept below a minimum threshold to avoid overwhelming PFETs M6, M5 and M2. [0024] In the circuit 100, if a negative ESD spike transient occurs, M2 is on, and this pulls down N3, so that M3 is on and dumps a large current through the branch of M4 and M3. M3 and M4 are sized large enough to dump enough current quickly. At the beginning of the negative strike, there is also a voltage drop between node N5, which is two body diode voltage drops from GND, and VDDPIN 102, which becomes distributed across R3 and C1. Therefore, C1 starts to charge up until the capacitor has a voltage across it equal to the voltage drop from N5 to VDDPIN. As the voltage across C1 reaches the voltage from VDDPIN 102 to N5, the gate of M2, N2, is then pulled equal to its source N3, and therefore PFET M2 becomes open, and N3 is forced close to GND. Then, the gate of the PFET M3 is not lower than its source by more than the VT of PFET M3, so PFET M3 of ESD circuitry 100 will be turned off gradually. [0025] FIG. 2 illustrates an ESD protection circuit 200 constructed according to the principles of the present disclosure. In the circuit 200, a VDDPIN 202 is coupled to a drain of a PFET M1 210 having a body diode 215.
A source of the PFET M1 210 is coupled to a VDDINT node 204, an output node of the apparatus, which is coupled to a functional circuitry 230. An ESDI subcircuit 222 is coupled to the VDDINT 204 and a GND 209. Please note that the ESDI 222 will output at node VDDINT 204 a voltage ranging from GND to a maximum allowable voltage, such as 40 volts, although the maximum allowable voltage is generally determined by the process technologies and devices used. [0026] Please note that the intrinsic, parasitic body diodes, such as body diodes 215 and 255, are illustrated for ease of explanation of the ESD protection circuit 200 in FIG. 2, and are not in and of themselves additional elements within the circuit 200; rather, they are employed within the circuit 200 as an intrinsic part of their corresponding PFETs. [0027] In a further aspect, the PFET M1 210 is a Drain Extended PMOS (DEPMOS), which has a non-symmetrical structure. The non-symmetrical structure of the PFET M1 210 can allow a PFET to survive higher voltages across drain to source and drain to gate, with normal gate to source voltages. [0028] In the ESD protection circuit 200, a drain of the PFET M2 250, having a body diode 255, is also coupled to VDDPIN. A source of the PFET M2 is coupled to a cathode of a zener diode 240. An anode of the zener diode 240 is coupled to a gate of the PFET M1 210, at a node VG. A resistor R2 235 is also coupled between the node VG at the gate of the PFET M1 210 and the GND 209. [0029] Although the zener ZD 240 provides a reverse blocking voltage, in a further aspect, another reverse bias voltage element can be substituted, such that when the element is reverse biased, it is off, and when the reverse voltage is higher than some threshold voltage, such as 3V or 7V, it will be forced to be shorted. If it is forward biased, then it is a short. [0030] In the ESD protection circuit 200, a capacitor 270 is coupled between the VDDPIN 202 node and a gate of the M2 250.
A resistor 260 is coupled between the gate of the PFET M2 250 and the GND 209. [0031] In one aspect, the ESD protection circuit 200 can work as follows. [0032] When a positive DC voltage is applied between the VDDPIN 202 and the GND 209, the drain of PFET M2 250 is at VDDPIN. The gate of PFET M2 250 is at the GND voltage 209, due to the DC blocking of the C1 270 and R1 conducting between GND 209 and the gate of the M2. Therefore, PFET M2 is "on", and a voltage drop then occurs across the reverse biased ZD 240. The voltage drop across the reverse biased ZD 240 and a voltage drop across PFET M2 250 are then applied between the drain and gate of PFET M1 210. The total voltage drop is larger than the threshold voltage of M1. Therefore, PFET M1 210 is on, and VDDINT 204 is at the voltage of VDDPIN 202 minus a voltage drop across PFET M1 210. Since M1 is "on", the impedance of M1 is low, and therefore the voltage drop between 202 and 204 is small, providing a low impedance power supply. [0033] In the event of a positive voltage spike/transient on the VDDPIN 202, such as more than 40 volts, the circuit 200 can work as follows through a mitigation of the voltage spike through a conveyance of current from VDDPIN 202 to GND 209. The positive voltage at the drain of PFET M2 250 will be pulled up to the positive voltage spike of VDDPIN 202. Therefore, there will still be a reverse bias voltage drop across ZD 240, which can be, for example, about 7 volts, and a voltage drop across PFET M2 250, which is applied between the drain and the gate of PFET M1 210. PFET M2 250 will still be on because the drain of M2 250 will still be higher than the gate of M2 250. Therefore, the PFET M1 210 is still on and conducting from VDDPIN 202 to GND 209 through its body diode 215. Then, a positive current is absorbed by the ESDI subcircuit 222 through an ESD current path for positive strikes 203, mitigating the voltage spike of VDDPIN 202.
Even if M1 210 is not on, the parasitic body diode 215 of M1 210 will shunt positive ESD current to ESDI 222. [0034] In some aspects of the circuit 200, the values of R1 260 and C1 270 can be adjustable, such as by a user of the circuit 200. For example, the C1 270 can be a varactor, and the R1 260 can be a transistor that gives an equivalent variable resistance. [0035] For a negative DC voltage applied to VDDPIN 202, the circuit 200 can work as follows. The gate of PFET M2 250 is at zero volts due to both the DC blocking of C1 270 and being coupled over R1 260 to GND 209. However, the drain of PFET M2 250 is at the negative DC voltage. The source of PFET M2 250 will also be at a lower voltage potential than the gate of PFET M2 250. Therefore, PFET M2 250 is not conducting. Therefore, VG is at the GND 209 voltage, which means that VG is at a higher voltage than VDDPIN 202, so the drain-to-source path of PFET M1 210 is off. Moreover, the source of PFET M1 210 sees the GND 209 voltage conveyed from subcircuit ESDI 222, so M1 210 is also off. C1 270 blocks the DC negative voltage. [0036] For a negative voltage strike at the VDDPIN 202, the circuit 200 can work as follows to mitigate the voltage strike through conveyance of a current from GND 209 to VDDPIN 202. The voltage across the capacitor C1 270 does not instantaneously change for the negative voltage strike. Therefore, the gate of PFET M2 250 is temporarily brought to the VDDPIN 202 negative strike voltage. Therefore, there is a positive voltage difference across source to gate of PFET M2 250, and therefore PFET M2 250 starts to conduct from source to drain. ZD 240 is forward biased, so it will short VG to the drain of M2 250. Therefore, current will flow from GND 209 to VDDPIN 202 for this transient through PFET M2 250, but limited by the resistance of M2.
When the gate of M1 210 is pulled down close to VDDPIN 202, M1 is on to convey a transient current from GND, through subcircuit ESDI 222 and M1, to VDDPIN to mitigate the negative voltage strike. [0037] For FIG. 2, for negative voltage strikes, there can be an RC time limit as to how long an ESD current path 213 for a negative strike lasts. The subcircuit ESDI 222 and M1 210 are utilized to provide negative ESD protection to the internal circuit block 230. This time constant can be calculated from the RC values of R1 260 and C1 270. The larger the resistor value of R1 260 and the capacitance of C1 270, the longer the time it would take before the circuit 200 would stop the negative current path 213 through the M1 210 to the subcircuit ESDI 222. [0038] Regarding the circuit 200, this circuit 200 can have at least the following advantages. The circuit 200 can have a smaller silicon area than that of circuit 100 in FIG. 1. Moreover, a number of elements of FIG. 1 are removed. Generally, the most area consuming parts for negative ESD protection in FIG. 1 are M3 and M4, which are not needed in the circuit of FIG. 2 anymore. The standalone physical elements of M5, M6 and R4 are not needed either. One example layout of the implementation showed a 27% area saving. The PFET M1 210 has low impedance during positive voltage operation that is within the voltage parameters of the circuit 200, which in one aspect can be a positive 40 volt rail applied at the VDDPIN 202, while blocking negative voltage. Moreover, the circuit 200 can provide ESD protection to the functional circuitry 230 for both positive and negative strikes. [0039] As compared to the '699 Application, the ESD protection circuit 200 has a simpler topology that, nonetheless, still offers protection against positive and negative voltage strikes. The circuit 200 can eliminate a need for discrete components on a printed circuit board.
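The RC time limit described in paragraph [0037] can be sketched as a first-order estimate. The component values in the example below are illustrative assumptions; the disclosure only specifies that the conduction window scales with the values of R1 260 and C1 270:

```python
# Sketch: estimating how long the negative-strike current path stays active.
# The 100 kOhm / 10 nF values below are illustrative assumptions, not values
# from the disclosure.

def negative_strike_window(r1_ohms: float, c1_farads: float,
                           n_tau: float = 3.0) -> float:
    """Approximate duration (seconds) before C1 charges and M2 turns off.

    The gate of M2 tracks the strike through C1 until the capacitor charges
    through R1; after roughly a few time constants (n_tau) the path shuts off.
    """
    return n_tau * r1_ohms * c1_farads

# Example: assumed 100 kOhm and 10 nF give a ~3 ms conduction window.
window_s = negative_strike_window(100e3, 10e-9)
```

As the paragraph notes, increasing either R1 260 or C1 270 lengthens this window.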
In some prior art circuits, various components for ESD protection needed to be off chip, since they had to be outside of the integrated circuit (IC). Also, the circuit 200 can consume a smaller silicon area when compared to circuit 100, as will be described in more detail in FIG. 3. [0040] The circuit 200 can be customized to meet different ESD targets, for example through varying the values of R1 260 and C1 270. The circuit 200, with or without the functional circuitry 230, may also be packaged into a stand-alone integrated circuit (IC) or be part of a design that offers a conditioned voltage for an internal circuitry. [0041] In the circuit 200, M1 210 can have a "large" total finger width to reduce impedance. The value depends on how low an impedance the circuit 200 is designed for, and the process with which it is implemented. [0042] Generally, in one aspect, the ESD circuit 200 of FIG. 2 has consolidated the functionality of R1 of FIG. 1 into PFET M2 250 of FIG. 2, the diode D1 of FIG. 1 is functionally incorporated into the body diode of M1 210, and the functionality of PFET transistors M3, M4, M5 and M6 from ESD circuitry 100 of FIG. 1 has been consolidated into PFET M1 210 and its controlling circuitry of ESD protection circuit 200. Therefore, when comparing ESD protection circuit 200 to prior art ESD protection circuit 100, there has been a retention of the functionality of the omitted elements of ESD protection circuit 100 of FIG. 1 within ESD protection circuit 200. [0043] Moreover, in the ESD circuit 200, PFET M1 210 is employed as a current pathway for a negative strike 213, which in the prior art of FIG. 1 would have been conveyed through M3 and M4 of ESD circuitry 100.
However, in the ESD circuit 200, PFET M1 210 is advantageously employable as a conduction path for both positive and negative strikes, reducing the elements of an ESD circuit when compared to ESD protection circuitry 100; even without these elements, negative strike protection has been integrated into PFET M1 210. Indeed, when compared to ESD protection circuitry 100, a dedicated C1/R3/M2/M3/R4/M5/M6 current path has been eliminated, and a number of these elements omitted in the circuit 200, yet their functionality is retained. [0044] FIG. 3 is a layout example of the circuit 100, illustrating how the circuit 200 can take up less of the IC footprint. The circuit 300, corresponding to the entire area in FIG. 3, has an area of 900 um * 800 um; 301 corresponds to ESDI 122 in FIG. 1, with an area of 400 um * 130 um; 303 corresponds to M1 in FIG. 1; 305 is M3 in FIG. 1; and 307 and 309 are the areas no longer needed for circuit 200, which correspond to M4, M5, M6, R4, R3, M2, and part of C1 and M3. The total area of 307 and 309 is approximately 550 um * 400 um. [0045] FIG. 4 illustrates an example ESD protection 200 performance simulation for a positive polarity ESD strike. In the illustration, a 2kV Human Body Model (HBM) is used, which assumes a human body is a capacitor charged to 2000 volts, such that when one accidentally touches the circuit with one's hand, the circuit under attack suffers this strike. A strike was simulated from VDDPIN to ground. The ESD protection circuit 200 selected for this illustration can sustain a 40V DC voltage. VDDPIN has a peak voltage at 19V and VDDINT has a peak voltage at 16V, and as these voltages have an absolute value of less than 40V, the circuit 200 can survive the positive 2kV HBM strike. The two graphs represent the voltage at VDDPIN 202 and VDDINT 204, respectively, at various times. [0046] FIG. 5 illustrates an example ESD protection 200 performance simulation for a negative polarity ESD strike. In the illustration, a 2kV HBM is again used.
A negative strike was simulated from VDDPIN to ground. VDDPIN clamped at -15.4V and VDDINT clamped at -2.4V; as the absolute values of these voltages are less than 40V, the ESD circuit 200 can survive the negative 2kV HBM strike. The two graphs represent the voltage at VDDPIN 202 and VDDINT 204, respectively, at various times. As is illustrated, the VDDINT 204 has significant protection from a negative voltage transient applied to VDDPIN 202. [0047] FIG. 6 illustrates an example of a simulation of both a low impedance positive voltage and a negative overvoltage protection, which in the illustrated simulation is +/-40 volts, although this can change according to CMOS processes. A 50 ohm load is applied, although other loads can be used; the load can be a resistor of some other value, and can also be such elements as a current sink, etc. As is illustrated, when VDDPIN is 40V, VDDINT is 39.01 volts. In the illustrated example, VDDINT tracks VDDPIN within 1V, signifying the low impedance or low voltage drop nature of the circuit in positive DC mode. However, advantageously, when VDDPIN is -40V, VDDINT is nonetheless clamped at -2.854 uV. In other words, there is significant negative voltage protection for the load on VDDINT 204. [0048] FIG. 7 illustrates an example of a typical usage of the circuit 200. As is illustrated, with a 50 ohm load on VDDINT 204, when VDDPIN 202 is 2V, VDDINT 204 is 1.81V. When VDDPIN is -2V, VDDINT is clamped at -1.25 uV. [0049] The ESDI 222 circuit provides a current shunt property when VDDINT is stressed both positive and negative relative to GND. In the negative direction, it may have the characteristics of a forward biased diode. Any circuit with these characteristics can be used for ESDI 222. [0050] Those skilled in the art to which this Application relates will appreciate that other and further additions, deletions, substitutions and modifications may be made to the described embodiments.
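The 2kV HBM strikes simulated in FIGS. 4 and 5 can be modeled to first order as the discharge of a charged capacitor into the pin. The sketch below assumes the commonly used 100 pF / 1.5 kOhm HBM discharge network; this network is an assumption here, since the disclosure only states a 2000 volt charged-capacitor model:

```python
import math

# First-order Human Body Model (HBM) discharge current, assuming the common
# 100 pF / 1.5 kOhm network (an assumption; the disclosure states only a
# 2 kV charged-capacitor model).

C_HBM = 100e-12   # farads, assumed HBM capacitance
R_HBM = 1.5e3     # ohms, assumed HBM series resistance

def hbm_current(v0: float, t: float) -> float:
    """Discharge current i(t) = (V0 / R) * exp(-t / (R * C)) into a short."""
    return (v0 / R_HBM) * math.exp(-t / (R_HBM * C_HBM))

# Example: a 2 kV strike gives an initial current of V0 / R_HBM (~1.33 A),
# decaying with a time constant of R_HBM * C_HBM (150 ns).
i_peak = hbm_current(2000.0, 0.0)
```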
Systems, methods, and apparatus for communicating a control signal between device components are provided. Within an apparatus, an integrated circuit (IC) sends a control signal to a system on chip (SoC). The control signal requests enablement or disablement of one or more resources corresponding to the IC. Thereafter, a converting circuit within the SoC converts the control signal from the IC into a command to be transmitted to one or more devices. The converting circuit then transmits the command to the one or more devices via a bus coupling the SoC to the one or more devices. The one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources. As such, the one or more PMICs enable or disable the one or more resources corresponding to the IC based on the command. |
CLAIMS 1. A method performed at an apparatus for communicating a control signal between device components, comprising: sending a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC; converting, via a converting circuit within the SoC, the control signal from the IC into a command to be transmitted to one or more devices; and transmitting the command from the converting circuit to the one or more devices via a bus coupling the SoC to the one or more devices. 2. The method of claim 1, wherein the IC is a circuit external to the SoC. 3. The method of claim 1, wherein the requesting enablement or disablement of the one or more resources includes at least one of: a request for a voltage change; a request for a clock signal; a request for a mode change; or a request for a state change. 4. The method of claim 1, wherein the command is: a single message transmitted on the bus; or multiple messages transmitted on the bus. 5. The method of claim 1, wherein the command is transmitted via the bus according to a system power management interface (SPMI) protocol. 6. The method of claim 1, wherein the converting circuit is configured to convert the control signal into the command while a host processor of the SoC is in a sleep or low-power state. 7. The method of claim 1, wherein the converting circuit is configured to convert the control signal into the command by translating a signal transition of the control signal into a stream of bits representing the command. 8. The method of claim 1, wherein the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources, the method further including: enabling or disabling, via the one or more PMICs, the one or more resources corresponding to the IC based on the command. 9. The method of claim 8, wherein the IC includes the one or more PMICs. 10.
The method of claim 8, wherein the command is transmitted from the converting circuit to the one or more PMICs via an arbiter that provides access to the one or more PMICs. 11. The method of claim 8, wherein the command is: a global command transmitted to all PMICs of the one or more PMICs; or a command transmitted to a core PMIC of the one or more PMICs, wherein the core PMIC includes a PMIC controller for routing the command to at least one PMIC of the one or more PMICs intended to receive the command. 12. The method of claim 8, further including: sending a second command from a requesting PMIC of the one or more PMICs to the SoC via the bus, the second command for requesting enablement or disablement of one or more resources corresponding to the requesting PMIC and controlled by at least one controlling PMIC of the one or more PMICs; converting, via the converting circuit, the second command from the requesting PMIC into a third command to be transmitted to the at least one controlling PMIC; transmitting the third command from the converting circuit to the at least one controlling PMIC via the bus; and enabling or disabling, via the at least one controlling PMIC, the one or more resources corresponding to the requesting PMIC based on the third command. 13. An apparatus for communicating a control signal between device components, comprising: one or more devices; a system on chip (SoC); a bus coupling the SoC to the one or more devices; an integrated circuit (IC) configured to send a control signal to the SoC, the control signal for requesting enablement or disablement of one or more resources corresponding to the IC; and a converting circuit formed within the SoC and configured to convert the control signal from the IC into a command and transmit the command to the one or more devices via the bus. 14. The apparatus of claim 13, wherein the IC is a circuit external to the SoC. 15.
The apparatus of claim 13, wherein the one or more resources includes at least one of: a voltage regulator regulating a voltage of the IC; a clock buffer providing a clock signal to the IC; a mode change; or a state change. 16. The apparatus of claim 13, wherein the command is: a single message transmitted on the bus; or multiple messages transmitted on the bus. 17. The apparatus of claim 13, wherein the command is transmitted via the bus according to a system power management interface (SPMI) protocol. 18. The apparatus of claim 13, wherein the converting circuit is configured to convert the control signal into the command while a host processor of the SoC is in a sleep or low-power state. 19. The apparatus of claim 13, wherein the converting circuit is configured to convert the control signal into the command by translating a signal transition of the control signal into a stream of bits representing the command. 20. The apparatus of claim 13, wherein the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources, wherein the one or more PMICs enable or disable the one or more resources corresponding to the IC based on the command. 21. The apparatus of claim 20, wherein the IC includes the one or more PMICs. 22. The apparatus of claim 20, wherein the converting circuit is configured to transmit the command to the one or more PMICs via an arbiter configured to provide access to the one or more PMICs. 23. The apparatus of claim 20, wherein the command is: a global command transmitted to all PMICs of the one or more PMICs; or a command transmitted to a core PMIC of the one or more PMICs, wherein the core PMIC includes a PMIC controller configured to route the command to at least one PMIC of the one or more PMICs intended to receive the command. 24.
The apparatus of claim 20, wherein: a requesting PMIC of the one or more PMICs is configured to send a second command to the SoC via the bus, the second command for requesting enablement or disablement of one or more resources corresponding to the requesting PMIC and controlled by at least one controlling PMIC of the one or more PMICs; the converting circuit is configured to convert the second command from the requesting PMIC into a third command and transmit the third command to the at least one controlling PMIC via the bus; and the at least one controlling PMIC is configured to enable or disable the one or more resources corresponding to the requesting PMIC based on the third command. 25. An apparatus for communicating a control signal between device components, comprising: means for sending a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC; means for converting, within the SoC, the control signal from the IC into a command to be transmitted to one or more devices; and means for transmitting the command from the means for converting to the one or more devices via a bus coupling the SoC to the one or more devices. 26. The apparatus of claim 25, wherein the means for converting is configured to convert the control signal into the command while a host processor of the SoC is in a sleep or low-power state. 27. The apparatus of claim 25, wherein the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources, the apparatus further including: means for enabling or disabling, via the one or more PMICs, the one or more resources corresponding to the IC based on the command. 28.
A non-transitory computer-readable medium storing computer-executable code at an apparatus for communicating a control signal between device components, comprising code for causing a computer to: send a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC; convert, within the SoC, the control signal from the IC into a command to be transmitted to one or more devices; and transmit the command to the one or more devices via a bus coupling the SoC to the one or more devices. 29. The non-transitory computer-readable medium of claim 28, wherein the code for causing the computer to convert is configured to convert the control signal into the command while a host processor of the SoC is in a sleep or low-power state. 30. The non-transitory computer-readable medium of claim 28, wherein the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources, the non-transitory computer-readable medium further including code for causing the computer to: enable or disable, via the one or more PMICs, the one or more resources corresponding to the IC based on the command.
GENERAL PURPOSE INPUT OUTPUT TRIGGERED INTERFACE MESSAGE CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to and the benefit of Non-Provisional Patent Application No. 16/037,802, filed in the United States Patent and Trademark Office on July 17, 2018 and Provisional Patent Application No. 62/659,034, filed in the United States Patent and Trademark Office on April 17, 2018, the entire contents of which are incorporated herein by reference as if fully set forth below in their entirety and for all applicable purposes. TECHNICAL FIELD [0002] The present disclosure relates generally to serial communication and, more particularly, to communicating control signals between device components over a serial data link to reduce the number of hardware pins needed to connect the device components. BACKGROUND [0003] Mobile communication devices may include a variety of components including circuit boards, integrated circuit (IC) devices and/or System-on-Chip (SoC) devices. The components may include processing devices, user interface components, storage and other peripheral components that communicate through a shared data communication bus, which may include a serial bus or a parallel bus. General-purpose serial interfaces known in the industry include the Inter-Integrated Circuit (I2C or I²C) serial bus and its derivatives and alternatives, including interfaces defined by the Mobile Industry Processor Interface (MIPI) Alliance, such as I3C and the Radio Frequency Front-End (RFFE) interface. [0004] General purpose input/output (GPIO) pins provided in an integrated circuit (IC) device enable an IC designer to define and configure pins that may be customized for particular applications. For example, a GPIO pin may be programmable to operate as an output or as an input pin depending upon a user's needs. A GPIO module or peripheral may control groups of pins, which can vary based on the interface requirement.
GPIO pins are commonly included in microprocessor and microcontroller applications because they offer flexibility and programmability. For example, an applications processor in mobile devices may use a number of GPIO pins to conduct handshake signaling such as inter-processor communication (IPC) with a modem processor. [0005] In many instances, a number of command and control signals are employed to connect different component devices in mobile communication devices. These connections increase the number of general-purpose input/output (GPIO) pins within the mobile communication devices, which increases the wiring between the different component devices and the overall printed circuit board (PCB) complexity. Accordingly, it would be desirable to reduce the number of GPIO pins needed to connect the different component devices by transmitting the command and control signals over an existing serial data link. [0006] As mobile communication devices continue to include a greater level of functionality, improved techniques are needed to support low-power control signaling between components that reduce the number of GPIO pins in a mobile communication device. SUMMARY [0007] Certain aspects of the disclosure relate to systems, apparatus, methods and techniques that can communicate a control signal between device components. [0008] In various aspects of the disclosure, a method performed at an apparatus for communicating a control signal between device components is provided. The method includes sending a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC, converting, via a converting circuit within the SoC, the control signal from the IC into a command to be transmitted to one or more devices, and transmitting the command from the converting circuit to the one or more devices via a bus coupling the SoC to the one or more devices.
The one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources. Accordingly, the method further includes enabling or disabling, via the one or more PMICs, the one or more resources corresponding to the IC based on the command. [0009] In an aspect, the method may also include sending a second command from a requesting PMIC of the one or more PMICs to the SoC via the bus, the second command for requesting enablement or disablement of one or more resources corresponding to the requesting PMIC and controlled by at least one controlling PMIC of the one or more PMICs, converting, via the converting circuit, the second command from the requesting PMIC into a third command to be transmitted to the at least one controlling PMIC, transmitting the third command from the converting circuit to the at least one controlling PMIC via the bus, and enabling or disabling, via the at least one controlling PMIC, the one or more resources corresponding to the requesting PMIC based on the third command. [0010] In an aspect, the IC is a circuit external to the SoC. In a further aspect, the one or more resources includes a voltage regulator regulating a voltage of the IC, a clock buffer providing a clock signal to the IC, a mode change, and/or a state change. [0011] In an aspect, the command is a single message transmitted on the bus or multiple messages transmitted on the bus. In another aspect, the command is transmitted via the bus according to a system power management interface (SPMI) protocol. [0012] In an aspect, the converting circuit is configured to convert the control signal into the command while a host processor of the SoC is in a sleep or low-power state.
In a further aspect, the converting circuit is configured to convert the control signal into the command by translating a signal transition of the control signal into a stream of bits representing the command.

[0013] In an aspect, the IC includes the one or more PMICs. In another aspect, the command is transmitted from the converting circuit to the one or more PMICs via an arbiter circuit/module that provides access to the one or more PMICs. In a further aspect, the command is a global command transmitted to all PMICs of the one or more PMICs, or a command transmitted to a core PMIC of the one or more PMICs, wherein the core PMIC includes a PMIC controller for routing the command to at least one PMIC of the one or more PMICs intended to receive the command.

[0014] In another aspect of the disclosure, an apparatus for communicating a control signal between device components is provided. The apparatus includes one or more devices, a system on chip (SoC), a bus coupling the SoC to the one or more devices, an integrated circuit (IC) configured to send a control signal to the SoC, the control signal for requesting enablement or disablement of one or more resources corresponding to the IC, and a converting circuit formed within the SoC and configured to convert the control signal from the IC into a command and transmit the command to the one or more devices via the bus.

[0015] In an aspect, the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources, wherein the one or more PMICs enable or disable the one or more resources corresponding to the IC based on the command.
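The idea of translating a signal transition into a stream of bits can be illustrated with a minimal sketch. The 8-bit command encodings below are invented for the example; the disclosure does not specify any bit-level format.

```python
# Illustrative translation of a GPIO transition into a bit stream.
# Opcode values are hypothetical placeholders, not from any protocol.

ENABLE_CMD = 0b10100001   # hypothetical "enable resource" command word
DISABLE_CMD = 0b10100000  # hypothetical "disable resource" command word

def transition_to_bits(previous_level, current_level):
    """Return the command bits for a rising or falling edge, else None."""
    if previous_level == 0 and current_level == 1:      # rising edge: enable
        word = ENABLE_CMD
    elif previous_level == 1 and current_level == 0:    # falling edge: disable
        word = DISABLE_CMD
    else:                                               # no transition
        return None
    # Serialize most-significant bit first, as a stream of 0/1 values.
    return [(word >> i) & 1 for i in range(7, -1, -1)]

bits = transition_to_bits(0, 1)
# bits == [1, 0, 1, 0, 0, 0, 0, 1]
```

Only the edge, not the static level, produces traffic, which is consistent with converting a control signal only when the requesting IC changes its request.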
In a further aspect, a requesting PMIC of the one or more PMICs is configured to send a second command to the SoC via the bus, the second command for requesting enablement or disablement of one or more resources corresponding to the requesting PMIC and controlled by at least one controlling PMIC of the one or more PMICs, the converting circuit is configured to convert the second command from the requesting PMIC into a third command and transmit the third command to the at least one controlling PMIC via the bus, and the at least one controlling PMIC is configured to enable or disable the one or more resources corresponding to the requesting PMIC based on the third command.

[0016] In a further aspect of the disclosure, an apparatus for communicating a control signal between device components is provided. The apparatus includes means for sending a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC, means for converting, within the SoC, the control signal from the IC into a command to be transmitted to one or more devices, and means for transmitting the command from the means for converting to the one or more devices via a bus coupling the SoC to the one or more devices. In an aspect, the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources. As such, the apparatus may further include means for enabling or disabling, via the one or more PMICs, the one or more resources corresponding to the IC based on the command.

[0017] In another aspect of the disclosure, a non-transitory computer-readable medium storing computer-executable code at an apparatus for communicating a control signal between device components is provided.
The apparatus includes code for causing a computer to send a control signal from an integrated circuit (IC) to a system on chip (SoC), the control signal for requesting enablement or disablement of one or more resources corresponding to the IC, convert, within the SoC, the control signal from the IC into a command to be transmitted to one or more devices, and transmit the command to the one or more devices via a bus coupling the SoC to the one or more devices. In an aspect, the one or more devices includes one or more power management integrated circuits (PMICs) configured to control the one or more resources. As such, the non-transitory computer-readable medium further includes code for causing the computer to enable or disable, via the one or more PMICs, the one or more resources corresponding to the IC based on the command.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] FIG. 1 illustrates an apparatus employing a data link between IC devices that is selectively operated according to one of a plurality of available standards.

[0019] FIG. 2 illustrates a system architecture for an apparatus employing a data link between IC devices.

[0020] FIG. 3 illustrates a device that employs an RFFE bus to couple various radio frequency front-end devices.

[0021] FIG. 4 illustrates an apparatus that includes an Application Processor and multiple peripheral devices that may be adapted according to certain aspects disclosed herein.

[0022] FIG. 5 illustrates a system that employs physical GPIO pins for a variety of purposes.

[0023] FIG. 6 illustrates an example of a system which includes one or more communication links that employ sideband GPIO.

[0024] FIG. 7 is a diagram illustrating an example architecture for communicating signals between devices.

[0025] FIG. 8 is a diagram illustrating another example architecture for communicating signals between devices according to aspects of the present disclosure.

[0026] FIG. 9 is a diagram illustrating an example on-SoC architecture for communicating signals between devices according to aspects of the present disclosure.

[0027] FIG. 10 is a flowchart of a method that may be performed at a device for communicating a control signal between device components according to aspects of the present disclosure.

[0028] FIG. 11 is a diagram illustrating a simplified example of a hardware implementation for an apparatus adapted in accordance with certain aspects disclosed herein.

DETAILED DESCRIPTION

[0029] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0030] Several aspects of the invention will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof.
Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

Overview

[0031] Devices that include multiple SoC and other IC devices often employ a shared communication interface that may include a serial bus or other data communication link to connect processors with modems and other peripherals. The serial bus or other data communication link may be operated in accordance with multiple defined standards or protocols. In one example, a serial bus may be operated in accordance with I2C, I3C, SPMI, and/or RFFE protocols.

[0032] A number of different protocol schemes may be used for reducing a number of GPIO pins needed to connect different component devices by transmitting command and/or control signals between the different component devices over an existing serial data link. Existing protocols have well-defined and immutable structures in the sense that their structures cannot be changed to optimize transmission latencies based on variations in use cases, and/or coexistence with other protocols, devices and applications. In real-time embedded systems, meeting certain transmission deadlines is of paramount importance. When a common bus supports different protocols, it is generally difficult or impossible to guarantee optimal latency under all use cases. In some examples, an I2C, I3C, RFFE, or System Power Management Interface (SPMI) serial communication bus may be used to tunnel different protocols with different latency requirements, different data transmission volumes, and/or different transmission schedules.

[0033] Certain aspects disclosed herein provide methods, circuits, and systems that are adapted to communicate control signals between device components over a serial data link.
The disclosed techniques allow a device to support low-power control signaling between the device components while reducing the number of GPIO pins in the device.

Examples Of Apparatus That Employ Serial Data Links

[0034] According to certain aspects, a serial data link may be used to interconnect electronic devices that are subcomponents of an apparatus such as a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a notebook, a netbook, a smartbook, a personal digital assistant (PDA), a satellite radio, a global positioning system (GPS) device, a smart home device, intelligent lighting, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, an entertainment device, a vehicle component, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), an appliance, a sensor, a security device, a vending machine, a smart meter, a drone, a multicopter, or any other similar functioning device.

[0035] FIG. 1 illustrates an example of an apparatus 100 that may employ a data communication bus. The apparatus 100 may include a processing circuit 102 having multiple circuits or devices 104, 106, and/or 108, which may be implemented in one or more application-specific integrated circuits (ASICs) or in a SoC. In one example, the apparatus 100 may be a communication device and the processing circuit 102 may include a processing device provided in an ASIC 104, one or more peripheral devices 106, and a transceiver 108 that enables the apparatus to communicate with a radio access network, a core access network, the Internet, and/or another network.

[0036] The ASIC 104 may have one or more processors 112, one or more modems 110, on-board memory 114, a bus interface circuit 116, and/or other logic circuits or functions.
The processing circuit 102 may be controlled by an operating system that may provide an application programming interface (API) layer that enables the one or more processors 112 to execute software modules residing in the on-board memory 114 or other processor-readable storage 122 provided on the processing circuit 102. The software modules may include instructions and data stored in the on-board memory 114 or processor-readable storage 122. The ASIC 104 may access its on-board memory 114, the processor-readable storage 122, and/or storage external to the processing circuit 102. The on-board memory 114 and the processor-readable storage 122 may include read-only memory (ROM) or random-access memory (RAM), electrically erasable programmable ROM (EEPROM), flash cards, or any memory device that can be used in processing systems and computing platforms. The processing circuit 102 may include, implement, or have access to a local database, a cloud-based storage, or other parameter storage that can maintain operational parameters and other information used to configure and operate the apparatus 100 and/or the processing circuit 102. The local database may be implemented using registers, a database module, flash memory, magnetic media, EEPROM, soft or hard disk, or the like. The processing circuit 102 may also be operably coupled to external devices such as a display 126, operator controls such as switches or buttons 128, 130, and/or an integrated or external keypad 132, among other components. A user interface module may be configured to operate with the display 126, keypad 132, etc. through a dedicated communication link or through one or more serial data interconnects.

[0037] The processing circuit 102 may provide one or more buses 118a, 118b, 120 that enable certain devices 104, 106, and/or 108 to communicate.
In one example, the ASIC 104 may include a bus interface circuit 116 that includes a combination of circuits, counters, timers, control logic, and other configurable circuits or modules. In one example, the bus interface circuit 116 may be configured to operate in accordance with communication specifications or protocols. The processing circuit 102 may include or control a power management function that configures and manages the operation of the apparatus 100.

[0038] FIG. 2 illustrates certain aspects of an apparatus 200 that includes multiple devices 202, 220, and 222a-222n connected to a serial bus 230. The devices 202, 220, and 222a-222n may include one or more semiconductor IC devices, such as an applications processor, SoC or ASIC. Each of the devices 202, 220, and 222a-222n may include, support or operate as a modem, a signal processing device, a display driver, a camera, a user interface, a sensor, a sensor controller, a media player, a transceiver, and/or other such components or devices. Communications between devices 202, 220, and 222a-222n over the serial bus 230 are controlled by a bus master 220. Certain types of bus can support multiple bus masters 220.

[0039] The apparatus 200 may include multiple devices 202, 220, and 222a-222n that communicate when the serial bus 230 is operated in accordance with I2C, I3C, or other protocols. At least one device 202, 222a-222n may be configured to operate as a slave device on the serial bus 230. In one example, a slave device 202 may be adapted to provide a control function 204. In some examples, the control function 204 may include circuits and modules that support a display, an image sensor, and/or circuits and modules that control and communicate with one or more sensors that measure environmental conditions.
In other examples, the control function 204 may include circuits and modules that support a radio, RF sensor, and/or circuits and modules that control and communicate with one or more devices external to the apparatus 200. The slave device 202 may include configuration registers 206 or other storage 224, control logic 212, a transceiver 210 and line drivers/receivers 214a and 214b. The control logic 212 may include a processing circuit such as a state machine, sequencer, signal processor, or general-purpose processor. The transceiver 210 may include a receiver 210a, a transmitter 210c, and common circuits 210b, including timing, logic, and storage circuits and/or devices. In one example, the transmitter 210c encodes and transmits data based on timing in one or more signals 228 provided by a clock generation circuit 208.

[0040] Two or more of the devices 202, 220, and/or 222a-222n may be adapted according to certain aspects and features disclosed herein to support a plurality of different communication protocols over a common bus, which may include an I2C, I3C and/or SPMI protocol. In some instances, devices that communicate using the I2C protocol can coexist on the same 2-wire interface with devices that communicate using the I3C protocol. In one example, the I3C protocols may support a mode of operation that provides a data rate between 6 megabits per second (Mbps) and 16 Mbps with one or more optional high-data-rate (HDR) modes of operation that provide higher performance. The I2C protocols may conform to de facto I2C standards providing for data rates that may range between 100 kilobits per second (kbps) and 3.2 megabits per second (Mbps). I2C, I3C and SPMI protocols may define electrical and timing aspects for signals transmitted on the 2-wire serial bus 230, in addition to data formats and aspects of bus control.
In some aspects, the I2C, I3C, and SPMI protocols may define direct current (DC) characteristics affecting certain signal levels associated with the serial bus 230, and/or alternating current (AC) characteristics affecting certain timing aspects of signals transmitted on the serial bus 230. In some examples, a 2-wire serial bus 230 transmits data on a first wire 218 and a clock signal on a second wire 216. In some instances, data may be encoded in the signaling state, or transitions in signaling state, of the first wire 218 and the second wire 216.

[0041] FIG. 3 is a block diagram 300 illustrating an example of a device 302 that employs an RFFE bus 308 to couple various front-end devices 312, 314, 316, 318, 320, 322. Although the device 302 will be described with respect to an RFFE interface, it is contemplated that the device 302 may also apply to a system power management interface (SPMI) and other multi-drop serial interfaces. A modem 304 may include an RFFE interface 310 that couples the modem 304 to the RFFE bus 308. The modem 304 may communicate with a baseband processor 306. The illustrated device 302 may be embodied in one or more of a mobile communication device, a mobile telephone, a mobile computing system, a notebook computer, a tablet computing device, a media player, a gaming device, a wearable computing and/or communications device, an appliance, or the like. In various examples, the device 302 may be implemented with one or more baseband processors 306, modems 304, multiple communications links 308, 326, and various other buses, devices and/or different functionalities. In the example illustrated in FIG. 3, the RFFE bus 308 may be coupled to an RF integrated circuit (RFIC) 312, which may include one or more controllers, and/or processors that configure and control certain aspects of the RF front-end.
The RFFE bus 308 may couple the RFIC 312 to a switch 314, an RF tuner 316, a power amplifier (PA) 318, a low noise amplifier (LNA) 320, and a power management module 322.

GPIO Signaling

[0042] Mobile communication devices, and other devices that are related or connected to mobile communication devices, increasingly provide greater capabilities, performance and functionalities. In many instances, a mobile communication device incorporates multiple IC devices that are connected using a variety of communications links. FIG. 4 illustrates an apparatus 400 that includes an Application Processor 402 and multiple peripheral devices 404, 406, 408. In the example, each peripheral device 404, 406, 408 communicates with the Application Processor 402 over a respective communication link 410, 412, 414 operated in accordance with mutually different protocols. Communication between the Application Processor 402 and each peripheral device 404, 406, 408 may involve additional wires that carry control or command signals between the Application Processor 402 and the peripheral devices 404, 406, 408. These additional wires may be referred to as sideband general purpose input/output (sideband GPIO 420, 422, 424), and in some instances the number of connections needed for sideband GPIO 420, 422, 424 can exceed the number of connections used for a communication link 410, 412, 414.

[0043] GPIO provides generic pins/connections that may be customized for particular applications. For example, a GPIO pin may be programmable to function as an output pin, an input pin, or a bidirectional pin, in accordance with application needs.
The term “pin,” as used herein, may refer to a physical structure such as a pad, pin or other interconnecting element used to couple an IC to a wire, trace, through-hole via, or other suitable physical connector provided on a circuit board, substrate, flex cable, board connector, or the like.

[0044] In one example, the Application Processor 402 may assign and/or configure a number of GPIO pins to conduct handshake signaling or inter-processor communication (IPC) with a peripheral device 404, 406, 408 such as a modem. When handshake signaling is used, sideband signaling may be symmetric, where signaling is transmitted and received by the Application Processor 402 and a peripheral device 404, 406, 408. With increased device complexity, the increased number of GPIO pins used for IPC communication may significantly increase manufacturing cost and limit GPIO availability for other system-level peripheral interfaces.

[0045] According to certain aspects, the state of GPIO, including GPIO associated with a communication link, may be captured, serialized and transmitted over a data communication link. In one example, captured GPIO may be transmitted in packets over an I3C bus using common command codes to indicate packet content and/or destination.

[0046] FIG. 5 illustrates a system 500 that employs physical GPIO pins for a variety of purposes. Although not shown in FIG. 5 (but see FIG. 4), the system 500 may include one or more communication links and certain physical GPIO pins may be assigned to support out-of-band signaling associated with the communication links, while other physical GPIO pins may be used for other purposes. Physical GPIO pins may enable signals to be transmitted over wires 512, 514, 516, 518, 520 connecting two or more devices 502, 504, 506, 508, 510.
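Capturing and serializing GPIO state into a packet, as described in paragraph [0045], can be sketched as follows. The header byte and bitmap layout below are invented for illustration; the disclosure only says that common command codes indicate packet content and/or destination.

```python
# Hypothetical packing of captured GPIO state into a small packet for
# transmission over a serial bus. Format: [command_code, pin_bitmap].

def pack_gpio_state(command_code, pin_levels):
    """Serialize up to 8 captured GPIO levels into a 2-byte packet."""
    assert len(pin_levels) <= 8
    bitmap = 0
    for index, level in enumerate(pin_levels):
        if level:
            bitmap |= 1 << index     # pin 0 in the least significant bit
    return bytes([command_code, bitmap])

packet = pack_gpio_state(0x1F, [1, 0, 1, 1])   # pins 0, 2, and 3 are high
# packet == bytes([0x1F, 0x0D])
```

A receiver reverses the packing to reproduce the pin states on its own physical GPIO, which is what allows the dedicated sideband wires to be eliminated.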
The signals may include interrupt signals, enable/disable signals, ready/not-ready signals, synchronization signals, low-speed serial clock and/or data signals, status signals such as data buffer condition or activity status, and coexistence signals indicating when one of a plurality of radio frequency transceivers is actively transmitting or receiving.

[0047] The illustrated system 500 includes a host device 502 and multiple slave devices 504, 506, 508, 510. In one example, the host device 502 incorporates an Application Processor 402 (see FIG. 4) configured to service, configure, control and/or support operation of one or more slave devices 504, 506, 508, 510. In another example, the host device 502 may be configured to operate as a bus master on one or more communication links that couple the host device 502 to some or all of the slave devices 504, 506, 508, 510. In FIG. 5, the host device 502 is coupled to each of the slave devices 504, 506, 508, 510.

[0048] First host GPIO 522 couples the host device 502 through a first connector configuration 512 to corresponding first slave GPIO 530 in a first slave device 504. The first host GPIO 522 may include GPIO pins configured as an input, an output or a bidirectional pin, with corresponding first slave GPIO 530 being configured to match the type of signaling transmitted over connectors in the first connector configuration 512. Some GPIO pins may be configured to be placed in a high-impedance state. In one example, the first slave device 504 may include an imaging device or display controller, and image and/or video data may be exchanged through a high-speed communication link 410 (see FIG. 4).
In this example, the first host GPIO 522 and first slave GPIO 530 may include sideband GPIO 420 that enables control signaling in both directions between the host device 502 and the first slave device 504.

[0049] Second host GPIO 524 couples the host device 502 through a second connector configuration 514 to corresponding second slave GPIO 532 in a second slave device 506. The second host GPIO 524 may include GPIO pins configured as an input, an output or a bidirectional pin. Some GPIO pins may be configured to be placed in a high-impedance state, with corresponding second slave GPIO 532 being configured to match the type of signaling transmitted over connectors in the second connector configuration 514. In the illustrated example, a connector 516 coupling the second host GPIO 524 with the second slave GPIO 532 may be connected to third slave GPIO 534 in a third slave device 508. The connector 516 may, for example, carry an interrupt signal and may be driven by open-drain GPIO in the second slave device 506 or the third slave device 508.

[0050] Third host GPIO 526 couples the host device 502 through a third connector configuration 518 to corresponding GPIO pins in the third slave GPIO 534 in the third slave device 508, and a GPIO pin in fourth slave GPIO 536 in a fourth slave device 510. In one example, the connector 518 may carry a synchronizing signal from the host device 502 to the third slave device 508 and the fourth slave device 510. In another example, the connector 518 may carry an enable/disable signal from the host device 502 to the third slave device 508 and the fourth slave device 510. In another example, the connector 518 may carry a select signal used by the host device 502 to select between the third slave device 508 and the fourth slave device 510.

[0051] Fourth host GPIO 528 couples the host device 502 through a fourth connector configuration 520 to corresponding pins in the fourth slave GPIO 536 in the fourth slave device 510.
The fourth host GPIO 528 may include GPIO pins configured as an input, an output or a bidirectional pin, with corresponding fourth slave GPIO 536 being configured to match the type of signaling transmitted over connectors in the fourth connector configuration 520. Some GPIO pins may be configured to be placed in a high-impedance state.

[0052] Additional slave GPIO 542, 544, 546 may be provided in certain slave devices 504, 506, 508 to support signaling between the slave devices 504, 506, 508 over connectors 538, 540 that are not coupled to the host device 502. Signaling between slave devices 506, 508, 510 may also occur on the connectors 516 and 518 coupled to the host device 502. Some connectors 516, 518, 538 support multi-drop or multipoint signaling where signals generated at a first device are received by multiple devices. In some instances, the connectors 516, 518, 538 may support multi-drive signaling where signals can be generated at one or more devices.

[0053] Certain aspects disclosed herein enable GPIO state generated on different devices to be communicated across a multi-drop bus, such that physical interconnections between different groups or pairs of devices can be eliminated.

[0054] FIG. 6 illustrates an example of a system 600 which includes one or more communication links that employ sideband GPIO. To facilitate description, the example of a serial data link may be employed, although the concepts described herein may be applied to parallel data communication links. The system 600 may include an application processor 602 that may serve as a host device on various communication links, multiple peripherals 604-1 to 604-N, and one or more power management integrated circuits (PMICs 606, 608). In the illustrated system 600, at least a first peripheral 604-1 may include a modem.
The application processor 602 and the first peripheral 604-1 may be coupled to respective PMICs 606, 608 using GPIO that provides a combination of reset and other signals, and a system power management interface (SPMI 618, 620). The SPMI 618, 620 operates as a serial interface defined by the MIPI Alliance that is optimized for the real-time control of devices including PMICs 606, 608. The SPMI 618, 620 may be configured as a shared bus that provides a high-speed, low-latency connection for devices, where data transmissions may be managed according to priorities assigned to different traffic classes.

[0055] The application processor 602 may be coupled to each of the peripherals 604-1 to 604-N using multiple communication links 612, 614 and GPIO 616. For example, the application processor 602 may be coupled to the first peripheral 604-1 using a high-speed bus 612, a low-speed bus 614, and input GPIO 616.

GPIO Triggered Interface Message

[0056] In the field of chipsets, GPIO pins may be used to communicate information via hardware signals between chips. For example, the GPIO pins may be used to facilitate the communication of PMIC regulator and clock control signals for devices in various chipsets. Some chips/triggers that may utilize the GPIO pins include WLAN delivery traffic indication map (DTIM), BT ACK/connection, NFC field sense/activity, WiGig (802.11ad) DTIM, USB connector attach, battery insertion/removal, SIM card insertion/removal, SD card insertion/removal, and external sensor detection event (e.g., camera, touch, audio, gyro, etc.).

[0057] In an aspect, numerous hardware signals are transmitted throughout a chipset to convey simple information from one chip to another. As a number of signals required to be communicated in the chipset increases, a number of GPIO pins on a chip, as well as printed circuit board (PCB) routing between chips, also increases. However, as chipsets become more advanced, die area is decreasing.
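The idea of managing shared-bus transmissions "according to priorities assigned to different traffic classes" can be sketched with a simple priority queue. The class names and priority values are illustrative only; they are not taken from the SPMI specification.

```python
# Minimal sketch of priority-managed transmission on a shared bus.
# Lower numeric priority is served first; arrival order breaks ties.

import heapq

class PriorityBus:
    def __init__(self):
        self._queue = []
        self._order = 0   # monotonically increasing tie-breaker

    def submit(self, priority, message):
        heapq.heappush(self._queue, (priority, self._order, message))
        self._order += 1

    def transmit_next(self):
        """Pop and return the highest-priority pending message, or None."""
        return heapq.heappop(self._queue)[2] if self._queue else None

bus = PriorityBus()
bus.submit(2, "telemetry read")
bus.submit(0, "regulator enable")   # real-time control: highest priority
assert bus.transmit_next() == "regulator enable"
```

The tie-breaker counter matters: without it, two messages of equal priority would be compared by their payloads, which is neither meaningful nor guaranteed to work for arbitrary message objects.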
Therefore, an amount of die space for accommodating the GPIO pins to communicate the hardware signals is limited.

[0058] FIG. 7 is a diagram illustrating an example architecture 700 for communicating signals between devices (or device components). As technology advances, power management integrated circuits (PMICs) (e.g., first PMIC 702, second PMIC 704, and third PMIC 706) and packages are becoming smaller in size. Consequently, a number of GPIO pins to communicate hardware control signals to and from the PMICs, such as for enabling clocks, enabling regulators, etc., is also decreasing. As such, aspects of the present disclosure relate to eliminating as many hardware signals/GPIO pins as possible without sacrificing the amount or types of information capable of being communicated between devices. In an aspect, the present disclosure is directed toward the PMIC area, but may be applied to other device areas as well.

[0059] Referring to FIG. 7, a control signal transmitted from a chip may indicate to a PMIC that an action is to be performed by the PMIC. The PMIC may enable a corresponding action (e.g., enable a clock, enable a voltage regulator, enable a mode/state change, etc.) based on the control signal received from the chip. In an aspect, a wide variety of actions may occur in the PMIC when the control signal from the chip transitions to a high state or to a low state.

[0060] In one example operation, a first chip (Transceiver1) 708 is an independent chip that is operating. When the first chip 708 decides that it needs its clock, the first chip 708 may assert a hardware signal via a first clock enable line (CLKEN1) 710 connected to a first PMIC 702. For example, the first chip 708 asserts the hardware signal by raising a voltage of the first clock enable line 710 to 1.8V. The first PMIC 702 will then receive the hardware signal and determine that the first chip 708 has requested its clock.
In response, the first PMIC 702 will enable a corresponding clock for the first chip 708. As shown in FIG. 7, a clock of 38.4 MHz (712) is enabled for the first chip 708. Furthermore, when the first chip 708 no longer needs its clock, the first chip 708 may de-assert the first clock enable line 710 and the first PMIC 702 may disable the corresponding clock accordingly.

[0061] In another example operation, a first control line (CTRL1) 716 may not only enable clocks, but may also enable voltage regulators. Accordingly, when a second chip (Transceiver2) 714 decides that it needs its clock/voltage regulator to be enabled, the second chip 714 may assert a hardware signal via the first control line 716. In this example, the first control line 716 is connected to two PMICs, the first PMIC 702 and the third PMIC 706, as there may exist two regulators (one regulator in each PMIC) that need to be enabled in order to support the second chip 714. The first PMIC 702 and the third PMIC 706 will then receive the hardware signal and determine that the second chip 714 has requested its clock/regulator to be enabled. In response, the first PMIC 702 and the third PMIC 706 will enable a corresponding clock/regulator for the second chip 714. When the second chip 714 no longer needs its clock/regulator enabled, the second chip 714 may de-assert the first control line 716 and the first PMIC 702 and the third PMIC 706 may disable the corresponding clock/regulator accordingly.

[0062] External chips may need regulators and/or clocks for powering on/off, mode selection, and sequencing, for example. The regulators and/or clocks may also be needed when a main system on chip (SoC) is in a sleep or low power state. The external chips may route dedicated signals to PMIC hardware (GPIO) pins to control the regulators and clocks. However, a number of pins on the PMICs may be limited. Therefore, the elimination of pins can save PMIC system costs.
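The CTRL1 example above, where one control line fans out to two PMICs that each enable their own regulator, can be modeled with a short sketch. The class and signal names are invented for illustration and do not correspond to any real driver API.

```python
# Sketch of the CTRL1 fan-out: one shared control line drives two PMICs,
# each of which enables its regulator on assert and disables it on
# de-assert, mirroring the Transceiver2 example in the text.

class Regulator:
    def __init__(self):
        self.enabled = False

class Pmic:
    def __init__(self, regulator):
        self.regulator = regulator

    def on_control_line(self, level):
        # Assert (1) enables the regulator; de-assert (0) disables it.
        self.regulator.enabled = (level == 1)

class ControlLine:
    """A shared hardware line routed to every listening PMIC."""
    def __init__(self, listeners):
        self.listeners = listeners

    def drive(self, level):
        for pmic in self.listeners:
            pmic.on_control_line(level)

pmic1, pmic3 = Pmic(Regulator()), Pmic(Regulator())
ctrl1 = ControlLine([pmic1, pmic3])
ctrl1.drive(1)   # Transceiver2 asserts CTRL1: both regulators enabled
ctrl1.drive(0)   # Transceiver2 de-asserts CTRL1: both regulators disabled
```

Note that the fan-out is the cost being criticized in the surrounding text: every PMIC listening on the line needs its own dedicated pin and PCB route.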
Moreover, because control signals route to all PMICs having resources requiring control, and routing the control signals through a SoC/PMIC area is constrained, the elimination of routing can reduce PCB complexity.[0063] In an aspect of the disclosure, control signal routing is facilitated via a SoC instead of routing through each individual PMIC. The SoC includes an existing signal path for data communication between the SoC and a PMIC. Accordingly, the existing signal path may be leveraged to also communicate a control signal from an external IC (external to the SoC) to the PMIC. For example, the control signal may be converted into an SPMI transaction, which then triggers in a core PMIC a sequence that may be run. This may include control of all resources controlled by the PMICs. In an aspect, the core PMIC may include a controller that controls from a single point all PMICs and the resources associated with them. Accordingly, a number of GPIO pins and PCB routing complexity is reduced.[0064] In previous architectures, pins are routed to individual PMICs. Therefore, if three different chips/devices wanted to communicate with three different PMICs, for example, a total of nine separate signal routes may be needed to support all communications. In contrast, by routing the control signaling through the SoC instead of through each individual PMIC according to the aspects of the present disclosure, the number of signal routes may be reduced to three. Moreover, as chipsets evolve, PMICs may become more discrete. Therefore, the ability to route control signals through a single entity (e.g., SoC) would allow communications to be distributed to the discrete PMICs and provide significant benefits.[0065] FIG. 8 is a diagram illustrating another example architecture 800 for communicating signals between devices.
In an aspect, hardware control signals requesting actions may be sent to SoC GPIO pins, and the SoC may send an SPMI message to a PMIC subsystem to control corresponding resources (e.g., voltage regulators and clocks). In more detail, as shown in FIG. 8, instead of routing control signals from various chips/devices (e.g., first chip (Transceiver1) 808 and/or second chip (Peripheral Device1) 810) to individual PMICs (e.g., first PMIC 802, second PMIC 804, and/or third PMIC 806), the control signals may be routed to the SoC (Processor) 812. The SoC 812 may then trigger an SPMI message to one or more PMICs for voltage regulator and clock control. In an aspect, a conversion engine, such as an interface circuit/module (e.g., a Multi-Generic event PMIC arbiter Interface (MGPI) circuit/module), within the SoC may be functional when the SoC is in a low power/sleep state. As such, the interface circuit/module may operate to handle the control signals from the various chips without waking, or causing a higher power state in, the SoC. The interface circuit/module is capable of controlling resources on multiple PMICs.[0066] In an aspect, all of the hardware control signals (requesting actions) that were previously routed between all of the different PMICs and chips/devices are brought together and routed into the SoC 812. The SoC 812 may send a control command to a core PMIC (e.g., one of PMICs 802, 804, 806) by leveraging an existing communication bus (e.g., SPMI bus) between the SoC 812 and the core PMIC. The core PMIC may aggregate all of the different hardware signals requesting actions and send the requests to individual PMICs.[0067] In an aspect, a global control command may be sent to all of the PMICs. In another example, a control command may be sent to a PMIC controller that can route control signals to all PMICs intended to receive the control command.
Accordingly, the control command may be sent to the core PMIC having the PMIC controller, or the control command may be sent to all of the individual PMICs on a shared communication bus. In an aspect, the hardware control signals from individual chips/devices (i.e., individual signal edges) are converted into commands via the SoC, and the commands are transmitted on a shared bus.[0068] FIG. 9 is a diagram illustrating an example on-SoC architecture 900 for communicating signals between devices. In an example operation, an external IC (Ext IC) 902 (e.g., WiGig chip, wireless LAN chip, NFC chip, etc.) may send a control signal (e.g., clock enable (CLKEN) signal and/or switch control (SWCTRL) signal) to pads 904 on a SoC (e.g., Processor) 906. The pads 904 may be routed to an interface circuit/module 908. The interface circuit/module 908 may wake an always-on system in the SoC 906 and issue a command through a port of an arbiter circuit/module 910. The arbiter 910 may then send the command across a bus 912 (e.g., SPMI bus) to one or more PMICs 914. In an aspect, the command may be a global command to all of the PMICs or a command to a PMIC controller of a core PMIC that triggers a sequence of events. In a further aspect, although the bus 912 is shown as an SPMI bus in FIG. 9, it is contemplated that the command may be sent to the one or more PMICs 914 via any type of interface protocol (e.g., RFFE, I3C, I2C, PCIe, VGI, etc.). Notably, the external IC 902 does not have access to the bus 912.[0069] In an aspect, the interface circuit/module 908 is a conversion engine that converts a hardware signal transition into a message that can be sent over the SPMI bus 912. The interface circuit/module 908 operates in conjunction with the arbiter 910, which is an interface to the bus 912.
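As a rough illustration of this conversion engine, the sketch below maps a rising or falling edge on a monitored pad to an (address, data) pair and serializes it into a bitstream. The pad names, register addresses, and the 16-bit frame layout are illustrative assumptions, not the actual SPMI encoding.

```python
# Hypothetical per-pad translation table: (rising-edge message, falling-edge message).
EDGE_TABLE = {
    "CLKEN1": ((0x40, 0x01), (0x40, 0x00)),   # enable / disable a clock buffer
    "SWCTRL": ((0x52, 0x80), (0x52, 0x00)),   # enable / disable a regulator
}

def translate_edge(pad, rising):
    """Map a signal transition on `pad` to an (address, data) pair."""
    rise_msg, fall_msg = EDGE_TABLE[pad]
    return rise_msg if rising else fall_msg

def to_bitstream(address, data):
    """Serialize the pair into a 16-bit frame (address byte first)."""
    word = (address << 8) | data
    return format(word, "016b")

addr, data = translate_edge("CLKEN1", rising=True)
print(to_bitstream(addr, data))  # '0100000000000001'
```

In this sketch, `translate_edge` plays the role of the interface circuit/module and `to_bitstream` the serialization handed to the arbiter for transmission on the bus.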
For example, the interface circuit/module 908 provides sufficient information to translate a hardware signal transition into an address and data pair that is sent to the arbiter 910 to send over the SPMI bus 912. The interface circuit/module 908 effectively takes a rising/falling signal edge and translates the rising/falling signal edge into a bitstream (protocol message) that can be sent over the bus 912.[0070] In an aspect, more than one command may be sent over the SPMI bus 912. As described above, the interface circuit/module 908 converts a hardware signal transition into a single message transmitted on the bus 912. However, in other aspects, the interface circuit/module 908 may convert the hardware signal transition into multiple messages transmitted on the bus 912.[0071] In a further aspect, a requesting PMIC may send a command (e.g., EUD, BatAlarm) to the interface circuit/module 908 via the arbiter 910 using the SPMI bus 912. For example, if the requesting PMIC detects a USB plug event or a battery removal event, the requesting PMIC may request that a particular action (e.g., voltage regulator/clock control) be performed by one of the one or more PMICs 914. Accordingly, the requesting PMIC may make the request by sending an SPMI interrupt signal to a control circuit/module on the arbiter 910 that then triggers, based on pattern matching, an interrupt that goes into the interface circuit/module 908. Based on the interrupt from the arbiter 910, the interface circuit/module 908 may operate in the same way as with the control signal from the external IC 902.
That is, the interface circuit/module 908 may take a rising/falling signal edge from the interrupt and translate the rising/falling signal edge into a bitstream to be sent over the SPMI bus 912 and back to the one or more PMICs 914 where the requested action may be performed.[0072] In an aspect, a host processor of the SoC 906 may be in a low power/sleep state. Accordingly, aspects of the present disclosure relate to the interface circuit/module 908 staying powered-on to process control signals without waking the host processor. The control signals capable of being processed by the interface circuit/module 908 may be used to control and power on/off resources at the PMICs without waking the host processor. In an aspect, the interface circuit/module 908 resides on the SoC 906. Nonetheless, the host processor does not have to be awake in order for the interface circuit/module 908 to perform the control signal processing. The interface circuit/module 908 performs operations in accordance with the aspects of the present disclosure while the host processor is asleep.[0073] In an aspect, waking of the host processor is avoided to mitigate a power penalty. For example, when the host processor is awake, the host processor may perform other actions/services unrelated to the communication of control signaling between external chips and PMICs (e.g., a PMIC requesting its clock). As such, unnecessary device power is drained when the host processor is unnecessarily awake to perform the unrelated collateral actions. The present disclosure promotes power savings by providing the interface circuit/module 908, which remains awake to perform the novel operations of the present disclosure while keeping the host processor asleep (or in a low power state).[0074] In an aspect of the disclosure, one or more rails 916 may connect the one or more PMICs 914 to voltage/clock resources 918. Moreover, one or more power rails 920 may connect the voltage/clock resources 918 to the external IC 902.
When the arbiter 910 sends the command across the SPMI bus 912 to a PMIC 914, the PMIC 914 may perform a requested action according to the command. For example, if the command is a request from the external IC 902 to enable a voltage regulator/clock, the PMIC 914 may enable a voltage regulator/clock buffer 918 corresponding to the external IC 902 based on the external IC's resource requirements via the one or more rails 916. Thereafter, a signal corresponding to the enabled regulator/clock may be sent to the external IC 902 via the one or more power rails 920. The external IC 902 may request that the resources 918 be enabled/disabled on the external IC's own timeline.[0075] In a further aspect of the disclosure, one or more other power rails 922 may connect the voltage/clock resources 918 back to the one or more PMICs 914. When the arbiter 910 sends the command across the SPMI bus 912 to a PMIC 914, the PMIC 914 may perform a requested action according to the command. For example, if the command is a request from a particular PMIC to enable a voltage regulator/clock, the one or more PMICs 914 may enable a voltage regulator/clock buffer 918 corresponding to the particular PMIC based on the particular PMIC's resource requirements via the one or more rails 916. Thereafter, a signal corresponding to the enabled regulator/clock may be sent to the particular PMIC via the one or more other power rails 922.[0076] Aspects of the present disclosure are novel and innovative for a number of reasons. For example, aspects of the present disclosure reduce a number of GPIO pins in a PMIC subsystem. Moreover, aspects of the present disclosure allow flexibility in transmitting a command/message over the bus 912. The command/message may be transmitted using any of a number of different protocols, e.g., SPMI, I2C, I3C, UART, VGI, SPI, etc. Also, aspects of the present disclosure may be implemented to support a sequence of messages for more complex control.
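The signal-route savings noted earlier (nine dedicated routes for three chips and three PMICs, versus three routes into the SoC) reduces to simple arithmetic, sketched here for illustration:

```python
# With dedicated wiring, every chip may route a control line to every PMIC
# it uses; routing through the SoC needs only one line per chip.

def dedicated_routes(num_chips, num_pmics):
    # worst case: each chip wired to each PMIC
    return num_chips * num_pmics

def soc_routes(num_chips, num_pmics):
    # one line from each chip into the SoC, regardless of PMIC count
    return num_chips

print(dedicated_routes(3, 3))  # 9 separate routes
print(soc_routes(3, 3))        # 3 routes into the SoC
```

The savings grow linearly with the number of PMICs, which matters as PMICs become more discrete.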
Aspects of the present disclosure further allow messages to be sent to multiple end points using one control signal. Aspects of the present disclosure may also provide reduced PMIC cost, reduced PMIC pin count and associated package area, and reduced PCB signal routes. Examples of a Method and Processing Circuit[0077] FIG. 10 is a flowchart 1000 of a method that may be performed at an apparatus (e.g., slave or bus master) for communicating a control signal between device components.[0078] At block 1002, the apparatus may send a control signal from an integrated circuit (IC) to a system on chip (SoC). The control signal may request enablement or disablement of one or more resources corresponding to the IC and controlled by one or more devices (e.g., one or more power management integrated circuits (PMICs)). In an aspect, requesting enablement or disablement of the one or more resources may include a request for a feature adjustment, such as for example, a mode change, a state change, a voltage change, a clock signal, a pulse density modulation (PDM) output pattern change, and/or a noise spreading circuit. In an aspect, the IC is a circuit external to the SoC. In another aspect, the IC includes the one or more PMICs. Moreover, the one or more resources may include a voltage regulator regulating a voltage of the IC and/or a clock buffer providing a clock signal to the IC.[0079] At block 1004, the device may convert, via a converting circuit within the SoC, the control signal from the IC into a command to be transmitted to the one or more PMICs. In an aspect, the converting circuit converts the control signal into the command while a host processor of the SoC is in a sleep or low power state.
In a further aspect, the converting circuit converts the control signal into the command by translating a signal transition (e.g., rising edge or falling edge) of the control signal into a stream of bits representing the command.[0080] At block 1006, the device may transmit the command from the converting circuit to the one or more PMICs via a bus coupling the SoC to the one or more PMICs. In an aspect, the IC has no direct access to the bus. In another aspect, the IC is coupled to the bus and includes the one or more PMICs. In a further aspect, the command is transmitted from the converting circuit to the one or more PMICs via an arbiter that provides access to the one or more PMICs. In another aspect, the command is transmitted via the bus according to a system power management interface (SPMI) protocol or any other type of interface protocol (e.g., RFFE, I3C, I2C, PCIe, VGI, etc.). In a further aspect, the command transmitted via the bus is a stream of bits that may be encrypted or encoded (e.g., to disguise data for security purposes or to change an energy profile for noise considerations).[0081] In an aspect, the command may be a single message, or multiple messages, transmitted on the bus. In another aspect, the command may be a global command transmitted to all PMICs of the one or more PMICs or a command transmitted to a core PMIC of the one or more PMICs. The core PMIC may include a PMIC controller configured to route the command to at least one PMIC of the one or more PMICs intended to receive the command.[0082] At block 1008, the device may enable or disable, via the one or more PMICs, the one or more resources corresponding to the IC based on the command.[0083] Additionally or alternatively, the device may perform other operations, such as the operations depicted in blocks 1010 to 1016 of FIG. 10.[0084] At block 1010, the device may send a second command from a requesting PMIC of the one or more PMICs to the SoC via the bus.
The second command may request enablement or disablement of one or more resources corresponding to the requesting PMIC and controlled by at least one controlling PMIC of the one or more PMICs.[0085] At block 1012, the device may convert, via the converting circuit, the second command from the requesting PMIC into a third command to be transmitted to the at least one controlling PMIC.[0086] At block 1014, the device may transmit the third command from the converting circuit to the at least one controlling PMIC via the bus.[0087] At block 1016, the device may enable or disable, via the at least one controlling PMIC, the one or more resources corresponding to the requesting PMIC based on the third command.[0088] FIG. 11 is a diagram illustrating a simplified example of a hardware implementation for an apparatus 1100 employing a processing circuit 1102. The apparatus may implement a bridging circuit in accordance with certain aspects disclosed herein. The processing circuit typically has a controller or processor 1116 that may include one or more microprocessors, microcontrollers, digital signal processors, sequencers and/or state machines. The processing circuit 1102 may be implemented with a bus architecture, represented generally by the bus 1120. The bus 1120 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1102 and the overall design constraints. The bus 1120 links together various circuits including one or more processors and/or hardware modules, represented by the controller or processor 1116, the modules or circuits 1104, 1106, 1108, and 1110 and the processor-readable storage medium 1118. One or more physical layer circuits and/or modules 1114 may be provided to support communications over a communication link implemented using a multi-wire bus 1112 or other communication structure. 
The bus 1120 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further.[0089] The processor 1116 is responsible for general processing, including the execution of software, code and/or instructions stored on the processor-readable storage medium 1118. The processor-readable storage medium may include a non-transitory storage medium. The code and/or instructions, when executed by the processor 1116, cause the processing circuit 1102 to perform the various functions described supra (e.g., the functions described with respect to FIGs. 8, 9, and 10) for any particular apparatus. The processor-readable storage medium may be used for storing data that is manipulated by the processor 1116 when executing software. The processing circuit 1102 further includes at least one of the modules/circuits 1104, 1106, 1108, and 1110. The modules/circuits 1104, 1106, 1108, and 1110 may be software modules running in the processor 1116, resident/stored in the processor-readable storage medium 1118, one or more hardware modules coupled to the processor 1116, or some combination thereof.
The modules/circuits 1104, 1106, 1108, and 1110 may include microcontroller instructions, state machine configuration parameters, or some combination thereof.[0090] In one configuration, the apparatus 1100 includes modules and/or circuits 1104 configured to send control signals, modules and/or circuits 1106 configured to convert the control signals into commands to be transmitted to one or more PMICs, modules and/or circuits 1108 configured to transmit the commands to the one or more PMICs over a bus, and modules and/or circuits 1110 configured to enable or disable one or more resources via the one or more PMICs based on the commands.[0091] It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.[0092] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”
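As an end-to-end recap of the method of FIG. 10 described above, the sketch below strings together blocks 1002 through 1008 (send, convert, transmit, enable/disable). All function names and the dictionary-based PMIC model are hypothetical stand-ins for illustration only.

```python
# Block 1004: translate the signal transition into a command.
def convert(control_signal):
    action = "enable" if control_signal["edge"] == "rising" else "disable"
    return {"resource": control_signal["resource"], "action": action}

# Block 1006: broadcast the command on the shared bus to every PMIC.
def transmit(command, pmics):
    for pmic in pmics:
        pmic.setdefault("inbox", []).append(command)

# Block 1008: each PMIC enables/disables the resources it controls.
def apply(pmics):
    for pmic in pmics:
        for cmd in pmic.pop("inbox", []):
            if cmd["resource"] in pmic["resources"]:
                pmic["resources"][cmd["resource"]] = (cmd["action"] == "enable")

pmics = [{"resources": {"clk_38p4": False}}, {"resources": {"ldo1": False}}]
signal = {"resource": "clk_38p4", "edge": "rising"}  # block 1002: IC -> SoC
transmit(convert(signal), pmics)
apply(pmics)
print(pmics[0]["resources"]["clk_38p4"])  # True
```

Only the PMIC that actually controls the requested resource acts on the command; the other PMIC ignores it, mirroring the global-command case described in block 1006.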
Systems and methods are disclosed including a memory device and a processing device operatively coupled to the memory device. The processing device can perform operations including identifying, among a first plurality of wordlines of a set of pages of the memory device, at least one wordline having a current value of a data state metric satisfying a first condition; determining new values of the data state metric of a second plurality of wordlines of the set of pages, wherein the at least one wordline is excluded from the second plurality of wordlines; and responsive to determining that the new values of the data state metric of one or more wordlines of the second plurality of wordlines satisfy a second condition, performing a media management operation with respect to the one or more wordlines. |
CLAIMSWhat is claimed is:1. A system comprising: a memory device; and a processing device, operatively coupled to the memory device, to perform operations comprising: identifying, among a first plurality of wordlines of a set of pages of the memory device, at least one wordline having a current value of a data state metric satisfying a first condition; determining new values of the data state metric of a second plurality of wordlines of the set of pages, wherein the at least one wordline is excluded from the second plurality of wordlines; and responsive to determining that the new values of the data state metric of one or more wordlines of the second plurality of wordlines satisfy a second condition, performing a media management operation with respect to the one or more wordlines.2. The system of claim 1, wherein the processing device to perform further operations comprising: including the at least one excluded wordline in a subsequent determining of further new values of the data state metric.3. The system of claim 1, wherein the processing device to perform further operations comprising: determining further new values of the data state metric of a third plurality of wordlines of the set of pages, wherein at least one wordline having a same current value of the data state metric as its new value is excluded from the third plurality of wordlines.4. The system of claim 1, wherein the data state metric comprises a residual bit error rate (RBER).5. The system of claim 1, wherein the first condition comprises the current value of the data state metric being below an inclusion threshold criterion.6. The system of claim 1, wherein the second condition comprises at least one of the new values exceeding a refresh threshold criterion.7. The system of claim 1, wherein the media management operation comprises writing data stored at the one or more wordlines to a new block.8. 
The system of claim 1, wherein the media management operation comprises writing data stored at an entire block associated with the one or more wordlines to a new block.9. A method, comprising: identifying a first plurality of wordlines among a second plurality of wordlines of a set of pages of a memory device; determining values of a data state metric of each wordline of the first plurality of wordlines; and responsive to determining that the values of the data state metric of one or more wordlines of the first plurality of wordlines satisfy a condition, performing a media management operation with respect to the second plurality of wordlines.10. The method of claim 9, wherein the second plurality of wordlines comprises a set of three consecutive wordlines.11. The method of claim 9, wherein the data state metric comprises a residual bit error rate (RBER).12. The method of claim 9, wherein the condition comprises at least one of the values of the data state metric exceeding a refresh threshold criterion.13. The method of claim 9, wherein the media management operation comprises writing data stored at the second plurality of wordlines to a new block.14. The method of claim 9, wherein the media management operation comprises writing data stored at an entire block associated with the second plurality of wordlines to a new block.15. 
A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device operatively coupled to a memory, performs operations comprising: identifying, among a first plurality of wordlines of a set of pages of the memory device, at least one wordline having a current value of a data state metric satisfying a first condition; determining new values of the data state metric of a second plurality of wordlines of the set of pages, wherein the at least one wordline is excluded from the second plurality of wordlines; and responsive to determining that the new values of the data state metric of one or more wordlines of the second plurality of wordlines satisfy a second condition, performing a media management operation with respect to the one or more wordlines.16. The non-transitory computer-readable storage medium of claim 15, wherein the processing device to perform further operations comprising: including the at least one excluded wordline in a subsequent determining of further new values of the data state metric.17. The non-transitory computer-readable storage medium of claim 15, wherein the processing device to perform further operations comprising: determining further new values of the data state metric of a third plurality of wordlines of the set of pages, wherein at least one wordline having the same current value and new value is excluded from the third plurality of wordlines.18. The non-transitory computer-readable storage medium of claim 15, wherein the data state metric comprises a residual bit error rate (RBER).19. The non-transitory computer-readable storage medium of claim 15, wherein the first condition comprises the current value of the data state metric being below an inclusion threshold criterion.20. The non-transitory computer-readable storage medium of claim 15, wherein the second condition comprises at least one of the new values exceeding a refresh threshold criterion. |
SELECTIVE WORDLINE SCANS BASED ON A DATA STATE METRICTECHNICAL FIELD[001] Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to selective wordline scans based on a data state metric.BACKGROUND[002] A memory sub-system can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory sub-system to store data at the memory devices and to retrieve data from the memory devices.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.[003] FIG. 1 illustrates an example computing system that includes a memory subsystem in accordance with some embodiments of the present disclosure.[004] FIG. 2 is a flow diagram of an example method for performing data integrity checks, in accordance with some embodiments of the present disclosure.[005] FIG. 3 is a flow diagram of another example method for performing data integrity checks, in accordance with some embodiments of the present disclosure.[006] FIG. 4 is an illustration of a memory sub-system determining data state metric values for wordlines in a block, in accordance with some embodiments of the present disclosure.[007] FIG. 5 is an architecture of wordlines and bit lines of a block, in accordance with some embodiments of the present disclosure.[008] FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.DETAILED DESCRIPTION[009] Aspects of the present disclosure are directed to selective wordline scans based on a data state metric. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1.
In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that
store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.[0010] A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. For example, NAND memory, such as 3D flash NAND memory, offers storage in the form of compact, high density configurations. A non-volatile memory device is a package of one or more dice, each including one or more planes. For some types of non-volatile memory devices (e.g., NAND memory), each plane includes a set of physical blocks. Each block includes a set of pages. “Block” herein shall refer to a set of contiguous or non-contiguous memory pages. An example of “block” is “erasable block,” which is the minimal erasable unit of memory, while “page” is a minimal writable unit of memory. Each page includes a set of memory cells. A memory cell is an electronic circuit that stores information. The memory device can include single-level cells (SLCs) that each store one bit of data, multilevel cells (MLCs) that each store two bits of data, triple-level cells (TLCs) that each store three bits of data and/or quad-level cells (QLCs) that each store four bits of data. Data is stored inside each memory cell as one or more threshold voltages of the memory cell depending on the type of memory cell.[0011] Each block can be organized as a two-dimensional array of memory cells. Memory cells in the same row form a wordline and the transistors of memory cells in the same column (each memory cell from a different memory page) are bound together to form a bit line. Only one memory cell is read at a time per bit line. When data is written to a memory cell of the memory sub-system for storage, the memory cell can deteriorate.
Accordingly, each memory cell of the memory sub-system can have a finite number of write operations performed on the memory cell before the memory cell is no longer able to reliably store data. Data stored at the memory cells of the memory sub-system can be read from the memory subsystem and transmitted to a host system. For example, during a read operation, a read reference voltage is applied to the wordline containing the data to be read, while a pass through voltage is applied to wordlines of unread memory cells. The pass through voltage is a read reference voltage higher than any of the stored threshold voltages. However, when data is read from a memory cell of the memory sub-system, nearby or adjacent wordlines can experience what is known as read disturb.[0012] Read disturb is a phenomenon in NAND memory where reading data from a memory cell can cause the threshold voltage of unread memory cells in the same block to shift to a higher value. In particular, the high pass-through voltage induces electric tunneling
that can shift the threshold voltages of unread cells to higher values. Each threshold voltage shift can be small, but these shifts can accumulate over time and become large enough to alter the state of some memory cell, which can cause read disturb errors. Read disturb is the result of continually reading from one wordline without intervening erase operations, causing other memory cells in nearby wordlines to change over time. If too many read operations are performed on a wordline, data stored at adjacent wordlines of the memory sub-system can become corrupted or incorrectly stored at the memory cell(s). This can result in a higher error rate of the data stored at the wordline. This can increase the use of an error detection and correction operation (e.g., an error control operation) for subsequent operations (e.g., read and/or write) performed on the wordline. The increased use of the error control operation can result in a reduction of the performance of the conventional memory sub-system. As more resources of the memory sub-system are used to perform the error control operation, fewer resources can be used to perform other read operations or write operations.[0013] The error rate associated with data stored at the block can increase due to read disturb. Therefore, using a read operation counter, upon a threshold number of read operations being performed on the block, a conventional memory sub-system can perform the data integrity check to verify that the data stored at the block does not include any errors. During the data integrity check, one or more values of a data state metric are determined for data stored at each wordline of the block. “Data state metric” herein shall refer to a quantity that is measured or inferred from the state of data stored on a memory device. Specifically, the data state metrics may reflect the state of the temporal voltage shift, the degree of read disturb, and/or other measurable functions of the data state. 
A composite data state metric is a function (e.g., a weighted sum) of a set of component state metrics. One example of a data state metric is the residual bit error rate (RBER). The RBER corresponds to the rate of bit errors observed in data read from the block.[0014] Conventionally, if the data state metric for a wordline of the block exceeds a threshold value, indicating a high error rate associated with data stored at the wordline due, at least in part, to read disturb, then a media management operation (also referred to as “folding”) can be performed to relocate the data stored at the wordline or the entire block to a new block of the memory sub-system. The folding of the data stored at the wordline or the block to the other block can include writing the data stored at the wordline or the block to the other block to refresh the data stored by the memory sub-system, in order to mitigate the read disturb associated with the data, and erasing the data at the wordline or the block. However, as previously discussed, read disturb can affect memory cells of wordlines that are adjacent to
the wordline that a read operation is performed on. Therefore, read disturb can induce a non-uniform stress on memory cells of the block if particular memory cells are read from more frequently (e.g., a random or targeted read). For example, wordlines of a block that are adjacent to a wordline that is frequently read from can have a high error rate, while wordlines that are not adjacent to the frequently read wordline can have a lower error rate due to a reduced impact of read disturb on these memory cells. This is because wordlines adjacent to a wordline being read require a higher pass-through voltage than wordlines that are not adjacent.[0015] The read operation counter is used for each block on the memory sub-system. Since a memory sub-system controller cannot easily detect which type of read pattern (sequential read pattern, targeted read pattern, etc.) is used on the memory block, nor which wordlines are read more frequently than others, all wordlines of a block are scanned during the data integrity check. However, scanning every wordline of the block is expensive in terms of system resources. Further, keeping a read operation counter for each wordline of the block may not be practical because it would utilize excessive memory sub-system resources. This can result in a decrease of performance of the memory sub-system and increase the power consumption of the memory sub-system. Furthermore, scanning of each wordline during each integrity check can decrease the lifespan of the memory sub-system.[0016] Aspects of the present disclosure address the above and other deficiencies by implementing a memory sub-system controller capable of performing selective wordline scans based on a data state metric.
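A minimal sketch of such a selective scan follows. The threshold values, the dict-based metric map, and the function name are assumptions made for exposition; the disclosure does not prescribe them:

```python
# Illustrative sketch of one selective data integrity check. Thresholds
# and names are assumptions, not values from the disclosure.

SCAN_THRESHOLD = 10_000      # read operations before a data integrity check
REFRESH_THRESHOLD = 0.010    # metric value that triggers folding
INCLUSION_THRESHOLD = 0.001  # below this, a wordline skips the next check

def data_integrity_check(metrics: dict[int, float], excluded: set[int]):
    """Scan every wordline not in `excluded`; return the wordlines whose
    metric demands a media management (folding) operation, and the
    wordlines to exclude from the next check."""
    to_fold: list[int] = []
    next_excluded: set[int] = set()
    for wordline, value in sorted(metrics.items()):
        if wordline in excluded:
            continue  # previously excluded wordlines rejoin later checks
        if value >= REFRESH_THRESHOLD:
            to_fold.append(wordline)     # high error rate: relocate data
        elif value < INCLUSION_THRESHOLD:
            next_excluded.add(wordline)  # healthy: skip in the next check
    return to_fold, next_excluded

# Example: wordline 0 needs folding; wordline 1 is healthy enough to skip.
to_fold, skip_next = data_integrity_check(
    {0: 0.020, 1: 0.0005, 2: 0.005}, excluded=set())
```

If `to_fold` comes back empty, the controller would reset the per-block read operation counter and wait for it to satisfy the scan threshold criterion again.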
In an illustrative example, the memory sub-system controller can, responsive to a read operation counter satisfying a scan threshold criterion, trigger a data integrity check and identify a data state metric value (e.g., an RBER value) for each wordline of a block of the memory sub-system. A data state metric value of a wordline exceeding a refresh threshold criterion indicates a high error rate associated with data stored at that wordline (due, in part, to read disturb) and can trigger a media management operation (e.g., a folding operation). Responsive to detecting no data state metric values exceeding the refresh threshold criterion, the memory sub-system controller can reset the read operation counter. Once the read operation counter satisfies the scan threshold criterion again, the memory sub-system controller can trigger a new data integrity check and determine new data state metric values for the wordlines of the block. However, wordlines having a low (below an inclusion threshold criterion) data state metric value associated with the initial data integrity check are excluded from the new data integrity check. This allows the memory sub-system controller to scan fewer wordlines in subsequent data integrity checks. Responsive to
detecting no data state metric values exceeding the refresh threshold criterion in the new data integrity check, the memory sub-system controller can again reset the read operation counter. During a subsequent data integrity check, wordlines that were excluded from a preceding data integrity check can be automatically included in the subsequent data integrity check. Further, wordlines that experienced no change in their data state metric values between two consecutive preceding data integrity checks can be excluded from the subsequent data integrity check. Responsive to detecting a data state metric value of a wordline exceeding a refresh threshold criterion during any data integrity check, the memory sub-system controller can trigger the media management operation with respect to the one or more wordlines of the block.[0017] Advantages of the present disclosure include, but are not limited to, improved performance of the memory sub-system by reducing the number of wordlines scanned during data integrity checks performed by the memory sub-system. Since the number of wordlines scanned is reduced, the amount of resources of the memory sub-system devoted to performing the data integrity scans is also reduced. This can result in an improvement of performance of the memory sub-system and a decrease in power consumption by the memory sub-system. Furthermore, this can increase the lifespan of the memory sub-system. Although embodiments are described using memory cells of a NAND flash memory, aspects of the present disclosure can be applied to other types of memory sub-systems.[0018] FIG. 1 illustrates an example computing system 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure.
The memory sub-system 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such.[0019] A memory sub-system 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded MultiMedia Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs).[0020] The computing system 100 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded
computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device.[0021] The computing system 100 can include a host system 120 that is coupled to one or more memory sub-systems 110. In some embodiments, the host system 120 is coupled to different types of memory sub-system 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory sub-system 110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.[0022] The host system 120 can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory sub-system 110, for example, to write data to the memory sub-system 110 and read data from the memory sub-system 110.[0023] The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. 
The host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices 130) when the memory sub-system 110 is coupled with the host system 120 by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120. FIG. 1 illustrates a memory sub-system 110 as an example. In general, the host system 120 can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.[0024] The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory
devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).[0025] Some examples of non-volatile memory devices (e.g., memory device 130) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).[0026] Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example a single level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs, or any combination of such. In some embodiments, a particular memory device can include an SLC portion and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices 130 can be grouped as pages that can refer to a logical unit of the memory device used to store data.
With some types of memory (e.g., NAND), pages can be grouped to form blocks.[0027] Although non-volatile memory components such as 3D cross-point array of nonvolatile memory cells and NAND type flash memory (e.g. 2D NAND, 3D NAND) are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).
[0028] The memory sub-system controller 115 (or controller 115 for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The memory sub-system controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor.[0029] The memory sub-system controller 115 can be a processing device, which includes one or more processors (e.g., processor 117), configured to execute instructions stored in local memory 119.[0030] In the illustrated example, the local memory 119 of the memory sub-system controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120.[0031] In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG.
1 has been illustrated as including the memory sub-system controller 115, in another embodiment of the present disclosure, a memory sub-system 110 does not include a memory sub-system controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).[0032] In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The memory sub-system controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations
between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical MU address, physical block address) that are associated with the memory devices 130. The memory sub-system controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 as well as convert responses associated with the memory devices 130 into information for the host system 120.[0033] The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller 115 and decode the address to access the memory devices 130.[0034] In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with memory sub-system controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory sub-system controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, memory sub-system 110 is a managed memory device, which includes a raw memory device 130 having control logic (e.g., local controller 132) on the die and a controller (e.g., memory sub-system controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.[0035] In one embodiment, the memory sub-system 110 includes a media manager component 113 that can be used to track and manage data in the memory device 130 and the memory device 140.
In some embodiments, the memory sub-system controller 115 includes at least a portion of the media manager component 113. In some embodiments, the media manager component 113 is part of the host system 120, an application, or an operating system. In other embodiments, local media controller 135 includes at least a portion of media manager component 113 and is configured to perform the functionality described herein. The media manager component 113 can communicate directly with the memory devices 130 and 140 via a synchronous interface. Furthermore, transfers of data between the memory device 130 and the memory device 140 can be done within the memory sub-system 110 without accessing the host system 120.[0036] The media manager component 113 can perform selective wordline scans on a block based on a data state metric. As discussed above, the data state metric is a quantity that
is measured or inferred from the state of data stored on a memory device. In an example, a data state metric is a residual bit error rate (RBER), which corresponds to the rate of bit errors observed in data read from the block. A media manager component 113 can identify or indicate a read operation to be used by memory device 130 and/or memory device 140 to retrieve data (e.g., pages) that is stored at a particular location of memory device 130, 140. Each of the pages may be accessed by a wordline and a bit line (or bit line group) of the memory device 130, 140. For example, the media manager component 113 may provide a read operation to assert (e.g., provide a voltage input) at a particular wordline and a particular bit line to retrieve data stored at a corresponding memory page of the memory device 130, 140. As a result, data can be retrieved from memory pages of the memory device 130, 140 by providing voltage inputs at wordlines and bit lines.[0037] The media manager component 113 can track read operations using a read operation counter. For example, each read operation performed on a block of the memory device 130, 140 can increment a read operation counter value by 1 for that block. Responsive to the read operation counter value satisfying a scan threshold criterion (e.g., exceeding 10,000 read operations, 100,000 read operations, etc.), the media manager component 113 can trigger a data integrity check on the block. Each data integrity check can identify a data state metric value (e.g., an RBER value) for one or more wordlines of the block. In some embodiments, the data integrity check can include reading data from the set of sampled memory cells in the block. Upon reading the data from the set of sampled memory cells, an error correction operation can be performed on the read data.
In some implementations, the error correction operation can be an error-correcting code (ECC) operation or another type of error detection and correction operation to detect and correct an error. During the error correction operation one or more data state metrics, such as RBER, can be determined for the data read from the set of sampled memory cells. In some embodiments, the set of sampled memory cells in the block can be one or more memory cells of the block, a wordline, a group of wordlines in the block, or any combination thereof. For example, the set of sampled memory cells can be a group of three wordlines in the block.[0038] The data state metric values can be associated with different operations. In a first example, a data state metric value exceeding a refresh threshold criterion can trigger a media management operation (e.g., a folding operation). For example, the media management operation can write the data stored at the wordline to a new block to refresh the data stored by the memory sub-system 110. In another example, the media management operation can write the data stored at the entire block to a new block to refresh the data stored by the memory
sub-system 110. If a data integrity check yields no data state metric values exceeding the refresh threshold criterion, the media manager component 113 can reset the read operation counter.[0039] In a second example, a data state metric value falling below an inclusion threshold criterion can trigger an exclusion operation. For example, each wordline associated with a data state metric value below the inclusion threshold criterion is excluded from a subsequent data integrity check. The media manager component 113 can include the excluded wordlines in a data integrity check performed after the subsequent data integrity check.[0040] In some embodiments, wordlines that experienced no change in their data state metric values between two consecutive preceding data integrity checks can be excluded from the subsequent data integrity check. For example, if the data state metric value associated with a wordline is the same in a first data integrity check and a second data integrity check, the media manager component 113 can exclude the wordline during a third data integrity check. The media manager component 113 can then include the wordline during a fourth data integrity check. Further details with regard to the operations of the media manager component 113 are described below.[0041] FIG. 2 is a flow diagram of an example method 200 for performing data integrity checks, in accordance with some embodiments of the present disclosure. The method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 200 is performed by the media manager component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified.
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0042] At operation 210, the processing logic identifies, among a first plurality of wordlines of a set of pages of the memory device, at least one wordline having a current value of a data state metric satisfying a first condition. In some embodiments, the current value of the data state metric is an RBER value. In some embodiments, the first condition includes the current value being below an inclusion threshold criterion. In an example, a data state metric value below the inclusion threshold criterion can trigger an exclusion operation
where the wordline associated with a data state metric value below the inclusion threshold criterion is excluded from a subsequent data integrity check. In some embodiments, the processing logic can determine values of the data state metric responsive to the read operation counter exceeding the scan threshold criterion.[0043] At operation 220, the processing logic determines new values of the data state metric of a second plurality of wordlines of the set of pages, wherein the at least one wordline is excluded from the second plurality of wordlines.[0044] At operation 230, responsive to determining that the new values of the data state metric of one or more wordlines of the second plurality of wordlines satisfy a second condition, the processing logic performs a media management operation with respect to the one or more wordlines. In some embodiments, the second condition includes at least one of the new values exceeding a refresh threshold criterion. In an example, the media management operation includes the processing logic writing the data stored at the one or more wordlines to a new block to refresh the data. In another example, the media management operation includes the processing logic writing the data stored at the entire block to a new block to refresh the data stored. If the data integrity check yields no data state metric values satisfying the second condition (e.g., exceeding the refresh threshold criterion), the processing logic can reset the read operation counter.[0045] In some embodiments, if a wordline is excluded from a determining of new values of the data state metric (e.g., a data integrity check), the processing logic can include the excluded wordline in a subsequent determining of further new values of the data state metric (e.g., a subsequent data integrity check).
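The exclusion bookkeeping across consecutive checks can be sketched as follows. This combines the re-inclusion behavior just described with the unchanged-metric rule discussed elsewhere in this disclosure; all names, and the assumption that metric values for re-included wordlines are available, are illustrative:

```python
# Illustrative sketch of exclusion bookkeeping between consecutive
# data integrity checks. Names and metric maps are assumptions.

def next_exclusions(prev_metrics: dict[int, float],
                    curr_metrics: dict[int, float],
                    prev_excluded: set[int]) -> set[int]:
    """Return wordlines to exclude from the next data integrity check.
    A wordline excluded last time is always re-included (never skipped
    twice in a row); a wordline whose metric value did not change across
    two consecutive checks is excluded from the next one."""
    excluded: set[int] = set()
    for wordline, value in curr_metrics.items():
        if wordline in prev_excluded:
            continue  # re-include wordlines that were skipped last check
        if prev_metrics.get(wordline) == value:
            excluded.add(wordline)  # unchanged across consecutive checks
    return excluded

# Wordline 0 is unchanged (excluded next time); wordline 1 changed;
# wordline 2 was excluded last check, so it stays included next time.
skip = next_exclusions({0: 0.002, 1: 0.004, 2: 0.003},
                       {0: 0.002, 1: 0.006, 2: 0.003},
                       prev_excluded={2})
```

A real controller would only hold metric values for wordlines it actually scanned; the stale entry for a previously excluded wordline here simply stands in for that history.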
This can prevent a wordline from being excluded from consecutive data integrity checks.[0046] In some embodiments, the processing logic can exclude a wordline from a data integrity check when that wordline has a same data state metric value in two consecutive data integrity checks. For example, if the data state metric value associated with a wordline is the same in the first two data integrity checks, the processing logic can exclude that wordline during a third data integrity check. The wordline can then be included again during a subsequent (e.g., fourth) data integrity check.[0047] FIG. 3 is a flow diagram of an example method 300 for performing data integrity checks, in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a
combination thereof. In some embodiments, the method 300 is performed by the media manager component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0048] Method 300 relates to determining data state metric values of every two out of three consecutive wordlines during each data integrity check. As explained above, during a read operation, a read reference voltage is applied to the wordline containing the data to be read, while a pass-through voltage is applied to the other wordlines in the block associated with the wordline being read. The pass-through voltage is higher than any of the stored threshold voltages. Wordlines adjacent (above and below) to the wordline being read require a higher pass-through voltage than the wordlines that are not adjacent in the block. For example, a read reference voltage can be 5.5 volts, an adjacent pass-through voltage can be 9.1 volts, and a non-adjacent pass-through voltage can be 7.5 volts. As such, wordlines adjacent to the wordline being read can experience a higher degree of read disturb than non-adjacent wordlines.
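The two-of-three sampling that method 300 builds on can be sketched as follows (a hypothetical illustration, not the controller's actual selection logic). Skipping every third wordline still leaves at least one neighbor of every wordline in the sampled set:

```python
# Illustrative sketch of sampling two out of every three consecutive
# wordlines for a data integrity check. Names are assumptions.

def sample_two_of_three(wordlines: list[int]) -> list[int]:
    """Keep two out of every three consecutive wordlines. Every skipped
    wordline has at least one sampled neighbor, so read disturb
    concentrated near any one wordline remains observable on an
    adjacent, scanned wordline."""
    return [wl for i, wl in enumerate(wordlines) if i % 3 != 2]

# For a block with nine wordlines, only six are scanned per check.
sampled = sample_two_of_three(list(range(9)))
```

The scan cost drops by roughly a third per check while preserving the adjacency coverage argued for in the paragraph above.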
Accordingly, in a scenario where read operations are performed on a wordline disproportionally to other wordlines in the same block (e.g., row-hammer stress), by determining data state metric values of every two out of three consecutive wordlines during each data integrity check, the system of the present disclosure can identify at least one of two wordlines with a high data state metric value (e.g., at least one of the adjacent wordlines) without scanning each wordline in the block. Thus, if one of the two adjacent wordlines has a data state metric value exceeding the refresh threshold criterion, a media management operation can be performed.[0049] At operation 310, the processing logic identifies a first plurality of wordlines among a second plurality of wordlines of a set of pages of a memory device, wherein the first plurality of wordlines is fewer than the second plurality of wordlines. In some embodiments, the second plurality of wordlines includes a set of three consecutive wordlines and the first plurality of wordlines includes any two wordlines of the three consecutive wordlines.[0050] At operation 320, the processing logic determines values of a data state metric of each of the first plurality of wordlines. In some embodiments, each value of the data state metric is an RBER value.[0051] At operation 330, responsive to determining that the values of the data state metric
of one or more wordlines of the first plurality of wordlines satisfy a condition, the processing logic performs a media management operation with respect to the second plurality of wordlines. In some embodiments, the condition includes at least one of the values exceeding a refresh threshold criterion. In an example, the media management operation includes the processing logic writing the data stored at the one or more wordlines to a new block to refresh the data. In another example, the media management operation includes the processing logic writing the data stored at the entire block to a new block to refresh the data stored. If the data integrity check yields no data state metric values satisfying the condition (e.g., exceeding the refresh threshold criterion), the processing logic can reset the read operation counter. In a subsequent scan, the processing logic can determine new values of the data state metric of the same first plurality of wordlines, or of a different plurality of wordlines from the second plurality of wordlines.[0052] FIG. 4 is an illustration of a memory sub-system 400 determining data state metric values for wordlines in a block, in accordance with some embodiments of the disclosure. Memory sub-system 400 includes a read operation counter 410 and a block 420. The memory sub-system 400 can track read operations using read operation counter 410. For example, each read operation performed on block 420 can increment a value of the read operation counter 410 by 1. Responsive to the read operation counter value satisfying a scan threshold criterion (e.g., exceeding 10,000 read operations, 100,000 read operations, etc.), the memory sub-system 400 can trigger a data integrity check on block 420.[0053] Block 420 can be organized as a two-dimensional array of memory cells, as shown in FIG. 5. FIG. 5 illustrates the architecture of wordlines and bit lines of block 420.
Memory cells in the same row form a wordline (e.g., wordline 0 (422), wordline 1 (424), wordline N (438)), and the transistors of memory cells in the same column (each memory cell from a different memory page) are bound together to form a bit line (e.g., bit line group (BG) 0 (442), bit line group 1 (444), bit line group 2 (446), and bit line group N (448)). The intersection of a wordline and a bit line may correspond to a memory cell or a memory page at a corresponding physical block address. For example, memory cells 452, 454, 456, 458 form wordline 1 (424). Only one memory cell is read at a time per bit line. During a read operation, a read reference voltage is applied to the wordline containing the data to be read, while a pass-through voltage is applied to wordlines of unread memory cells. The pass-through voltage is higher than any of the stored threshold voltages.[0054] Returning to FIG. 4, by way of example, block 420 includes nine wordlines, each wordline associated with a data state metric value. For example, wordline 0 (422) is
associated with data state metric value 0, wordline 1 (424) is associated with data state metric value 1, wordline 2 (426) is associated with data state metric value 2, wordline 3 (428) is associated with data state metric value 3, wordline 4 (430) is associated with data state metric value 4, wordline 5 (432) is associated with data state metric value 5, wordline 6 (434) is associated with data state metric value 6, wordline 7 (436) is associated with data state metric value 7, and wordline N (438) is associated with data state metric value N.[0055] During the data integrity check on block 420, the memory sub-system 400 can determine a data state metric value (e.g., an RBER value) for each wordline (e.g., wordlines 422-438) of block 420. For example, the data integrity check can include reading data from the set of sampled memory cells in each wordline 422-438, and performing an error correction operation on the read data. During the error correction operation, the data state metrics can be determined for the data read from the set of sampled memory cells. The data state metric values can be updated during subsequent data integrity checks.[0056] In some embodiments, if a data state metric value exceeds a refresh threshold criterion, the memory sub-system 400 can trigger a media management operation (e.g., a folding operation). For example, if a value of data state metric value 6, which is associated with wordline 6 (434), exceeds the refresh threshold criterion, the memory sub-system 400 can i) write the data stored at wordline 6 (434) to a new block, ii) write the data stored at block 420 to a new block, or iii) perform another media management operation. If a data integrity check yields no data state metric values exceeding the refresh threshold criterion, the memory sub-system can reset read operation counter 410.[0057] In some embodiments, a data state metric value below an inclusion threshold criterion can trigger an exclusion operation.
For example, if data state metric value 0 is below the inclusion threshold criterion, memory sub-system 400 can exclude wordline 0 from a subsequent data integrity check. Thus, during the subsequent data integrity check, memory sub-system 400 can determine a new data state metric value for wordlines 424-438 only. The memory sub-system 400 can include wordline 0 (422) in a data integrity check performed after the subsequent data integrity check.[0058] In some embodiments, wordlines that experienced no change in their data state metric values between two consecutive data integrity checks can be excluded from a subsequent data integrity check. For example, if data state metric value 1 did not increase during two consecutive data integrity checks, the memory sub-system can exclude wordline 1 from the subsequent data integrity check.[0059] FIG. 6 illustrates an example machine of a computer system 600 within which a
set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to media manager component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0060] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.[0061] The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.
Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further include a network interface device 608 to communicate over the network 620.
[0062] The data storage system 618 can include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1.[0063] In one embodiment, the instructions 626 include instructions to implement functionality corresponding to media manager component 113 of FIG. 1. While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.[0064] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.[0065] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data
represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.[0066] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. [0067] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.[0068] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. 
A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.[0069] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
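As a non-normative illustration of the scan flow described above (read operation counter, per-wordline data state metric, refresh and exclusion thresholds; see FIGS. 4 and 5), the logic might be modeled as follows. The class name and all threshold values are hypothetical, not taken from the specification.

```python
# Illustrative model of the read-disturb scan flow described above.
# Names and threshold values are hypothetical, not from the specification.

SCAN_THRESHOLD = 100_000       # read operations before a data integrity check
REFRESH_THRESHOLD = 0.001      # data state metric (e.g., RBER) triggering refresh
INCLUSION_THRESHOLD = 0.0001   # metric below which a wordline is skipped next scan

class Block:
    def __init__(self, num_wordlines):
        self.read_counter = 0
        self.metric = {wl: 0.0 for wl in range(num_wordlines)}
        self.excluded = set()  # wordlines skipped in the next scan only

    def on_read(self, measure_metric, fold):
        self.read_counter += 1
        if self.read_counter < SCAN_THRESHOLD:
            return
        # Data integrity check: sample each non-excluded wordline.
        needs_fold = False
        newly_excluded = set()
        for wl in self.metric:
            if wl in self.excluded:
                continue  # re-included in the scan after this one
            value = measure_metric(wl)
            if value == self.metric[wl]:
                newly_excluded.add(wl)  # no change between consecutive scans
            self.metric[wl] = value
            if value > REFRESH_THRESHOLD:
                needs_fold = True       # media management: rewrite data elsewhere
            elif value < INCLUSION_THRESHOLD:
                newly_excluded.add(wl)  # healthy wordline: skip next scan
        self.excluded = newly_excluded  # previously excluded wordlines rejoin
        if needs_fold:
            fold()                      # e.g., write the block's data to a new block
        else:
            self.read_counter = 0       # no refresh needed: reset the counter
```

In this sketch a wordline sits out at most one scan, and the counter is reset only when no wordline exceeded the refresh threshold, mirroring the behavior described for read operation counter 410.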
A fabric controller for providing a coherent accelerator fabric includes a host interconnect communicatively coupled to a host device; a memory interconnect communicatively coupled to an accelerator memory; an accelerator interconnect communicatively coupled to an accelerator having a last level cache (LLC); and an LLC controller configured to provide a bias check for a memory access operation.
1. An apparatus comprising: a port including protocol circuitry to: transfer data over a link in any one of a plurality of protocols, including an I/O protocol, a cache protocol, and a memory protocol, the link coupling the port to another computing device, wherein data of the plurality of protocols is to be carried on a common physical layer of the link; and support a bias-based coherency model, wherein the bias-based coherency model includes a host bias state and a device bias state.
2. The apparatus of claim 1, further comprising attached memory, wherein the other computing device comprises a host device and the data is associated with a request for a block of data stored in the attached memory.
3. The apparatus of claim 1 or 2, wherein, during the host bias state, the protocol circuitry is to send a coherency request to the host device to resolve coherency for a particular block of data based on an attempt by the apparatus to access the particular block of data in the attached memory.
4. The apparatus of any one of claims 1 to 3, wherein, during the device bias state, the apparatus is allowed to access corresponding data blocks without sending any transactions to the host device.
5. The apparatus of any one of claims 1 to 4, wherein the attached memory stores a plurality of data blocks, the host bias state is to be applied to a first subset of the plurality of data blocks, and the device bias state is to be applied to a second subset of the plurality of data blocks.
6. The apparatus of any one of claims 1 to 5, further comprising a bias table to track, for each of the plurality of data blocks, whether the host bias state or the device bias state is to be applied to the corresponding data block.
7. The apparatus of any one of claims 1 to 6, wherein the common physical layer comprises a Peripheral Component Interconnect Express (PCIe) physical layer.
8. The apparatus of any one of claims 1 to 7, wherein at least one of the plurality of protocols comprises a non-PCIe protocol.
9. The apparatus of any one of claims 1 to 8, further comprising hardware accelerator circuitry.
10. A method comprising: communicating between a device and a host over a link, wherein the link supports communication according to any one of a plurality of protocols, including an I/O protocol, a cache protocol, and a memory protocol; identifying a particular block of data stored in an attached memory of the device; determining whether a host bias mode or a device bias mode is to be applied to the particular block of data; and attempting to access the particular block of data, wherein coherency for the particular block of data is resolved based on whether the host bias mode or the device bias mode applies to the particular block of data.
11. The method of claim 10, wherein, during the device bias mode, the device is allowed to access corresponding data blocks in the attached memory without consulting a coherency agent of the host device.
12. The method of claim 10 or 11, wherein, during the host bias mode, coherency for corresponding data blocks in the attached memory is resolved by the host device.
13. The method of any one of claims 10 to 12, wherein determining whether the host bias mode or the device bias mode is to be applied to the particular block of data comprises accessing an entry of a bias table corresponding to the particular block of data, wherein the entry identifies whether the host bias mode or the device bias mode is to be applied to the particular block of data.
14. The method of any one of claims 10 to 13, further comprising setting a value of the entry to change the bias mode applied to the particular block of data.
15. The method of any one of claims 10 to 14, wherein the value is set by software.
16. The method of any one of claims 10 to 14, wherein the value is set by hardware.
17. A system comprising: a first device; and a second device coupled to the first device by a link, wherein the link supports communication according to any one of a plurality of protocols, the plurality of protocols including an I/O protocol, a cache protocol, and a memory protocol, and the plurality of protocols utilize a common physical layer, and wherein the second device comprises: attached memory; and protocol circuitry to: determine whether a host bias state or a device bias state is to be applied to a particular block of data in the attached memory; and resolve coherency for the particular block of data based on whether the host bias state or the device bias state is applied.
18. The system of claim 17, wherein the first device includes a host processor and the second device includes an accelerator.
19. The system of claim 17 or 18, further comprising a software controller to define whether the host bias state or the device bias state is to be applied to the particular block of data.
20. The system of any one of claims 17 to 19, wherein the common physical layer comprises a PCIe physical layer.
21. A method comprising: setting a bias mode of an accelerator memory page to a host bias mode; pushing operands and/or data to the accelerator memory page; transitioning the accelerator memory page to a device bias mode; generating, via the accelerator, a result using the operands, and storing the result in the accelerator memory page; setting the bias mode of the accelerator memory page storing the result to the host bias mode; and providing the result from the accelerator memory page to host software.
22. An apparatus comprising: means for setting a bias mode of an accelerator memory page to a host bias mode; means for pushing operands and/or data to the accelerator memory page; means for transitioning the accelerator memory page to a device bias mode; means for generating, via the accelerator, a result using the operands, and storing the result in the accelerator memory page; means for setting the bias mode of the accelerator memory page storing the result to the host bias mode; and means for providing the result from the accelerator memory page to host software.
23. A method comprising: receiving, by a coherent accelerator fabric, input from a host device, wherein the input includes instructions to perform a computation and a payload for the computation; computing, by an accelerator, a result according to its ordinary function; and flushing the result to local memory.
24. An apparatus comprising: means for receiving, by a coherent accelerator fabric, input from a host device, wherein the input includes instructions to perform a computation and a payload for the computation; means for computing, by an accelerator, a result according to its ordinary function; and means for flushing the result to local memory.
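As a non-normative sketch, the bias-flip flow recited in claims 21 and 22 (host bias to stage operands, device bias during computation, host bias again to expose results) might be modeled as the following state sequence. The class and function names are illustrative assumptions, not terms from the claims.

```python
# Illustrative sketch of the bias-flip offload flow of claims 21 and 22.
# Class/method names are hypothetical; "host_bias"/"device_bias" model the two modes.

HOST, DEVICE = "host_bias", "device_bias"

class AcceleratorPage:
    """One accelerator-memory page tracked by a bias table entry."""
    def __init__(self):
        self.bias = HOST
        self.data = None

def offload(page, operands, accelerate):
    # 1. Host bias: host software pushes operands and/or data to the page.
    page.bias = HOST
    page.data = operands
    # 2. Device bias: the accelerator computes without sending coherency
    #    transactions to the host for each access.
    page.bias = DEVICE
    page.data = accelerate(page.data)
    # 3. Host bias again: host software consumes the stored result.
    page.bias = HOST
    return page.data

page = AcceleratorPage()
result = offload(page, [1, 2, 3], lambda xs: [x * x for x in xs])
# result == [1, 4, 9]; page.bias == "host_bias"
```

The point of the sequence is that the bias flips bracket the compute phase, so the accelerator's high-bandwidth accesses happen entirely under device bias.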
Accelerator fabric
This application is a divisional application of the patent application of the same name with application number 201810988366.2, filed on August 28, 2018.
Technical field
The present disclosure relates generally to the field of interconnecting devices, and more particularly, but not exclusively, to systems and methods for Peripheral Component Interconnect Express (PCIe) coherent memory devices.
Background
Computing systems include various components for managing demands on processor resources. For example, a developer may include a hardware accelerator (or "accelerator") operably coupled to a central processing unit (CPU). Typically, an accelerator is an autonomous element that is configured to perform the functions entrusted to it by the CPU. Accelerators may be configured for specific functions and/or may be programmable. For example, accelerators may be configured to perform certain computations, graphics functions, and the like. While the accelerator performs the specified function, the CPU is free to use its resources for other needs. In conventional systems, an operating system (OS) may manage the physical memory (e.g., "system memory") available within the computing system; however, the OS does not manage or allocate memory local to the accelerator. As a result, memory protection mechanisms such as cache coherency introduce inefficiencies into accelerator-based configurations. For example, traditional cache coherency mechanisms limit an accelerator's ability to access its attached local memory at very high bandwidth and/or limit accelerator deployment options.
Brief description of the drawings
The invention is best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, in accordance with standard practice in the industry, the various features are not necessarily drawn to scale and are for illustrative purposes only.
Where scale is shown explicitly or implicitly, it provides only an illustrative example. In other embodiments, the dimensions of various features may be arbitrarily increased or decreased for clarity.
FIG. 1 illustrates an example operating environment that may represent various embodiments in accordance with one or more examples of this specification.
FIG. 2a illustrates an example of a fully coherent operating environment in accordance with one or more examples of the present specification.
FIG. 2b illustrates an example of a non-coherent operating environment in accordance with one or more examples of the present specification.
FIG. 2c illustrates an example of a coherent engine without a biased operating environment in accordance with one or more examples of the present specification.
FIG. 3 illustrates an example of an operating environment that may represent various embodiments in accordance with one or more examples of this specification.
FIG. 4 illustrates another example operating environment that may represent various embodiments in accordance with one or more examples of this specification.
FIGS. 5a and 5b illustrate other example operating environments that may represent various embodiments in accordance with one or more examples of the present specification.
FIG. 6 illustrates an embodiment of a logic flow in accordance with one or more examples of this specification.
FIG. 7 is a block diagram illustrating a fabric in accordance with one or more examples of the present specification.
FIG. 8 is a flowchart illustrating a method in accordance with one or more examples of the present specification.
FIG. 9 is a block diagram of an accelerator link memory (IAL.mem) read operation over PCIe in accordance with one or more examples of the present specification.
FIG. 10 is a block diagram of an IAL.mem write operation over PCIe in accordance with one or more examples of the present specification.
FIG. 11 is a block diagram of an IAL.mem data completion operation over PCIe in accordance with one or more
examples of the present specification.
FIG. 12 illustrates an embodiment of a fabric consisting of point-to-point links interconnecting a set of components in accordance with one or more examples of the present specification.
FIG. 13 illustrates an embodiment of a layered protocol stack in accordance with one or more embodiments of the present specification.
FIG. 14 illustrates an embodiment of a PCIe transaction descriptor in accordance with one or more examples of this specification.
FIG. 15 illustrates an embodiment of a PCIe serial point-to-point fabric in accordance with one or more examples of this specification.
Detailed description
The accelerator link (IAL) of this specification is an extension of the Rosetta link (R-Link) multi-chip package (MCP) interconnect link. IAL extends the R-Link protocol to support accelerators and input/output (IO) devices that the baseline R-Link or Peripheral Component Interconnect Express (PCIe) protocols may not adequately support.
The following disclosure provides many different embodiments or examples for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. Of course, these are only examples and are not limiting. Furthermore, the present disclosure may repeat reference numerals and/or letters in various instances. This repetition is for the purpose of simplicity and clarity, and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Different embodiments may have different advantages, and no particular advantage is necessarily required for any embodiment.
In the following description, numerous specific details are set forth, such as specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operations, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention.
In other instances, well-known components or methods have not been described in detail, such as specific and alternative processor architectures, specific logic circuits/code for the algorithms described, specific firmware codes, specific interconnect operations, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expressions of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, in order to avoid unnecessarily obscuring the present invention.
Although the following embodiments may be described with reference to power conservation and energy efficiency in particular integrated circuits, such as in computing platforms or microprocessors, other embodiments are also applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices, which may also benefit from better energy efficiency and power savings.
For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablet computers, other thin notebooks, system-on-chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular telephones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld personal computers (PCs). Embedded applications typically include microcontrollers, digital signal processors (DSPs), systems on chips (SoCs), network personal computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Furthermore, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also involve software optimizations for power conservation and efficiency. As will become apparent in the following description, embodiments of the methods, apparatus, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a "green technology" future balanced with performance considerations.
Various embodiments may generally relate to techniques for providing cache coherence among various components within a processing system. In some embodiments, the various components may include a processor, such as a central processing unit (CPU), and a logic device communicatively coupled to the processor. In various embodiments, the logic device may include locally attached memory. In some embodiments, the various components may include a processor communicatively coupled to an accelerator with locally attached memory (e.g., logical device memory).
In some embodiments, the processing system may operate a coherent biasing process configured to provide multiple cache coherency processes.
In some embodiments, the multiple cache coherency processes may include a device bias process and a host bias process (collectively referred to as "bias protocol flows"). In some embodiments, the host biasing process may route requests, including requests from the logical device, to the locally attached memory of the logical device through the coherency component of the processor. In some embodiments, the device biasing process may route logical device requests for logical device memory directly to logical device memory, e.g., without consulting a coherency component of the processor. In various embodiments, the cache coherence process may switch between the device-biased process and the host-biased process based on a bias indicator determined using application software, hardware hints, a combination thereof, and the like. Embodiments are not limited in this context.The IAL described in this specification uses the Optimized Accelerator Protocol (OAP), which is a further extension of the R-Link MCP interconnect protocol. The IAL may be used in one example to provide an interconnect fabric to accelerator devices (in some examples, the accelerator devices may be heavy-duty accelerators performing, for example, graphics processing, intensive computing, SmartNIC services, or the like). The accelerator may have its own attached accelerator memory, and an interconnect fabric such as an IAL, or in some embodiments a PCIe-based fabric, may be used to attach the processor to the accelerator. The interconnect fabric may be a coherent accelerator fabric, in which case the accelerator memory may be mapped into the host device's memory address space. The coherent accelerator fabric can maintain coherency within the accelerator and between the accelerator and the host device.
This can be used to implement state-of-the-art memories and to provide coherent support for these types of accelerators.Advantageously, coherent accelerator fabrics according to the present specification may provide optimizations that increase efficiency and throughput. For example, an accelerator may have a number of memory banks, with n respective last level caches (LLCs), each controlled by an LLC controller. The fabric may provide different kinds of interconnects to connect accelerators and their caches to memory, and the fabric to host devices.As an illustration, throughout this specification, a bus or interconnect that connects devices of the same nature is referred to as a "horizontal" interconnect, while an interconnect or bus that connects different devices upstream and downstream may be referred to as a "vertical" interconnect. The terms "horizontal" and "vertical" are used herein for convenience only and are not meant to imply any necessary physical arrangement of interconnects or buses, or a requirement that they must be physically orthogonal to each other on the die.For example, an accelerator may include eight banks, with eight corresponding LLCs, which may be level 3 (L3) caches, each controlled by an LLC controller. A coherent accelerator fabric can be divided into multiple independent "slices". Each slice serves a bank and its corresponding LLC, and operates essentially independently of other slices. In an example, each slice can utilize the biasing operation provided by the IAL, and provide a parallel path to the bank. Memory operations involving the host device can be routed through a fabric coherence engine (FCE) that provides coherence with the host device. However, any single slice's LLC can also have a parallel bypass path, connecting the LLC directly to the bank and bypassing the FCE, that writes directly to memory. For example, this can be accomplished by providing bias logic (e.g., host bias or accelerator bias) in the LLC controller itself.
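As an illustrative sketch of the per-slice routing just described, a device-biased access may take the bypass path directly to the bank, while a host-biased access is first resolved through the fabric coherence engine (FCE). The names below are assumptions for illustration, not part of the specification.

```python
# Sketch of one fabric "slice": an LLC controller either routes a bank access
# through the fabric coherence engine (host bias) or takes the bypass path
# (device/accelerator bias). Names are illustrative assumptions.

class Slice:
    def __init__(self, bank_size):
        self.bank = [0] * bank_size    # this slice's memory bank
        self.bias = {}                 # per-address bias; default is host bias
        self.fce_transactions = 0      # host-coherency traffic observed

    def _via_fce(self, addr):
        # Fabric coherence engine: resolve coherency with the host first.
        self.fce_transactions += 1

    def write(self, addr, value):
        if self.bias.get(addr, "host") == "host":
            self._via_fce(addr)        # host bias: coherency resolved via the FCE
        # device bias: the bypass path writes directly to the bank
        self.bank[addr] = value

s = Slice(bank_size=8)
s.write(3, 42)          # host-biased: routed through the FCE
s.bias[3] = "device"
s.write(3, 43)          # device-biased: bypasses the FCE entirely
```

After this sequence only the first write generated FCE traffic, which is the point of the bypass path: accelerator-biased operations never touch the host coherency machinery.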
The LLC controller can be physically separated from the FCE and can be upstream of the FCE in a vertical orientation, enabling accelerator-biased memory operations to bypass the FCE and write directly to the memory bank.Embodiments of the present specification may also achieve significant power savings by providing a power manager that selectively shuts down portions of the coherent fabric when not in use. For example, an accelerator can be a very high bandwidth accelerator that can perform many operations per second. When the accelerator is performing its acceleration function, it is using the fabric heavily and requires extremely high bandwidth so that the computed value can be flushed to memory in time after computation. However, once the computation is complete, the host device may not be ready to consume the data. In this case, parts of the interconnect, such as the vertical bus from the FCE to the LLC controller, as well as the horizontal bus between the LLC controller and the LLC itself, can be powered down. These can remain powered down until the accelerator receives new data to operate on.The following table describes several types of accelerators. Note that the baseline R-Link may only support the first two classes of accelerators, while IAL can support all five classes of accelerators.
Class 1 - Producer-consumer: IOSF only.
Class 2 - Producer-consumer plus: IOSF plus IDI.
Class 3 - Software-assisted device memory: IOSF plus IDI plus SMI.
Class 4 - Autonomous device memory: IOSF plus IDI plus SMI.
Class 5 - Giant cache: IOSF plus IDI plus SMI, with added qualifiers and controller support.
Note that, in addition to producer-consumer, embodiments of these accelerators may require some degree of cache coherence to support the usage model. Therefore, IAL is a coherent accelerator link.IAL uses a combination of three protocols dynamically multiplexed onto a common link to enable the accelerator models disclosed above. These protocols include:• On-chip System Fabric (IOSF) - A reformatted PCIe-based interconnect that provides a non-coherent ordered semantic protocol. The IOSF may include an on-chip implementation of all or part of the PCIe standard.
IOSF packages PCIe traffic so that it can be sent to a companion chip, such as a system-on-chip (SoC) or multi-chip module (MCM). IOSF supports device discovery, device configuration, error reporting, interrupts, direct memory access (DMA) style data transfers, and various services provided as part of the PCIe standard.

• In-die Interconnect (IDI) - enables the device to issue coherent read and write requests to the processor.

• Scalable Memory Interconnect (SMI) - enables the processor to access memory attached to the accelerator.

These three protocols can be used in different combinations (e.g., IOSF only, IOSF plus IDI, IOSF plus IDI plus SMI, or IOSF plus SMI) to support the various models described in the table above. As a baseline, IAL provides a single link or bus definition that can cover all five accelerator models through a combination of the aforementioned protocols.

Note that producer-consumer accelerators are essentially PCIe accelerators. They need only the IOSF protocol, which is already a reformatted version of PCIe. IOSF supports some Accelerator Interface Architecture (AiA) operations, such as support for enqueue (ENQ) instructions, which may not be supported by industry-standard PCIe devices. Therefore, IOSF provides added value over PCIe for this type of accelerator. Producer-consumer plus accelerators are accelerators that can use only the IDI layer and IOSF layer of IAL.

In some embodiments, software-assisted device memory and autonomous device memory may require the SMI protocol on the IAL, including special operation codes (opcodes) on the SMI as well as special controller support in the processor for the flows associated with those opcodes. These additions support IAL's coherence bias model.
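The protocol combinations above can be summarized as a simple lookup. The class names below paraphrase the five accelerator models referenced by the text's table and are not an official enumeration; the mapping itself follows the combinations stated in this section.

```python
# Illustrative mapping of accelerator classes to the IAL protocol layers
# each requires, per the combinations described in the text.

PROTOCOLS_BY_CLASS = {
    "producer-consumer":       {"IAL.io"},                            # plain PCIe-style
    "producer-consumer-plus":  {"IAL.io", "IAL.cache"},               # IOSF + IDI
    "sw-assisted-device-mem":  {"IAL.io", "IAL.cache", "IAL.mem"},    # class 3
    "autonomous-device-mem":   {"IAL.io", "IAL.cache", "IAL.mem"},    # class 4
    "giant-cache":             {"IAL.io", "IAL.cache", "IAL.mem"},    # class 5
}

def needs_smi(accelerator_class):
    """Classes needing IAL.mem are those whose attached memory the host accesses."""
    return "IAL.mem" in PROTOCOLS_BY_CLASS[accelerator_class]
```

The first two classes never need the memory protocol, which is why they can run on baseline R-Link; the device-memory and giant-cache classes require all three layers multiplexed on one link.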
These usages can use all of IOSF, IDI, and SMI. The jumbo cache usage also uses IOSF, IDI, and SMI, but new qualifiers can also be added to the IDI and SMI protocols specifically designed for use with jumbo cache accelerators (i.e., not used in the device memory model discussed above). Jumbo caches can add new special controller support in the processor that is not needed for any other usage.

IAL refers to these three protocols as IAL.io, IAL.cache, and IAL.mem. The combination of these three protocols provides the desired performance benefits for the five accelerator models. To achieve these benefits, IAL can use R-Link (for MCP) or Flexbus (for discrete) physical layers to allow dynamic multiplexing of the IO, cache, and mem protocols.

However, some form factors do not natively support the R-Link or Flexbus physical layers. In particular, Class 3 and Class 4 device memory accelerators may not support R-Link or Flexbus. Existing examples of these can use standard PCIe, which restricts the device to a dedicated memory model rather than providing coherent memory that can be mapped into the host device's write-back memory address space. This model is limited because the memory attached to the device is not directly addressable by software. This can result in suboptimal data marshalling between host and device memory over bandwidth-constrained PCIe links.

Thus, embodiments of the present specification provide coherence semantics that follow the same bias-model-based definition defined by IAL, retaining the benefits of coherence without incurring the traditional overhead. All of this can be provided over existing PCIe physical links. Therefore, some of the advantages of IAL can be realized over a physical layer that does not provide the dynamic multiplexing between the IO, cache, and mem protocols provided by R-Link and Flexbus.
Advantageously, enabling the IAL protocol over PCIe for certain classes of devices reduces the entry burden on the ecosystem of devices using physical PCIe links. This enables leveraging existing PCIe infrastructure, including the use of off-the-shelf components such as switches, root ports, and endpoints. This also allows devices with attached memory to be used more easily across platforms, using either a traditional dedicated memory model or a coherent system-addressable memory model, as suits the installation.

To support Class 3 and Class 4 devices (software-assisted device memory and autonomous device memory) as described above, the components of the IAL can be mapped as follows:

IOSF, or IAL.io, can use standard PCIe. This can be used for device discovery, enumeration, configuration, error reporting, interrupts, and DMA-style data transfers.

SMI, or IAL.mem, can use SMI tunneling over PCIe. Details of SMI tunneling over PCIe are described below, including the tunnels described in Figures 9, 10, and 11 below.

IDI, or IAL.cache, is not supported in some embodiments of this specification. IDI enables devices to issue coherent read or write requests to host memory. Although IAL.cache may not be supported, the methods disclosed herein can be used to enable bias-based coherency for device-attached memory.

To achieve this result, an accelerator device may use one of its standard PCIe memory base address register (BAR) regions for the size of its attached memory. To this end, a device can implement a designated vendor-specific extended capability (DVSEC), similar to the standard IAL, to point to the BAR region that should be mapped into the coherent address space. In addition, the DVSEC can declare additional information, such as memory type, latency, and other properties, that helps the Basic Input/Output System (BIOS) map this memory into the coherent region of the system address decoder.
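The DVSEC-driven enumeration flow above can be sketched as follows. The field names and the address-allocation policy are illustrative assumptions; only the roles (a DVSEC pointing at a BAR with memory attributes, and the BIOS carving a coherent host-physical range from it) come from the text.

```python
# Hypothetical sketch of the enumeration flow: the device's DVSEC identifies
# the BAR covering its attached memory plus attributes (type, latency), and
# the BIOS uses that to allocate a coherent system-address range whose
# base/limit it then programs back into the device.

from dataclasses import dataclass

@dataclass
class Dvsec:
    bar_index: int    # which PCIe BAR covers the attached memory
    mem_size: int     # size of device-attached memory, in bytes
    mem_type: str     # e.g. "HBM", "DDR" (illustrative attribute)
    latency_ns: int   # advertised access latency (illustrative attribute)

def bios_map_device_memory(dvsec, next_free_hpa):
    """Return (base, limit) host physical addresses for the coherent region.

    A real BIOS would also insert this range into the system address decoder
    so host reads of the region are routed to the device.
    """
    base = next_free_hpa
    limit = base + dvsec.mem_size - 1
    return base, limit

dv = Dvsec(bar_index=2, mem_size=1 << 30, mem_type="HBM", latency_ns=150)
base, limit = bios_map_device_memory(dv, next_free_hpa=0x20_0000_0000)
```

After this step, the host can reach the whole region with ordinary PCIe memory reads, as the next paragraph describes.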
The BIOS can then program the memory base address and limit host physical addresses in the device. This allows the host to read device-attached memory using standard PCIe memory read (MRd) opcodes.

However, for writes, non-posted semantics may be required, as access to metadata may be required upon completion. To get a non-posted memory write (NP MWr) on PCIe, the following reserved encodings can be used:

· Fmt[2:0] - 011b
· Type[4:0] - 11011b

Using the novel non-posted memory write (NP MWr) on PCIe has the added benefit of enabling AiA ENQ instructions to efficiently submit work to devices.

In order to achieve the best quality of service, embodiments of this specification can implement three different virtual channels (VC0, VC1, and VC2) to separate different traffic types, as follows:

· VC0 → all memory-mapped input/output (MMIO) and configuration (CFG) traffic, both upstream and downstream
· VC1 → IAL.mem writes (from host to device)
· VC2 → IAL.mem reads (from host to device)

Note that since IAL.cache, or IDI, is not supported, embodiments of this specification may not allow accelerator devices to issue coherent reads or writes to host memory.

Embodiments of this specification may also provide the ability to flush cache lines from the host (required for host-to-device bias flips). This can be done at cache line granularity using non-allocating zero-length writes from devices on PCIe. The non-allocate semantics are described using transaction and processing hints on transaction layer packets (TLPs):

· TH = 1, PH = 01

This allows the host to invalidate the given line. The device can issue a read after a page bias flip to ensure all lines are flushed. The device may also implement a content addressable memory (CAM) to ensure that no new requests for the line are received from the host while the flip is in progress.

Systems and methods of coherent memory devices over PCIe will now be described with more specific reference to the accompanying drawings.
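Before turning to the figures, the reserved encodings and channel assignments above can be illustrated with small helpers. The bit packing is a sketch of the first TLP header DWORD only (Fmt in bits 31:29, Type in bits 28:24, Length in bits 9:0, other fields zeroed), not a complete TLP implementation; the VC numbering and hint values follow the text.

```python
# Illustrative encoding helpers for the reserved fields quoted above.

FMT_NP_MWR  = 0b011    # Fmt[2:0] for the non-posted memory write
TYPE_NP_MWR = 0b11011  # Type[4:0] for the non-posted memory write

def np_mwr_dword0(length_dw):
    """Pack Fmt/Type/Length into the first header DWORD (other fields zero)."""
    return (FMT_NP_MWR << 29) | (TYPE_NP_MWR << 24) | (length_dw & 0x3FF)

def pick_vc(traffic):
    """Three virtual channels separate traffic types for quality of service."""
    return {"mmio_cfg": 0,    # VC0: MMIO and CFG, upstream and downstream
            "mem_write": 1,   # VC1: IAL.mem writes (host to device)
            "mem_read": 2}[traffic]  # VC2: IAL.mem reads (host to device)

def flush_hints():
    """Hints on a zero-length write that make the host invalidate the line."""
    return {"TH": 1, "PH": 0b01}
```

With these values, a zero-length NP MWr carrying TH=1/PH=01b is the flush primitive a device uses during a host-to-device bias flip.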
It should be noted that certain reference numerals may be repeated throughout the figures to indicate that a particular device or block is wholly or substantially consistent across the figures. This is not, however, intended to imply any particular relationship between the various embodiments disclosed. In some examples, a genus of elements may be referred to by a particular reference numeral ("widget 10"), while individual species or examples of the genus may be referred to by a hyphenated numeral ("first specific widget 10-1" and "second specific widget 10-2").

FIG. 1 illustrates an example operating environment 100 that may be representative of various embodiments in accordance with one or more examples of the present specification. The operating environment 100 depicted in FIG. 1 may include a device 105 having a processor 110 (e.g., a central processing unit (CPU)). Processor 110 may include any type of computational element, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a virtual processor such as a virtual central processing unit (VCPU), or any other type of processor or processing circuit. In some embodiments, the processor 110 may be one or more processors in a family of processors available from the company of Santa Clara, California. Although only one processor 110 is depicted in FIG. 1, an apparatus may include multiple processors 110. Processor 110 may include processing elements 112, such as processing cores. In some embodiments, the processor 110 may comprise a multi-core processor having multiple processing cores. In various embodiments, processor 110 may include processor memory 114, which may include, for example, a processor cache or local cache memory for efficient access to data being processed by processor 110.
In some embodiments, processor memory 114 may include random access memory (RAM); however, processor memory 114 may be implemented using other memory types, such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), combinations thereof, and the like.

As shown in FIG. 1, processor 110 may be communicatively coupled to logic device 120 via link 115. In various embodiments, logic device 120 may comprise a hardware device. In various embodiments, logic device 120 may include an accelerator. In some embodiments, logic device 120 may include a hardware accelerator. In various embodiments, logic device 120 may include an accelerator implemented in hardware, software, or any combination thereof.

Although an accelerator is used as an example logic device 120 in this particular embodiment, embodiments are not so limited, as logic device 120 may include any type of device, processor (e.g., a graphics processing unit (GPU)), logic unit, circuit, integrated circuit, application specific integrated circuit (ASIC), field programmable gate array (FPGA), memory unit, computational unit, and/or the like capable of operating in accordance with some embodiments. In embodiments where logic device 120 includes an accelerator, logic device 120 may be configured to perform one or more functions of processor 110. For example, logic device 120 may include an accelerator operable to perform graphics functions (e.g., a GPU or graphics accelerator), floating point operations, fast Fourier transform (FFT) operations, and the like. In some embodiments, logic device 120 may include an accelerator configured to operate using various hardware components, standards, protocols, and the like.
Non-limiting examples of the types of accelerators and/or accelerator technologies that may be used by logic devices may include OpenCAPI™, CCIX, GenZ, NVLink™, Accelerator Interface Architecture (AiA), Cache Coherent Agent (CCA), Globally Mapped and Coherent Device Memory (GCM), Graphics Media Accelerator (GMA), virtualization technology for directed input/output (IO) (e.g., VT-d, VT-x, etc.), shared virtual memory (SVM), and so forth. Embodiments are not limited in this context.

Logic device 120 may include processing elements 122, such as processing cores. In some embodiments, logic device 120 may include multiple processing elements 122. Logic device 120 may include logic device memory 124, e.g., configured as locally attached memory for logic device 120. In some embodiments, logic device memory 124 may include local memory, cache memory, and the like. In various embodiments, logic device memory 124 may include random access memory (RAM); however, logic device memory 124 may be implemented using other memory types, such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), combinations thereof, and the like. In some embodiments, at least a portion of logic device memory 124 may be visible or accessible to processor 110. In some embodiments, at least a portion of logic device memory 124 may be visible to or accessible by processor 110 as system memory (e.g., as an accessible portion of system memory 130).

In various embodiments, processor 110 may execute driver 118. In some embodiments, driver 118 may be used to control various functional aspects of logic device 120 and/or to manage communications with one or more applications that use logic device 120 and/or computational results generated by logic device 120. In various embodiments, logic device 120 may include and/or may have access to bias information 126. In some embodiments, the bias information 126 may include information associated with a coherent biasing process.
For example, bias information 126 may include information indicating which cache coherency process may be active for logic device 120 and/or for a particular process, application, thread, memory operation, or the like. In some embodiments, the bias information 126 may be read, written, or otherwise managed by the driver 118.

In some embodiments, link 115 may include a bus component, such as a system bus. In various embodiments, link 115 may comprise a communication link operable to support multiple communication protocols (e.g., a multi-protocol link). Supported communication protocols may include standard load/store IO protocols for component communication, including serial link protocols, device cache protocols, memory protocols, memory semantics protocols, directory bit support protocols, networking protocols, coherence protocols, accelerator protocols, data storage protocols, point-to-point protocols, fabric-based protocols, on-package (or on-chip) protocols, fabric-based on-package protocols, and/or the like. Non-limiting examples of supported communication protocols may include the Peripheral Component Interconnect (PCI) protocol, Peripheral Component Interconnect Express (PCIe or PCI-E) protocol, Universal Serial Bus (USB) protocol, Serial Peripheral Interface (SPI) protocol, Serial AT Attachment (SATA) protocol, QuickPath Interconnect (QPI) protocol, UltraPath Interconnect (UPI) protocol, Optimized Accelerator Protocol (OAP), Accelerator Link (IAL), Intra-Device Interconnect (IDI) protocol (or IAL.cache), Extensible Fabric on Chip (IOSF) protocol (or IAL.io), Scalable Memory Interconnect (SMI) protocol (or IAL.mem), SMI third generation (SMI3), and/or similar protocols. In some embodiments, link 115 may support an in-device protocol (e.g., IDI) and a memory interconnect protocol (e.g., SMI3).
In various embodiments, link 115 may support an in-device protocol (e.g., IDI), a memory interconnect protocol (e.g., SMI3), and a fabric-based protocol (e.g., IOSF).

In some embodiments, apparatus 105 may include system memory 130. In various embodiments, system memory 130 may include main system memory for device 105. System memory 130 may store data and sequences of instructions executed by processor 110 or any other device or component of apparatus 105. In some embodiments, system memory 130 may be RAM; however, system memory 130 may be implemented using other memory types, such as DRAM, SDRAM, combinations thereof, and the like. In various embodiments, system memory 130 may store a software application 140 (e.g., "host software") executable by processor 110. In some embodiments, software application 140 may use or otherwise be associated with logic device 120. For example, software application 140 may be configured to use computational results generated by logic device 120.

The apparatus may include coherence logic 150 to provide a cache coherency process. In various embodiments, coherence logic 150 may be implemented in hardware, software, or a combination thereof. In some embodiments, at least a portion of coherence logic 150 may be disposed in, partially in, or otherwise associated with processor 110. For example, in some embodiments, coherence logic 150 for a cache coherency element or process 152 may be arranged within processor 110. In some embodiments, the processor 110 may include a coherence controller 116 to perform various cache coherence processes, such as the cache coherence process 152. In some embodiments, the cache coherence process 152 may include one or more standard cache coherence techniques, functions, methods, procedures, elements (including hardware or software elements), protocols, and the like executed by the processor 110.
In general, the cache coherency process 152 may include standard protocols for managing the system's caches so that data is not lost or overwritten before it is transferred from a cache to the target memory. Non-limiting examples of standard protocols performed or supported by cache coherency process 152 may include snoop-based (or snooping) protocols, write-invalidate protocols, write-update protocols, directory-based protocols, hardware-based protocols (e.g., the Modified Exclusive Shared Invalid (MESI) protocol), private-memory-based protocols, and/or similar protocols. In some embodiments, cache coherence process 152 may include one or more standard cache coherence protocols for maintaining cache coherence for logic devices 120 with attached logic device memory 124. In some embodiments, cache coherence process 152 may be implemented in hardware, software, or a combination thereof.

In some embodiments, coherence logic 150 may include coherent biasing processes, such as a host biasing process or element 154 and a device biasing process or element 156. In general, a coherent biasing process may operate to maintain cache coherency with respect to requests, data flows, and/or other memory operations involving logic device memory 124. In some embodiments, at least a portion of the coherence logic, e.g., the host biasing process 154, the device biasing process 156, and/or the bias selection component 158, may be arranged external to the processor 110, e.g., in one or more separate coherence logic 150 units. In some embodiments, host biasing process 154, device biasing process 156, and/or bias selection component 158 may be implemented in hardware, software, or a combination thereof.

In some embodiments, host biasing process 154 may include techniques, procedures, data flows, data, algorithms, and the like for processing requests to logic device memory 124 through the cache coherency process 152 of processor 110, including requests from logic device 120.
In various embodiments, device biasing process 156 may include techniques, processes, data flows, data, algorithms, and the like that allow logic device 120 to access logic device memory 124 directly, e.g., without using a cache coherency process. In some embodiments, bias selection process 158 may include techniques, procedures, data flows, data, algorithms, and the like for activating the host biasing process 154 or the device biasing process 156 as the active biasing process for requests associated with logic device memory. In various embodiments, the active biasing process may be based on bias information 126, which may include data, data structures, and/or procedures used by the bias selection process to determine and/or set the active biasing process.

Figure 2a shows an example of a fully coherent operating environment 200A. The operating environment 200A depicted in Figure 2a may include an apparatus 202 having a CPU 210 that includes a plurality of processing cores 212a-n. As shown in Figure 2a, the CPU may include various protocol agents, such as a caching agent 214, a host agent 216, a memory agent 218, and/or the like. In general, caching agent 214 is operable to initiate transactions into coherent memory and to retain copies in its own cache structure. The caching agent 214 may be defined by the messages it may receive and send according to the behavior defined in the cache coherency protocol associated with the CPU. Caching agent 214 may also provide copies of coherent memory contents to other caching agents (e.g., accelerator caching agent 224). Host agent 216 may be responsible for the protocol side of CPU 210's memory interactions, including coherent and non-coherent host agent protocols. For example, host agent 216 may order memory reads/writes. Host agent 216 may be configured to service coherent transactions, including handshakes with caching agents as necessary.
Host agent 216 may operate to supervise a portion of CPU 210's coherent memory, e.g., to maintain coherency for a given address space. Host agent 216 may be responsible for managing conflicts that may arise between different caching agents. For example, host agent 216 may provide appropriate data and ownership responses according to the flow of a given transaction. Memory agent 218 is operable to manage access to memory. For example, memory agent 218 may facilitate memory operations (e.g., load/store operations) and functions (e.g., swap and/or similar functions) of CPU 210.

As shown in FIG. 2a, the apparatus 202 may include an accelerator 220 operably coupled to the CPU 210. The accelerator 220 may include an accelerator engine 222 operable to perform functions (e.g., computations, etc.) offloaded by the CPU 210. Accelerator 220 may include an accelerator caching agent 224 and a memory agent 228.

Accelerator 220 and CPU 210 may be configured according to, and/or include, various conventional hardware and/or memory access techniques. For example, as shown in Figure 2a, all memory accesses, including those initiated by accelerator 220, must go through path 230. Path 230 may include a non-coherent link, such as a PCIe link. In the configuration of apparatus 202, accelerator engine 222 may have direct access to accelerator caching agent 224 and memory agent 228, but not to caching agent 214, host agent 216, or memory agent 218. Similarly, the cores 212a-n will not be able to directly access the memory agent 228. Therefore, the memory behind memory agent 228 will not be part of the system address map as seen by cores 212a-n. Because the cores 212a-n do not have access to a common memory agent, data can only be exchanged through copies. In some implementations, a driver may be used to facilitate copying data back and forth between memory agents 218 and 228.
For example, a driver may include runtime elements that create a shared memory abstraction that hides all copies from the programmer. In contrast, as described in detail below, some embodiments may provide a configuration in which a request from the accelerator engine may be forced through the link between the accelerator and the CPU when the accelerator engine wants to access accelerator memory, e.g., via the memory agent 228.

Figure 2b shows an example of a non-coherent operating environment 200B. The operating environment 200B depicted in FIG. 2b may include an accelerator 220 having an accelerator host agent 226. CPU 210 and accelerator 220 may be operably coupled via a non-coherent path 232 (e.g., a UPI path or a CCIX path).

In the operation of device 204, accelerator engine 222 and cores 212a-n may access both memory agents 228 and 218. The cores 212a-n can access memory agent 218 without crossing link 232, and the accelerator engine 222 can likewise access memory agent 228 without crossing it. The cost of these local accesses from 222 to 228 is the need to build host agent 226 so that it can consistently track all accesses from cores 212a-n to memory agent 228. This requirement results in complexity and high resource usage when apparatus 204 includes multiple CPU 210 devices all connected by other instances of link 232. Host agent 226 needs to be able to track the coherency of all cores 212a-n on all instances of CPU 210. This can become quite expensive in terms of performance, area, and power, especially for large configurations. Specifically, in order to serve accesses from CPU 210, it adversely affects the performance efficiency of accesses between accelerator engine 222 and memory agent 228, even though relatively few accesses from CPU 210 are expected.

Figure 2c shows an example of a coherent operating environment 200C without bias. As shown in FIG. 2c, apparatus 206 may include accelerator 220 operably coupled to CPU 210 through coherent links 236 and 238.
The accelerator 220 may include an accelerator engine 222 operable to perform functions (e.g., computations, etc.) offloaded by the CPU 210. Accelerator 220 may include accelerator caching agent 224, accelerator host agent 226, and memory agent 228.

In the configuration of apparatus 206, accelerator 220 and CPU 210 may be configured according to and/or include various conventional hardware and/or memory access techniques, such as CCIX, GCM, standard conformance protocols (e.g., a symmetric coherence protocol), and/or similar protocols. For example, as shown in FIG. 2c, all memory accesses, including those initiated by accelerator 220, must go through path 230. In this manner, in order to access accelerator memory (e.g., through memory agent 228), accelerator 220 must go through CPU 210 (and thus through the coherence protocol associated with the CPU). Accordingly, the device may not provide the ability to access certain memory, such as accelerator-attached memory associated with accelerator 220, as part of system memory (e.g., as part of the system address map), which would allow host software to set up operands and access the computation results of accelerator 220 without overhead such as IO direct memory access (DMA) data copies. Such data copies may require driver calls, interrupts, and MMIO accesses, all of which are inefficient and complex compared to memory accesses. As shown in Figure 2c, the inability to access accelerator-attached memory without cache coherency overhead can be detrimental to the execution time of computations offloaded to accelerator 220. For example, in processes involving a large amount of streaming-write memory traffic, the cache coherency overhead may halve the effective write bandwidth seen by the accelerator 220.

The efficiency of operand setup, result access, and accelerator computation plays a role in determining the effectiveness and benefit of offloading CPU 210 work to accelerator 220.
If the cost of offloading work is too high, offloading may not be beneficial or may be limited to very large tasks. Accordingly, various developers have created accelerators that attempt to improve the efficiency of using accelerators (e.g., accelerator 220), but with limited effectiveness compared to techniques configured in accordance with some embodiments. For example, some conventional GPUs may operate without mapping accelerator-attached memory as part of the system address space, and may or may not use certain virtual memory configurations (e.g., SVM) to access accelerator-attached memory. Thus, in such a system, the accelerator-attached memory is invisible to host system software. Instead, accelerator-attached memory is accessed only through a runtime software layer provided by the GPU device driver. A system of data copies and page table manipulations is used to create the appearance of a virtual memory (e.g., SVM) enabled system. Such a system is inefficient, especially compared to some embodiments, because it requires memory copies, memory pinning, complex software, and the like. These requirements result in significant overhead at memory page transition points that is not needed in systems configured in accordance with some embodiments. In certain other systems, traditional hardware coherency mechanisms are used for memory operations associated with accelerator-attached memory, which limit the accelerator's ability to access accelerator-attached memory at high bandwidth and/or limit deployment options for a given accelerator (e.g., accelerators attached via on-package or off-package links cannot be supported without significant bandwidth loss).

In general, conventional systems can access accelerator-attached memory using one of two methods: a fully coherent (or fully hardware coherent) method, or a private memory model or method.
The fully coherent approach requires that all memory accesses, including accelerator-requested accesses to accelerator-attached memory, go through the corresponding CPU's coherency protocol. In this way, the accelerator must take a circuitous route to access accelerator-attached memory, since the request must at least travel to the corresponding CPU, through the CPU coherency protocol, and then back to the accelerator-attached memory. Thus, the fully coherent approach carries coherency overhead that can significantly compromise the data bandwidth that the accelerator can extract from its own attached memory. The private memory model incurs significant resource and time costs, such as memory copies, page pinning requirements, page-copy data bandwidth costs, and/or page translation costs (e.g., translation lookaside buffer (TLB) shootdowns, page table operations, and/or the like). Accordingly, some embodiments may provide a coherent biasing process configured to provide multiple cache coherence processes, offering better memory utilization and improved performance for systems including accelerator-attached memory than conventional systems.

FIG. 3 shows an example of an operating environment 300 that may be representative of various embodiments. The operating environment 300 depicted in FIG. 3 may include apparatus 305 operable to provide a coherent biasing process in accordance with some embodiments. In some embodiments, apparatus 305 may include a CPU 310 having multiple processing cores 312a-n and various protocol agents, such as caching agent 314, host agent 316, memory agent 318, and the like. The CPU 310 may be communicatively coupled to the accelerator 320 using various links 335, 340. Accelerator 320 may include accelerator engine 322 and memory agent 328, and may include or access bias information 338. As shown in FIG.
3, accelerator engine 322 may be communicatively coupled directly to memory agent 328 via biased coherent bypass 330. In various embodiments, accelerator 320 may be configured to operate in a device biasing process, wherein biased coherent bypass 330 may allow memory requests of accelerator engine 322 to directly access the accelerator-attached memory of the accelerator (not shown), facilitated via memory agent 328. In various embodiments, the accelerator 320 may be configured to operate in a host biasing process, wherein memory operations associated with accelerator-attached memory may be processed via the links 335, 340 using the CPU's cache coherency protocol, for example, via caching agent 314 and host agent 316. Thus, the accelerator 320 of the apparatus 305 can utilize the coherence protocol of the CPU 310 when appropriate (e.g., when a non-accelerator entity requests accelerator-attached memory), while allowing the accelerator 320 to directly access the accelerator-attached memory via the biased coherent bypass 330.

In some embodiments, the coherent bias (e.g., whether device bias or host bias is active) may be stored in bias information 338. In various embodiments, bias information 338 may include and/or may be stored in various data structures, such as data tables (e.g., "bias tables"). In some embodiments, the bias information 338 may include a bias indicator whose value indicates the active bias (e.g., 0 = host bias, 1 = device bias). In some embodiments, the bias information 338 and/or the bias indicator may be maintained at various levels of granularity, such as memory regions, page tables, address ranges, and the like. For example, bias information 338 may specify that certain memory pages are set for device bias, while other memory pages are set for host bias. In some embodiments, bias information 338 may include a bias table configured to operate as a low-cost scalable snoop filter. FIG.
4 illustrates an example operating environment 400 that may represent various embodiments. According to some embodiments, the operating environment 400 depicted in FIG. 4 may include an apparatus 405 operable to provide a coherent biasing process. Apparatus 405 may include accelerator 410 communicatively coupled to host processor 445 through link (or multi-protocol link) 489. Accelerator 410 and host processor 445 may communicate over the link using interconnect fabrics 415 and 450, respectively, which allow data and messages to pass between them. In some embodiments, link 489 may comprise a multi-protocol link that may be used to support multiple protocols. For example, link 489 and interconnect fabrics 415 and 450 may support various communication protocols, including but not limited to serial link protocols, device cache protocols, memory protocols, memory semantics protocols, directory bit support protocols, networking protocols, coherency protocols, accelerator protocols, data storage protocols, point-to-point protocols, fabric-based protocols, on-package (or on-chip) protocols, fabric-based on-package protocols, and/or similar protocols. Non-limiting examples of supported communication protocols may include PCI, PCIe, USB, SPI, SATA, QPI, UPI, OAP, IAL, IDI, IOSF, SMI, SMI3, and the like. In some embodiments, link 489 and interconnect fabrics 415 and 450 may support an intra-device protocol (e.g., IDI) and a memory interconnect protocol (e.g., SMI3). In various embodiments, link 489 and interconnect fabrics 415 and 450 may support an intra-device protocol (e.g., IDI), a memory interconnect protocol (e.g., SMI3), and a fabric-based protocol (e.g., IOSF).

In some embodiments, accelerator 410 may include bus logic 435 with device TLB 437. In some embodiments, bus logic 435 may be or may include PCIe logic. In various embodiments, bus logic 435 may communicate over interconnect 480 using a fabric-based protocol (e.g., IOSF) and/or a Peripheral Component Interconnect Express (PCIe or PCI-E) protocol.
In various embodiments, communication through interconnect 480 may be used for various functions including, but not limited to, discovery, register access (e.g., registers of accelerator 410 (not shown)), configuration, initialization, interrupts, direct memory access, and/or Address Translation Service (ATS).

The accelerator 410 may include a core 420 having a host memory cache 422 and an accelerator memory cache 424. Core 420 may communicate over interconnect 481 using, for example, an intra-device protocol (e.g., IDI) for various functions such as coherent requests and memory streaming. In various embodiments, accelerator 410 may include coherence logic 425 that includes or accesses bias mode information 427. Coherence logic 425 may communicate over interconnect 482 using, for example, a memory interconnect protocol (e.g., SMI3). In some embodiments, communication through interconnect 482 may be used for memory streaming. Accelerator 410 may be operably coupled to accelerator memory 430 (e.g., as accelerator-attached memory), which may store bias information 432.

In various embodiments, host processor 445 may be operably coupled to host memory 440 and may include coherence logic (or coherence and cache logic) 455 with last level cache (LLC) 457. Coherence logic 455 may communicate using various interconnects, e.g., interconnects 484 and 485. In some embodiments, interconnects 484 and 485 may support a memory interconnect protocol (e.g., SMI3) and/or an intra-device protocol (e.g., IDI). In some embodiments, LLC 457 may cache a combination of host memory 440 and at least a portion of accelerator memory 430.

Host processor 445 may include bus logic 460 with an input/output memory management unit (IOMMU) 462. In some embodiments, bus logic 460 may be or may include PCIe logic. In various embodiments, bus logic 460 may communicate over interconnects 486 and 488 using a fabric-based protocol (e.g., IOSF) and/or a Peripheral Component Interconnect Express (PCIe or PCI-E) protocol.
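As an illustration of the multi-protocol link concept described above, the following sketch shows messages tagged with a protocol class and demultiplexed to per-protocol handlers, as the interconnect fabrics on each side of link 489 might do. The class names, handler registration, and return values here are hypothetical illustrations, not the actual link implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Protocol(Enum):
    IDI = "idi"    # intra-device protocol: coherent requests, memory streaming
    SMI3 = "smi3"  # memory interconnect protocol: memory streaming
    IOSF = "iosf"  # fabric-based protocol: discovery, config, interrupts, ATS

@dataclass
class LinkMessage:
    protocol: Protocol
    payload: bytes

class MultiProtocolLink:
    """One shared link (cf. link 489) carrying several protocol classes."""
    def __init__(self):
        self.handlers = {}

    def register(self, protocol, handler):
        self.handlers[protocol] = handler

    def deliver(self, msg):
        # Demultiplex by protocol tag to the matching protocol handler.
        return self.handlers[msg.protocol](msg.payload)

link = MultiProtocolLink()
link.register(Protocol.IDI, lambda p: ("coherence", p))
link.register(Protocol.SMI3, lambda p: ("memory", p))
link.register(Protocol.IOSF, lambda p: ("config", p))
kind, _ = link.deliver(LinkMessage(Protocol.SMI3, b"read"))
assert kind == "memory"
```

The point of the sketch is only the tagging and dispatch: one physical link, several logical protocols.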
In various embodiments, the host processor 445 may include multiple cores 465a-n, each core having a cache 467a-n. In some embodiments, the cores 465a-n may include Intel Architecture (IA) cores. Each of cores 465a-n may communicate with coherence logic 455 via interconnects 487a-n. In some embodiments, interconnects 487a-n may support an intra-device protocol (e.g., IDI). In various embodiments, the host processor may include a device 470 operable to communicate with bus logic 460 via interconnect 488. In some embodiments, device 470 may include an IO device, such as a PCIe IO device.

In some embodiments, the apparatus 405 is operable to perform a coherent biasing process suitable for use in various configurations, such as a system having an accelerator 410 and a host processor 445 (e.g., a computer processing complex comprising one or more computer processor chips), where accelerator 410 is communicatively coupled to host processor 445 through multi-protocol link 489, and where memory is directly attached to accelerator 410 and host processor 445 (e.g., accelerator memory 430 and host memory 440, respectively). The coherent biasing process provided by apparatus 405 may provide several technical advantages over conventional systems, such as providing both accelerator 410 and "host" software running on processing cores 465a-n with access to accelerator memory 430. The coherent biasing process provided by the apparatus may include a host bias process and a device bias process (collectively, the bias protocol flows), as well as multiple options for modulating and/or selecting the bias protocol flow for a particular memory access.

In some embodiments, the bias protocol flows may be implemented at least in part using a protocol layer (e.g., a "bias protocol layer") on the multi-protocol link 489. In some embodiments, the bias protocol layer may include an intra-device protocol (e.g., IDI) and/or a memory interconnect protocol (e.g., SMI3).
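The per-page bias selection underlying the bias protocol flows might be sketched as follows. This is a minimal illustration only: the page size, the one-bit-per-page encoding, and the function names are assumptions for the sketch, not the actual implementation:

```python
HOST_BIAS, DEVICE_BIAS = 0, 1  # one bias bit per page (cf. bias information 432)
PAGE_SHIFT = 12                # assuming 4 KiB pages

class BiasTable:
    """Per-page bias bits; pages default to host bias in this sketch."""
    def __init__(self):
        self.bits = {}  # page number -> bias bit

    def set_bias(self, addr, bias):
        self.bits[addr >> PAGE_SHIFT] = bias

    def lookup(self, addr):
        return self.bits.get(addr >> PAGE_SHIFT, HOST_BIAS)

def select_flow(bias_table, addr):
    """Pick the protocol flow for an accelerator access to its own memory."""
    if bias_table.lookup(addr) == DEVICE_BIAS:
        return "direct"  # biased coherent bypass to accelerator-attached memory
    return "host"        # route through the host CPU's cache coherence protocol

table = BiasTable()
table.set_bias(0x1000, DEVICE_BIAS)
assert select_flow(table, 0x1234) == "direct"  # same page as 0x1000
assert select_flow(table, 0x9000) == "host"    # untouched page, host bias
```

The same lookup could equally be keyed by address range or region; page granularity is used here because the text gives it as an example.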
In some embodiments, the bias protocol flows may be enabled by using existing information of the bias protocol layer, adding new information to the bias protocol layer, and/or adding protocol support. For example, existing opcodes of the intra-device protocol (e.g., IDI) may be used, opcodes may be added to the memory interconnect protocol (e.g., SMI3) standard, and/or support for the memory interconnect protocol (e.g., SMI3) may be added on multi-protocol link 489 to implement the bias protocol flows (e.g., a traditional multi-protocol link may only include an intra-device protocol (e.g., IDI) and a fabric-based protocol (e.g., IOSF)).

In some embodiments, apparatus 405 may be associated with at least one operating system (OS). The OS may be configured not to use accelerator memory 430, or not to use some portion of accelerator memory 430. Such an OS may include support for "memory-only NUMA modules" (e.g., with no CPU). Apparatus 405 may execute a driver (e.g., including driver 118) to perform various accelerator memory services. Illustrative and non-limiting accelerator memory services implemented in the driver may include: discovery and/or get/allocate services for accelerator memory 430, providing an allocation API and mapping pages through OS page map services; management of multi-process memory oversubscription and work scheduling; APIs to allow software applications to set and change the bias mode of storage regions of accelerator memory 430; and/or a deallocation API that returns pages to the driver's free page list and/or returns pages to the default bias mode.

FIG. 5a shows an example of an operating environment 500 that may represent various embodiments. According to some embodiments, the operating environment 500 depicted in FIG. 5a may provide a host bias process flow. As shown in FIG. 5a, apparatus 505 may include CPU 510 communicatively coupled to accelerator 520 via link 540. In some embodiments, link 540 may comprise a multi-protocol link.
CPU 510 may include coherent controller 530 and may be communicatively coupled to host memory 512. In various embodiments, coherent controller 530 may be used to provide one or more standard cache coherency protocols. In some embodiments, coherent controller 530 may include and/or be associated with various agents, such as host agents. In some embodiments, CPU 510 may include and/or may be communicatively coupled to one or more IO devices. Accelerator 520 may be communicatively coupled to accelerator memory 522.

Host bias process flows 550 and 560 may include a set of data flows that funnel all requests to accelerator memory 522, including requests from accelerator 520, through coherent controller 530 in CPU 510. In this way, accelerator 520 takes a circuitous route to access accelerator memory 522, but accesses from both accelerator 520 and CPU 510 (including requests from IO devices via CPU 510) are allowed to use the standard cache coherence protocols of coherent controller 530 and remain coherent. In some embodiments, the host bias process flows 550 and 560 may use an intra-device protocol (e.g., IDI). In some embodiments, the host bias process flows 550 and 560 may use standard opcodes of the intra-device protocol (e.g., IDI), e.g., to issue requests to the coherent controller 530 over the multi-protocol link 540. In various embodiments, coherent controller 530 may issue, on behalf of accelerator 520, various coherence messages (e.g., snoops) generated by requests from accelerator 520 to all peer processor chips and internal processor agents. In some embodiments, the various coherence messages may include point-to-point protocol (e.g., UPI) coherence messages and/or intra-device protocol (e.g., IDI) messages.

In some embodiments, coherent controller 530 may conditionally issue memory access messages to an accelerator memory controller (not shown) of accelerator 520 over multi-protocol link 540.
Such memory access messages may be the same as or substantially similar to the memory access messages that coherent controller 530 may send to a CPU memory controller (not shown), and may include new opcodes that allow data to be returned directly to accelerator 520, instead of forcing the data to be returned to the coherent controller and then back to accelerator 520 over the multi-protocol link 540 as an intra-device protocol (e.g., IDI) response to an internal agent.

Host bias process flow 550 may include a flow resulting from a request or memory operation targeting accelerator memory 522 that originates from the accelerator. Host bias process flow 560 may include a flow resulting from a request or memory operation targeting accelerator memory 522 that originates from CPU 510 (or an IO device or a software application associated with CPU 510). When apparatus 505 is operating in host bias mode, host bias process flows 550 and 560 may be used to access accelerator memory 522, as shown in FIG. 5a. In various embodiments, in host bias mode, all requests from CPU 510 targeting accelerator memory 522 may be sent directly to coherent controller 530. Coherent controller 530 may apply standard cache coherence protocols and send standard cache coherence messages. In some embodiments, the coherent controller 530 may send memory interconnect protocol (e.g., SMI3) commands over the multi-protocol link 540 for such requests, with the memory interconnect protocol (e.g., SMI3) returning data over the multi-protocol link 540.

FIG. 5b shows another example of an operating environment 500 that may represent various embodiments. According to some embodiments, the operating environment 500 depicted in FIG. 5b may provide a device bias process flow. As shown in FIG. 5b, device bias path 570 may be used to access accelerator memory 522 when apparatus 505 is operating in the device bias mode.
For example, device bias flow or path 570 may allow accelerator 520 to directly access accelerator memory 522 without consulting coherent controller 530. More specifically, device bias path 570 may allow accelerator 520 to directly access accelerator memory 522 without having to send a request over multi-protocol link 540.

In device bias mode, CPU 510 requests for accelerator memory may be issued the same as or substantially similar to those described for the host bias mode but, according to some embodiments, differ in the memory interconnect protocol (e.g., SMI3) portion of path 580. In some embodiments, in device bias mode, CPU 510 requests to attached memory may be completed as though they were issued as "uncached" requests. In general, the data of uncached requests made during device bias mode is not cached in the CPU cache hierarchy. In this manner, accelerator 520 is allowed to access data in accelerator memory 522 during device bias mode without consulting coherent controller 530 of CPU 510. In some embodiments, uncached requests may be implemented on the intra-device protocol (e.g., IDI) bus of CPU 510. In various embodiments, uncached requests may be implemented using a globally observed, use-once (GO-UO) protocol on the intra-device protocol (e.g., IDI) bus of CPU 510. For example, a response to an uncached request may return a piece of data to CPU 510 and instruct CPU 510 to use the piece of data only once, e.g., to prevent caching of the piece of data and to support the use of an uncached data flow.

In some embodiments, apparatus 505 and/or CPU 510 may not support GO-UO. In such embodiments, uncached flows may be implemented using a multi-message response sequence (e.g., on path 580). For example, when CPU 510 targets a "device bias" page of accelerator 520, accelerator 520 may set one or more states to block future requests from accelerator 520 for the target memory region (e.g., cache line), and may send a "device bias hit" response on the memory interconnect protocol (e.g., SMI3) line of the multi-protocol link 540.
In response to the "Device Bias Hit" message, the coherent controller 530 (or its agent) may return data to the requesting processor core, followed by a snoop-invalidate message. When the corresponding processor core acknowledges that the snoop-invalidate is complete, coherent controller 530 (or its agent) may send a "Device Bias Block Complete" message to accelerator 520 on the memory interconnect protocol (e.g., SMI3) line of multi-protocol link 540. In response to receiving the "Device Bias Block Complete" message, the accelerator may clear the corresponding blocking state.

Referring again to FIG. 4, bias mode information 427 may include a bias indicator configured to indicate the active bias mode (e.g., device bias mode or host bias mode). Selection of the active bias mode can be determined by bias information 432. In some embodiments, the bias information 432 may include a bias table. In various embodiments, the bias table may include bias information 432 for certain regions of accelerator memory, such as pages, lines, and the like. In some embodiments, the bias table may include bits (e.g., 1 or 3 bits) per memory page of accelerator memory 430. In some embodiments, the bias table may be implemented using RAM, such as SRAM at accelerator 410 and/or a stolen range of accelerator memory 430, with or without a cache inside accelerator 410.

In some embodiments, the bias information 432 may include bias table entries in the bias table. In various embodiments, the bias table entry associated with each access to accelerator memory 430 may be accessed prior to the actual access to accelerator memory 430. In some embodiments, a local request from accelerator 410 that finds its page in device bias may be forwarded directly to accelerator memory 430. In various embodiments, a local request from accelerator 410 that finds its page in host bias may be forwarded to host processor 445, e.g., as an intra-device protocol (e.g., IDI) request on multi-protocol link 489.
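The lookup-before-access routing of accelerator-local requests described above might be sketched as follows. The page size, bit encoding, and return labels are illustrative assumptions for the sketch, not the actual hardware behavior:

```python
HOST_BIAS, DEVICE_BIAS = 0, 1
PAGE_SHIFT = 12  # assuming 4 KiB pages

def route_local_request(bias_bits, addr):
    """Route an accelerator-originated access using the per-page bias bit
    consulted prior to the actual access to accelerator memory."""
    bias = bias_bits.get(addr >> PAGE_SHIFT, HOST_BIAS)
    if bias == DEVICE_BIAS:
        # Device-bias page: forward directly to accelerator memory.
        return "accelerator_memory"
    # Host-bias page: forward to the host processor as an intra-device
    # protocol (e.g., IDI) request on the multi-protocol link.
    return "host_idi"

bias_bits = {0x1000 >> PAGE_SHIFT: DEVICE_BIAS}
assert route_local_request(bias_bits, 0x1200) == "accelerator_memory"
assert route_local_request(bias_bits, 0x8000) == "host_idi"
```

Host-originated requests follow the complementary rule, completing either as uncached flows or as standard memory reads depending on the same bias bit.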
In some embodiments, a host processor 445 request, e.g., using the memory interconnect protocol (e.g., SMI3), that finds its page in device bias may complete the request using an uncached flow (e.g., path 580 of FIG. 5b). In some embodiments, a host processor 445 request, e.g., using the memory interconnect protocol (e.g., SMI3), that finds its page in host bias may complete the request as a standard memory read of the accelerator memory (e.g., via path 560 of FIG. 5a).

The bias mode indicated by the bias indicator of the bias mode information 427 for a region (e.g., memory page) of the accelerator memory 430 may be changed by a software-based system, a hardware-assisted system, a hardware-based system, or a combination thereof. In some embodiments, the bias indicator can be changed via an application programming interface (API) call (e.g., OpenCL), which in turn may call the accelerator 410 device driver (e.g., driver 118). The accelerator 410 device driver may send a message to the accelerator 410 (or enqueue a command descriptor) instructing the accelerator 410 to change the bias indicator. In some embodiments, the change of the bias indicator may be accompanied by a cache flush operation in the host processor 445. In various embodiments, a cache flush operation may be required for the transition from host bias mode to device bias mode, but may not be required for the transition from device bias mode to host bias mode. In various embodiments, software may change the bias mode of one or more memory regions of accelerator memory 430 via work requests sent to accelerator 410.

In some cases, software may not be able, or may not easily be able, to determine when to make a bias transition API call and to identify the memory regions requiring bias transition.
In this case, the accelerator 410 may provide a bias transition hint process, wherein the accelerator 410 determines the need for a bias transition and sends a message to the accelerator driver (e.g., driver 118) indicating that a bias transition is required. In various embodiments, the bias transition hint process may be activated in response to a bias table lookup that triggers on accelerator 410 accesses to host-bias memory regions or on host processor 445 accesses to device-bias memory regions. In some embodiments, the bias transition hint process may signal the need for a bias transition to the accelerator driver via an interrupt. In various embodiments, the bias table may include a bias state bit for enabling bias transition state values. The bias state bit may be used to allow access to memory regions during the process of a bias change (e.g., when the cache is partially flushed and incremental cache pollution due to subsequent requests must be suppressed).

Included herein are one or more logic flows representing example methods for performing the novel aspects of the disclosed architecture. Although, for simplicity of illustration, one or more of the methodologies shown herein are shown and described as a series of acts, those skilled in the art will understand and appreciate that the methodologies are not limited by the order of the acts. Accordingly, some acts may occur in a different order and/or concurrently with other acts shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Furthermore, not all acts illustrated in a methodology may be required for a novel implementation.

A logic flow may be implemented in software, firmware, hardware, or any combination thereof.
In software and firmware embodiments, a logic flow may be implemented by computer-executable instructions stored on a non-transitory computer-readable medium or machine-readable medium (e.g., optical, magnetic, or semiconductor memory). The embodiments are not limited in this context.

FIG. 6 shows an embodiment of a logic flow 600. Logic flow 600 may represent some or all of the operations performed by one or more embodiments described herein, such as apparatuses 105, 305, 405, and 505. In some embodiments, logic flow 600 may represent some or all of the operations of a coherent biasing process in accordance with some embodiments.

As shown in FIG. 6, at block 602, the logic flow 600 may set the bias mode of an accelerator memory page to the host bias mode. For example, a host software application (e.g., software application 140) may set the bias mode of accelerator memory 430 to the host bias mode through a driver and/or API call. The host software application may use an API call (e.g., an OpenCL API) to convert the allocated (or target) pages of accelerator memory 430 that will store operands to host bias. Because the allocated pages are transitioning from device bias mode to host bias mode, no cache flush is initiated. The bias mode may be specified in the bias table of bias information 432.

At block 604, logic flow 600 may push operands and/or data to the accelerator memory pages. For example, accelerator 410 may perform a function for the CPU that requires certain operands. The host software application may push operands from a peer CPU core (e.g., core 465a) to the allocated pages of accelerator memory 430. Host processor 445 may generate operand data in the allocated pages in accelerator memory 430 (and anywhere in host memory 440).

At block 606, the logic flow 600 may transition the accelerator memory pages to device bias mode. For example, the host software application may use an API call to convert the operand memory pages of accelerator memory 430 to device bias mode.
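The host-side sequence of blocks 602-606 might be sketched as follows. The driver shim and its method names are hypothetical stand-ins for the driver/API calls the text describes; only the flush rule (a host cache flush accompanies host-to-device bias transitions, not the reverse) follows the text:

```python
HOST_BIAS, DEVICE_BIAS = 0, 1

class HostBiasWorkflow:
    """Illustrative driver shim; set_bias/push_operand are not a real API."""
    def __init__(self):
        self.bias = {}           # page -> bias mode (default host bias here)
        self.device_memory = {}  # contents of accelerator-attached pages
        self.flushes = 0         # count of host cache flush operations

    def set_bias(self, page, mode):
        old = self.bias.get(page, HOST_BIAS)
        # A host cache flush is needed only for host-bias -> device-bias.
        if old == HOST_BIAS and mode == DEVICE_BIAS:
            self.flushes += 1
        self.bias[page] = mode

    def push_operand(self, page, value):
        # Operands are pushed while the page is host-biased (block 604).
        assert self.bias.get(page, HOST_BIAS) == HOST_BIAS
        self.device_memory[page] = value

wf = HostBiasWorkflow()
wf.set_bias(7, HOST_BIAS)       # block 602: set page to host bias
wf.push_operand(7, "operands")  # block 604: host pushes operands to the page
wf.set_bias(7, DEVICE_BIAS)     # block 606: convert page to device bias
assert wf.flushes == 1
```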
When the device bias conversion is complete, the host software application can submit work to the accelerator 410. The accelerator 410 can then perform the function associated with the submitted work without host-related coherency overhead.

At block 608, the logic flow 600 may generate a result, using the operands, at the accelerator, and store the result in an accelerator memory page. For example, the accelerator 410 may use the operands to perform a function (e.g., floating point operations, graphics calculations, FFT operations, and/or similar functions) to generate a result. The result may be stored in accelerator memory 430. In addition, software applications can use API calls to cause the work descriptor submission to flush the operand pages from the host cache. In some embodiments, cache flushing may be performed using a cache (or cache line) flush instruction (such as CLFLUSH) over the intra-device protocol (e.g., IDI). The result generated by the function may be stored in the allocated pages of accelerator memory 430.

At block 610, the logic flow may set the bias mode of the accelerator memory pages storing the result to the host bias mode. For example, the host software application may use an API call to convert the result memory pages of accelerator memory 430 to host bias mode without causing any coherency processing and/or cache flush actions. The host processor 445 can then access, cache, and share the result. At block 612, the logic flow 600 may provide the result from the accelerator memory pages to the host software. For example, the host software application may access the result directly from the pages of accelerator memory 430. In some embodiments, the allocated accelerator memory pages may then be freed through the logic flow. For example, the host software application may use driver and/or API calls to free the allocated memory pages of accelerator memory 430.

FIG. 7 is a block diagram illustrating a fabric in accordance with one or more examples of the present specification.
In this case, a coherent accelerator fabric 700 is provided. The coherent accelerator fabric 700 is interconnected with an IAL endpoint 728, which communicatively couples the coherent accelerator fabric 700 to a host device, such as the ones disclosed in the preceding figures.

The coherent accelerator fabric 700 is provided to communicatively couple an accelerator 740 and its attached memory 722 to a host device. The memory 722 includes a plurality of memory controllers 720-1 through 720-n. In one example, eight memory controllers 720 may serve eight separate memory banks.

Fabric controller 736 includes a set of controllers and interconnects to provide the coherent memory fabric 700. In this example, fabric controller 736 is divided into n separate slices to serve the n banks of memory 722. Each slice can be substantially independent of every other slice. As described above, fabric controller 736 includes both "vertical" interconnects 706 and "horizontal" interconnects 708. Vertical interconnects can generally be understood as connecting upstream and downstream devices to each other. For example, last level cache (LLC) 734 is connected vertically to LLC controller 738, and through the fabric to a fabric-to-in-die-interconnect (F2IDI) block that communicatively couples fabric controller 736 to accelerator 740. F2IDI 730 provides a downstream link to fabric stop 712 and may also provide a bypass interconnect 715. Bypass interconnect 715 connects LLC controller 738 directly to fabric-to-memory interconnect (F2MEM) 716, where the signals are multiplexed to memory controller 720.
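The bypass and non-bypass routes might be contrasted with the following illustrative sketch. The hop names follow the reference numerals of the figure, and the device-bias condition used to choose between them is an assumption drawn from the IAL bias protocol described earlier, not a statement of the actual gating logic:

```python
def route_request(device_bias):
    """Return the illustrative hop list for a memory request in fabric 700."""
    if device_bias:
        # Bypass route: LLC controller 738 goes straight to memory through
        # bypass interconnect 715 and F2MEM 716.
        return ["llc_controller_738", "bypass_715", "f2mem_716",
                "memory_controller_720"]
    # Non-bypass route: out along the horizontal interconnect to the host,
    # back to the fabric stop, through the fabric coherence engine, and
    # down to F2MEM.
    return ["f2idi_730", "host", "fabric_stop_712", "fce_704",
            "f2mem_716", "memory_controller_720"]

assert route_request(True)[1] == "bypass_715"
assert "fce_704" in route_request(False)
```

The sketch only captures the topology: the bypass skips the trip to the host and the coherence engine entirely.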
In the non-bypass route, a request from F2IDI 730 travels along the horizontal interconnect to the host, then back to fabric stop 712, then to fabric coherence engine (FCE) 704, and down to F2MEM 716.

The horizontal buses include a bus that interconnects fabric stops 712 to each other and a bus that connects the LLC controllers to each other.

In one example, the IAL endpoint 728 may receive a packet from the host device that includes instructions to perform an acceleration function, along with a payload that includes data for the accelerator to operate on. The IAL endpoint 728 passes these to the L2FAB 718, which acts as the host device interconnect for the fabric controller 736. The L2FAB 718 may act as a link controller for the fabric, including providing an IAL interface controller (although in some embodiments, additional IAL control elements may also be provided, and in general, any combination of elements providing IAL interface control may be called an "IAL interface controller"). The L2FAB 718 controls requests from the accelerator to the host and vice versa. The L2FAB 718 may also act as an IDI agent and may need to act as an ordering agent between IDI requests from the accelerator and snoops from the host.

L2FAB 718 may then operate fabric stop 712-0 to populate memory 722 with values. L2FAB 718 may apply a load balancing algorithm, e.g., simple address-based hashing, to tag payload data for a particular destination bank. Once the banks in memory 722 are filled with appropriate data, accelerator 740 operates fabric controller 736 to fetch values from memory into LLC 734 via LLC controller 738. Accelerator 740 performs its accelerated computations and then writes the outputs to LLC 734, from which they are passed downstream and written out to memory 722.

In some examples, fabric stop 712, F2MEM controller 716, multiplexer 710, and F2IDI 730 may all be standard buses and interconnects that provide interconnection according to well-known principles.
The aforementioned interconnects may provide virtual and physical channels, interconnects, buses, switching elements, and flow control mechanisms. They may also provide conflict resolution mechanisms related to interactions between requests made by the accelerator or device agent and requests made by the host. The fabric may include a physical bus in the horizontal direction, with traffic switching at ring stops as the bus traverses the various slices. The fabric may also include specially optimized horizontal interconnects 739 between LLC controllers 738.

Requests from F2IDI 730 may pass through hardware that splits and multiplexes traffic to the host between the horizontal fabric interconnect and each slice's optimized path between LLC controller 738 and memory 722. This includes multiplexing and directing traffic to an IDI block, where the traffic either traverses the traditional route through fabric stops 712 and FCE 704, or is directed to bypass interconnect 715 for IDI fills. F2IDI 730-1 may also include hardware for managing entry to and exit from the horizontal fabric interconnects, such as by providing appropriate signals to fabric stop 712.

The IAL interface controller 718 may be a suitable PCIe controller. The IAL interface controller provides the interface between the packetized IAL bus and the fabric interconnects. It is responsible for queuing and providing flow control for IAL messages and for directing IAL messages to the appropriate fabric physical and virtual channels. L2FAB 718 may also provide arbitration between multiple classes of IAL messages, and may further enforce the IAL ordering rules.

At least three control structures within the fabric controller 736 provide novel and advantageous features of the fabric controller 736 of the present specification. These include LLC controller 738, FCE 704, and power management module 750.

Advantageously, LLC controller 738 may also provide bias control functionality in accordance with the IAL bias protocol.
Accordingly, LLC controller 738 may include hardware for performing cache lookups, hardware for checking the IAL bias for cache-miss requests, hardware for directing requests onto the appropriate interconnect paths, and logic for responding to snoops issued by the host processor or by the FCE 704.

For requests leaving a slice, LLC controller 738 determines where traffic should be directed: through fabric stop 712 and L2FAB 718 to the host, directly to F2MEM 716 through bypass interconnect 715, or to another memory controller through horizontal bus 739.

Note that in some embodiments LLC controller 738 is a physically separate device or block from FCE 704. A single block that provides the functionality of both LLC controller 738 and FCE 704 could be provided. However, by separating the two blocks and providing the IAL bias logic in LLC controller 738, bypass interconnect 715 may be provided, thereby speeding up certain memory operations. Advantageously, in some embodiments, the separation of LLC controller 738 and FCE 704 may also assist with selective power gating of portions of the fabric for more efficient use of resources.

FCE 704 may include hardware for queuing, processing (e.g., issuing snoops to the LLC), and tracking SMI requests from hosts. This provides coherency with the host device. FCE 704 may also include hardware for queuing requests on each slice's optimized path to the banks within memory 722. Embodiments of the FCE may also include hardware for arbitrating and multiplexing the aforementioned two request classes onto the CMI memory subsystem interface, and may include hardware or logic for resolving conflicts between the aforementioned two request classes. Other embodiments of the FCE may provide support for ordering of requests from the direct vertical interconnects and requests from FCE 704.

The power management module (PMM) 750 also provides advantages for embodiments of this specification.
For example, consider the case where each individual slice in fabric controller 736 supports 1 GB per second of bandwidth vertically. (1 GB per second is provided as an illustrative example only; a real-world example of fabric controller 736 may be much faster or slower.)

LLC 734 may have a higher bandwidth, e.g., 10 times the vertical bandwidth of a slice of fabric controller 736. Thus, LLC 734 may have a bandwidth of 10 GB per second, which may be bidirectional, such that the total bandwidth through LLC 734 is 20 GB per second. With 8 slices of fabric controller 736 each supporting a bidirectional 20 GB/s, accelerator 740 can see a total bandwidth of 160 GB/s via horizontal bus 739. Therefore, running LLC controllers 738 and horizontal bus 739 at full speed consumes a lot of power.

However, as mentioned above, the vertical bandwidth may be 1 GB per second per slice, and the total IAL bandwidth may be approximately 10 GB per second. Thus, the bandwidth provided by the horizontal bus 739 is about an order of magnitude higher than the bandwidth of the overall fabric controller 736. For example, horizontal bus 739 may include thousands of physical lines, while the vertical interconnects may include hundreds of physical lines. The horizontal fabric 708 can support the full bandwidth of the IAL, i.e., 10 GB per second in each direction, for a total of 20 GB per second.

Accelerator 740 can perform computations and operate on LLC 734 at a rate much higher than the host device can consume data. Thus, data can enter the accelerator 740 in bursts and can then be consumed by the host processor as needed. Once the accelerator 740 completes its calculations and populates the LLC 734 with appropriate values, maintaining full bandwidth between the LLC controllers 738 consumes a lot of power, which is essentially wasted, since the LLC controllers 738 no longer need to communicate with each other when the accelerator 740 is idle.
Thus, when the accelerator 740 is idle, the LLC controller 738 can power down, shutting down horizontal bus 739, while keeping the appropriate vertical path active, e.g., from fabric stop 712 to FCE 704 to F2MEM 716 to memory controller 720, and while also keeping horizontal fabric 708 active. Because horizontal bus 739 operates at about an order of magnitude or more above the rest of fabric 700, this can save about an order of magnitude of power when accelerator 740 is idle.

Note that some embodiments of the coherent accelerator fabric 700 may also provide an isochronous controller, which may be used to provide isochronous flows to latency-sensitive or time-sensitive elements. For example, if accelerator 740 is a display accelerator, an isochronous display path may be provided to a display generator (DG) so that connected displays receive isochronous data.

The overall combination of agents and interconnects in the coherent accelerator fabric 700 enables the IAL function in a high-performance, deadlock-free, and starvation-free manner. It provides increased efficiency through bypass interconnect 715 while saving energy.

FIG. 8 is a flowchart of a method 800 in accordance with one or more examples of the present specification. Method 800 illustrates a method of power saving, such as may be provided by PMM 750 of FIG. 7.

In block 804, input from the host device may reach the coherent accelerator fabric, including instructions to perform computations and payloads for those computations. In block 808, if the horizontal interconnect between LLC controllers is powered down, the PMM powers the interconnect up to its full bandwidth. In block 812, the accelerator computes results according to its normal function.
In computing these results, the coherent accelerator fabric may operate at its full available bandwidth, including the full bandwidth of the horizontal interconnect between LLC controllers. When the results are complete, the accelerator fabric may flush the results to local memory 820 in block 816.

In decision block 824, the PMM determines whether there is new operational data available from the host. If new data is available, control returns to block 812 and the accelerator continues to perform its acceleration function. At the same time, the host device can consume data directly from local memory 820, which can be mapped into the host memory address space in a coherent manner.

Returning to block 824, if no new data is available from the host, then in block 828 the PMM reduces power, e.g., shuts down the LLC controllers, thereby disabling the high-bandwidth horizontal interconnect between the LLC controllers. As mentioned above, because local memory 820 is mapped into the host memory address space, the host can continue to consume data from local memory 820 at the full IAL bandwidth, which in some embodiments is much lower than the full bandwidth between the LLC controllers.

In block 832, the controller waits for new input from the host device; when new data is received, the interconnect may be powered back up from standby.

FIGS. 9-11 show an example of an IAL.mem tunnel over PCIe. The packet formats described include the standard PCIe packet fields, with the exception of the fields highlighted in gray, which are the fields providing the new tunnel areas.

FIG. 9 is a block diagram of an IAL.mem read operation over PCIe in accordance with one or more examples of the present specification. New fields include:

• MemOpcode (4 bits) - Memory opcode. Contains information about the memory transaction that needs to be processed, such as read, write, or no-op.
• MetaField and MetaValue (2 bits) - Metadata field and metadata value.
Together they specify which metadata field in memory needs to be modified and to what value. Metadata fields in memory typically contain information associated with the actual data; for example, QPI stores directory state in metadata.
• TC (2 bits) - Traffic class. Used to differentiate traffic belonging to different quality-of-service categories.
• Snp Type (3 bits) - Snoop type. Used to maintain coherency between the host and device caches.
• R (5 bits) - Reserved.

FIG. 10 is a block diagram of an IAL.mem write operation over PCIe in accordance with one or more examples of the present specification. New fields include:
• MemOpcode (4 bits) - Memory opcode. Contains information about the memory transaction that needs to be processed, such as read, write, or no-op.
• MetaField and MetaValue (2 bits) - Metadata field and metadata value. Together they specify which metadata field in memory needs to be modified and to what value. Metadata fields in memory typically contain information associated with the actual data; for example, QPI stores directory state in metadata.
• TC (2 bits) - Traffic class. Used to differentiate traffic belonging to different quality-of-service categories.
• Snp Type (3 bits) - Snoop type. Used to maintain coherency between the host and device caches.
• R (5 bits) - Reserved.

FIG. 11 is a block diagram of an IAL.mem data completion over PCIe in accordance with one or more examples of the present specification. New fields include:
• R (1 bit) - Reserved.
• Opcode (3 bits) - IAL.io opcode.
• MetaField and MetaValue (2 bits) - Metadata field and metadata value. Together they specify which metadata field in memory needs to be modified and to what value. Metadata fields in memory typically contain information associated with the actual data; for example, QPI stores directory state in metadata.
• PCLS (4 bits) - Previous cache line status. Used to identify coherency transitions.
• PRE (7 bits) - Performance code.
Used by performance monitoring counters in the host.

FIG. 12 illustrates an embodiment of a fabric composed of point-to-point links interconnecting a set of components in accordance with one or more examples of the present specification. System 1200 includes processor 1205 and system memory 1210 coupled to controller hub 1215. Processor 1205 includes any processing element, such as a microprocessor, host processor, embedded processor, coprocessor, or other processor. Processor 1205 is coupled to controller hub 1215 through front-side bus (FSB) 1206. In one embodiment, FSB 1206 is a serial point-to-point interconnect as described below. In another embodiment, link 1206 includes a serial differential interconnect architecture that conforms to a differential interconnect standard.

System memory 1210 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible to devices in system 1200. System memory 1210 is coupled to controller hub 1215 through memory interface 1216. Examples of memory interfaces include double data rate (DDR) memory interfaces, dual-channel DDR memory interfaces, and dynamic RAM (DRAM) memory interfaces.

In one embodiment, controller hub 1215 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe) interconnect hierarchy. Examples of controller hub 1215 include chipsets, memory controller hubs (MCHs), northbridges, interconnect controller hubs (ICHs), southbridges, and root controllers/hubs. Typically, the term chipset refers to two physically separate controller hubs: a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 1205, while controller 1215 communicates with I/O devices in a manner similar to that described below.
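As a concrete model of the tunneled request fields enumerated above for FIGS. 9 and 10, the sketch below packs them into a single header word. This is purely hypothetical: the text gives "MetaField and MetaValue (2 bits)" jointly, so treating them as 2 bits each is an assumption, and the field ordering is invented; the real layout lives in the PCIe packet's DVSEC area.

```python
# Hypothetical packing of the new IAL.mem-over-PCIe request fields.
# Names and widths follow the FIG. 9/10 description above; treating
# MetaField and MetaValue as 2 bits each, and the ordering below, are
# assumptions for illustration only.

FIELDS = [               # (name, width in bits), lowest field packed first
    ("mem_opcode", 4),   # memory opcode: read, write, no-op, ...
    ("meta_field", 2),   # which metadata field to modify
    ("meta_value", 2),   # value to write into that metadata field
    ("tc", 2),           # traffic class / QoS category
    ("snp_type", 3),     # snoop type, for host/device cache coherency
    ("r", 5),            # reserved
]

def pack(values: dict) -> int:
    """Pack named field values into one integer header word."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        word |= v << shift
        shift += width
    return word

def unpack(word: int) -> dict:
    """Recover the named fields from a packed header word."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

A round trip through `pack` and `unpack` recovers each field, which is the essential property of such a fixed-width layout regardless of the actual bit positions chosen.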
In some embodiments, peer-to-peer routing is optionally supported through root complex 1215.

Here, controller hub 1215 is coupled to switch/bridge 1220 via serial link 1219. Input/output modules 1217 and 1221 (which may also be referred to as interfaces/ports 1217 and 1221) include/implement a layered protocol stack to provide communication between controller hub 1215 and switch 1220. In one embodiment, multiple devices can be coupled to switch 1220.

Switch/bridge 1220 routes packets/messages from device 1225 upstream (i.e., up the hierarchy toward the root complex) to controller hub 1215, and downstream (i.e., down the hierarchy away from the root controller) from processor 1205 or system memory 1210 to device 1225. In one embodiment, switch 1220 is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices.

Devices 1225 include any internal or external device or component to be coupled to an electronic system, such as I/O devices, network interface controllers (NICs), add-in cards, audio processors, network processors, hard drives, storage devices, CD/DVD ROMs, monitors, printers, mice, keyboards, routers, portable storage devices, Firewire devices, Universal Serial Bus (USB) devices, scanners, and other input/output devices. In PCIe parlance, such a device is often referred to as an endpoint. Although not specifically shown, device 1225 may include a PCIe-to-PCI/PCI-X bridge to support legacy or other versions of PCI devices. Endpoint devices in PCIe are generally classified as legacy, PCIe, or root-complex integrated endpoints.

Accelerator 1230 is also coupled to controller hub 1215 through serial link 1232. In one embodiment, graphics accelerator 1230 is coupled to the MCH, which is coupled to the ICH. Switch 1220, and thus I/O device 1225, is then coupled to the ICH. I/O modules 1231 and 1218 are also used to implement a layered protocol stack for communication between graphics accelerator 1230 and controller hub 1215.
Similar to the MCH discussion above, the graphics controller or the graphics accelerator 1230 itself may be integrated in processor 1205.

In some embodiments, accelerator 1230 may be an accelerator, such as accelerator 740 of FIG. 7, that provides coherent memory for processor 1205.

To support IAL over PCIe, controller hub 1215 (or another PCIe controller) may include extensions to the PCIe protocol, including by way of non-limiting example a mapping engine 1240, a tunneling engine 1242, a host-bias-to-device-bias flip engine 1244, and a QoS engine 1246.

Mapping engine 1240 may be configured to provide opcode mapping between PCIe instructions and IAL.io (IOSF) opcodes. IOSF provides a non-coherent, ordered semantic protocol and can provide services such as device discovery, device configuration, error reporting, interrupt provisioning, interrupt handling, and DMA-style data transfer, by way of non-limiting example. Native PCIe can provide corresponding instructions, so in some cases the mapping can be one-to-one.

Tunneling engine 1242 provides IAL.mem (SMI) tunnels over PCIe. The tunnel enables a host (e.g., a processor) to map accelerator memory into the host memory address space, and to read from and write to accelerator memory in a coherent manner. SMI is a transactional memory interface that can be used by the coherency engine on the host to coherently tunnel IAL transactions over PCIe. Examples of modified packet structures for such tunnels are shown in FIGS. 9-11. In some cases, special fields for the tunnel may be allocated within one or more DVSEC fields of the PCIe packet.

The host-bias-to-device-bias flip engine 1244 provides the accelerator device with the ability to flush host cache lines (required for a host-to-device bias flip). This can be done using non-allocating zero-length writes (i.e., writes with no byte enables set) at cache line granularity from the accelerator device over PCIe.
Non-allocating semantics can be described using transaction and processing hints on transaction layer packets (TLPs), e.g.:

• TH = 1, PH = 01

This enables the device to invalidate a given cache line, allowing it to access its own memory space without losing coherency. The device can issue a read after a bias flip to ensure all lines have been flushed. The device may also implement a CAM to ensure that no new requests for the line are received from the host while a flip is in progress.

QoS engine 1246 may divide IAL traffic into two or more virtual channels to optimize the interconnect. For example, these may include a first virtual channel (VC0) for MMIO and configuration operations, a second virtual channel (VC1) for host-to-device writes, and a third virtual channel (VC2) for host-to-device reads.

FIG. 13 illustrates an embodiment of a layered protocol stack in accordance with one or more embodiments of the present specification. Layered protocol stack 1300 includes any form of layered communication stack, such as a QuickPath Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or another layered stack. Although the discussion below with reference to FIGS. 12-15 is given with respect to PCIe stacks, the same concepts can be applied to other interconnect stacks. In one embodiment, protocol stack 1300 is a PCIe protocol stack including transaction layer 1305, link layer 1310, and physical layer 1320.

Interfaces such as interfaces 1217, 1218, 1221, 1222, 1226, and 1231 in FIG. 12 may be represented as communication protocol stack 1300. A representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCIe uses packets to transfer information between components.
Packets are formed in transaction layer 1305 and data link layer 1310 to convey information from sending components to receiving components. As transmitted packets flow through the other layers, they are extended with additional information needed to process packets at those layers. On the receiving side, the reverse process occurs: the packet is transformed from its physical layer 1320 representation to the data link layer 1310 representation, and finally (for transaction layer packets) into a form that can be processed by the receiving device's transaction layer 1305.

Transaction Layer

In one embodiment, transaction layer 1305 provides an interface between a device's processing core and the interconnect fabric, such as data link layer 1310 and physical layer 1320. In this regard, a primary responsibility of the transaction layer 1305 is the assembly and disassembly of packets, i.e., transaction layer packets (TLPs). The transaction layer 1305 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e., transactions with requests and responses separated in time, allowing the link to carry other traffic while the target device collects the data for the response.

Additionally, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each receive buffer in the transaction layer 1305. An external device at the opposite end of the link, such as controller hub 1215 in FIG. 12, counts the number of credits consumed by each TLP. A transaction may be transmitted if it does not exceed the credit limit. Upon receiving a response, an amount of credit is restored.
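The credit scheme just described can be sketched as a tiny state machine. The credit unit and initial count below are hypothetical; real PCIe tracks separate credit pools per TLP type (headers and data), which this sketch collapses into one counter.

```python
# Minimal sketch of credit-based flow control as described above: the
# receiver advertises an initial credit count for a receive buffer, each
# TLP consumes credits when sent, and credits are restored when the
# receiver responds. Credit units and counts here are hypothetical.

class CreditedLink:
    def __init__(self, advertised_credits: int):
        self.credits = advertised_credits   # initial amount advertised by receiver

    def try_send(self, tlp_cost: int) -> bool:
        """Transmit a TLP only if it does not exceed the credit limit."""
        if tlp_cost > self.credits:
            return False                    # must wait for credits to be restored
        self.credits -= tlp_cost
        return True

    def on_response(self, restored: int) -> None:
        """A response restores an amount of credit."""
        self.credits += restored

link = CreditedLink(advertised_credits=8)
sent_first = link.try_send(5)    # consumes 5 of 8 credits
sent_second = link.try_send(4)   # exceeds the remaining 3 credits: blocked
link.on_response(2)              # a response restores credit
sent_third = link.try_send(4)    # 5 credits now available, may send
```

The sender stalls only when the advertised limit would be exceeded; otherwise transmission proceeds without waiting for the credit return.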
An advantage of the credit scheme is that, provided the credit limit is not reached, the latency of credit return does not affect performance.

In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more read and write requests to transfer data to or from a memory-mapped location. In one embodiment, memory space transactions can use two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access the configuration space of a PCIe device; transactions in the configuration space include read requests and write requests. Message space transactions (or simply messages) are defined to support in-band communication between PCIe agents.

Thus, in one embodiment, transaction layer 1305 assembles packet header/payload 1306. The format of the current packet header/payload can be found in the PCIe specification on the PCIe specification website.

FIG. 14 illustrates an embodiment of a PCIe transaction descriptor in accordance with one or more examples of this specification. In one embodiment, transaction descriptor 1400 is a mechanism for carrying transaction information. In this regard, transaction descriptor 1400 supports the identification of transactions in the system. Other potential uses include tracking modifications to the default transaction ordering and association of transactions with channels.

Transaction descriptor 1400 includes global identifier field 1402, attribute field 1404, and channel identifier field 1406. In the example shown, global identifier field 1402 is depicted as including local transaction identifier field 1408 and source identifier field 1410.
In one embodiment, global transaction identifier 1402 is unique for all outstanding requests.

According to one implementation, local transaction identifier field 1408 is a field generated by the requesting agent and is unique for all outstanding requests that require completion for that requesting agent. Also, in this example, source identifier 1410 uniquely identifies the requester agent within the PCIe hierarchy. Accordingly, together with source ID 1410, the local transaction identifier 1408 field provides global identification of a transaction within the hierarchy domain.

Attribute field 1404 specifies the characteristics and relationships of the transaction. In this regard, attribute field 1404 is potentially used to provide additional information that allows modification of the default handling of the transaction. In one embodiment, attribute field 1404 includes a priority field 1412, a reserved field 1414, an ordering field 1416, and a no-snoop field 1418. Here, priority subfield 1412 may be modified by the initiator to assign a priority to the transaction. Reserved attribute field 1414 is left reserved for future or vendor-defined use; possible usage models using priority or security attributes may be implemented using the reserved attribute field.

In this example, ordering attribute field 1416 is used to supply optional information conveying the type of ordering that may modify the default ordering rules. According to one example implementation, an ordering attribute of "0" denotes that the default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. No-snoop attribute field 1418 is used to determine whether a transaction is snooped.
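The descriptor of FIG. 14 can be sketched as a plain data structure. The text does not give bit widths for most of these fields, so they are modeled here as ordinary integers and flags; the field names follow the description above.

```python
# Sketch of the FIG. 14 transaction descriptor as a data structure.
# Widths are not specified in the text, so fields are plain ints/flags.

from dataclasses import dataclass

@dataclass
class TransactionDescriptor:
    local_txn_id: int   # field 1408: unique per outstanding request of a requester
    source_id: int      # field 1410: identifies the requester in the PCIe hierarchy
    priority: int       # subfield 1412: assigned by the initiator
    ordering: int       # field 1416: 0 = default rules, 1 = relaxed ordering
    no_snoop: bool      # field 1418: whether the transaction is snooped
    channel_id: int     # field 1406: channel associated with the transaction

    @property
    def global_id(self) -> tuple:
        # Source ID plus local transaction ID together identify the
        # transaction uniquely within the hierarchy domain (field 1402).
        return (self.source_id, self.local_txn_id)

    def relaxed_ordering(self) -> bool:
        # Ordering attribute "1": writes may pass writes in the same
        # direction, and read completions may pass writes.
        return self.ordering == 1
```

The `global_id` property mirrors how global identifier 1402 is composed from fields 1408 and 1410 in the figure.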
As shown, channel ID field 1406 identifies the channel associated with the transaction.

Link Layer

Link layer 1310 (also referred to as data link layer 1310) acts as an intermediate stage between transaction layer 1305 and physical layer 1320. In one embodiment, a responsibility of the data link layer 1310 is to provide a reliable mechanism for exchanging TLPs between two linked components. One side of the data link layer 1310 accepts TLPs assembled by the transaction layer 1305, applies packet sequence identifier 1311 (i.e., an identification number or packet number), calculates and applies an error detection code (i.e., CRC 1312), and submits the modified TLPs to the physical layer 1320 for transmission across the physical medium to an external device.

Physical Layer

In one embodiment, physical layer 1320 includes logical sub-block 1321 and electrical sub-block 1322 to physically transmit packets to an external device. Here, logical sub-block 1321 is responsible for the "digital" functions of physical layer 1320. In this regard, the logical sub-block includes a transmit section for preparing outgoing information for transmission by physical sub-block 1322, and a receiver section for identifying and preparing received information before passing it to link layer 1310.

Physical block 1322 includes a transmitter and a receiver. The transmitter is supplied with symbols by logical sub-block 1321, which the transmitter serializes and transmits to an external device. The receiver is supplied with serialized symbols from an external device and converts the received signals into a bit stream. The bit stream is deserialized and supplied to logical sub-block 1321. In one embodiment, an 8b/10b transmission code is employed, in which 10-bit symbols are transmitted and received. Here, special symbols are used to frame the packet with frames 1323.
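The 8b/10b code mentioned above transmits each 8-bit byte as a 10-bit symbol, so only 8 of every 10 bits on the wire carry payload. The overhead arithmetic is simple; the 2.5 GT/s line rate below is an assumed example figure, not taken from the text.

```python
# 8b/10b overhead arithmetic: each data byte is sent as a 10-bit symbol,
# leaving an 80% coding efficiency. The line rate is an assumed example.

def effective_data_rate_gbps(line_rate_gtps: float) -> float:
    """Usable data rate after 8b/10b encoding overhead (20%)."""
    return line_rate_gtps * 8 / 10

# At an assumed 2.5 GT/s per lane, 8b/10b leaves 2.0 Gb/s of payload:
payload = effective_data_rate_gbps(2.5)
```

The same 80% factor applies at any line rate that uses this code; later encodings (e.g., 128b/130b) were introduced precisely to shrink this overhead.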
Additionally, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

As noted above, although transaction layer 1305, link layer 1310, and physical layer 1320 are discussed with reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface represented as a layered protocol includes: (1) a first layer for assembling packets, i.e., the transaction layer; a second layer for sequencing packets, i.e., the link layer; and a third layer for transmitting packets, i.e., the physical layer. As a specific example, a Common System Interface (CSI) layered protocol is utilized.

FIG. 15 illustrates an embodiment of a PCIe serial point-to-point fabric in accordance with one or more examples of this specification. Although an embodiment of a PCIe serial point-to-point link is shown, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two low-voltage differentially driven signal pairs: transmit pair 1506/1511 and receive pair 1512/1507. Accordingly, device 1505 includes transmit logic 1506 to transmit data to device 1510 and receive logic 1507 to receive data from device 1510. In other words, two transmit paths, i.e., paths 1516 and 1517, and two receive paths, i.e., paths 1518 and 1519, are included in the PCIe link.

A transmit path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or another communication path. A connection between two devices (e.g., device 1505 and device 1510) is referred to as a link, e.g., link 1515. A link may support one lane, where each lane represents a set of differential signal pairs (one pair for transmit and one pair for receive).
To scale bandwidth, a link may aggregate multiple lanes, denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider.

A differential pair refers to two transmission paths, such as lines 1516 and 1517, for transmitting differential signals. As an example, when line 1516 toggles from a low voltage level to a high voltage level, i.e., a rising edge, line 1517 is driven from a high logic level to a low logic level, i.e., a falling edge. Differential signals potentially exhibit better electrical characteristics, such as better signal integrity, i.e., reduced cross-coupling, voltage overshoot/undershoot, ringing, and the like. This allows for a better timing window, which enables faster transmission frequencies.

The foregoing outlines features of one or more embodiments of the subject matter disclosed herein. These embodiments are provided to enable a person having ordinary skill in the art (PHOSITA) to better understand various aspects of the present disclosure. Certain well-understood terms, as well as underlying technologies and/or standards, may be referred to without being described in detail. It is expected that the PHOSITA will possess or have access to background knowledge or information in those technologies and standards sufficient to practice the teachings of this specification.

The PHOSITA will appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes, structures, or variations for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. The PHOSITA will also recognize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.

In the foregoing description, certain aspects of some or all embodiments are described in greater detail than is strictly necessary for practicing the appended claims.
These details are provided by way of non-limiting example only, for the purpose of providing context and illustration of the disclosed embodiments. Such details should not be understood to be required, and should not be "read into" the claims as limitations. This specification may refer to "one embodiment" or "an embodiment." These terms, and any other references to embodiments, should be understood broadly to refer to any combination of one or more embodiments. Furthermore, the several features disclosed in a particular "embodiment" could also be spread across multiple embodiments. For example, if features 1 and 2 are disclosed in "an embodiment," embodiment A may have feature 1 but lack feature 2, while embodiment B may have feature 2 but lack feature 1.

This specification may provide illustrations in block diagram format, wherein certain features are disclosed in separate blocks. These should be understood broadly as disclosing how various features interoperate, but are not intended to imply that those features must be embodied in separate hardware or software. Furthermore, where a single block discloses more than one feature in the same block, those features need not be embodied in the same hardware and/or software. For example, a computer "memory" could in some circumstances be distributed across multiple levels of cache or local memory, main memory, battery-backed volatile memory, and various forms of persistent memory (e.g., hard disks, storage servers, optical storage, magnetic disks, tape drives, or similar devices). In certain embodiments, some of the components may be omitted or consolidated. In a general sense, the arrangements depicted in the figures may be more logical in their representations, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. Countless possible design configurations may be used to achieve the operational objectives outlined herein.
Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, and equipment options.

Reference may be made herein to a computer-readable medium, which may be a tangible and non-transitory computer-readable medium. As used in this specification and throughout the claims, a "computer-readable medium" should be understood to include one or more computer-readable media of the same or different types. A computer-readable medium may include, by way of non-limiting example, an optical drive (e.g., CD/DVD/Blu-ray), a hard drive, a solid-state drive, flash memory, or other non-volatile medium. A computer-readable medium could also include a medium such as a read-only memory (ROM), an FPGA or ASIC configured to carry out the desired instructions, stored instructions for programming an FPGA or ASIC to carry out the desired instructions, an intellectual property (IP) block that can be integrated in hardware into other circuits, or instructions encoded directly into hardware or microcode on a processor such as a microprocessor, digital signal processor (DSP), or microcontroller, or on any other suitable component, device, element, or object, where appropriate. A non-transitory storage medium herein is expressly intended to include any non-transitory special-purpose or programmable hardware configured to provide the disclosed operations or to cause a processor to perform the disclosed operations.

Throughout this specification and the claims, various elements may be "communicatively," "electrically," "mechanically," or otherwise "coupled" to one another. Such coupling may be a direct point-to-point coupling, or may include intermediary devices. For example, two devices may be communicatively coupled to one another via a controller that facilitates the communication. Devices may be electrically coupled to one another via intermediary devices such as signal boosters, voltage dividers, or buffers.
Mechanically coupled devices may be indirectly mechanically coupled.

Any "module" or "engine" disclosed herein may refer to or include software, a software stack, hardware, a combination of firmware and/or software, circuitry configured to carry out the function of the engine or module, or any computer-readable medium as described above. Such modules or engines may, in appropriate circumstances, be provided on or in conjunction with a hardware platform, which may include hardware computing resources such as a processor, memory, storage, interconnects, networks and network interfaces, accelerators, or other suitable hardware. Such a hardware platform may be provided as a single monolithic device (e.g., in a PC form factor), or with some or all of the function distributed (e.g., a "composite node" in a high-end data center, where compute, memory, storage, and other resources may be dynamically allocated and need not be local to one another).

There may be disclosed herein flow charts, signal flow diagrams, or other illustrations showing operations performed in a particular order. Unless otherwise expressly noted, or unless required in a particular context, the order should be understood to be a non-limiting example only. Furthermore, in cases where one operation is shown to follow another, other intervening operations may also occur, which may or may not be related. Some operations may also be performed simultaneously or in parallel. In cases where an operation is said to be "based on" or "according to" another item or operation, this should be understood to imply that the operation is based at least in part on, or according at least in part to, the other item or operation.
This should not be construed as implying that the operation is based solely or exclusively on that item or operation.

All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including a central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, client devices or server devices may be provided, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio-frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multichip module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package.

In a general sense, any suitably configured circuit or processor can execute any type of instructions associated with the data to achieve the operations detailed herein. Any processor disclosed herein could transform an element or an article (e.g., data) from one state or thing to another state or thing. Furthermore, the information being tracked, sent, received, or stored in a processor could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory or storage elements disclosed herein should be construed as being encompassed within the broad terms "memory" and "storage," as appropriate.

Computer program logic implementing all or part of the functionality described herein may be embodied in various forms, including, but in no way limited to, source code form, computer-executable form, machine instructions or microcode, programmable hardware, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator).
In one example, source code includes a series of computer program instructions implemented in various programming languages, such as an object code, an assembly language, or a high-level language such as OpenCL, FORTRAN, C, C++, JAVA, or HTML, for use with various operating systems or operating environments, or in hardware description languages such as Spice, Verilog, and VHDL. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form, or converted to an intermediate form such as byte code. Where appropriate, any of the foregoing may be used to build or describe appropriate discrete or integrated circuits, whether sequential, combinatorial, state machines, or otherwise. In one example embodiment, any number of the circuits of the figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. Any suitable processor and memory can be suitably coupled to the board based on particular configuration needs, processing demands, and computing designs. Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated or reconfigured in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the figures may be combined in various possible configurations, all of which are within the broad scope of this specification. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by those skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph (f) of 35 U.S.C. section 112 (pre-AIA paragraph 6) as it exists on the date of the filing hereof unless the words "means for" or "steps for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise expressly reflected in the appended claims. Example Implementations. In one example, there is disclosed a fabric controller for providing a coherent accelerator fabric, comprising: a host interconnect to communicatively couple to a host device; a memory interconnect to communicatively couple to an accelerator memory; an accelerator interconnect to communicatively couple to an accelerator having a last level cache (LLC); and an LLC controller configured to provide bias checking for memory access operations. Also disclosed is a fabric controller, further comprising a fabric coherency engine (FCE) configured to map the accelerator memory into a host fabric memory address space, wherein the fabric controller is configured to direct host memory access operations to the accelerator memory via the FCE. Also disclosed is a fabric controller, wherein the FCE is physically separate from the LLC controller. A fabric controller is also disclosed,
further comprising a direct bypass bus for connecting the LLC to the memory interconnect and bypassing the FCE. Also disclosed is a fabric controller, wherein the fabric controller is configured to provide the fabric in a plurality of n independent slices. Also disclosed is a fabric controller, wherein n = 8. Also disclosed is a fabric controller, wherein the n independent slices comprise n independent LLC controllers interconnected via horizontal interconnects and communicatively coupled to respective memory controllers via respective vertical interconnects. Also disclosed is a fabric controller, further comprising a power manager configured to determine that an LLC controller is idle, and to power down its horizontal interconnects and maintain its respective vertical interconnect and host interconnect in an active state. Also disclosed is a fabric controller, wherein the LLC is a level 3 cache. Also disclosed is a fabric controller, wherein the host interconnect is an Intel Accelerator Link (IAL)-compliant interconnect. Also disclosed is a fabric controller, wherein the host interconnect is a PCIe interconnect. Also disclosed is a fabric controller, wherein the fabric controller is an integrated circuit. Also disclosed is a fabric controller, wherein the fabric controller is an intellectual property (IP) block. Also disclosed is an accelerator apparatus, comprising: an accelerator including a last level cache (LLC); and a fabric controller for providing a coherent accelerator fabric, comprising: a host interconnect to communicatively couple the accelerator to a host device; a memory interconnect to communicatively couple the accelerator and the host device to an accelerator memory; an accelerator interconnect to communicatively couple the accelerator fabric to the LLC; and an LLC controller configured to provide bias checking for memory access operations. Also disclosed is an accelerator apparatus, wherein the fabric controller further comprises a fabric coherency engine (FCE) configured to map the accelerator memory into a host fabric memory address space, wherein the fabric controller is configured to direct host memory access operations to the accelerator memory via the FCE. Also disclosed is an accelerator apparatus, wherein the FCE is physically separate from the LLC controller. Also disclosed is an accelerator apparatus, wherein the fabric controller further comprises a direct bypass bus for connecting the LLC to the memory interconnect and bypassing the FCE. Also disclosed is an accelerator apparatus, wherein the fabric controller is configured to provide the fabric in a plurality of n independent slices. Also disclosed is an accelerator apparatus, wherein n = 8. Also disclosed is an accelerator apparatus, wherein the n independent slices comprise n independent LLC controllers interconnected via horizontal interconnects and communicatively coupled to respective memory controllers via respective vertical interconnects. Also disclosed is an accelerator apparatus, further comprising a power manager configured to determine that an LLC controller is idle, and to power down its horizontal interconnects and maintain its respective vertical interconnect and host interconnect in an active state. Also disclosed is an accelerator apparatus, wherein the LLC is a level 3 cache. Also disclosed is an accelerator apparatus, wherein the host interconnect is an Intel Accelerator Link (IAL)-compliant interconnect. Also disclosed is an accelerator apparatus, wherein the host interconnect is a PCIe interconnect. Also disclosed are one or more tangible, non-transitory computer-readable media having stored thereon instructions for providing a fabric controller, the instructions for: providing a host interconnect to communicatively couple to a host device; providing a memory interconnect to communicatively couple to an accelerator memory; providing an accelerator interconnect to communicatively couple to the
accelerator having a last level cache (LLC); and providing an LLC controller configured to provide bias checking for memory access operations. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions further provide a fabric coherency engine (FCE) configured to map the accelerator memory into a host fabric memory address space, wherein the fabric controller is configured to direct host memory access operations to the accelerator memory via the FCE. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the FCE is physically separate from the LLC controller. Also disclosed are one or more tangible, non-transitory computer-readable media, further comprising a direct bypass bus for connecting the LLC to the memory interconnect and bypassing the FCE. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the fabric controller is configured to provide the fabric in a plurality of n independent slices. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein n = 8. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the n independent slices comprise n independent LLC controllers interconnected via horizontal interconnects and communicatively coupled to respective memory controllers via respective vertical interconnects. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions further provide a power manager configured to determine that an LLC controller is idle, and to power down its horizontal interconnects and maintain its respective vertical interconnect and host interconnect in an active state. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the LLC is a level 3 cache. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the host interconnect is an Intel Accelerator Link (IAL)-compliant interconnect. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the host interconnect is a PCIe interconnect. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions comprise hardware instructions. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions comprise field-programmable gate array (FPGA) instructions. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions comprise data to program a field-programmable gate array (FPGA). Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions comprise instructions to manufacture a hardware device. Also disclosed are one or more tangible, non-transitory computer-readable media, wherein the instructions comprise instructions to manufacture an intellectual property (IP) block. Also disclosed is a method of providing a coherent accelerator fabric, comprising: communicatively coupling to a host device; communicatively coupling to an accelerator memory; communicatively coupling to an accelerator having a last level cache (LLC); and providing bias checking for memory access operations. Also disclosed is a method, further comprising providing a fabric coherency engine (FCE) configured to map the
accelerator memory into a host fabric memory address space, wherein the fabric controller is configured to direct host memory access operations to the accelerator memory via the FCE. Also disclosed is a method, wherein the FCE is physically separate from the LLC controller. Also disclosed is a method, further comprising providing a direct bypass path for connecting the LLC to the memory interconnect and bypassing the FCE. Also disclosed is a method, further comprising providing the fabric in a plurality of n independent slices. Also disclosed is a method, wherein n = 8. Also disclosed is a method, wherein the n independent slices comprise n independent LLC controllers interconnected via horizontal interconnects and communicatively coupled to respective memory controllers via respective vertical interconnects. Also disclosed is a method, further comprising: determining that an LLC controller is idle, and powering down its horizontal interconnects and maintaining its respective vertical interconnect and host interconnect in an active state. Also disclosed is a method, wherein the LLC is a level 3 cache. Also disclosed is a method, wherein the host interconnect is an Intel Accelerator Link (IAL)-compliant interconnect. Also disclosed is a method, wherein the host interconnect is a PCIe interconnect. Also disclosed is an apparatus comprising means for performing the method of any of the preceding examples. Also disclosed is an apparatus, wherein the means comprise a fabric controller. Also disclosed is an accelerator device comprising an accelerator, an accelerator memory, and a fabric controller. Also disclosed are one or more tangible, non-transitory computer-readable media having stored thereon instructions for providing the method, or for making the device or apparatus, of any of the preceding examples.
An embedded device is provided which comprises a device memory and hardware entities including a 3D graphics entity. The hardware entities are connected to the device memory, and at least some of the hardware entities perform actions involving access to and use of the device memory. A grid cell value buffer is provided, which is separate from the device memory. The buffer holds data, including buffered grid cell values. Portions of the 3D graphics entity access the buffered grid cell values in the buffer, in lieu of the portions directly accessing the grid cell values in the device memory, for per-grid cell processing by the portions.
CLAIMS What is claimed is: 1. An embedded device, comprising: a device memory and hardware entities connected to the device memory, at least some of the hardware entities to perform actions involving access to and use of the device memory, and the hardware entities comprising a 3D graphics entity; and a grid cell value buffer separate from the device memory, to hold data, including buffered grid cell values, portions of the 3D graphics entity accessing the buffered grid cell values in the grid cell value buffer, in lieu of the portions directly accessing the grid cell values in the device memory, for per-grid cell processing by the portions. 2. The embedded device according to claim 1, wherein the grid cell value buffer comprises a pixel buffer, the grid cell values comprise pixels, and the per-grid cell processing comprises per-pixel processing. 3. The embedded device according to claim 2, further comprising a bus, the device memory being connected to and accessible by the hardware entities through the bus. 4. The embedded device according to claim 3, wherein the bus comprises a system bus, and wherein the device memory comprises a main memory. 5. The embedded device according to claim 4, wherein the 3D graphics entity further comprises a graphics pipeline and a graphics clock, the graphics pipeline comprising a primitive-to-pixel conversion portion and later portions succeeding the primitive-to-pixel conversion portion, and data exchanges within the 3D graphics entity being clocked at the graphics clock rate. 6. The embedded device according to claim 5, wherein the 3D graphics entity comprises a chip. 7. The embedded device according to claim 5, wherein the 3D graphics entity comprises a 3D graphics core of a larger integrated system on a chip. 8. The embedded device according to claim 5, wherein the 3D graphics entity further comprises a bus interface to interface the 3D graphics entity with the bus. 9.
The embedded device according to claim 8, wherein the graphics clock rate is faster than a clocked data exchange rate of the bus. 10. The embedded device according to claim 5, wherein the pixel buffer comprises a cache. 11. The embedded device according to claim 10, wherein the cache is internal to the 3D graphics entity which comprises a chip distinct from the device memory, from the bus, and from others of the hardware entities. 12. The embedded device according to claim 10, wherein the cache is dedicated to data used in per-pixel processing by the 3D graphics entity. 13. The embedded device according to claim 12, wherein the data used in per-pixel processing comprises frame buffer data. 14. The embedded device according to claim 10, wherein the cache comprises a pixel prefetch mechanism to prefetch pixels from a frame buffer in the device memory. 15. The embedded device according to claim 14, wherein the prefetch mechanism comprises a mechanism to prefetch groups of pixels associated with each other and grouped together in a pixel address queue local to the 3D graphics entity. 16. The embedded device according to claim 14, wherein the later portions of the graphics pipeline and the shading portion of the graphics pipeline each comprise stages of the graphics pipeline. 17. The embedded device according to claim 14, wherein the later portions of the graphics pipeline comprise a texturing portion. 18. The embedded device according to claim 14, wherein the later portions of the graphics pipeline comprise a blending portion. 19. The embedded device according to claim 14, wherein the later portions of the graphics pipeline comprise both texturing and blending portions. 20.
The embedded device according to claim 14, further comprising a post-primitive-to-pixel conversion (post-conversion) graphics processing portion, the post-conversion graphics processing portion of the graphics pipeline comprising a per-object processing portion, the per-object processing portion and the cache collectively comprising a new object enable mechanism to enable new object prefetching by the cache of pixels of a new object, the per-object processing portion processing portions of the new object to produce new object pixels, where pixels from a previously processed different object coinciding with the new object pixels are already in the cache at the time of the new object prefetching, and where the cache does not prefetch the coinciding pixels. 21. The embedded device according to claim 20, wherein each object comprises a triangle. 22. The embedded device according to claim 14, wherein the cache comprises a write-back mechanism to write back a processed given pixel to replace the unprocessed version of the same given pixel in a frame buffer external to the 3D graphics entity. 23. The embedded device according to claim 22, wherein the frame buffer is in the main memory of the embedded device and is accessed by the cache via the system bus. 24. The embedded device according to claim 14, wherein the cache comprises cache line accesses, each cache line access corresponding to a plural set of linear pixel indices generated from the primitive-to-pixel conversion portion of the graphics pipeline. 25. The embedded device according to claim 1, wherein the embedded device comprises a mobile device. 26. The embedded device according to claim 1, wherein the embedded device comprises a wireless communications device. 27. The embedded device according to claim 1, wherein the embedded device comprises a mobile phone. 28.
The embedded device according to claim 1, wherein the grid cell value buffer comprises a depth buffer, and wherein the grid cell values comprise depth values. 29. The embedded device according to claim 28, wherein the 3D graphics entity comprises a hidden surface removal portion that accesses the depth values in the depth buffer, in lieu of the hidden surface removal portion directly accessing the depth values in the device memory, for per-grid-cell processing by the hidden surface removal portion. 30. The embedded device according to claim 29, wherein the depth buffer comprises a depth value prefetch mechanism to prefetch depth values from a buffer in the device memory. 31. The embedded device according to claim 30, wherein the depth value prefetch mechanism comprises a mechanism to prefetch groups of depth values associated with each other. 32. The embedded device according to claim 30, wherein the depth buffer comprises addressable units, each addressable unit comprising an integer number M of depth values. 33. The embedded device according to claim 29, comprising a mechanism to defer a given write to the depth buffer memory until a read access to the depth buffer memory occurs. 34. An integrated circuit comprising: 3D graphics processing portions; and a grid cell value buffer to hold data, including buffered grid cell values, the portions accessing the buffered grid cell values in the grid cell value buffer, in lieu of the portions directly accessing the grid cell values in a separate device memory and in lieu of accessing a system bus required to access the separate device memory, for per-grid cell processing by the portions. 35. The integrated circuit according to claim 34, wherein the grid cell value buffer comprises a pixel buffer, the grid cell values comprise pixels, and the per-grid cell processing comprises per-pixel processing. 36.
The integrated circuit according to claim 35, wherein the pixel buffer comprises a prefetch cache, the prefetch cache comprising addressable units, each addressable unit comprising an integer number of pixels. 37. The integrated circuit according to claim 34, wherein the grid cell value buffer comprises a depth buffer, and wherein the grid cell values comprise depth values. 38. The integrated circuit according to claim 37, comprising a mechanism to defer a given write to the depth buffer memory until a read access to the depth buffer memory occurs. 39. Machine-readable media, interoperable with a machine to: perform 3D graphics processing with processing portions of an embedded system; hold data, including buffered grid cell values, in a grid cell value buffer; and cause the processing portions to access the buffered grid cell values in the grid cell value buffer, in lieu of the processing portions directly accessing the grid cell values in a separate device memory and in lieu of accessing a system bus required to access the separate device memory, for per-grid cell processing by the processing portions. 40. The machine-readable media according to claim 39, wherein the grid cell value buffer comprises a pixel buffer, the grid cell values comprise pixels, and the per-grid cell processing comprises per-pixel processing. 41. The machine-readable media according to claim 40, wherein the pixel buffer comprises a prefetch cache, the prefetch cache comprising addressable units, each addressable unit comprising an integer number of pixels. 42. The machine-readable media according to claim 39, wherein the grid cell value buffer comprises a depth buffer, and wherein the grid cell values comprise depth values. 43. The machine-readable media according to claim 42, interoperable with the machine to: defer a given write to the depth buffer memory until a read access to the depth buffer memory occurs. 44.
Apparatus comprising: 3D graphics processing means for performing 3D graphics processing; and buffer means for holding data, including buffered grid cell values, the 3D graphics processing means further comprising means for accessing the buffered grid cell values in the buffer, in lieu of the 3D graphics processing means directly accessing the grid cell values in a separate device memory and in lieu of the 3D graphics processing means accessing a system bus required to access the separate device memory, and the 3D graphics processing means comprising means for performing per-grid cell processing. 45. The apparatus according to claim 44, wherein the buffer means comprise a pixel buffer, the grid cell values comprise pixels, and the per-grid cell processing means comprise means for performing per-pixel processing. 46. The apparatus according to claim 45, wherein the buffer means comprise prefetch means for performing prefetch caching of pixels accessed by the 3D graphics processing means, the prefetch means comprising means for receiving data requests in addressable units, each addressable unit comprising an integer number of pixels. 47. The apparatus according to claim 44, wherein the buffer means comprise means for buffering depth values, and wherein the grid cell values comprise the depth values. 48. The apparatus according to claim 47, further comprising means for deferring a given write to the means for buffering depth values until a read access to the means for buffering depth values occurs.
EMBEDDED SYSTEM WITH 3D GRAPHICS CORE AND LOCAL PIXEL BUFFER COPYRIGHT NOTICE [0001] This patent document contains information subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent, as it appears in the U.S. Patent and Trademark Office files or records, but otherwise reserves all copyright rights whatsoever. CROSS REFERENCE TO RELATED APPLICATIONS [0002] This application claims the benefit of provisional U.S. Application Serial No. 60/550,027, entitled "Pixel-Based Frame Buffer Prefetch Cache for 3D Graphics," filed March 3, 2004. BACKGROUND OF THE INVENTION [0003] The present invention is related to embedded systems having 3D graphics capabilities. In other respects, the present invention is related to a graphics pipeline, a mobile phone, and memory structures for the same. [0004] Embedded systems, for example, mobile phones, have limited memory resources. A given embedded system may have a main memory and a system bus, both of which are shared by different system hardware entities, including a 3D graphics chip. [0005] Meanwhile, the embedded system 3D chip requires large amounts of bandwidth of the main memory via the system bus. For example, a 3D graphics chip displaying 3D graphics on a quarter video graphics array (QVGA) 240 x 320 pixel screen, at twenty frames per second, could require a memory bandwidth between 6.1 MB per second and 18.4 MB per second, depending upon the complexity of the application. This example assumes that the pixels include only color and alpha components. [0006] Memory bandwidth demands like this can result in a memory access bottleneck, which could adversely affect the operation of the 3D graphics chip as well as of other hardware entities that use the same main memory and system bus. BRIEF SUMMARY OF THE INVENTION [0007] An embedded device is provided which comprises a device memory and hardware entities including a 3D graphics entity.
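The bandwidth range quoted in paragraph [0005] can be reproduced with a short calculation. This is only a sketch: the text gives just the resulting range, so the 3x traffic multiplier used for the heavy case is an assumption.

```python
# Rough reproduction of the memory-bandwidth figures in paragraph [0005].
BYTES_PER_PIXEL = 4          # RGB + alpha, one byte each (per the example)
WIDTH, HEIGHT = 240, 320     # QVGA screen
FPS = 20                     # twenty frames per second

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

# Lightest case: each pixel is written to the frame buffer exactly once.
low_mb_s = frame_bytes * FPS / 1e6

# Heavier case: read-modify-write traffic and overdraw for a complex
# application can roughly triple the figure (assumed multiplier).
high_mb_s = 3 * frame_bytes * FPS / 1e6

print(round(low_mb_s, 1), round(high_mb_s, 1))  # prints: 6.1 18.4
```

The two printed values match the 6.1 MB/s to 18.4 MB/s range in the background section.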
The hardware entities are connected to the device memory, and at least some of the hardware entities perform actions involving access to and use of the device memory. A grid cell value buffer is provided, which is separate from the device memory. The buffer holds data, including buffered grid cell values. Portions of the 3D graphics entity access the buffered grid cell values in the buffer, in lieu of the portions directly accessing the grid cell values in the device memory, for per-grid cell processing by the portions. [0008] Other features, functions, and aspects of the invention will be evident from the Detailed Description of the Invention that follows. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS [0009] The present invention is further described in the detailed description, which follows, by reference to the noted drawings by way of non-limiting exemplary embodiments, in which like reference numerals represent similar parts throughout the several views of the drawings, and wherein: [00010] Fig. 1 is a block diagram of an embedded device; [00011] Fig. 2 is a more detailed block diagram of a main memory, a system bus, and a 3D graphics entity processor of the embedded device shown in Fig. 1; [00012] Fig. 3 is a flow chart of a per-triangle processing process which may be performed by certain 3D graphics pipeline stages of the illustrated 3D graphics entity; [00013] Fig. 4 is a schematic diagram of an exemplary embodiment of a blending block which may form part of the illustrated 3D graphics pipeline; [00014] Fig. 5 illustrates a frame buffer and an example linear address mapping scheme; [00015] Fig. 6 is a simplified screen depiction of a set of triangles forming part of a given 3D image; [00016] Fig. 7 is a schematic diagram of an example cache subsystem; [00017] Fig. 8 is a block diagram of a graphics entity comprising, among other elements, a depth buffer memory; and [00018] Fig. 9 is a timing diagram for the depth buffer memory illustrated in Fig.
8. DETAILED DESCRIPTION OF THE INVENTION [00019] To facilitate an understanding of the following Detailed Description, definitions will be provided for certain terms used therein. A primitive may be, e.g., a point, a line, or a triangle. A triangle may be rendered in groups of fans, strips, or meshes. An object is one or more primitives. A scene is a collection of models and the environment within which the models are positioned. A pixel comprises information regarding a location on a screen along with color information and optionally additional information (e.g., depth). The color information may, e.g., be in the form of an RGB color triplet. A screen grid cell is the area of a screen that may be occupied by a given pixel. A screen grid value is a value corresponding to a screen grid cell or a pixel. An application programming interface (API) is an interface between an application program on the one hand and operating system, hardware, and other functionality on the other hand. An API allows for the creation of drivers and programs across a variety of platforms, where those drivers and programs interface with the API rather than directly with the platform's operating system or hardware. [00020] Fig. 1 is a block diagram of an exemplary embedded device 10, which in the illustrated embodiment comprises a wireless mobile communications device. The illustrated embedded device 10 comprises a system bus 14, a device memory (a main memory 16 in the illustrated system) connected to and accessible by other portions of the embedded device through system bus 14, and hardware entities 18 connected to system bus 14. At least some of the hardware entities 18 perform actions involving access to and use of main memory 16. [00021] A 3D graphics entity 20 is connected to system bus 14. 3D graphics entity 20 may comprise a core of a larger integrated system (e.g., a system on a chip (SoC)), or it may comprise a 3D graphics chip, such as a 3D graphics accelerator chip. 
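The pixel definition given above (a screen location, an RGB color triplet, and optional additional information such as depth) can be collected into a minimal data model. This is illustrative only; the field names are assumptions, not taken from the patent.

```python
# A small data model for the "pixel" definition in the Detailed
# Description: screen position, RGB color, and optional alpha/depth.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Pixel:
    x: int                        # horizontal screen position
    y: int                        # vertical screen position
    color: Tuple[int, int, int]   # RGB color triplet
    alpha: Optional[int] = None   # optional additional information
    depth: Optional[float] = None # optional depth value (Z)
```

A screen grid cell is then the area that one such `Pixel` may occupy, and a screen grid value is any value (color, depth) associated with it.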
The 3D graphics entity comprises a graphics pipeline (see Fig. 2), a graphics clock 23, a buffer 22, and a bus interface 19 to interface 3D graphics entity 20 with system bus 14. Data exchanges within 3D graphics entity 20 are clocked at the graphics clock rate set by graphics clock 23. [0022] Buffer 22 holds data used in per-pixel processing by 3D graphics entity 20. Buffer 22 provides local storage of pixel-related data, such as pixel information from buffers within main memory 16, which may comprise one or more frame buffers 24 and Z buffers 26. Frame buffers 24 store separately addressable pixels for a given 3D graphics image; each pixel is indexed with X (horizontal position) and Y (vertical position) screen position index integer values. Frame buffers 24, in the illustrated system, comprise, for each pixel, RGB and alpha values. In the illustrated embodiment, Z buffer 26 comprises depth values Z for each pixel. [0023] Fig. 2 is a block diagram of main memory 16, system bus 14, and certain portions of 3D graphics entity 20. As shown in Fig. 2, 3D graphics entity 20 comprises a graphics pipeline 21. The illustrated graphics pipeline 21 comprises, among other elements not specifically shown in Fig. 2, certain graphics pipeline stages comprising a setup stage 23, a shading stage 25, and succeeding graphics pipeline stages 30. The succeeding graphics pipeline stages 30 shown in Fig. 2 include a texturing stage 27 and a blending stage 29. [0024] A microprocessor (one of hardware entities 18) and main memory 16 operate together to execute an application program (e.g., a mobile phone 3D game, a program for mobile phone shopping with 3D images, or a program for product installation or assembly assistance via a mobile phone) and an application programming interface (API). The API facilitates 3D rendering for an application by providing the application with access to the 3D graphics entity.
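Paragraph [0022] describes frame-buffer pixels indexed by X and Y screen positions, and Fig. 5 refers to a linear address mapping scheme for the frame buffer. The sketch below shows one such mapping; the row-major layout is an assumption, since the patent does not fix the scheme.

```python
# Sketch of a linear address mapping from (x, y) screen positions to a
# single frame-buffer index, assuming row-major order.
WIDTH, HEIGHT = 240, 320  # QVGA, as in the background example


def linear_index(x, y, width=WIDTH):
    """Map an (x, y) screen position to a linear frame-buffer index."""
    return y * width + x


# A tiny frame buffer holding one RGBA tuple per pixel.
frame_buffer = [(0, 0, 0, 255)] * (WIDTH * HEIGHT)


def write_pixel(fb, x, y, rgba):
    fb[linear_index(x, y)] = rgba


def read_pixel(fb, x, y):
    return fb[linear_index(x, y)]
```

Claim 24's "plural set of linear pixel indices" per cache line would be consecutive values of this index.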
The application may be developed in a work station or desktop personal computer, and then loaded to the embedded device, which in the illustrated embodiment comprises a wireless mobile communications device (e.g., a mobile phone). [00025] Setup stage 23 performs computations on each of the image's primitives (e.g., triangles). These computations precede an interpolation stage (otherwise referred to as a shading stage 25 or a primitive-to-pixel conversion stage) of the graphics pipeline. Such computations may include, for example, computing the slope of a triangle edge using vertex information at the edge's two end points. Shading stage 25 involves the execution of algorithms to define a screen's triangles in terms of pixels addressed in terms of horizontal and vertical (X and Y) positions along a two-dimensional screen. Texturing stage 27 matches image objects (triangles, in the embodiment) with certain images designed to add to the realistic look of those objects. Specifically, texturing stage 27 will map a given texture image by performing a surface parameterization and a viewing projection. The texture image in texture space (u, v) (in texels) is converted to object space by performing a surface parameterization into object space (xo, yo, zo). The image in object space is then projected into screen space (x, y) (in pixels), onto the object (triangle). [00026] In the illustrated embodiment, blending stage 29 takes a texture pixel color from texturing stage 27 and combines it with the associated triangle pixel color of the pre-texture triangle. Blending stage 29 also performs alpha blending on the texture-combined pixels, and performs a bitwise logical operation on the output pixels. More specifically, blending stage 29, in the illustrated system, is the last stage in 3D graphics pipeline 21. Accordingly, it will write the final output pixels of 3D graphics entity 20 to frame buffer(s) 24 within main memory 16.
An additional graphics pipeline stage (not shown) may be provided between shading stage 25 and texturing stage 27. That is, a hidden surface removal (HSR) stage (not shown) may be provided, which uses depth information to eliminate hidden surfaces from the pixel data, thereby simplifying the image data and reducing the bandwidth demands on the pipeline. [00027] A local buffer 28 is provided, which may comprise a buffer or a cache. Local buffer 28 buffers or caches pixel data obtained from shading stage 25. The pixel data may be provided in buffer 28 from frame buffer 24, after population of frame buffer 24 by shading stage 25, or the pixel data may be stored directly in buffer 28, as the pixel data is interpolated in shading stage 25. [00028] As shown in Fig. 2, the later stages of graphics pipeline 21 perform per-object (per-triangle) processing functions. The mapping process involved in texturing, and the subsequent blending for a given triangle, are examples of such per-triangle processing functions. Fig. 3 is a flow diagram illustrating per-triangle processing 50. Per-triangle processing is performed for each triangle within the image, and involves the preliminary processing of data (act 56) and local storage of triangle pixels (act 54) in act 52, and subsequent per-pixel processing 58. Each of these acts will be performed for a given triangle upon the initiation of an "enable new triangle" signal received by the per-object processing portions of the graphics pipeline. [00029] More specifically, in act 52, the triangle pixels for the given triangle will be stored locally at act 54, and the per-triangle processing will commence with process actions not requiring triangle pixels at act 56. Actions not requiring triangle pixels may include, for example, the inputting of alpha, RGB diffused, and RGB specular data; the inputting of texture RGB and alpha data; and the inputting of control signals, all to an input buffer (see input buffer 86, in Fig. 4).
[00030] In a per-pixel processing act 58, a given pixel is obtained from the local buffer at act 60. The per-pixel processing actions are then executed on the given pixel at act 62. In act 64, the processed pixels of the triangle are stored locally and written back to the frame buffer (if the processed pixel is now dirty). [00031] The local buffer from which the given pixel is obtained (in act 60) may comprise a local buffer, a local queue, a local Z-buffer, and/or a local cache. In the illustrated embodiment, the local buffer comprises a local cache dedicated to frame buffer data used in per-pixel processing by the 3D graphics pipeline. The cache comprises a pixel buffer mechanism to buffer pixels and to allow access to and processing of the buffered pixels by later portions of the graphics pipeline (in the illustrated embodiment, the texturing and blending stages). Those portions succeed the shading portion of the graphics pipeline. In the illustrated embodiment, those portions are separate graphics pipeline stages. [00032] The per-triangle processing portion of the graphics pipeline, together with the 3D graphics cache, collectively comprise a new object enable mechanism to enable prefetching by the cache of pixels of the new object (a triangle in the illustrated embodiment). The per-object processing portion of the graphics pipeline processes portions of the new triangle pixels. Where processed pixels from a previous triangle coinciding with the new triangle pixels are already in the cache, the cache does not prefetch those coinciding pixels. [00033] Fig. 4 is a block diagram of a post-shading (i.e., post primitive-to-pixel conversion) per-triangle processing portion of the illustrated 3D graphics entity. The illustrated circuitry 70 comprises a cache portion 72 and a blending portion 74.
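Before turning to the Fig. 4 hardware, the per-triangle flow of Fig. 3 (acts 52 through 64) can be sketched in software. This is an illustrative behavioral model, not the patented hardware; the dict-based local buffer and the caller-supplied per-pixel operation are assumptions made for the sketch:

```python
def process_triangle(triangle_pixel_addrs, frame_buffer, local_buffer, per_pixel_op):
    """Software model of per-triangle processing 50 (Fig. 3).

    Act 54: triangle pixels are stored locally (prefetched from the
    frame buffer), skipping pixels already held from a prior triangle.
    Act 58: each pixel is fetched from the local buffer (act 60),
    processed (act 62), and marked dirty for write-back (act 64)."""
    # Act 54: prefetch only pixels not already held locally.
    for addr in triangle_pixel_addrs:
        if addr not in local_buffer:
            local_buffer[addr] = frame_buffer[addr]
    # Act 58: per-pixel processing.
    dirty = set()
    for addr in triangle_pixel_addrs:
        pixel = local_buffer[addr]                 # act 60
        local_buffer[addr] = per_pixel_op(pixel)   # act 62
        dirty.add(addr)                            # act 64
    # Write dirty pixels back to the frame buffer.
    for addr in dirty:
        frame_buffer[addr] = local_buffer[addr]
    return dirty
```

The skip-if-present check in the prefetch loop mirrors paragraph [00032]: coinciding pixels already in the cache are not prefetched again.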
The illustrated cache portion 72 comprises a triangle pixel address buffer 76, a cache control unit 78, an out color converter 80, an in color converter 82, and a frame buffer prefetch cache 84. Cache control unit 78 comprises a prefetch mechanism 91 and a cache mechanism 93. [00034] Triangle pixel address buffer 76 has a pixel address input for identifying the address of a first pixel of the current cache line corresponding to the triangle being currently processed by per-triangle processing portion 70. Triangle pixel address buffer 76 also has an "enable new triangle" input, for receiving a signal indicating that a new triangle is to be processed and enabling operation of the cache, at which point memory accesses are checked within the contents of the cache, and, when there is a cache miss, memory requests are made through the bus interface. [00035] Blending portion 74 comprises an input buffer 86, a blending control portion 88, a texture shading unit 90, an alpha blending unit 92, a rasterization code portion (RasterOp) 94, and a result buffer 96. [00036] Input buffer 86 has an output for indicating that it is ready for input from the texturing stage. It comprises inputs for alpha, RGB diffused, and RGB specular data; for texture RGB and alpha data; and for controls. It also has an input that receives the "enable new triangle" signal. Input buffer 86 outputs the appropriate data for use by texture shading unit 90, which forwards pixel values to alpha blending unit 92. Alpha blending unit 92 receives input pixels from frame buffer prefetch cache 84, and is thus able to blend the texture information with the pre-textured pixel information from the frame buffer via frame buffer prefetch cache 84. The output information from alpha blending unit 92 is forwarded to RasterOp device 94, which executes the rasterization code. The results are forwarded to result buffer 96, which returns each pixel to its appropriate storage location within frame buffer prefetch cache 84.
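The blend performed by alpha blending unit 92 is not spelled out in the text; a conventional per-channel, alpha-weighted combination of the texture color with the pre-textured frame buffer color would look like the following. The exact equation used by the hardware is an assumption:

```python
def alpha_blend(texture_rgb, frame_rgb, alpha):
    """Combine a texture pixel color with the pre-textured frame buffer
    pixel color using the standard alpha-weighted per-channel sum."""
    return tuple(round(alpha * t + (1.0 - alpha) * f)
                 for t, f in zip(texture_rgb, frame_rgb))
```

With alpha = 1.0 the texture color replaces the frame buffer color, and with alpha = 0.0 the frame buffer color is kept unchanged.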
[00037] A given pixel may be represented using full precision in the graphics core, while its precision may be reduced when packing in the frame buffer. Accordingly, a given pixel may comprise 32 bits of data, allowing for eight bits for each of R, G, and B, and eight bits for an alpha value. At the same resolution, if the depth value Z is integrated into each pixel, each pixel will require 48 bits. Each such pixel may be packed, thereby reducing its precision, as it is stored in cache 84. Out color converter 80 and in color converter 82 are provided for this purpose, i.e., out color converter 80 converts 24-bit pixel data to 32-bit pixel data, while in color converter 82 converts 32-bit pixel data to 24-bit pixel data. [00038] Fig. 5 illustrates that a given frame buffer may have an addressing scheme based on pixel indices, i.e., in terms of X and Y screen position values for the respective pixels. Those pixels may be mapped linearly to memory addresses, as shown in Fig. 5. Particularly, the pixels in the frame buffer may be mapped to linear memory addresses, starting from the upper-left corner to the lower-right corner of the screen. For example, if each pixel value (R, G, B, or A) is four bits, so that a pixel occupies a half-word, for a color depth of 16 bpp, then the memory byte address as shown in Fig. 5 increments by two per pixel. Each scan line (row) of a 320 x 240 frame buffer is 320 pixels or 640 byte addresses. [00039] Fig. 6 is a simplified screen representation of a cluster of fans, made up of triangles 1-7. The cache takes advantage of the local nature of the triangle rendering order, assuming the triangles are rendered in clusters of fans, strips, or meshes, as shown in Fig. 6. In Fig. 6, gray rectangles represent the arrangement of cache lines as mapped to the screen. If a given cache line size is selected correctly, the blending block shown in Fig. 4 can take advantage of the burst access efficiency of the memory system. [00040] Referring back to Fig.
4, cache portion 72 comprises a frame buffer prefetch cache 84, which comprises a pixel-centric write-back data cache 93 and a prefetch mechanism 91. The illustrated cache mechanism 93 may simply comprise a standard direct-mapped cache. More complex cache mechanisms may be provided for more set associativity, for added performance at the expense of circuit area and power consumption. [00041] A cache hit or miss is checked on a per-cache-line basis, the cache lines being grouped from the linear pixel address inputs; every time a cache miss occurs, the missed cache line is fetched by prefetch mechanism 91. That fetch occurs through accessing the frame buffers stored in main memory 16 via system bus 14. A write back of a cache line will occur when the cache line is missed and the associated dirty bit is set or when the whole cache is invalidated. The size of a cache line is based on a given integer number of pixels. In the illustrated embodiment, the cache line size is eight consecutive pixels with a linear pixel addressing scheme, disassociating the cache from varying frame buffer formats in the system. This translates to 16 bytes in consecutive memory addresses for a 16 bpp frame buffer, 24 bytes for a 24 bpp frame buffer, and 32 bytes for a 32 bpp frame buffer. [00042] The illustrated prefetching mechanism 91 takes advantage of the processing time in the blending process, and prefetches a next cache line identified by the next triangle pixel address within triangle pixel address buffer 76. Before the next cache line pixel group arrives at blending portion 74, the cache line accesses for that group are prefetched. Prefetch mechanism 91 determines if the next cache line access is a cache miss. If the cache line access is also "dirty," the cache content is written back before performing the prefetch associated with the cache miss.
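The linear pixel addressing of Fig. 5 and the pixel-based cache line sizing just described can be written out directly. The 320-pixel scan line and the per-format byte widths come from the text; the function names are illustrative:

```python
SCREEN_WIDTH = 320   # pixels per scan line in the 320 x 240 example
LINE_PIXELS = 8      # cache line size: eight consecutive pixels

def pixel_byte_address(x, y, bytes_per_pixel=2):
    """Fig. 5 mapping: pixels laid out linearly from the upper-left
    corner, so a 16 bpp frame buffer advances two bytes per pixel and
    one scan line spans 640 byte addresses."""
    return (y * SCREEN_WIDTH + x) * bytes_per_pixel

def cache_line_bytes(bits_per_pixel):
    """An 8-pixel cache line is 16/24/32 bytes at 16/24/32 bpp."""
    return LINE_PIXELS * bits_per_pixel // 8
```

Because the line size is defined in pixels rather than bytes, the same cache logic serves all three frame buffer formats, which is the disassociation the text describes.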
In this way, cache line fetches are pipelined with the pixel processing time of the next group of pixels, and the pixel processing time is hidden inside the bus access delay, which further reduces the effect of the bus access delay. [00043] A collection of cache lines, e.g., 64 cache lines or 512 pixels, makes up a complete cache. The number of cache lines can be increased (thereby increasing the size of the cache) to gain performance, again at the expense of circuit area and power consumption. Direct mapping of the cache to the screen buffer is disassociated from the actual screen size setting. Since the pixels reside in consecutive memory addresses from the top-left screen corner to the lower-right corner, using a cache of 64 eight-pixel lines as an example, for a 320 x 240 maximum resolution, there are only 9600 cache line locations in the screen. Out of that, only 150 unique locations per line can be mapped to 2^8 addresses. Therefore, using a simple address translation, pixel address bits [8:3] can be used as the tag index, and bits [16:9] can be used as the tag I.D. bits. [00044] Pixel data transfers between cache control unit 78 and main memory 16 are mediated through a bus interface block 19 (see Fig. 1). Pixel data transfer requests from other stages within the 3D graphics pipeline are also mediated through the same bus interface, in the illustrated embodiment. [00045] Fig. 7 is a detailed schematic diagram of a cache subsystem 100. The illustrated cache subsystem 100 comprises a pixel address register 102, a line start/count register 104, and a counter 106. In addition, a tag RAM 108 and a data RAM 110 are each provided. The illustrated cache subsystem 100 further comprises a cache control mechanism 112, a compare mechanism 114, a bus interface 116, color converters 118, 120, and a prefetch buffer 122. A register 124 is provided for storing a destination pixel.
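The simple address translation of paragraph [00043] can be sketched as a bit-field split; the field names below are illustrative:

```python
def split_pixel_address(pixel_addr):
    """Split a linear pixel address per the text: bits [2:0] select a
    pixel within the 8-pixel line, bits [8:3] form the tag index (one
    of 64 cache lines), and bits [16:9] form the tag I.D."""
    offset = pixel_addr & 0x7              # bits [2:0]
    tag_index = (pixel_addr >> 3) & 0x3F   # bits [8:3], 2**6 = 64 lines
    tag_id = (pixel_addr >> 9) & 0xFF      # bits [16:9], 2**8 tag values
    return tag_id, tag_index, offset
```

A hit is then simply a match between the tag I.D. stored at tag_index in the tag RAM and the tag I.D. field of the incoming pixel address.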
Gates 126a, 126b, and 126c are provided, for controlling data transfers from one element within cache subsystem 100 to another. [00046] The tag portion of pixel address register 102 determines whether there is a tag hit or miss. In other words, the tag portion comprises a cache line identifier. The index portion of pixel address register 102 indicates the cache position for a given pixel address. The portion to the right of pixel address register 102, between bits 2 and 0, comprises information concerning the start to finish pixels in a given line. Line start/count register 104 receives this information, and outputs a control signal to counter 106 for controlling when data concerning the cache position is input to an address input of tag RAM 108. When cache control 112 provides a write enable signal to tag RAM 108, the addressed data will be input into tag RAM 108 through an input location "IN." Data is output at an output location "OUT" of tag RAM 108 to a compare mechanism 114. The tag portion of pixel address register 102 is also input to compare mechanism 114. If the two values correspond to each other, then a determination is made that the data is in the cache and a hit signal is input to cache control mechanism 112. Depending upon the output of tag RAM 108, a valid or dirty signal will also be input into cache control 112. [00047] Cache control mechanism 112 further receives a next in queue valid signal indicating that a queue access address is valid, and a next line start/count signal indicating that a next line within the cache is being started and causing a reset of the count for that line. [00048] Data RAM 110 is used for cache data storage. Tag RAM 108 stores cache line identifiers. Gate 126a facilitates the selection between the cache data storage at data RAM 110 and the prefetch buffer 122, for outputting the selected pixel in destination pixel register 124.
A cache enable gate 126c controls writing of data back to the main memory through bus interface 116. Color converters 118 and 120 facilitate the conversion of the precision of the pixels from one resolution to another as data is read in through bus interface 116, or as it is written back through bus interface 116. [00049] In cache subsystem 100, the pixel addresses coming into pixel address register 102 are bundled into cache line accesses. Cache control mechanism 112 determines if the address at the top of this queue is a cache hit or miss. If this address is a hit, the cache line access is pushed onto a hit buffer. Two physical banks of the cache data RAM 110 may be provided in the prefetch cache, one for RGB and the other for alpha. The alpha bank is disabled (clock-gated) if the alpha buffer is disabled and if the output format is in the RGB mode. Otherwise, both alpha and color may be fetched to maintain the integrity of the cache. The input data to the data path and blending portion 74 of the circuit shown in Fig. 4 may be from data RAM 110 or from prefetch buffer 122, depending on whether the cache line access is a hit or a miss. [00050] As illustrated above, referring to, for example, Figs. 1, 2, and 4, frame buffer prefetch cache 84 is a pixel-centric write-back data cache with a prefetch mechanism 91, located between the pixel rendering output (the output of the shading stage) and the bus interface 19 of the 3D graphics entity. The linear pixel index may be the index that is generated from the rendering process performed by shading stage 25 (see Fig. 2). Those linear pixel indices are grouped into cache line accesses and are queued in a cache line access queue, such as triangle pixel address buffer 76 in Fig. 4. A cache hit or miss is checked on a per-cache-line basis.
The cache line size is pixel-based rather than memory-based, representing consecutive pixels in a linear memory space, disassociating the cache from varying frame buffer formats in the possible different operating environments. Alternatively, the cache line may be non-linear. For example, a given cache line may correspond to a rectangular portion of the image, rather than a complete horizontal line scanned across the image. [00051] Prefetching mechanism 91 attempts to take advantage of the processing time in the blending process. Specifically, as indicated at act 56 in the process shown in Fig. 3, while the process actions not requiring triangle pixels are being commenced by the blending process, the triangle pixels can be prefetched by the prefetch mechanism 91, as indicated by act 54, which specifies that the triangle pixels are stored locally. This can be done on a cache line-by-cache line basis. Accordingly, the acts 52 and 58 shown in Fig. 3 may be performed not only for a given triangle, but may be repeated for each cache line required for all of the pixels of the given triangle. [00052] Fig. 8 illustrates a graphics entity 150, comprising, among other elements, one or more pipeline stages 164, a depth buffer control 162, and a depth buffer memory 160. Depth buffer memory 160 is local to the graphics entity (in the embodiment, embedded in the same IC as the graphics entity), and buffers depth values for access by the pipeline stages, particularly a hidden surface removal stage 165. Depth buffer control 162 facilitates writes and reads, and comprises a temporary storage 163. [00053] The number of cycles required for a read exceeds the number of cycles required for a write. Accordingly, whenever a write request is made, for example, by the hidden surface removal stage 165, the write is postponed by storing the write data in temporary storage 163, until such time as a read access is requested by hidden surface removal stage 165.
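The write-posting scheme of paragraph [00053] can be modeled in a few lines. This is a behavioral sketch rather than the hardware: it holds one posted write in temporary storage and commits it only when a read supplies latency to hide behind. The class and method names are assumptions:

```python
class DepthBufferControl:
    """Behavioral model of depth buffer control 162 with temporary
    storage 163 in front of depth buffer memory 160."""

    def __init__(self, size):
        self.memory = [float("inf")] * size  # depth buffer memory 160
        self.pending = None                  # temporary storage 163

    def write(self, addr, depth):
        # Postpone the write: park it in temporary storage. This
        # sketch holds one posted write, so an older one is committed
        # first if necessary.
        if self.pending is not None:
            self._commit()
        self.pending = (addr, depth)

    def read(self, addr):
        # A read gives latency cycles to hide; forward the pending
        # value if it matches, then commit it while the read completes.
        if self.pending is not None and self.pending[0] == addr:
            value = self.pending[1]
        else:
            value = self.memory[addr]
        self._commit()
        return value

    def _commit(self):
        if self.pending is not None:
            a, d = self.pending
            self.memory[a] = d
            self.pending = None
```

The key behavior is that a write does not touch the (slower) memory by itself; the commit overlaps the next read's latency, matching paragraph [00054].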
[00054] This allows the read latency to be hidden, by overlapping the writing of data to the depth buffer memory 160 with the time between which a read access is made and the time at which the data to be read is transferred from depth buffer memory 160 to the requesting entity, in this case, the hidden surface removal pipeline stage 165. [00055] As illustrated in Fig. 8, the depth buffer memory is organized so that an addressed buffer unit (e.g., an addressable buffer line) stores a given number of pixels, that number being any integer value M. The depth buffer memory addressed buffer units may correspond to pixels in the manner described above with respect to Fig. 5. [00056] A prefetching mechanism 170 may be provided to prefetch depth values from the depth buffer memory 160 and store those values in temporary storage 163. Accordingly, when hidden surface removal stage 165 requests a given depth value, temporary storage 163, functioning as a cache, may not have this pixel depth value, resulting in a "miss," prompting prefetching mechanism 170 to obtain the requested depth value. Prefetching mechanism 170 prefetches a number of values, i.e., M values, by requesting a complete addressed buffer unit. [00057] Fig. 9 is a timing diagram illustrating the read and write timing for the depth buffer memory illustrated in Fig. 8. Waveform (a) is a clock signal, which can be used to control certain functions of the hidden surface removal stage 165, depth buffer control 162, and depth buffer memory 160. Waveform (b) is a request signal sent from the hidden surface removal stage 165 to depth buffer control mechanism 162, indicating that the hidden surface removal stage should take priority, that other requests should be ignored, and that accesses are being made to the depth buffer memory 160, involving the input of addresses to depth buffer control mechanism 162.
The next waveform (c) is a write signal, indicating that a write address is being input during the time period at which that signal is high. Waveform (d) is the waveform within which the address information is provided by the hidden surface removal stage to the depth buffer control mechanism. Waveform (e) is the waveform within which the data to be written is input to the depth buffer control mechanism. Waveform (f) is the waveform output by the depth buffer control mechanism in response to the read access. Waveform (g) is an output data valid signal, which is high when the data being output by the depth buffer control mechanism to the hidden surface removal stage is valid. As shown in Fig. 9, during a first of three epochs, a read access is made. During the second epoch, a write access is made. The data is written to the depth buffer memory during the second epoch as shown in waveform (e), and the data is read from the depth buffer memory in the third epoch as shown in waveform (f). [00058] Each element described hereinabove may be implemented with a hardware processor together with computer memory executing software, or with specialized hardware for carrying out the same functionality. Any data handled in such processing or created as a result of such processing can be stored in any type of memory available to the artisan. By way of example, such data may be stored in a temporary memory, such as in a random access memory (RAM). In addition, or in the alternative, such data may be stored in longer-term storage devices, for example, magnetic disks, rewritable optical disks, and so on. For purposes of the disclosure herein, a computer-readable medium may comprise any form of data storage mechanism, including such different memory technologies as well as hardware or circuit representations of such structures and of such data.
[00059] While the invention has been described with reference to certain embodiments, the words which have been used herein are words of description, rather than words of limitation. Changes may be made, within the purview of the appended claims, without departing from the scope and spirit of the invention in its aspects. Although the invention has been described herein with reference to particular structures, acts, and materials, the invention is not to be limited to the particulars disclosed, but rather extends to all equivalent structures, acts, and materials, such as are within the scope of the appended claims. |
A method is provided for fabricating a semiconductor component that includes a capacitor having a high capacitance per unit area. The component is formed in and on a semiconductor on insulator (SOI) substrate having a first semiconductor layer, a layer of insulator on the first semiconductor layer, and a second semiconductor layer overlying the layer of insulator. The method comprises forming a first capacitor electrode in the first semiconductor layer and depositing a dielectric layer comprising Ba1-xCaxTi1-yZryO3 overlying the first capacitor electrode. A conductive material is deposited and patterned to form a second capacitor electrode overlying the dielectric layer, thus forming a capacitor having a high dielectric constant dielectric. An MOS transistor is then formed in a portion of the second semiconductor layer, the MOS transistor, and especially the gate dielectric of the MOS transistor, being formed independently of forming the capacitor and electrically isolated from the capacitor. |
1. A method for fabricating a semiconductor component (20), the semiconductor component (20) comprising a semiconductor-on-insulator substrate (26) having a first semiconductor layer (32), an insulator layer (30) on the first semiconductor layer, and a second semiconductor layer (28) overlying the insulator layer, the method comprising the steps of: etching a hole (44) through the insulator layer (30) to expose a portion (43) of the first semiconductor layer (32); depositing a first metal layer (50) over the second semiconductor layer (28) and into the hole (44), the first metal layer (50) being in contact with the exposed portion (43) of the first semiconductor layer; depositing a dielectric layer (52) overlying the first metal layer, the dielectric layer (52) comprising Ba1-xCaxTi1-yZryO3; depositing a second metal layer (54) overlying the dielectric layer (52); annealing the dielectric layer (52) at a temperature exceeding 450° C.; removing a portion of the first metal layer (50), the dielectric layer (52), and the second metal layer (54) overlying the second semiconductor layer (28) to expose a surface of the second semiconductor layer; forming a gate insulator layer (56) on the surface of the second semiconductor layer (28); and depositing and patterning a layer of gate electrode material (58) to form a gate electrode (70) overlying the gate insulator layer. 2. The method of claim 1 wherein the step of depositing the first metal layer (50) comprises the step of depositing a layer of nickel, and the step of depositing the second metal layer (54) comprises the step of depositing a layer of nickel. 3. The method of claim 1 wherein the step of depositing the dielectric layer (52) comprises the step of depositing a dielectric layer comprising Ba0.96Ca0.04Ti0.84Zr0.16O3. 4. The method of claim 1 further comprising implanting conductivity determining ions (46) through the hole (44) and into the first semiconductor layer (32) to form a first electrode (48) of the capacitor (24). 5. A
method for fabricating a semiconductor component (20), the semiconductor component (20) comprising a semiconductor-on-insulator substrate (26) having a first semiconductor layer (32), an insulator layer (30) on the first semiconductor layer, and a second semiconductor layer (28) overlying the insulator layer, the method comprising the steps of: etching a first hole extending through the second semiconductor layer (28) to the insulator layer (30); depositing an oxide (38) over the second semiconductor layer and filling the first hole; planarizing the oxide (38) by a chemical mechanical planarization process to expose a surface of the second semiconductor layer (28); etching a second hole (44) extending through the oxide (38) and the insulator layer (30) to expose a portion (43) of the first semiconductor layer (32); implanting conductivity determining ions (46) through the second hole (44) to form an impurity doped region (48) in the first semiconductor layer (32); contacting the impurity doped region (48) with a first metal layer (50); depositing a dielectric layer (52) comprising Ba1-xCaxTi1-yZryO3 over the first metal layer; depositing a second metal layer (54) overlying the dielectric layer; removing a portion of the first metal layer (50), the dielectric layer (52), and the second metal layer (54) overlying the second semiconductor layer (28) by a chemical mechanical planarization process; etching a third hole (92) through the first metal layer (50) to expose a portion of the impurity doped region (48); and forming a first electrically conductive contact (100) to the impurity doped region (48) and a second electrically conductive contact (102) to the second metal layer (54). 6. The method of claim 5 wherein the step of depositing the dielectric layer (52) comprises the step of depositing a dielectric layer comprising Ba0.96Ca0.04Ti0.84Zr0.16O3. 7. The method of claim 6 wherein the step of depositing the dielectric layer (52) further comprises the step of doping the layer
comprising Ba0.96Ca0.04Ti0.84Zr0.16O3 with a dopant material. 8. A method for fabricating a semiconductor component (20), the semiconductor component (20) comprising a semiconductor-on-insulator substrate (26) having a first semiconductor layer (32), an insulator layer (30) on the first semiconductor layer, and a second semiconductor layer (28) overlying the insulator layer, the method comprising the steps of: forming a first capacitor electrode (48) in the first semiconductor layer (32); depositing a dielectric layer (52) including Ba1-xCaxTi1-yZryO3 over the first capacitor electrode; depositing and patterning a conductive material (54) to form a second capacitor electrode overlying the dielectric layer; and forming an MOS transistor (22) in a portion of the second semiconductor layer (28), the MOS transistor (22) being electrically isolated from the second capacitor electrode by a shallow trench isolation region (38). 9. The method of claim 8 wherein the step of depositing the dielectric layer (52) comprises the step of depositing a dielectric layer comprising Ba0.96Ca0.04Ti0.84Zr0.16O3. 10. The method of claim 9 wherein the step of depositing the dielectric layer (52) further comprises the step of doping the dielectric layer. |
Method for Manufacturing a Semiconductor Component Including a Capacitor Having a High Capacitance per Unit Area

Technical Field

The present invention relates generally to methods for fabricating semiconductor components and, more particularly, to methods for fabricating semiconductor components that include capacitors having high dielectric constant dielectrics.

Background Art

Most current integrated circuits (ICs) are implemented using a plurality of interconnected field effect transistors (FETs), also known as metal oxide semiconductor field effect transistors (MOSFETs or MOS transistors). The IC is typically formed using both P-channel and N-channel FETs, so the IC is referred to as a complementary MOS or CMOS circuit. Some improvements in the performance of FET ICs can be achieved by forming the FETs in a thin layer of semiconductor material overlying an insulator layer. One of the benefits of such a semiconductor-on-insulator (SOI) FET is that it exhibits a lower junction capacitance and therefore operates at higher speeds. The MOS transistors formed in the SOI layer are interconnected to implement the desired circuit function. Voltage buses are also connected to appropriate devices to power these devices as required by the circuit function. The voltage buses can include, for example, a Vdd bus, a Vcc bus, a Vss bus, and the like, and can include buses coupled to an external power source as well as buses coupled to an internally generated or internally changed power supply. As used herein, these terms will be used for both external and internal buses. Since various nodes in the circuit are charged or discharged during operation of the circuit, the various buses must source or sink current to these nodes. In particular, as the switching speed of an integrated circuit increases, the need to supply or draw current via a bus may cause significant voltage spikes on the bus due to the inherent inductance of the bus.
In order to avoid logic errors that may be caused by voltage spikes, it has become commonplace to place decoupling capacitors between the buses. For example, such decoupling capacitors can be connected between the Vdd and Vss buses. These decoupling capacitors are typically distributed along the length of the bus. The capacitor is usually formed as a MOS capacitor, such that one plate of the capacitor is formed of the same material used to form the gate electrode of the MOS transistors, the other plate of the capacitor is formed by an impurity doped region in the SOI layer, and the dielectric separating the two plates is formed by the gate dielectric. One problem with such decoupling capacitors formed in a conventional manner is the size of the capacitor. In order to be able to fabricate an ever-increasing number of components on a semiconductor chip of a given size, there has been an ongoing effort to reduce the size of integrated circuit components. The size of conventionally fabricated decoupling capacitors is an obstacle to this ongoing effort. In order to increase the capacitance per unit area of a conventionally fabricated decoupling capacitor, and thereby reduce the size of the capacitor, the thickness of the capacitor dielectric must be reduced. Reducing the thickness of the capacitor dielectric, however, results in increased leakage current and reduced reliability of the capacitor. Furthermore, the need to use the same dielectric material for both the gate dielectric of the MOS transistors and the capacitor dielectric is disadvantageous because such a requirement limits the flexibility of the manufacturing process. Accordingly, it would be desirable to provide a method for fabricating an integrated circuit that includes a capacitor having a high capacitance per unit area without relying on a very thin dielectric layer.
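The trade-off can be quantified with the parallel-plate estimate C/A = k * eps0 / t. The relative permittivities and thicknesses below are illustrative assumptions (the patent gives no numbers); they show how a high-K film can out-perform a much thinner conventional oxide:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_area(k, thickness_m):
    """Parallel-plate capacitance per unit area, C/A = k * eps0 / t."""
    return k * EPS0 / thickness_m

# Illustrative comparison: thin SiO2 gate oxide vs. a thicker high-K film.
sio2_density = capacitance_per_area(3.9, 2e-9)       # k ~ 3.9, 2 nm
high_k_density = capacitance_per_area(300.0, 50e-9)  # assumed k ~ 300, 50 nm
```

Even at 25 times the thickness, the assumed high-K film yields roughly three times the capacitance per unit area, which is the motivation for a high-K capacitor dielectric that avoids both a very thin layer and its leakage problems.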
Furthermore, it is desirable to provide a method for fabricating an integrated circuit including a capacitor in which the capacitor dielectric is formed separately from the gate insulator of the MOS transistors of the IC. Other desirable features and characteristics of the present invention will become apparent from the detailed description and the appended claims.

Summary of the invention

The present invention provides a method for fabricating a semiconductor component comprising a capacitor having a high capacitance per unit area. The component is formed in and on a semiconductor-on-insulator (SOI) substrate having a first semiconductor layer, an insulator layer on the first semiconductor layer, and a second semiconductor layer overlying the insulator layer. The method includes forming a first capacitor electrode in the first semiconductor layer and depositing a dielectric layer overlying the first capacitor electrode, the dielectric layer comprising Ba1-xCaxTi1-yZryO3. A conductive material is deposited and patterned to form a second capacitor electrode overlying the dielectric layer, thereby forming a capacitor having a high dielectric constant dielectric.
A MOS transistor is then formed in a portion of the second semiconductor layer; in particular, the gate dielectric of the MOS transistor is formed independently of the capacitor dielectric and is electrically isolated from the capacitor.

Brief description of the drawings

The present invention will be described in conjunction with the drawing figures, wherein like numerals denote like elements, and wherein FIGS. 1 through 12 illustrate, in cross section, method steps for fabricating a semiconductor component in accordance with an embodiment of the present invention.

Detailed description

The following detailed description is merely illustrative in nature and is not intended to limit the invention or its application and uses. Furthermore, there is no intention to be bound by any theory presented in the preceding technical field, background, or summary, or in the following detailed description.

A new method for fabricating a semiconductor integrated circuit (IC) is disclosed in U.S. Patent No. 6,936,514, the disclosure of which is incorporated herein by reference. The present invention overcomes certain shortcomings of the method disclosed in that patent by providing a method for fabricating an IC that incorporates a high dielectric constant ("high-k") insulator material as the capacitor dielectric to increase the capacitance efficiency (the capacitance per unit area) and to reduce leakage current without affecting the gate insulator of the transistors that implement the IC.

FIGS. 1 through 12 illustrate, in cross section, method steps for fabricating a semiconductor component 20 in accordance with an embodiment of the present invention. Semiconductor component 20 includes a MOS transistor 22 and a decoupling capacitor 24. Those skilled in the art will appreciate that an IC can include a large number of MOS transistors similar to MOS transistor 22, as well as a large number of decoupling capacitors such as decoupling capacitor 24.
The MOS transistors can include both N-channel and P-channel MOS transistors, and these transistors can be arrayed and interconnected to implement the desired integrated circuit. Decoupling capacitors can be coupled between appropriate locations (e.g., the Vdd and Vss buses) to help regulate the voltage on those buses. Although the term "MOS device" properly refers to a device having a metal gate electrode and an oxide gate insulator, the term is used throughout this specification to mean any semiconductor transistor that includes a conductive gate electrode (metal or other conductive material) positioned over a gate insulator (oxide or other insulator) which, in turn, is positioned over a semiconductor substrate. The various steps in the fabrication of MOS components are well known, so in the interest of brevity many conventional steps will be mentioned only briefly or omitted entirely without providing the well known process details.

As shown in FIG. 1, a method in accordance with an embodiment of the present invention begins by providing a semiconductor-on-insulator (SOI) substrate 26 comprising a thin semiconductor layer 28 over an insulator layer 30, the insulator layer 30 in turn being supported by an additional semiconductor layer 32. Preferably, both semiconductor layer 28 and semiconductor layer 32 are single crystal silicon layers, although other semiconductor materials may also be used. As used herein, the terms "silicon layer" and "silicon substrate" encompass the relatively pure or lightly impurity-doped single crystal silicon materials typically used in the semiconductor industry, as well as silicon admixed with other elements (such as germanium, carbon, and the like) to form a substantially single crystal semiconductor material.
Although those skilled in the art will appreciate that the semiconductor material could instead be another material, such as germanium or a compound semiconductor material, for ease of discussion the semiconductor material described herein will be referred to by the term "silicon" as defined above.

The SOI substrate 26 can be formed by any of several known processes, such as the well known layer transfer technique. In this technique, a high dose of hydrogen is implanted into a subsurface region of an oxidized single crystal silicon wafer to form a hydrogen-stressed subsurface layer. The implanted wafer is then flip-bonded to a single crystal silicon substrate 32. Next, a two-stage heat treatment is performed to split the hydrogen-implanted wafer along the implanted region and to strengthen the bonding, leaving the thin single crystal silicon layer 28 bonded to the single crystal silicon substrate and separated from that substrate by the dielectric insulator layer 30. The single crystal silicon layer is then thinned and polished (e.g., by chemical mechanical planarization (CMP) techniques) to a thickness of about 50 to 100 nanometers (nm), depending on the circuit function being implemented. Preferably, the single crystal silicon layer and the single crystal silicon carrier substrate have an electrical resistance of at least about 1 to 35 ohms per square. The silicon layer 28 may be impurity doped either N-type or P-type, but is preferably doped P-type, and the substrate layer 32 is preferably doped P-type. The dielectric insulator layer 30, typically silicon dioxide, preferably has a thickness of about 50 to 200 nm. Preferably, a pad oxide layer and a silicon nitride layer (shown as a single layer 29 in this and subsequent figures) are formed on the surface of the silicon layer 28.
The pad oxide may be grown by thermal oxidation to a thickness of, for example, 5 to 10 nm, and the silicon nitride may be deposited to a thickness of 10 to 50 nm by, for example, low pressure chemical vapor deposition (LPCVD). Those skilled in the art are aware of the many uses of the pad oxide/nitride layer, such as protecting the surface of the silicon layer 28, providing a polish stop, and the like.

As shown in FIG. 2, the method continues by electrically isolating various regions of the silicon layer 28, for example by forming shallow trench isolation (STI) regions 34, 36, and 38 that extend through the thickness of the silicon layer. As is well known, there are many processes that can be used to form the STI, so the process need not be described in detail here. In general, STI comprises shallow trenches that are etched into the surface of the semiconductor substrate and then filled with an insulating material. After the trenches are filled with an insulating material such as silicon oxide, the surface is usually planarized, for example by chemical mechanical planarization (CMP); the pad oxide/nitride layer serves as a stop for the CMP process and protects the underlying portions of the surface of the silicon layer 28. The STI is used to isolate the MOS transistor 22 from the decoupling capacitor 24 and to provide isolation between transistors as required by the circuit being implemented.

As shown in FIG. 3, a photoresist layer 40 is applied over the STI, the pad oxide/nitride layer 29, and the silicon layer 28, and is patterned to form an opening 42 that exposes a portion of STI 38. As shown in FIG. 4, the exposed portion of STI 38 is etched, for example by reactive ion etching (RIE), using the patterned photoresist as an etch mask. The reactive ion etching is continued through oxide layer 30 to expose a portion 43 of silicon layer 32.
The etched via 44 thus extends through both STI 38 and oxide layer 30 to the underlying silicon.

In accordance with an embodiment of the present invention, N-type conductivity determining ions are implanted (as indicated by arrows 46) through via 44 to form an N-type impurity doped region 48 in the exposed portion 43 of silicon layer 32, as shown in FIG. 5. The patterned photoresist mask 40 can serve as the ion implantation mask for this step. The pad oxide/nitride layer 29 protects the surface of the silicon layer 28 from damage by the photoresist and by the chemicals used to remove the photoresist.

After removing the patterned photoresist mask and carefully cleaning the surface of the doped region, a metal layer 50 is deposited onto the surface of the doped region and over the silicon layer 28 and the STI regions, as shown in FIG. 6. The metal layer can be deposited by physical vapor deposition (PVD), for example by magnetron sputtering. Preferably, metal layer 50 is a nickel layer having a thickness of about 100 nm. After the metal layer is deposited, a layer 52 of dielectric material comprising barium, calcium, titanium, zirconium, and oxygen (BCZT) is deposited onto metal layer 50. Preferably, the BCZT layer has a composition defined by Ba1-xCaxTi1-yZryO3, and most preferably the composition Ba0.96Ca0.04Ti0.84Zr0.16O3. The BCZT layer can be deposited by radio frequency (RF) magnetron sputtering by the method described in "Low temperature deposited Ba0.96Ca0.04Ti0.84Zr0.16O3 thin films on Pt electrodes by radio frequency magnetron sputtering" by Cramer et al., Applied Physics Letters, Vol. 84, No. 5, February 2004, pages 771-773, which is incorporated herein by reference in its entirety. Preferably, the BCZT layer is deposited to a thickness of about 20 nm. In accordance with one embodiment of the invention, the BCZT layer is doped in situ with an impurity to reduce leakage current through the BCZT layer.
The BCZT layer can be sputtered by RF magnetron sputtering from a target that includes barium, calcium, titanium, zirconium, oxygen, and a dopant material such as ruthenium. After the BCZT layer is deposited, a second metal layer 54 is deposited onto the BCZT layer. Preferably, metal layer 54 is a nickel layer deposited by PVD to a thickness greater than about 150 nm. In a preferred embodiment of the invention, metal layer 50 and metal layer 54 are both nickel, and the metal layer 50/BCZT layer 52/metal layer 54 stack is deposited sequentially by RF magnetron sputtering without breaking the seal of the sputtering apparatus. The preferred composition of the BCZT layer produces a stable, low-leakage layer having a dielectric constant greater than about 10 that is compatible with subsequent standard MOS processing. Furthermore, the pad oxide/nitride layer 29 prevents unwanted contact between the surface of the silicon layer 28 and the deposited metal layers.

As shown in FIG. 7, the method in accordance with an embodiment of the present invention continues by planarizing the metal/BCZT/metal layers, for example by chemical mechanical planarization (CMP) using the pad oxide/nitride layer 29 as a polish stop, to remove the deposited layers overlying the silicon layer 28 and the STI regions. Metal layer 50, together with impurity doped region 48, forms one plate of decoupling capacitor 24; BCZT layer 52 forms the dielectric of the capacitor; and metal layer 54 forms the other plate of the capacitor. The BCZT layer can be annealed, either before or after the planarization, to increase the dielectric constant of the layer. Preferably, the layer is rapid thermal annealed (RTA) at a temperature greater than 450°C for a period of about 5 to 10 seconds, and most preferably is annealed at a temperature greater than 1000°C (e.g., a temperature of about 1100 to 1150°C) for about 10 seconds.
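A rough sanity check of the numbers above can be made with the equivalent oxide thickness, EOT = t·k_SiO2/k, which gives the SiO2 thickness that would produce the same capacitance per unit area as the high-k film. The values below take the BCZT dielectric constant at the stated lower bound of about 10 and k of SiO2 as 3.9; both are illustrative assumptions, and real films may perform considerably better.

```python
K_SIO2 = 3.9  # conventional dielectric constant of SiO2

def equivalent_oxide_thickness(t_film_nm, k_film):
    """SiO2 thickness (nm) giving the same C/A as a film of thickness t and constant k."""
    return t_film_nm * K_SIO2 / k_film

# 20 nm BCZT film at the k ~ 10 lower bound: electrically like a sub-8 nm
# oxide, while remaining a physically robust 20 nm thick layer.
eot_nm = equivalent_oxide_thickness(20.0, 10.0)
```

The point of the calculation is that the capacitor dielectric achieves the capacitance of a much thinner oxide without incurring the leakage and reliability penalties of actually thinning it.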
Annealing at such a high temperature is possible because the anneal occurs before the fabrication of MOS transistor 22. The high temperature anneal increases the dielectric constant of the BCZT layer above what is achievable with a low temperature thermal anneal.

In accordance with a further embodiment of the present invention (not illustrated), the CMP process may be continued after the planarization step to recess the metal/BCZT/metal layers below the plane of the upper surface of the silicon layer 28. In accordance with this embodiment of the invention, a layer of oxide or other dielectric material can be deposited over the recessed material and planarized by an additional CMP process. The layer of oxide or other dielectric material caps the metal/BCZT/metal material and isolates that material from the subsequent processing steps used to fabricate the conventional MOS devices that implement the desired integrated circuit function.

To begin fabrication of the MOS transistor 22, after the CMP and annealing steps the pad oxide/nitride layer 29 is removed and the exposed surface of the silicon layer 28 is cleaned. MOS transistor 22 can be fabricated in accordance with standard MOS processing, integrated with the steps used to complete the fabrication and interconnection of capacitor 24 as required by the circuit function. As shown in FIG. 8, a thin gate oxide layer 56 is thermally grown on the surface of the silicon layer 28. Preferably, gate oxide 56 has a thickness of about 1 to 5 nm. The gate oxide can also be deposited by, for example, chemical vapor deposition (CVD) or low pressure chemical vapor deposition (LPCVD). As noted above, the gate insulator need not be a silicon oxide, but may be, for example, a high-k dielectric material such as HfSiO or the like. The formation of the gate insulator is independent of the capacitor insulator 52.
In accordance with an embodiment of the present invention, an undoped polysilicon layer 58 having a thickness of about 50 nm is deposited on the gate insulator. The polysilicon can be deposited, for example, by CVD through the reduction of silane. A photoresist layer 60 is applied to the surface of the polysilicon layer. Although not illustrated, a layer of anti-reflective coating material, as is well known, may be deposited between layers 58 and 60 to facilitate the subsequent patterning of polysilicon layer 58.

Photoresist layer 60 is patterned for use as an etch mask in the subsequent patterning of polysilicon layer 58 to form the gate electrode of MOS transistor 22 and the gate electrodes of the other MOS transistors of the IC. As shown in FIG. 9, the photoresist is preferably patterned into a regular array of masks 62, 64, 66, and 68. Mask 62 is used to pattern polysilicon layer 58 to form the gate electrode 70 of MOS transistor 22, while masks 64, 66, and 68 are used to form dummy gates 72, 74, and 76. The regular mask pattern reduces the proximity effects associated with the photolithography step used in forming the gate electrode 70 and the dummy gates. Using the mask array as an etch mask, polysilicon layer 58 is etched, for example by RIE, to form gate electrode 70 and dummy gates 72, 74, and 76.

After the patterned photoresist layer 60 is removed, sidewall spacers 80 may be formed on the sidewalls of gate electrode 70 and dummy gates 72, 74, and 76. As is well known, sidewall spacers can be formed by depositing a layer of silicon oxide or other spacer-forming material and anisotropically etching the spacer-forming material, for example by RIE, to remove the material from the horizontal surfaces while leaving spacers on the vertical surfaces. As shown in FIG.
10, the source region 82 and the drain region 84 of MOS transistor 22 are formed by implanting conductivity determining ions into the silicon layer 28, using gate electrode 70, sidewall spacers 80, and a patterned photoresist layer (not shown) as ion implantation masks. The patterned photoresist layer protects those portions of the circuit that are not to be implanted at the same time as the source and drain regions. If MOS transistor 22 is an N-channel transistor, the implanted ions can be, for example, arsenic or phosphorus; if MOS transistor 22 is a P-channel transistor, the implanted ions can be boron. Those skilled in the art will appreciate that multiple sidewall spacers and multiple ion implantations can be used in the fabrication of MOS transistor 22, and that multiple N-channel and/or P-channel MOS transistors can be fabricated to implement the desired circuit function.

A layer of dielectric material 90 is deposited over MOS transistor 22 and decoupling capacitor 24, and the top surface of the layer is planarized, for example by CMP. One or more contact openings 92 are etched through dielectric material 90, STI 38, and oxide layer 30 to expose a portion 94 of impurity doped region 48. Preferably, the contact openings 92 are also formed adjacent to, or partially through, the metal layer 50 so that a portion of metal layer 50 is exposed in the contact openings. As shown in FIG. 11, the contact resistance to portion 94 can be lowered by ion implanting the surface of doped region 48 to form a heavily impurity doped region, or by forming a metal silicide on that surface; the heavily doped region or metal silicide region is indicated by reference numeral 96. The heavily doped region or metal silicide region can be formed through the contact openings 92 using the remaining portion of dielectric material 90 as a mask.
One or more additional contact openings 98 are then etched through dielectric material 90 to expose a portion of the second metal layer 54. Although not illustrated, those skilled in the art will appreciate that additional contact openings (e.g., to the source, drain, or gate electrode of MOS transistor 22) can be etched at the same time as contact openings 92 or 98.

As shown in FIG. 12, contact openings 92 and 98 are filled with conductive plugs 100 and 102, respectively. Conductive plugs 100 and 102 can be, for example, tungsten plugs formed from successive layers of titanium, titanium nitride, and tungsten. Excess conductive material is removed from the surface of dielectric layer 90 by CMP. Because the metal layer is exposed along the side of contact opening 92, conductive plug 100 makes electrical contact with impurity doped region 48 and preferably also with the first metal layer 50. The resistance to the bottom plate of capacitor 24 is reduced by contacting both the metal layer and the impurity doped region 48.

Although not illustrated, fabrication of the integrated circuit can be completed in the well known manner by, for example, depositing and patterning additional dielectric layers, etching contact openings through those layers, and depositing and patterning metal layers to contact and interconnect the various devices of the integrated circuit, and the like. These steps are well known and need not be described in detail here.

While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiments are only examples and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description provides those skilled in the art with a convenient road map for implementing the exemplary embodiments.
It should be understood that various changes can be made in the function and arrangement of the elements without departing from the scope of the invention as set forth in the appended claims.
Embodiments of apparatuses and methods for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system are disclosed. In one embodiment, an apparatus includes two processor cores, a micro-checker, a global checker, and fault logic. The micro-checker is to detect whether a value from a structure in one core matches a value from the corresponding structure in the other core. The global checker is to detect lockstep failures between the two cores. The fault logic is to cause the two cores to be resynchronized if there is a lockstep error but the micro-checker has detected a mismatch. |
CLAIMS What is claimed is: 1. An apparatus comprising: a first core including a first structure; a second core including a second structure; a micro-checker to detect whether a first value from the first structure matches a second value from the second structure; a global checker to detect a lockstep failure between the first core and the second core; and fault logic to cause the first core and the second core to be resynchronized if the global checker detects the lockstep failure and the micro-checker detects a mismatch between the first value and the second value. 2. The apparatus of claim 1, wherein the micro-checker includes a comparator to compare the first value and the second value. 3. The apparatus of claim 1, wherein the global checker includes a comparator to compare a first output of the first core and a second output of the second core. 4. The apparatus of claim 1, wherein the fault logic is also to indicate the detection of an uncorrectable error if the global checker detects the lockstep failure and the micro- checker detects that the first value matches the second value. 5. The apparatus of claim 1, wherein: the first core also includes a third structure and a fourth structure; the second core also includes a fifth structure and a sixth structure; the first structure includes first fingerprint logic to generate the first value based on a third value from the third structure and a fourth value from a fourth structure; and the second structure includes second fingerprint logic to generate the second value based on a fifth value from the fifth structure and a sixth value from the sixth structure. 6. The apparatus of claim 1, wherein: the architectural state of the first core is independent of the first value; and the architectural state of the second core is independent of the second value. 7. The apparatus of claim 6, wherein: the first structure is a first prediction structure; and the second structure is a second prediction structure. 8. 
The apparatus of claim 1, wherein the fault logic is also to cause the first value and the second value to be regenerated if the global checker detects the lockstep failure and the micro-checker detects the mismatch. 9. The apparatus of claim 8, wherein: the first structure is a first cache; the first result is a first cache entry; the second structure is a second cache; and the second result is a second cache entry. 10. The apparatus of claim 9, wherein the fault logic is also to cause the first cache entry and the second cache entry to be reloaded if the global checker detects the lockstep failure and the micro-checker detects the mismatch. 11. A method comprising: checking whether a first value from a first structure in a first core matches a second value from a second structure in a second core; detecting a lockstep failure between the first core and the second core; and resynchronizing the first core and the second core if a mismatch is detected between the first value and the second value. 12. The method of claim 11, further comprising indicating the detection of an uncorrectable error if the first value matches the second value. 13. The method of claim 12, further comprising: generating the first value based on a third value from a third structure in the first core and a fourth value from a fourth structure in the first core; and generating the second value based on a fifth value from the fifth structure in the second core and a sixth value from a sixth structure in the second core. 14. The method of claim 13, wherein: generating the first value includes generating a checksum based on the third value and the fourth value; and generating the second value includes generating a checksum based on the fifth value and the sixth value. 15. 
The method of claim 11, further comprising: predicting whether a first instruction is to be executed by the first core based on the first value; and predicting whether a second instruction is to be executed by the second core based on the second value. 16. The method of claim 11, further comprising regenerating the first value and the second value if the mismatch is detected. 17. The method of claim 16, further comprising: comparing the first value to the regenerated first value; comparing the second value to the regenerated second value; synchronizing the first core to the second core if the second value matches the regenerated second value; and synchronizing the second core to the first core if the first value matches the regenerated first value. 18. The method of claim 16, wherein the first structure is a first cache, the first value is a first cache entry, the second structure is a second cache, and the second value is a second cache entry, wherein regenerating the first value and the second value includes reloading the first cache entry and the second cache entry. 19. A system comprising: a dynamic random access memory; a first core including a first structure; a second core including a second structure; a micro-checker to detect whether a first value from the first structure matches a second value from the second structure; a global checker to detect a lockstep failure between the first core and the second core; and fault logic to cause the first core and the second core to be resynchronized if the global checker detects the lockstep failure and the micro-checker detects a mismatch between the first value and the second value. |
REDUCING THE UNCORRECTABLE ERROR RATE IN A LOCKSTEPPED DUAL-MODULAR REDUNDANCY SYSTEMBACKGROUND1. Field[0001] The present disclosure pertains to the field of data processing, and more particularly, to the field of error mitigation in data processing apparatuses.2. Description of Related Art[0002] As improvements in integrated circuit manufacturing technologies continue to provide for smaller dimensions and lower operating voltages in microprocessors and other data processing apparatuses, makers and users of these devices are becoming increasingly concerned with the phenomenon of soft errors. Soft errors arise when alpha particles and high-energy neutrons strike integrated circuits and alter the charges stored on the circuit nodes. If the charge alteration is sufficiently large, the voltage on a node may be changed from a level that represents one logic state to a level that represents a different logic state, in which case the information stored on that node becomes corrupted. Generally, soft error rates increase as circuit dimensions decrease, because the likelihood that a striking particle will hit a voltage node increases when circuit density increases. Likewise, as operating voltages decrease, the difference between the voltage levels that represent different logic states decreases, so less energy is needed to alter the logic states on circuit nodes and more soft errors arise. [0003] Blocking the particles that cause soft errors is extremely difficult, so data processing apparatuses often include techniques for detecting, and sometimes correcting, soft errors. These error mitigation techniques include dual-modular redundancy ("DMR") and triple-modular redundancy ("TMR"). With DMR, two identical processors or processor cores execute the same program in lockstep, and their results are compared. 
With TMR, three identical processors are run in lockstep.[0004] An error in any one processor is detectable using DMR or TMR, because the error will cause the results to differ. TMR provides an advantage in that recovery from the error may be accomplished by assuming that a matching result of two of the three processors is the correct result.[0005] Recovery in a DMR system is also possible by checking all results before they are committed to a register or otherwise allowed to affect the architectural state of the system. Then, recovery may be accomplished by re-executing all instructions since the last checkpoint if an error is detected. However, this approach may not be practical due to latency or other design constraints. Another approach is to add a rollback mechanism that would permit an old architectural state to be recovered if an error is detected. This approach may also be impractical due to design complexity, and may suffer from the problem that the results of re-execution from a previous state may differ from the original results due to the occurrence of a non-deterministic event, such as an asynchronous interrupt, or the re-execution of an output operation that is not idempotent. [0006] Additionally, DMR and TMR may actually increase the error rate because their implementation requires additional circuitry subject to soft errors, and because they may detect errors that would otherwise go undetected but not result in system failure. 
For example, an error in a structure used to predict which branch of a program should be speculatively executed may result in an incorrect prediction, but the processor would automatically recover when the branch condition was ultimately evaluated.BRIEF DESCRIPTION OF THE FIGURES[0007] The present invention is illustrated by way of example and not limitation in the accompanying figures.[0008] Figure 1 illustrates an embodiment of the present invention in a multicore processor.[0009] Figure 2 illustrates an embodiment of the present invention using micro-check fingerprint logic to reduce cross-core bandwidth.[0010] Figure 3 illustrates an embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system.[0011] Figure 4 illustrates another embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system.[0012] Figure 5 illustrates another embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system.[0013] Figure 6 illustrates an embodiment of the present invention in a lockstepped dual-modular redundancy system. DETAILED DESCRIPTION[0014] The following describes embodiments of apparatuses and methods for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system. In the following description, numerous specific details, such as component and system configurations, may be set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art, that the invention may be practiced without such specific details. Additionally, some well known structures, circuits, techniques, and the like have not been described in detail, to avoid unnecessarily obscuring the present invention.[0015] DMR may be used to provide error detection and correction.
However, it may also increase the error rate by detecting errors that would not result in system failure. Embodiments of the present invention may provide for reducing the error rate in a DMR system by using micro-checkers to detect such "false" errors so that they may be ignored. Other embodiments may provide for reducing the error rate in a DMR system by using micro-checkers for certain structures, such as a cache, for which values may be regenerated and compared to the original values to determine which of the two processors should be synchronized to the state of the other processor, thus avoiding the cost of a complete rollback mechanism. Such embodiments of the present invention may be desirable to provide some of the benefits of DMR (e.g., error detection and correction capability), while reducing some of the drawbacks (e.g., false errors, cost of complete recovery capability). [0016] Furthermore, embodiments of the present invention may be desirable to avoid protecting certain structures with parity or error correction code mechanisms, which may be costly, and may also be unnecessary for structures incapable of corrupting architectural state. Connecting these structures to a micro-checker according to an embodiment of the present invention may provide the capability to recover from an error without a need to determine, through parity or otherwise, in which of two DMR cores the error has occurred. [0017] Figure 1 illustrates an embodiment of the present invention in multicore processor 100. Generally, a multicore processor is a single integrated circuit including more than one execution core. An execution core includes logic for executing instructions. In addition to the execution cores, a multicore processor may include any combination of dedicated or shared resources within the scope of the present invention. 
A dedicated resource may be a resource dedicated to a single core, such as a dedicated level one cache, or may be a resource dedicated to any subset of the cores. A shared resource may be a resource shared by all of the cores, such as a shared level two cache or a shared external bus unit supporting an interface between the multicore processor and another component, or may be a resource shared by any subset of the cores. The present invention may also be embodied in an apparatus other than a multicore processor, such as in a multiprocessor system having at least two processors, each with at least one core. [0018] Processor 100 includes core 110 and core 120. Cores 110 and 120 may be based on the design of any of a variety of different types of processors, such as a processor in the Pentium(R) Processor Family, the Itanium(R) Processor Family, or other processor family from Intel Corporation, or another processor from another company. Processor 100 also includes global checker 130 and micro-checker 140. [0019] Global checker 130 compares an output from core 110 to an output from core 120 according to any known technique for detecting a lockstep fault in a DMR system, such as with a comparator circuit. For example, the outputs of cores 110 and 120 may be compared when cores 110 and 120 synchronously run identical copies of a program with identical inputs.[0020] Core 110 includes structure 111, which may be any circuit, logic, functional block, module, unit or other structure that generates or holds a value that should match a corresponding value from corresponding structure 121 included in core 120 when cores 110 and 120 operate in lockstep.[0021] In one embodiment, structures 111 and 121 may be structures that cannot alter the architectural state of processor 100 or a system including processor 100.
For example, structures 111 and 121 may be prediction structures, such as conditional branch predictors, jump predictors, return-address predictors, or memory dependence predictors. [0022] In another embodiment, structures 111 and 121 may be structures whose content is duplicated elsewhere in a system including processor 100, or may be regenerated. For example, structures 111 and 121 may be cache structures, where each unmodified cache line or entry is a value that may be regenerated by reloading the cache line or entry from a higher level cache or other memory in the system. [0023] Micro-checker 140 compares a value from structure 111 to the corresponding value from structure 121. In different embodiments, the value compared may vary depending on the nature of structures 111 and 121, and may be, for example, a single bit indicating whether a conditional branch should be taken or a jump should occur, a multiple bit predicted return address, or a multiple bit cache line or entry. Therefore, the nature of micro-checker 140 may vary in different embodiments, and the comparison may be performed according to any known technique, such as with an exclusive-OR gate or a comparator circuit. [0024] In one embodiment, micro-checker 140 may be configured to retain the result of its comparison at least until lockstepped program execution has reached a point where a lockstep fault detected by global checker 130 could not be attributed to a mismatch between the values compared by micro-checker 140. This configuration of micro-checker 140 may be accomplished without any special storage element, for example, if micro-checker 140 is combinational logic and the values compared remain static at least until each lockstep fault detection point is reached, or may be accomplished with a register or other storage element to store the result of micro-checker 140. In other embodiments, micro-checker 140 need not be configured to retain the result of its comparison.
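The retained-result behavior of micro-checker 140 described in paragraph [0024] can be sketched in a few lines of Python. This is an illustrative model, not the hardware of the specification: the class and method names are invented for the sketch, and the storage element is modeled as a simple flag that latches any mismatch and is cleared when the result is consumed at a lockstep fault-detection point.

```python
class MicroChecker:
    """Model of a micro-checker that compares a value from each core's
    structure and retains any mismatch until the next fault-check point."""

    def __init__(self):
        self._mismatch = False  # retained comparison result, per [0024]

    def compare(self, value_a, value_b):
        # In hardware this could be an exclusive-OR gate or comparator;
        # here a simple inequality test that latches a mismatch.
        if value_a != value_b:
            self._mismatch = True

    def consume_result(self):
        # Read and clear the retained result at the fault-detection point.
        mismatch = self._mismatch
        self._mismatch = False
        return mismatch
```

One design note: latching rather than overwriting the result means a mismatch on any compared value since the last check point is visible when the global checker fires, matching the retention requirement in paragraph [0024].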
[0025] Processor 100 also includes fault logic 150. Fault logic 150 may be any hardware, microcode, programmable logic, processor abstraction layer, firmware, software, or other logic to dictate the response of processor 100 to the detection of a lockstep fault by global checker 130. Upon the detection of a lockstep fault by global checker 130, if micro-checker 140 has detected a mismatch between the value from structure 111 and the corresponding value from structure 121, fault logic 150 causes core 110 and core 120 to be resynchronized as described below. However, if micro-checker 140 has not detected a mismatch between the value from structure 111 and the corresponding value from structure 121, fault logic 150 indicates the detection of an uncorrectable error according to any known approach to indicating a system failure, such as reporting a fault code and halting operation.[0026] Although Figure 1 shows only structure 111 in core 110 and structure 121 in core 120 as providing inputs to micro-checker 140, any number of structures and micro-checkers may be used within the scope of the present invention. For example, Figure 2 shows an embodiment of the present invention using multiple structures per core, a single micro-checker, and fingerprint logic to reduce cross-core bandwidth. [0027] In Figure 2, processor 200 includes cores 210 and 220, global checker 230, micro-checker 240, and fault logic 250. Core 210 includes structures 211, 213, and 215, and processor core 220 includes structures 221, 223, and 225.[0028] Structure 211 includes fingerprint logic 212 to generate a fingerprint based on values from structures 213 and 215, where the structures 213 and 215 may be any structures as described above with respect to structure 111 of Figure 1.
Similarly, structure 221 includes fingerprint logic 222 to generate a fingerprint, according to the same approach as used by fingerprint logic 212, based on values from structures 223 and 225.[0029] Fingerprint logic 212 and fingerprint logic 222 may be implemented with any known approach to combining two or more values into a single value, such as the generation of a checksum using a cyclic redundancy checker. Fingerprint logic 212 and fingerprint logic 222 may be used so that micro-checker 240 may detect mismatches between structures 213 and 223 and structures 215 and 225, instead of using one micro-checker for structures 213 and 223 and another for structures 215 and 225. [0030] Fingerprint logic 212 and fingerprint logic 222 may also be used to reduce cross-core bandwidth. For example, fingerprint logic 212 may be used to combine values from structures 213 and 215 such that the number of bits in the output of fingerprint logic 212 is less than the total number of bits in the two values. While in some embodiments it may be desirable for fingerprint logic 212 to output unique values for every combination of inputs, in other embodiments it may be desirable to accept less than 100% accuracy from micro-checker 240 in exchange for a reduction in the number of bits connected to each input of micro-checker 240. Less than 100% accuracy of micro-checker 240 may be acceptable because a failure of micro-checker 240 to detect a correctable lockstep failure would be interpreted as an uncorrectable lockstep failure, but not as correct lockstep operation that could lead to corruption of the system.[0031] Figure 3 illustrates an embodiment of the present invention in method 300 for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system including processor 100 of Figure 1, where structures 111 and 121 are structures that cannot alter architectural state, e.g., prediction structures.[0032] In box 310, cores 110 and 120 are operating in lockstep.
In box 311, structure 111 generates a first value and structure 121 generates a second value. The first value may or may not match the second value. In box 320, micro-checker 140 compares values from structures 111 and 121. In box 330, the result of the comparison in box 320 is stored.[0033] In box 331, core 110 executes a first instruction based on the value generated by structure 111, and core 120 executes a second instruction based on the value generated by structure 121. The first and second instructions may or may not be the same instruction. The first and second values may serve as the basis for determining what instruction or instructions are executed by indicating the result of a conditional branch prediction, a jump prediction, a return-address prediction, a memory-dependence prediction, or any other prediction or result that cannot alter architectural state. [0034] From box 331, method 300 proceeds directly to box 340, or proceeds to box 340 after cores 110 and 120 execute any number of additional instructions. [0035] In box 340, global checker 130 compares outputs from cores 110 and 120. If the outputs match, lockstep operation of cores 110 and 120 continues in box 310, unaffected by any error correction, recovery, or notification technique, regardless of the result stored in box 330. However, if global checker 130 detects a lockstep fault in box 340, then method 300 continues to box 350.[0036] From box 350, if the result stored in box 330 indicates that the value from structure 111 matches the value from structure 121, method 300 proceeds to box 360. In box 360, fault logic 150 indicates the detection of an uncorrectable error, for example by reporting a fault code and halting the system.[0037] From box 350, if the result stored in box 330 indicates a mismatch between the values from structures 111 and 121, method 300 proceeds to box 370. In box 370, fault logic 150 causes the resynchronization of cores 110 and 120.
This resynchronization may be accomplished by changing the architectural state of core 110 to match the architectural state of core 120, or vice versa. Method 300 then returns to box 310. [0038] Figure 4 illustrates an embodiment of the present invention in method 400 for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system including processor 100 of Figure 1, where structures 111 and 121 are structures whose content is duplicated elsewhere in the system, or may be regenerated, e.g., caches. [0039] In box 410, cores 110 and 120 are operating in lockstep. In box 411, an instruction causing a load to an unmodified cache line in structure 111 is executed by core 110, and an instruction causing a load to an unmodified cache line in structure 121 is executed by core 120. From box 411, method 400 proceeds directly to box 420, or proceeds to box 420 after cores 110 and 120 execute any number of additional instructions. [0040] In box 420, micro-checker 140 compares a value, e.g., the cache line loaded in box 411, from structure 111 to a value, e.g., the cache line loaded in box 411, from structure 121. In box 430, the result of the comparison in box 420 is stored.[0041] From box 430, method 400 proceeds directly to box 440, or proceeds to box 440 after cores 110 and 120 execute any number of additional instructions.[0042] In box 440, global checker 130 compares outputs from cores 110 and 120. If the outputs match, lockstep operation of cores 110 and 120 continues in box 410, unaffected by any error correction, recovery, or notification technique, regardless of the result stored in box 430. However, if global checker 130 detects a lockstep fault in box 440, then method 400 continues to box 450.[0043] From box 450, if the result stored in box 430 indicates that the value from structure 111 matches the value from structure 121, method 400 proceeds to box 460.
In box 460, fault logic 150 indicates the detection of an uncorrectable error, for example by reporting a fault code and halting the system.[0044] From box 450, if the result stored in box 430 indicates a mismatch between the values from structures 111 and 121, method 400 proceeds to box 470. In boxes 470 to 473, fault logic 150 causes the resynchronization of cores 110 and 120.[0045] In box 470, the values from structures 111 and 121 are found elsewhere in the system, or otherwise regenerated, e.g., by reloading the cache line loaded in box 411. The regenerated value (e.g., if a single copy of the value is obtained from where it is duplicated in the system) or values (e.g., if one copy of the value per structure is obtained from where it is duplicated in the system) may be loaded into a register or registers, or other location or locations, provided for comparison to the values from structures 111 and 121. Alternatively, the values from structures 111 and 121 may be moved to registers or other locations provided for comparison to the regenerated value or values, which may be obtained, for example, by re-executing the instruction executed in box 411.[0046] In box 471, the regenerated value or values are compared to the values from structures 111 and 121. If the regenerated value matches the value from structure 111, then, in box 472, core 120 is synchronized to core 110, e.g., by changing the architectural state of core 120 to match the architectural state of core 110. If the regenerated value matches the value from structure 121, then, in box 473, core 110 is synchronized to core 120, e.g., by changing the architectural state of core 110 to match the architectural state of core 120.
From boxes 472 and 473, method 400 returns to box 410.[0047] Figure 5 illustrates an embodiment of the present invention in method 500 for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system including processor 200 of Figure 2.[0048] In box 510, cores 210 and 220 are operating in lockstep. In box 511, structure 213 generates a value and structure 223 generates a value. The value from structure 213 may or may not match the value from structure 223. In box 512, structure 215 generates a value and structure 225 generates a value. The value from structure 215 may or may not match the value from structure 225.[0049] In box 513, structure 211 generates a fingerprint value based on the values from structures 213 and 215, and structure 221 generates a fingerprint value based on the values from structures 223 and 225. The fingerprint values may be generated according to any known technique for combining values, such as using a cyclic redundancy checker to generate a checksum. [0050] In box 520, micro-checker 240 compares the fingerprint values from structures 211 and 221. In box 530, the result of the comparison in box 520 is stored. [0051] In box 540, global checker 230 compares outputs from cores 210 and 220. If the outputs match, lockstep operation of cores 210 and 220 continues in box 510, unaffected by any error correction, recovery, or notification technique, regardless of the result stored in box 530. However, if global checker 230 detects a lockstep fault in box 540, then method 500 continues to box 550.[0052] From box 550, if the result stored in box 530 indicates that the fingerprint value from structure 211 matches the fingerprint value from structure 221, method 500 proceeds to box 560. In box 560, fault logic 250 indicates the detection of an uncorrectable error, for example by reporting a fault code and halting the system.
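The fingerprint generation of box 513 can be sketched as follows. The specification only requires that both cores combine their structure values with the same function; this sketch uses a CRC-based checksum, one of the techniques the text mentions. The function name, the truncation width, and the fixed-width byte encoding of the input values are assumptions of the sketch, and the values are assumed non-negative.

```python
import zlib

def fingerprint(values: list[int], width: int = 16) -> int:
    """Combine several multi-bit structure values into one short fingerprint.

    A CRC32 checksum truncated to `width` bits is one possible combining
    function; both cores must use the same function so their fingerprints
    are comparable by the micro-checker.
    """
    # Serialize each value into a fixed 8-byte little-endian field so the
    # combined byte string is unambiguous, then checksum and truncate.
    data = b"".join(v.to_bytes(8, "little") for v in values)
    return zlib.crc32(data) & ((1 << width) - 1)
```

Because the fingerprint has fewer bits than its inputs, distinct input combinations can occasionally collide. As paragraph [0030] notes, such a miss is safe: a correctable lockstep failure the micro-checker fails to flag is treated as uncorrectable, never as correct operation.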
[0053] From box 550, if the result stored in box 530 indicates a mismatch between the values from structures 211 and 221, method 500 proceeds to box 570. In box 570, fault logic 250 causes the resynchronization of cores 210 and 220. This resynchronization may be accomplished by changing the architectural state of core 210 to match the architectural state of core 220, or vice versa. Method 500 then returns to box 510. [0054] Within the scope of the present invention, the methods illustrated in Figures 3, 4, and 5 may be performed in a different order, with illustrated steps omitted, with additional steps added, or with a combination of reordered, combined, omitted, or additional steps. For example, box 330, 430, or 530 (storing the result of the micro-checker's comparison) may be omitted if the output of the micro-checker remains static until box 350, 450, or 550 (examining the result of the micro-checker's comparison), respectively, is performed. [0055] Other examples of methods in which box 330 (storing the result of the micro-checker's comparison) may be omitted are embodiments of the present invention in which the output of the micro-checker does not need to be retained. In one such embodiment, a method may proceed from the micro-checker comparison of box 320 to the decision of box 350 based on the micro-checker comparison (or, boxes 320 and 350 may be merged). In this embodiment, if the micro-checker detects a mismatch (in either 320 or 350), a processor's existing branch misprediction recovery mechanism may be used to flush speculative state, and thus synchronize the cores to non-speculative state in box 370.
If the micro-checker does not detect a mismatch, then the method of this embodiment may proceed to box 331 to execute instructions based on the prediction, then to box 340 for the global checker to check for a lockstep fault, then, if a lockstep fault is detected, to box 360 to indicate an unrecoverable error.[0056] Figure 6 illustrates an embodiment of the present invention in lockstepped dual-modular redundancy system 600. System 600 includes multicore processor 610 and system memory 620. Processor 610 may be any processor as described above for Figures 1 and 2. System memory 620 may be any type of memory, such as semiconductor based static or dynamic random access memory, semiconductor based flash or read only memory, or magnetic or optical disk memory. Processor 610 and system memory 620 may be coupled to each other in any arrangement, with any combination of buses or direct or point-to-point connections, and through any other components. System 600 may also include any buses, such as a peripheral bus, or components, such as input/output devices, not shown in Figure 6. [0057] In system 600, system memory 620 may be used to store a value that may be loaded into a structure such as structures 111, 121, 213, 215, 223, and 225 described above. Therefore, system memory 620 may be the source of the duplicate or regenerated value according to a method embodiment of the present invention, e.g., as shown in box 470 of Figure 4.[0058] Processor 100, processor 200, or any other component or portion of a component designed according to an embodiment of the present invention may be designed in various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language.
Additionally or alternatively, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level where they may be modeled with data representing the physical placement of various devices. In the case where conventional semiconductor fabrication techniques are used, the data representing the device placement model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce an integrated circuit. [0059] In any representation of the design, the data may be stored in any form of a machine-readable medium. An optical or electrical wave modulated or otherwise generated to transmit such information, a memory, or a magnetic or optical storage medium, such as a disc, may be the machine-readable medium. Any of these media may "carry" or "indicate" the design, or other information used in an embodiment of the present invention, such as the instructions in an error recovery routine. When an electrical carrier wave indicating or carrying the information is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, the acts of a communication provider or a network provider may be acts of making copies of an article, e.g., a carrier wave, embodying techniques of the present invention.[0060] Thus, apparatuses and methods for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system have been disclosed. 
While certain embodiments have been described, and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims. |
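As a closing illustration of the disclosed recovery flow, the regeneration-based choice of synchronization direction in boxes 470 through 473 of method 400 can be sketched as a small decision function. The function name and the returned strings are invented for the sketch, and the final branch (neither copy matching the regenerated value) is an assumption: the specification describes only the two matching cases.

```python
def choose_sync_direction(regenerated, value_core0, value_core1) -> str:
    """Boxes 470-473, sketched: reload or regenerate the value, then
    synchronize the core whose copy no longer matches to the core whose
    copy still does."""
    if regenerated == value_core0:
        # Core 0's copy is intact: overwrite core 1's architectural state.
        return "sync core1 to core0"
    if regenerated == value_core1:
        # Core 1's copy is intact: overwrite core 0's architectural state.
        return "sync core0 to core1"
    # Assumed fallback, not in the specification: neither copy matches,
    # so this recovery path cannot determine a safe direction.
    return "uncorrectable"
```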
In described examples, a signal processing system (1400) includes a data memory component (1426) configured to store values corresponding to signal processing of a digital signal, a plurality of parity bits including a set of group parity bits for each group of memory words of a plurality of groups of memory words in the data memory component (1426), a processor (1444) configured to perform the signal processing of the digital signal and to check the plurality of parity bits for a memory error, and a parity management component (1425) configured to receive an address of a memory word in the data memory component (1426) and a value read from or written to the memory word during the signal processing, the parity management component (1425) configured to update group parity bits in the plurality of parity bits corresponding to the address of the memory word based on the value. |
CLAIMSWhat is claimed is:1. A signal processing system comprising:a data memory component configured to store values corresponding to signal processing of at least one digital signal;a plurality of parity bits associated with the data memory component, the plurality of parity bits including a set of group parity bits for each group of memory words of a plurality of groups of memory words in the data memory component;a processor coupled to receive the at least one digital signal, the processor configured to perform the signal processing of the at least one digital signal and to check the plurality of parity bits for a memory error; anda parity management component coupled to the plurality of parity bits and coupled to receive an address of a memory word in the data memory component and a value read from or written to the memory word by the processor during the signal processing, the parity management component configured to update group parity bits in the plurality of parity bits corresponding to the address of the memory word based on the value.2. The signal processing system of claim 1, in which the plurality of groups are non-overlapping groups of memory words in which a single soft error can affect only one memory word per group.3. The signal processing system of claim 1, in which the signal processing is configured to write and read each memory word of a plurality of memory words of the data memory such that for each write of a value to a memory word of the plurality of memory words, group parity bits corresponding to a group of the memory word are updated based on the value and a single read of the value from the memory word is performed in which the group parity bits are updated based on the value.4. 
The signal processing system of claim 3, in which the parity management component is coupled to receive a parity enable flag indicating whether parity updates are enabled, and in which the signal processing is configured to manage a value of the parity enable flag to ensure that for each write of a value to a memory word with parity updates enabled, a single read of the value from the memory word is performed with parity updates enabled.5. The signal processing system of claim 1, in which the signal processing system is a radar system and the at least one digital signal is a plurality of digital intermediate frequency (IF) signals generated by a plurality of receive channels of the radar system, each receive channel configured to receive a reflected signal from transmission of a frame of chirps and to generate a digital IF signal of samples of the reflected signal.6. The signal processing system of claim 5, in which the signal processing is configured to: write first values corresponding to the plurality of digital IF signals into a plurality of memory words in the data memory, in which, for each memory word of the plurality of memory words, the parity management component updates the group parity bits in the plurality of parity bits corresponding to a group of the memory word based on a value written in the memory word; andread the first values from the plurality of memory words, in which, for each memory word of the plurality of memory words, the parity management component updates the group parity bits corresponding to the group of the memory word based on a value read from the memory word.7. 
The signal processing system of claim 6, in which the signal processing is configured to: write second values corresponding to the plurality of digital IF signals into the plurality of memory words in the data memory, in which for each memory word in the plurality of memory words, group parity bits corresponding to the group of the memory word are updated based on a value written into the memory word;read the second values from the plurality of memory words, in which for each memory word in the plurality of memory words, the group parity bits corresponding to the group of the memory word are updated based on a value read from the memory word; andperform signal processing on the second values to generate the first values.8. The signal processing system of claim 1, in which each set of group parity bits consists of a parity bit for each bit position of a memory word.9. The signal processing system of claim 1, in which each set of group parity bits consists of P parity bits for each bit position of a memory word, in which a value of P depends on a number of words N in a group.10. The signal processing system of claim 9, in which the value of P is chosen as a smallest value that satisfies N ≤ 2^P - P - 1.11. The signal processing system of claim 9, in which the parity management component is configured to determine a subset of group parity bits corresponding to a memory word of a group based on ordinality of the memory word in the group.12.
The signal processing system of claim 11, in which the parity management component includes a parity identification circuit configured to determine the subset of group parity bits corresponding to a memory word of a group, the parity identification circuit including:a first component coupled to receive a binary representation of the ordinality of the memory word, the first component configured to output a first binary representation of an index of a left most non-zero bit of the binary representation of the ordinality;a first adder coupled to the first component to receive the first binary representation and coupled to receive the binary representation of the ordinality, the first adder configured to output a second binary representation of a sum of the first binary representation and the binary representation of the ordinality;a second component coupled to the first adder to receive the second binary representation, the second component configured to output a third binary representation of an index of a left most non-zero bit of the second binary representation; anda second adder coupled to the second component to receive the third binary representation and coupled to receive the binary representation of the ordinality, the second adder configured to output a fourth binary representation of a sum of the third binary representation and the binary representation of the ordinality.13.
A method for data memory protection in a signal processing system, the method comprising:dividing memory words of a data memory of the signal processing system into a plurality of groups, in which a plurality of parity bits associated with the data memory includes a set of group parity bits for each group of the plurality of groups;performing signal processing on at least one digital signal, in which each memory word of a plurality of memory words of the data memory is written and read such that for each write of a value to a memory word of the plurality of memory words, group parity bits corresponding to a group of the memory word are updated based on the value and a single read of the value from the memory word is performed in which the group parity bits are updated based on the value; and determining whether a soft error has occurred based on the plurality of parity bits.14. The method of claim 13, in which the plurality of groups are non-overlapping groups of memory words in which a single soft error can affect only one memory word per group.15. The method of claim 14, in which each set of group parity bits consists of a parity bit for each bit position of a memory word.16. The method of claim 14, in which each set of group parity bits consists of P parity bits for each bit position of a memory word, in which a value of P depends on a number of words N in a group.17. The method of claim 16, in which the value of P is chosen as a smallest value that satisfies N ≤ 2^P - P - 1.18. The method of claim 16, in which a subset of group parity bits corresponding to a memory word of a group are determined based on ordinality of the memory word in the group.19. The method of claim 13, in which performing signal processing includes disabling parity updates to allow a read of the value from the memory word without changing the group parity bits.20.
The method of claim 13, in which the signal processing system is a radar system and the at least one digital signal is a plurality of digital intermediate frequency (IF) signals generated by a plurality of receive channels in the radar system.21. The method of claim 20, in which performing signal processing includes: writing first values corresponding to signal processing of the digital IF signals into the plurality of memory words, in which for each memory word in the plurality of memory words, group parity bits corresponding to a group of the memory word are updated; and reading the first values from the plurality of memory words, in which for each memory word in the plurality of memory words, the group parity bits corresponding to the group of the memory word are updated.22. The method of claim 21, in which performing signal processing includes: writing second values corresponding to signal processing of the digital IF signals into the plurality of memory words, in which for each memory word in the plurality of memory words, group parity bits corresponding to the group of the memory word are updated; reading the second values from the plurality of memory words, in which for each memory word in the plurality of memory words, the group parity bits corresponding to the group of the memory word are updated; and performing signal processing on the second values to generate the first values. |
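The parity identification datapath recited in claim 12 can be sketched in software as two "index of left most non-zero bit" stages feeding two adders. This is a literal, hedged reading of the claim: the index stage is interpreted here as floor(log2(n)) of the binary representation, the function name is invented, and the claim does not state how the final sum is used beyond selecting the subset of group parity bits for the word.

```python
def parity_position(ordinality: int) -> int:
    """Literal sketch of the claim-12 datapath for a memory word's
    ordinality within its group (assumed to be >= 1)."""
    assert ordinality >= 1
    # First component: index of the leftmost non-zero bit of the ordinality.
    first = ordinality.bit_length() - 1
    # First adder: that index plus the ordinality.
    second_sum = ordinality + first
    # Second component: index of the leftmost non-zero bit of the sum.
    third = second_sum.bit_length() - 1
    # Second adder: that index plus the ordinality (the circuit's output).
    return ordinality + third
```

For example, an ordinality of 3 passes through first = 1, second_sum = 4, third = 2, yielding 5.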
PROTECTING DATA MEMORY IN A SIGNAL PROCESSING SYSTEM[0001] This relates generally to signal processing systems, and more particularly to protecting signal data memory in signal processing systems.BACKGROUND[0002] The use of embedded frequency modulated continuous wave (FMCW) radar systems in automotive applications is evolving rapidly. For example, embedded FMCW radar systems may be used in a number of applications associated with a vehicle such as adaptive cruise control, collision warning, blind spot warning, lane change assist, parking assist and rear collision warning. To be used in automotive applications, an embedded FMCW radar system is required to meet stringent functional safety requirements. Functional safety in automotive radar is the prevention of harm to humans due to failure of components in the radar. Meeting these requirements necessitates the inclusion of various protection mechanisms in the radar system that minimize or eliminate failures due to malfunction of components, such as any processors, digital logic, and memory incorporated in the radar system. Other signal processing systems may also include similar protection mechanisms when used in environments with stringent functional safety requirements.SUMMARY[0003] Described examples relate to methods and apparatus for protection of signal data memory in signal processing systems, such as radar systems. 
In one aspect, a signal processing system includes a data memory component configured to store values corresponding to signal processing of at least one digital signal, a plurality of parity bits associated with the data memory component, the plurality of parity bits including a set of group parity bits for each group of memory words of a plurality of groups of memory words in the data memory component, a processor coupled to receive the at least one digital signal, the processor configured to perform the signal processing of the at least one digital signal and to check the plurality of parity bits for a memory error, and a parity management component coupled to the plurality of parity bits and coupled to receive an address of a memory word in the data memory component and a value read from or written to the memory word by the processor during the signal processing, the parity management component configured to update group parity bits in the plurality of parity bits corresponding to the address of the memory word based on the value.

[0004] In one aspect, a method for data memory protection in a signal processing system includes: dividing memory words of a data memory of the signal processing system into a plurality of groups, in which a plurality of parity bits associated with the data memory includes a set of group parity bits for each group of the plurality of groups; performing signal processing on at least one digital signal, in which each memory word of a plurality of memory words of the data memory is written and read, such that for each write of a value to a memory word of the plurality of memory words, group parity bits corresponding to a group of the memory word are updated based on the value, and a single read of the value from the memory word is performed in which the group parity bits are updated based on the value; and determining whether a soft error has occurred based on the plurality of parity bits.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG.
1 is an example illustrating the effect of a single soft error on memory.

[0006] FIG. 2 is an example illustrating a radar data memory with a diagonal grouping.

[0007] FIG. 3 is an example illustrating parity bit assignment for the radar data memory of FIG. 2 for protection against one soft error occurrence.

[0008] FIG. 4 is a flow diagram of a method for updating parity words of a radar data memory.

[0009] FIG. 5 is an example illustrating parity bit assignment for the radar data memory of FIG. 2 for protection against two soft error occurrences.

[0010] FIG. 6 is a block diagram of a parity identification circuit.

[0011] FIG. 7 is a flow diagram of a method for updating parity words of a radar data memory.

[0012] FIG. 8 is an example illustrating ordinality of memory words in a group.

[0013] FIG. 9 is an example illustrating the method of FIG. 7.

[0014] FIG. 10 is a block diagram of an example parity management component.

[0015] FIG. 11 is an example illustrating parity bit assignment for radar data memory for protection against one soft error occurrence with a row-wise memory grouping.

[0016] FIG. 12 is an example illustrating parity bit assignment for radar data memory for protection against two soft error occurrences with a row-wise memory grouping.

[0017] FIG. 13 is a flow diagram of a method for protection of a radar data memory in a radar system.

[0018] FIG. 14 is a block diagram of an example frequency modulated continuous wave (FMCW) radar system.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0019] Like elements in the various figures are denoted by like reference numerals for consistency.

[0020] A frequency modulated continuous wave (FMCW) radar transmits, via one or more transmit antennas, a radio frequency (RF) frequency ramp referred to as a chirp. Further, multiple chirps may be transmitted in a unit referred to as a frame.
The transmitted chirps are reflected from any objects in the field of view (FOV) of the radar and are received by one or more receive antennas. The received signal for each receive antenna is down-converted to an intermediate frequency (IF) signal and then digitized. The digitized samples are pre-processed and stored in memory, which is referred to as radar data memory herein. After the data for an entire frame is stored in the radar data memory, the data is post-processed to detect any objects in the FOV and to identify the range, velocity and angle of arrival of detected objects.

[0021] The pre-processing may include performing a range Fast Fourier Transform (FFT) on the digitized samples of each reflected chirp to convert the data to the frequency domain. Peak values correspond to ranges (distances) of objects. This processing is usually performed in-line, so the range FFT is performed on the digitized samples of a previous chirp while samples are being collected for the current chirp. The results of the range FFTs for each receive antenna are saved in the radar data memory for further processing. K1 range results are stored for each chirp. Thus, if K2 chirps are in a frame, an array of K1xK2 range values is generated by the range FFTs. In this array, each of the K1 columns corresponds to a specific range value across the K2 chirps. K1xK2 range values are generated for each receive channel in the system.

[0022] For each range, a Doppler FFT is performed over each of the corresponding range values of the chirps in the frame. Accordingly, a Doppler FFT is performed on each of the K1 columns of the K1xK2 array. The peaks in the resulting K1xK2 range-Doppler array correspond to the range and relative speed (velocity) of potential objects. To perform the Doppler FFTs, each column of range values is read from the radar data memory and a Doppler FFT is performed on the range values of the column.
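For illustration only, the K1xK2 data flow just described — one range FFT per chirp during the write phase, then a Doppler FFT down each of the K1 range columns — may be sketched as follows. The naive `dft` stand-in, the array sizes, and all variable names are assumptions for the sketch, not part of the described radar system.

```python
import cmath

def dft(x):
    # Naive DFT used as a stand-in for the range/Doppler FFTs (illustration).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

K1, K2 = 8, 4  # hypothetical: samples per chirp, chirps per frame

# Write phase: a range FFT per chirp yields K2 rows of K1 range values,
# so each of the K1 columns holds one range bin across the K2 chirps.
chirps = [[complex((c + 1) * (s % 3)) for s in range(K1)] for c in range(K2)]
radar_data = [dft(chirp) for chirp in chirps]          # K2 x K1 array

# Read phase: a Doppler FFT over each K1 column (K2 values each), with the
# results stored back into the same column locations.
for col in range(K1):
    column = [radar_data[row][col] for row in range(K2)]
    for row, v in enumerate(dft(column)):
        radar_data[row][col] = v
```

Note that every memory word is written once (range FFT) and later read once and rewritten once (Doppler FFT), the access pattern the parity scheme described below relies on.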
The Doppler FFT values may be stored back in the same column memory locations.

[0023] After the Doppler FFTs, other post-processing, e.g., object detection and angle estimation, may be performed on the K1xK2 range-Doppler array stored in the radar data memory to detect objects in the FOV and to identify the range, velocity and angle of arrival of detected objects. After the post-processing is complete, the data in the radar data memory can be discarded.

[0024] All the digitized data corresponding to a frame is required to be in the radar data memory before the post-processing such as Doppler FFT, angle estimation, object detection, etc., can begin. Further, resolution expectations, i.e., range resolution, which is controlled by the number of digitized samples per chirp, velocity resolution, which is controlled by the number of chirps per frame, and angle resolution, which is controlled by the number of receive antennas, directly impact the size of the radar data memory. In the automotive radar application space, the current radar data memory size needed is on the order of one to two megabytes (MB) and is expected to increase in coming years as increased resolution is demanded.

[0025] As mentioned hereinabove, the functional safety requirements for use of an embedded radar system in automotive applications necessitate the incorporation of protection mechanisms for various components of the system. Radar data memory is one of the components that need an effective protection mechanism. Soft errors, which change the value of one or more bits in a memory location, are one category of memory error that is of concern. Soft errors may be caused by radiation or radioactive particles striking a memory cell causing the cell to change state, i.e., a '1' changes to a '0' or vice versa.

[0026] The current industry solution for soft error protection is using Error Correction Coded (ECC) memory. In an ECC memory, each word in memory is protected by a set of parity bits.
Whenever data is written to a specific memory word, the parity value corresponding to this data is computed and stored in the associated parity bits. When the memory word is read, the parity value is recomputed and validated against the stored parity value. Any difference between the stored and recomputed parity values indicates the presence of bit errors.

[0027] Depending on the specific parity code used, the parity bits can be used either to detect bit errors or to detect and correct bit errors. Some typical ECC memories use an extended Hamming code parity scheme with the capability to correct a single bit error and detect up to two bit errors in a word. The number of parity bits needed for an extended Hamming code scheme depends on the length of the memory word. For example, a sixteen-bit memory word would require six parity bits and a thirty-two-bit memory word would require seven parity bits.

[0028] The size of a memory word in ECC memory is chosen as the smallest unit in which data is expected to be read/written. In the context of the radar data memory, the memory word size is based on the typical size of a complex sample, e.g., 32 bits. Thus, implementing an extended Hamming code parity scheme for radar data memory would require an overhead of seven bits for every thirty-two bits, which is an overhead of approximately 22%. Thus, approximately 400 KB (kilobytes) of overhead would be needed for 2 MB of radar data memory. This is a significant amount of overhead for cost-sensitive embedded radar solutions.

[0029] Example embodiments provide an alternative technique for radar data memory protection with minimal overhead. This memory protection technique is based on two key observations regarding radar data memory. One observation is that most accesses to radar data memory are not random. Instead, a well-defined write phase exists during pre-processing of received signals corresponding to a frame of chirps in which range values are stored in the memory.
This write phase is followed by a read phase during post-processing of the stored range values in which all the range values are read, although not in the same order in which the values were stored. The other observation is that error correction is relatively unimportant for radar data memory. Error detection is sufficient because the data read and written in radar data memory is independent from one frame of chirps to the next. If a memory error is detected during post-processing, the data in the memory can be discarded.

[0030] Embodiments of the radar data memory protection technique provide for protection of the entire radar data memory with a common set of parity bits rather than requiring a set of parity bits for each memory word as in ECC memories. These common parity bits may be updated each time a word in the radar data memory is read or written as part of the radar data processing. As described in more detail herein, the protection technique ensures that in the absence of memory errors the common parity bits are zero at the end of processing the radar data corresponding to a frame of chirps as long as each word in the radar data memory that is used for storing the radar data is written and read an equal number of times. A non-zero value in any of the common parity bits at the end of processing indicates at least one memory error.

[0031] In embodiments of the radar data memory protection, radar data memory is divided into non-overlapping groups and each group is protected by a set of parity bits in the common parity bits. In some embodiments, the grouping is selected to ensure that a single soft error affects a maximum of one word per group. The parity check technique used is based on the number of soft errors to be protected against.
For example, as described in more detail herein, a simple checksum scheme may be used if single soft error protection is needed, while a Hamming code scheme may be used if protection for two soft errors is needed.

[0032] As mentioned hereinabove, a soft error is a radiation induced bit flip in memory. As shown in the example of FIG. 1, a single soft error can cause bit errors in multiple adjacent bits in memory, both row-wise and column-wise, so multiple adjacent bits in a single word (and multiple column-wise adjacent words) can be affected by a single soft error. In the example of FIG. 1, a single soft error has affected four column-wise adjacent 8-bit words, with four affected adjacent bits in each word. The example of FIG. 1 assumes that the maximum number of adjacent bits affected and the maximum number of column-wise adjacent words affected is four.

[0033] In some embodiments of the radar data memory protection, the radar data memory is divided into M non-overlapping groups where the value of M is chosen such that a single soft error affects no more than one word per group. In some embodiments, the maximum number of adjacent bits in a single word and the maximum number of column-wise adjacent words affected by a single soft error are the same and M is set to be this maximum number. In some embodiments, the maximum number of adjacent bits affected and the maximum number of column-wise adjacent words affected are not the same and M is set to be the larger of the two numbers. The maximum number of adjacent bits affected and the maximum number of column-wise adjacent words affected may increase as the level of miniaturization increases. Accordingly, the value of M depends on factors such as the physics of the memory device and the memory architecture.

[0034] For simplicity of explanation, radar data memory protection is initially described in reference to the examples of FIGS. 2, 3, 5, and FIGS.
8-10, which assume M = 4, a 32-bit memory word, and a direct memory access (DMA) controller that can transfer up to four consecutive 32-bit words in a cycle. In these examples, each cell represents a 32-bit word and the cells are "shaded" to indicate group assignment. To avoid memory stalling and throughput loss, the words are grouped diagonally such that four adjacent words both row-wise and column-wise are in different groups. The diagonal grouping depicted in the examples is from top-left to bottom-right. Such a grouping ensures that no parity word needs to be updated more than once per memory cycle. FIG. 2 is an example illustrating a radar data memory 200 with a diagonal grouping. Each memory cell is "shaded" to indicate group assignment. Each memory cell is also numbered for descriptive purposes.

[0035] Each of the M groups is protected by a set of parity bits, which may be referred to as group parity bits herein. The number of parity bits needed for each group and how the parity bits are updated during memory accesses depend on the functional safety requirements of the particular application of the radar system. In some embodiments, the memory protection protects against a single soft error occurrence during the processing of radar data for a frame of chirps. In such embodiments, a parity bit is allocated for the protection of each bit position of the words assigned to a group. For example, for a 32-bit memory word, 32 bit positions exist, so a 32-bit parity word is needed for each group. Thus, M 32-bit parity words are needed. Further, as described in more detail in reference to the method of FIG. 4, the value stored in the associated parity word is essentially the checksum of the memory group.

[0036] FIG. 3 is an example illustrating this parity bit assignment for the radar data memory 200. In this example, the memory words 1, 5, 10, 14, ..., 55, 60, 64 are assigned to the same group.
The M x M area 300 illustrates an example portion of memory that may be affected by a single soft error. As the area 300 shows, dividing radar data memory into M groups in this manner ensures that a single soft error affects no more than one word per group. On the right side of FIG. 3, the memory words for this group are stacked vertically for illustration and the associated 32-bit parity word is shown below the stack. An example column 302 of bits at an illustrative bit position in each word of the group along with the associated parity bit in the parity word is also shown.

[0037] FIG. 4 is a flow diagram of a method for updating parity words of a radar data memory assuming the above described memory protection for a single soft error. The method is performed for a read or a write of a memory word in the radar data memory. When a memory word is accessed, the row and column coordinates in the radar data memory of the memory word are determined 400 based on the address of the memory word. The row number of an address may be computed as R = floor(Address/NumColumns) and the column number of an address may be computed as C = mod(Address, NumColumns), where NumColumns is the number of columns in the radar data memory and the function mod refers to the modulo operator in which the result of mod(a, b) is the remainder of a divided by b. In the above equations, the memory words are assumed to be addressed horizontally across rows. Further, the address of the memory word, the row number R, and the column number C are enumerated from zero. The number of columns and rows of a memory are determined by the memory design.

[0038] The group index of the memory word is then determined 402 based on the row and column coordinates. Assuming the groups are indexed 0, 1, ..., M-1, an index identifying the group of the memory word, group_idx, may be computed by

group_idx = mod(R - mod(C, M), M)

where R is the memory row number and C is the memory column number of the memory word.
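Steps 400 and 402, together with the diagonal-grouping property they rely on, can be illustrated with a short sketch (the 16x16 memory geometry and all names here are hypothetical):

```python
M = 4                        # number of groups, per the running example
NUM_ROWS, NUM_COLS = 16, 16  # hypothetical memory geometry

def coords(addr):
    # Step 400: R = floor(Address/NumColumns), C = mod(Address, NumColumns).
    return addr // NUM_COLS, addr % NUM_COLS

def group_idx(addr):
    # Step 402: group_idx = mod(R - mod(C, M), M), the diagonal grouping.
    r, c = coords(addr)
    return (r - c % M) % M

# Any M column-wise (or row-wise) adjacent words land in M distinct groups,
# so a single soft error touches at most one word per group, and a DMA burst
# of M consecutive words never updates the same parity word twice.
for r in range(NUM_ROWS - M):
    for c in range(NUM_COLS - M):
        col_groups = {group_idx((r + i) * NUM_COLS + c) for i in range(M)}
        row_groups = {group_idx(r * NUM_COLS + (c + i)) for i in range(M)}
        assert len(col_groups) == M and len(row_groups) == M
```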
The group parity word, as identified by the group index, is then updated 404. More specifically, the parity word identified by the group index is updated by performing an XOR operation between the data of the memory word and the current contents of the identified parity word.

[0039] In some embodiments, the memory protection protects against two soft error occurrences during the processing of radar data for a frame of chirps. Thus, up to two words in each group may be affected by the soft errors. FIG. 5 is an example illustrating the radar data memory 200 affected by two soft errors designated by the M x M area 500 and the M x M area 502. In this example, two words from each group are affected. In such embodiments, a set of P parity bits is allocated for the protection of each bit position of the words assigned to a group. For example, for a 32-bit memory word, 32 bit positions exist, so P 32-bit parity words are needed for each group. Thus, M x P 32-bit parity words are needed. As described in more detail herein, the P parity bits correspond to a Hamming code that detects up to two bit errors.

[0040] FIG. 5 illustrates this parity bit assignment. In this example, the memory words 1, 5, 10, 14, ..., 55, 60, 64 are assigned to the same group. On the right side of FIG. 5, the memory words for this group are stacked vertically for illustration and the associated P 32-bit parity words are shown below the stack. An example column 504 of bits at an illustrative bit position in each word of the group along with the associated parity bits in the parity words is also shown. Each such column of bits in a group is protected by a column of P parity bits. Thus, each column in a group can be viewed as a Hamming code word of N data bits, where N is the number of memory words in the group, plus P parity bits. Thus, the thirty-two columns of a group can be viewed as thirty-two Hamming code words, each capable of detecting up to two errors.
[0041] A Hamming code is chosen because a Hamming code can detect up to two bit errors in a given input bit stream. For a given input bit stream of N bits, a Hamming code computes P parity bits. The number of parity bits P depends on the number of input bits N and any suitable value of P may be used. In some embodiments, the value of P is chosen as the smallest P that satisfies the following relation: N ≤ 2^P - P - 1.

[0042] Hamming encoders are generally described assuming that the N input bits are a priori available to the encoder and can be used to determine the parity bits. However, in embodiments of the radar data memory protection, this will not be the case. The input stream to the Hamming code is a stream of N words, i.e., the words in a group. Thus, the parity bits need to be determined as individual memory words in the N input words are accessed. Further, the memory access pattern is not predetermined, which means that the words in a group cannot be assumed to be accessed in any specific order. Embodiments of the radar memory protection thus implement a technique for updating the parity bits of the Hamming code that does not assume all N words of a group are available at a given time or that the N words are accessed in any particular order.

[0043] Consider a Hamming encoder that takes in a sequence of N input bits to compute P parity bits. A Hamming code has the property that, in the process of computing the parity bits, every bit in the input bit sequence is required to update a unique subset of the P parity bits that includes at least two parity bits. For example, the first input bit updates parity bits 1 and 2, the second input bit updates parity bits 1 and 3, the third input bit updates parity bits 2 and 3, the fourth input bit updates parity bits 1, 2, and 3, etc.
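The relation above can be read as N ≤ 2^P - P - 1, the standard bound for a Hamming code with P parity bits over N data bits; under that reading, the smallest valid P for the N = 131072 words per group used in a later example is 18. A sketch of that reading (the function name is illustrative):

```python
def smallest_p(n):
    # Smallest P with n <= 2**P - P - 1 (assumed reading of the relation).
    p = 2
    while n > 2 ** p - p - 1:
        p += 1
    return p

# 2 MB of 32-bit words split into M = 4 groups: N = 2 * 2**20 // 4 // 4 = 131072
assert smallest_p(131072) == 18
```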
Thus, the subset of parity bits updated by a specific input bit depends on the ordinality of that input bit.

[0044] More specifically, let G be the ordered sequence of numbers in which each number in the sequence has a binary representation including two or more one bits, i.e.,

G = {3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, ...}.

This sequence is essentially all numbers that are not powers of two. The set of parity bits associated with the kth bit in the input stream is given by the binary representation of G(k), i.e., by the binary representation of the number in the kth position in sequence G, k > 0. For example, if k = 5, then G(k) = 9 = 1001, thus indicating that the 5th input bit is associated with parity bits 1 and 4.

[0045] Further, G(k) can be computed by

G(k) = L(L(k) + k) + k

where L(n) is a function that outputs the index of the leftmost non-zero bit of the binary representation of n. For example, L(9) is 4 and L(18) is 5. If k = 4, L(k) = 3, L(L(k) + k) = L(3 + 4) = L(7) = 3, and L(L(k) + k) + k = 3 + 4 = 7. Thus, G(4) = 7 and, given the binary representation of seven is 0111, this indicates that the parity bits to be updated are parity bits 1, 2, and 3.

[0046] FIG. 6 illustrates a parity identification circuit 600 that may be implemented in a radar system to determine G(k) by the above equation. The input to the identification circuit 600 is the binary representation of k and the output is a P-bit parity register 608. The components L1 602 and L2 606 each output the binary representation of the index of the leftmost non-zero bit of the binary representation of the input to the respective component. The adder 604 coupled between the components L1 602 and L2 606 adds the output of the L1 component 602 to the binary representation of k and provides the result to the L2 component 606. The adder 607 coupled to the output of the component L2 606 adds the output of the L2 component 606 to the binary representation of k and provides the result to the parity register 608.
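The L(n) and G(k) computations described above can be sketched directly; Python's `int.bit_length` happens to compute the index of the leftmost non-zero bit, and all other names here are illustrative:

```python
def L(n):
    # Index of the leftmost non-zero bit: L(9) = 4, L(18) = 5.
    return n.bit_length()

def G(k):
    # G(k) = L(L(k) + k) + k
    return L(L(k) + k) + k

assert L(9) == 4 and L(18) == 5
assert G(4) == 7 and G(5) == 9        # the worked examples in the text

# G enumerates, in order, the numbers whose binary representation has two or
# more one bits, i.e., everything (above 2) that is not a power of two.
seq = [n for n in range(3, 64) if n & (n - 1) != 0]
assert [G(k) for k in range(1, len(seq) + 1)] == seq

# The set bits of G(k) name the parity bits: G(5) = 9 = 1001 -> bits 1 and 4.
assert [b + 1 for b in range(L(G(5))) if G(5) >> b & 1] == [1, 4]
```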
The components L1 602 and L2 606 may be implemented using any suitable circuit design. Some suitable circuit designs that may be used are described in V. Oklobdzija, "An Algorithmic and Novel Design of a Leading Zero Detector Circuit: Comparison with Logic Synthesis", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 2, No. 1, March 1994, pp. 124-128.

[0047] The input to L1 602 is k, and the input to L2 606 is the output of L1 added to k. The output of L2 606 is again added to k to produce the final result, i.e., G(k). The bit representation of this final result is stored in the parity register 608. The indices of the non-zero bits in the parity register 608 identify the parity bits associated with the input bit of ordinality k. For example, if k = 5, the parity register 608 will contain the bit sequence 10010...0, thus indicating that parity bits 1 and 4 are associated with the 5th input bit.

[0048] The above description illustrated updating P parity bits based on a sequence of N input bits. In the case of radar data memory, P parity words associated with N input memory words are updated, where N is the number of memory words in a group. As described in more detail in reference to the method of FIG. 7, when a word of the N words of a group is accessed, the subset of parity words of the P parity words that are associated with the word may be identified in a similar fashion to the bit identification process described above after the ordinality of the word in the group is determined. Accordingly, G(k), where k is the ordinality of a word in a group, identifies which of the parity words for the group are to be updated.

[0049] FIG. 7 is a flow diagram of a method for updating parity words of a radar data memory assuming the above described Hamming code. The method is performed for a read or a write of a memory word in the radar data memory. The method is described in reference to the examples of FIG. 8 and FIG.
9 and assumes the grouping of the previous examples. Referring to FIG. 7, when a memory word is accessed, i.e., read or written, in the radar data memory, e.g., the radar data memory 800 of FIG. 8, the row and column coordinates in the radar data memory of the memory word are determined 700 based on the address of the memory word. Determination of row and column coordinates is described hereinabove.

[0050] The group index of the memory word is determined 702 based on the row and column coordinates. Assuming the groups are indexed 0, 1, ..., M-1, an index identifying the group of the memory word, group_idx, may be computed by

group_idx = mod(R - mod(C, M), M)

where R is the memory row number and C is the memory column number of the memory word.

[0051] The ordinality k of the memory word in the group to which the word is assigned is determined 704 based on the row and column coordinates. The ordinality k may be computed by

k = (numRows/M) x C + floor(R/M) + 1

where C is the column coordinate of the memory word, R is the row coordinate of the memory word, and numRows is the number of rows in the radar data memory. The example of FIG. 8 illustrates the ordinality 1, 2, ..., N 801 of words in a group for the radar data memory 800.

[0052] The subset of parity words of the group parity words corresponding to the memory word is determined 706 given the ordinality k of the memory word. Accordingly, G(k) is determined. In some embodiments, G(k) may be computed by the above equation for G(k). In some embodiments, G(k) may be determined by a parity identification circuit such as that of FIG. 6.

[0053] The identified parity words are then updated 708. As described hereinabove, the particular parity words of the group parity words are identified by the non-zero bit positions of the binary representation of G(k).
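Under the same illustrative assumptions as before (M = 4, a 16x16-word memory, diagonal grouping, P = 18), steps 700-708 combine into the following sketch, which also demonstrates the detection property: a balanced write/read cycle drives every parity word back to zero, while two soft errors hitting two words of the same group leave a non-zero residue.

```python
M, NUM_ROWS, NUM_COLS = 4, 16, 16      # hypothetical geometry
P = 18                                 # parity words per group, as in the text
parity = [[0] * P for _ in range(M)]   # M groups x P parity words

def touch(addr, value):
    # Steps 700-708 for one read or write of a memory word.
    r, c = addr // NUM_COLS, addr % NUM_COLS          # step 700
    g = (r - c % M) % M                               # step 702: group index
    k = (NUM_ROWS // M) * c + r // M + 1              # step 704: ordinality
    gk = (k.bit_length() + k).bit_length() + k        # step 706: G(k)
    for bit in range(P):                              # step 708: XOR updates
        if gk >> bit & 1:
            parity[g][bit] ^= value

# One write and one matching read per word cancel: all parity words are zero.
for _ in range(2):
    for addr in range(NUM_ROWS * NUM_COLS):
        touch(addr, (addr * 2654435761) & 0xFFFFFFFF)
assert all(w == 0 for group in parity for w in group)

# Two soft errors in the same group are still detected, because each
# ordinality updates a distinct subset of at least two parity words.
touch(0, 1); touch(0, 0)                        # value read back with bit flipped
touch(4 * NUM_COLS, 1); touch(4 * NUM_COLS, 0)  # second corrupted word, same group
assert any(w for group in parity for w in group)
```

The shared parity word of the two ordinalities cancels, but the unique ones survive, which is exactly the two-error detection property of the Hamming assignment.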
Given G(k) and the group index, the identified parity words in the group parity words are updated by performing an XOR operation between the data of the memory word and the current contents of the identified parity words.

[0054] FIG. 9 is an example illustrating the above method. This figure shows a radar data memory 900 divided into groups and the parity words 902 associated with each group. Assume a 32-bit word, e.g., 110...1011, is to be written into the radar data memory at a specified address. According to the steps of the above method, the address is used to identify the row and column of the memory word where this value is to be stored. The ordinality k of the memory word within the group and the group identifier are determined based on the row and column. In this example, the memory word is identified as belonging to the group with a group index of 1. The ordinality k is then used to identify the subset of parity words corresponding to the group that should be updated based on writing the word into memory. The identified parity words are then updated by performing an XOR operation between each of the identified parity words and the data value 110...1011.

[0055] FIG. 10 is a block diagram illustrating an example parity management component 1000 implementing an embodiment of the above described technique for protecting radar data memory against two soft errors. This example assumes M = 4, a memory word is 32 bits, and a radar data memory size of 2 MB. Accordingly, N = 2 MB/4/M = 131072, and the value of P may be chosen according to the above-described relation, i.e., P = 18. Given P = 18, eighteen 32-bit parity words are needed per group for a total of M x 18 = 72 parity words or 288 bytes.

[0056] The parity management component 1000 includes a parity data component 1002, a parity word identification component 1004, and a parity memory 1006. The parity data component 1002 receives the address ADDR of a word in the radar data memory that is read or written.
The parity data component 1002 includes functionality to determine the row and column coordinates of the memory location identified by the address. The parity data component 1002 further includes functionality to determine the group index of the memory word and the ordinality k of the memory word in the group to which the word is assigned based on the row and column coordinates. The parity data component 1002 is coupled to the parity memory 1006 to provide the group index of the memory word and to the parity word identification component 1004 to provide the binary representation of the ordinality k of the memory word. Determination of the row and column coordinates, the group index, and the ordinality assuming a diagonal grouping assignment is described hereinabove.

[0057] The parity word identification component 1004 includes functionality to identify the parity words in a group that are to be updated based on the ordinality k, i.e., functionality to determine G(k). Identification of parity words based on ordinality is described hereinabove. In some embodiments, the parity word identification component 1004 includes the circuitry of FIG. 6. The parity word identification component 1004 is coupled to the parity memory 1006 to provide the binary value of G(k).

[0058] The parity memory 1006 includes sufficient memory to store the 72 parity words and functionality to identify the subset of parity words assigned to a group based on the group index and the particular words in the group to be updated based on the binary value provided by the parity word identification component 1004. The parity memory 1006 further includes functionality to receive the data corresponding to the address ADDR and to XOR that data with the parity words identified by the group index and the non-zero bits in the binary value.

[0059] The parity memory 1006 may also be coupled to receive the value of an enable flag 1008.
The value of this flag indicates whether parity is to be updated for the current memory access. As described hereinabove, the radar data memory protection assumes that during processing of data corresponding to a frame of chirps, each memory location is read and written an equal number of times. More specifically, for each write of a value to a memory word during which parity is updated, a single read of the value from the memory word during which parity is also updated must be performed. During post-processing, e.g., after the Doppler FFTs, a need may exist to read some of the data without updating the corresponding parity bits. In such instances, the value of this flag is set to indicate no parity updates.

[0060] The above-described example figures assumed a diagonal pattern for grouping radar data memory words. Other grouping techniques may be used if the property that a single soft error can affect a maximum of one word per group is preserved. FIG. 11 and FIG. 12 are examples illustrating an alternative grouping pattern for a radar data memory 1100, assuming M = 4 and a 32-bit memory word, in which memory words are assigned to groups by row.

[0061] FIG. 11 is an example illustrating parity bit assignment for protection against a single soft error occurrence with this row-wise grouping pattern. In this example, memory words in rows 1, 4, and 7 are in the same group. The M x M area 1102 illustrates an example portion of memory that may be affected by a single soft error. As with the example of FIG. 3, a parity bit is allocated for the protection of each of the 32 bit positions. On the right side of FIG. 11, the memory words for the example group are stacked vertically for illustration and the associated 32-bit parity word is shown below the stack. An example column 1104 of bits at an illustrative bit position in each word of the group along with the associated parity bit in the parity word is also shown. The method of FIG.
4 for single soft error protection may be used with a row-wise grouping with a modification to the way the group index is determined.[0062] FIG. 12 is an example illustrating parity bit assignment for protection against two soft errors using the row-wise grouping pattern. In this example, memory words in rows 1, 4, and 7 are in the same group. The M x M area 1200 and the M x M area 1202 illustrate example portions of memory that may be affected by two soft errors. As with the example of FIG. 5, a set of P parity bits is allocated for the protection of each bit position of the words assigned to a group. On the right side of FIG. 12, the memory words for the example group are stacked vertically for illustration and the 32-bit parity words corresponding to the group are shown below the stack. An example column 1204 of bits at an illustrative bit position in each word of the group along with the associated parity bits in the parity words is also shown. The method of FIG. 7 for protection from two soft errors may be used with a row-wise grouping with some changes to the group index and ordinality equations. More specifically, the group index may be computed by group_idx = mod(R, M) and the ordinality k may be computed by k = NumColumns × floor(R/M) + C.[0063] FIG. 13 is a flow diagram of a method for protection of a radar data memory in a radar system, such as the radar system of FIG. 14. This method may be performed during the processing of the radar data corresponding to a frame of chirps. Prior to storing data in the radar data memory, all the parity words are initialized 1300 to zero. As pre-processing of digitized IF signals is performed, i.e., as range FFTs are applied to the incoming digitized samples to generate range values, each range value is stored 1302 in a memory word in the radar data memory and the parity bits corresponding to the memory word are updated 1304.
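As an illustration, the row-wise group index and ordinality equations above can be sketched as follows (a non-normative sketch; the function and variable names are illustrative and not from the specification):

```python
def row_wise_group_and_ordinality(R, C, M, num_columns):
    """Group index and ordinality k of the memory word at row R, column C
    for the row-wise grouping pattern with M groups."""
    group_idx = R % M                    # group_idx = mod(R, M)
    k = num_columns * (R // M) + C       # k = NumColumns x floor(R/M) + C
    return group_idx, k
```

For example, with M = 4 and 32 columns per row, the words at (R, C) = (1, 0) and (5, 0) fall in the same group (rows differing by M share a group index) but have different ordinalities within that group.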
The storing of range values and the parity updating continues until range FFTs are complete 1305, i.e., the pre-processing is completed.[0064] After all of the range values corresponding to the frame are stored 1305 in the radar data memory, the post-processing is initiated. During the post-processing, Doppler FFTs are performed in which each column of range values is read 1306 from the radar data memory, a Doppler FFT performed on the range values, and the resulting Doppler FFT values are written 1310 in the radar data memory in the memory locations that stored the column of range values. The parity bits corresponding to each memory word of the column of range values are updated 1308 when the column of range values is read. Similarly, the parity bits corresponding to each memory word where a value of the Doppler FFT is stored are updated 1312 when the value is written to the memory word.[0065] After the Doppler FFTs are complete 1314, parity updates are disabled 1316, and some of the data is read 1318 from the radar data memory to complete the post processing, e.g., to detect objects and to identify the range, velocity and angle of arrival of detected objects. After the post processing is complete, parity updates are enabled 1320.[0066] Each memory word in the radar data memory that was used to store radar data during the pre and post processing is then read 1322 to trigger an update 1322 of the corresponding parity bits. After this step is complete, the memory words have been read and written an equal number of times for parity updating. The parity bits are then checked 1324 for a memory error. If all the parity bits are zero 1324, no error has occurred; otherwise, a memory error is signaled 1326.[0067] The updating of the parity bits corresponding to a memory word may be performed using the method of FIG. 4 if an embodiment of single soft error memory protection as described is implemented by the radar system. 
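The invariant underlying the method of FIG. 13 — each memory word is read and written an equal number of times while parity updates are enabled, so every parity bit returns to zero unless a soft error occurred — can be illustrated with a minimal single-group sketch (hypothetical names; the actual scheme maintains one parity word per group and bit position rather than a single accumulator):

```python
def frame_parity_check(values, flip_bit_at=None):
    """XOR every parity-enabled write and read into a parity word.
    Returns True if a memory error is detected (non-zero final parity)."""
    memory = {}
    parity = 0
    for addr, v in enumerate(values):    # writes: parity updated per write
        memory[addr] = v
        parity ^= v
    if flip_bit_at is not None:          # simulate a soft error between
        addr, bit = flip_bit_at          # the write and the matching read
        memory[addr] ^= 1 << bit
    for addr in memory:                  # matching reads: parity updated again
        parity ^= memory[addr]
    return parity != 0                   # non-zero parity -> error signaled
```

With no error, every value is XORed in exactly twice and the parity cancels to zero; a single flipped bit leaves a non-zero residue, which corresponds to the final check at steps 1324/1326.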
The updating of the parity bits corresponding to a memory word may be performed using the method of FIG. 7 if an embodiment of two soft error memory protection as described herein is implemented by the radar system.[0068] FIG. 14 is a block diagram of an example FMCW radar system 1400 configured to perform radar data memory protection as described herein. In this embodiment, the radar system is a radar integrated circuit (IC) suitable for use in embedded applications. The radar IC 1400 may include multiple transmit channels 1404 for transmitting FMCW signals and multiple receive channels 1402 for receiving the reflected transmitted signals. Any suitable number of receive channels and transmit channels may be provided, and the number of receive channels may differ from the number of transmit channels.[0069] A transmit channel includes a suitable transmitter and antenna. A receive channel includes a suitable receiver and antenna. Further, the receive channels 1402 are identical, each including a low-noise amplifier (LNA) 1405, 1407 to amplify the received radio frequency (RF) signal, a mixer 1406, 1408 to mix the transmitted signal with the received signal to generate an intermediate frequency (IF) signal (alternatively referred to as a dechirped signal, beat signal, or raw radar signal), a baseband bandpass filter 1410, 1412 for filtering the beat signal, a variable gain amplifier (VGA) 1414, 1416 for amplifying the filtered IF signal, and an analog-to-digital converter (ADC) 1418, 1420 for converting the analog IF signal to a digital IF signal.[0070] The receive channels 1402 are coupled to a digital front end (DFE) component 1422 to provide the digital IF signals to the DFE 1422. The DFE includes functionality to perform decimation filtering on the digital IF signals to reduce the sampling rate and bring the signal back to baseband. The DFE 1422 may also perform other operations on the digital IF signals, e.g., DC offset removal.
The DFE 1422 is coupled to the signal processor component 1444 to transfer the output of the DFE 1422 to the signal processor component 1444.[0071] The signal processor component 1444 is configured to perform signal processing on the digital IF signals of a frame of radar data to detect any objects in the FOV of the radar system 1400 and to identify the range, velocity and angle of arrival of detected objects. The signal processor component 1444 is coupled to the radar data storage component 1424 to read and write data to the radar data memory 1426 during the signal processing.[0072] To perform the signal processing, such as the above-described pre-processing and post processing, the signal processor component 1444 executes software instructions stored in the memory component 1448. These software instructions may include instructions to check the parity bits of the radar data storage component 1424 for memory errors after processing data corresponding to a frame of chirps. Further, the software instructions may cause the results of the signal processing to be ignored if a memory error is indicated.[0073] The signal processor component 1444 may include any suitable processor or combination of processors. For example, the signal processor component 1444 may be a digital signal processor, an MCU, an FFT engine, a DSP+MCU processor, a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC).[0074] The radar data storage component 1424 provides protected radar data storage according to an embodiment of the radar data memory protection techniques described herein. The radar data storage component 1424 includes a parity management component 1425 and a radar data memory component 1426. The radar data memory component 1426 may be any suitable random access memory (RAM), e.g., static RAM. 
The radar data memory component 1426 includes sufficient memory to store radar data corresponding to the largest expected frame of chirps.[0075] The parity management component 1425 implements parity updating for the radar data memory component 1426. In some embodiments, the parity management component 1425 implements an embodiment of the above-described parity scheme for protection against a single soft error in the radar data memory component 1426. In such embodiments, a parity bit is allocated for the protection of each bit position of the memory words assigned to a group. Thus, if a memory word is Nw bits, Nw parity bits are needed for a group. The parity management component 1425 includes sufficient storage for the Nw-bit parity information for each group. Further, the parity management component 1425 includes functionality to implement an embodiment of the method for updating parity words of FIG. 4.[0076] In some embodiments, the parity management component 1425 implements an embodiment of the above-described parity scheme for protection against two soft errors in the radar data memory component 1426. In such embodiments, as described hereinabove, a column of parity bits is allocated for the protection of each bit position column of the memory words assigned to a group. Thus, if a memory word is Nw bits, Nw columns of parity bits are needed for a group. The number of parity bits P in a column of parity bits depends on the number of memory words in a group. The choice of the value of P is described hereinabove. Thus, P × Nw parity bits are needed for a group. The parity management component 1425 includes sufficient storage for the P × Nw-bit parity information for each group. Further, the parity management component 1425 includes functionality to implement an embodiment of the method for updating parity words of FIG. 7. In some embodiments, the parity management component 1425 may be implemented by the parity management component 1000 of FIG.
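The parity storage sizing described above can be captured in a small helper (illustrative only; P = 1 corresponds to the single-soft-error scheme, P > 1 to the two-soft-error scheme):

```python
def parity_bits_per_group(word_bits, P=1):
    # Single-error scheme: one parity bit per bit position -> Nw bits per group.
    # Two-error scheme: a column of P parity bits per bit position -> P * Nw bits.
    return P * word_bits

def total_parity_bits(num_groups, word_bits, P=1):
    # Total parity storage across all groups.
    return num_groups * parity_bits_per_group(word_bits, P)
```

For example, with four groups of 32-bit words, single-error protection needs 128 parity bits in total, while a two-error scheme with P = 3 needs 96 bits per group.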
10 appropriately configured for the amount of memory in the radar data memory component 1426.[0077] In some embodiments, the parity management component 1425 includes an input for an enable flag (not shown). In such embodiments, the parity management component 1425 performs parity updates when words of the radar data memory component 1426 are read or written unless this flag is set to indicate parity updates are not to be performed. The signal processing software executed by the signal processing component 1444 may set this flag as needed during the processing of the data corresponding to a frame of chirps to ensure that the parity bits corresponding to each memory word are updated by an equal number of reads and writes. For example, as described in reference to the method of FIG. 13, the parity updating may be disabled during part of the post-processing.[0078] The on-chip memory component 1448 provides on-chip storage (e.g., a computer readable medium) that may be useful to communicate data between the various components of the radar IC 1400, to store software programs executed by processors on the radar IC 1400, etc. The on-chip memory component 1448 may include any suitable combination of read-only memory and/or random access memory (RAM), e.g., static RAM.[0079] The direct memory access (DMA) component 1446 is coupled to the radar data storage component 1424 to perform data transfers between the radar data memory 1426 and the signal processor component 1444.[0080] The control component 1427 includes functionality to control the operation of the radar IC 1400. For example, the control component 1427 may include an MCU that executes software to control the operation of the radar IC 1400.[0081] The serial peripheral interface (SPI) 1428 provides an interface for external communication of the results of the radar signal processing. 
For example, the results of the signal processing performed by the signal processor component 1444 may be communicated to another processor for application specific processing such as object tracking, rate of movement of objects, direction of movement, etc.[0082] The programmable timing engine 1442 includes functionality to receive chirp parameter values for a sequence of chirps in a radar frame from the control module 1427 and to generate chirp control signals that control the transmission and reception of the chirps in a frame based on the parameter values. For example, the chirp parameters are defined by the radar system architecture and may include a transmitter enable parameter for indicating which transmitters to enable, a chirp frequency start value, a chirp frequency slope, an analog-to-digital (ADC) sampling time, a ramp end time, a transmitter start time, etc.[0083] The radio frequency synthesizer (RF SYNTH) 1430 includes functionality to generate FMCW signals for transmission based on chirp control signals from the timing engine 1442. In some embodiments, the RF SYNTH 1430 includes a phase locked loop (PLL) with a voltage controlled oscillator (VCO).[0084] The multiplexor 1432 is coupled to the RF SYNTH 1430 and the input buffer 1436. The multiplexor 1432 is configurable to select between signals received in the input buffer 1436 and signals generated by the RFSYNTH 1430. For example, the output buffer 1438 is coupled to the multiplexor 1432 and may be used to transmit signals selected by the multiplexor 1432 to the input buffer of another radar IC. [0085] The clock multiplier 1440 increases the frequency of the transmission signal to the frequency of the mixers 1406, 1408. 
The clean-up PLL (phase locked loop) 1434 operates to increase the frequency of the signal of an external low frequency reference clock (not shown) to the frequency of the RF SYNTH 1430 and to filter the reference clock phase noise out of the clock signal.[0086] Example embodiments have been described herein assuming that the number of groups of radar data memory words is four, i.e., M = 4, but other embodiments are possible in which the number of groups is more or less than four.[0087] Also, example embodiments have been described herein assuming that no bit packing or memory compression is used in which only a subset of bits in a memory word is accessed, but other embodiments are possible in which bit packing/memory compression is used. For example, the data used for parity updating may be zero-filled to allow the XOR operation with the full parity word. For example, if bits 8 to 15 of a 32-bit word are accessed, the 32-bit data used for parity updating can include these bits with zeros in the other bit positions. In another example, only the parity bits corresponding to the bits accessed are updated.
For example, if bits 8 to 15 of a 32-bit word are accessed, then bits 8 to 15 of the parity word or words are updated.[0088] Other example embodiments have been described herein assuming a diagonal grouping from top left to bottom right, but other embodiments are possible in which the diagonal grouping is from bottom left to top right.[0089] In some example embodiments, the radar system is an embedded radar system in a vehicle, but other embodiments are possible for other applications of embedded radar systems, such as surveillance and security applications, and maneuvering a robot in a factory or warehouse.[0090] More example embodiments have been described herein in the context of an FMCW radar system, but other embodiments are possible for other radar systems in which the signal processing of radar signals is performed such that each memory word is written and read an equal number of times.[0091] Further example embodiments of the memory protection have been described herein in the context of an FMCW radar system, but other embodiments are possible for other signal processing systems used in safety-critical applications in which a large amount of data is stored for signal processing, and the data accesses corresponding to the signal processing are such that an equal number of read and write accesses per memory word can be ensured, either with the parity enable flag (in some embodiments) or without the parity enable flag (in some embodiments).[0092] In some example embodiments, memory is grouped such that a single soft error affects only one memory word per group, a checksum-based technique is used for protecting against a single soft error, and a Hamming code technique is used for protecting against two soft errors. But other embodiments are possible in which a less efficient memory word grouping is used. For example, a grouping may be used such that a single soft error affects two memory words in a group.
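The zero-fill approach for bit-packed accesses described above — keeping only the accessed bits so that a full-width XOR leaves the other parity bits unchanged — can be sketched as follows (the function name is illustrative, not from the specification):

```python
def zero_filled_for_parity(word, lo_bit, hi_bit):
    """Keep only bits lo_bit..hi_bit of the word, zero-filling the rest, so
    the full-width XOR updates only the parity bits for the accessed bits."""
    mask = ((1 << (hi_bit - lo_bit + 1)) - 1) << lo_bit
    return word & mask
```

For an access to bits 8 to 15 of a 32-bit word, the masked value has zeros everywhere except those eight bit positions, so XOR-ing it into a parity word touches only the corresponding parity bits.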
In some embodiments, a Hamming code may be used to protect against a single soft error.[0093] Although method steps may be presented and described herein in a sequential fashion, one or more of the steps shown in the figures and described herein may be performed concurrently, may be combined, and/or may be performed in a different order than the order shown in the figures and/or described herein.[0094] Components in radar systems may be referred to by different names and/or may be combined in ways not shown herein without departing from the described functionality. In this description, the term "couple" and derivatives thereof are intended to mean an indirect, direct, optical, and/or wireless electrical connection. For example, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, and/or through a wireless electrical connection.[0095] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. |
The invention relates to a memory module and memory packages including graphene layers for thermal management. Systems, apparatuses, and methods relating to memory devices and packaging are described. A device, such as a dual inline memory module (DIMM) or other electronic device package, may include a substrate with a layer of graphene configured to conduct thermal energy (e.g., heat) away from components mounted or affixed to the substrate. In some examples, a DIMM includes an uppermost or top layer of graphene that is exposed to the air and configured to allow memory devices (e.g., DRAMs) to be soldered to the conducting pads of the substrate. The graphene may be in contact with parts of the memory device other than the electrical connections with the conducting pads and may thus be configured as a heat sink for the device. Other thin, conductive layers may be used in addition to or as an alternative to graphene. Graphene may be complementary to other heat sink mechanisms.
1. A device, comprising: a substrate having an uppermost layer of graphene, the uppermost layer of graphene including a plurality of openings corresponding to and exposing a plurality of substrate pads; and a memory die disposed on the uppermost layer of graphene above the substrate and having a plurality of electrical connections, each of the plurality of electrical connections connected to a corresponding one of the plurality of substrate pads.
2. The device of claim 1, wherein the uppermost layer of graphene is in contact with the memory die.
3. The device of claim 1, wherein the uppermost layer of graphene is configured to transfer thermal energy from the memory die.
4. The device of claim 1, wherein the uppermost layer of graphene is electrically insulated from the plurality of electrical connections.
5. The device of claim 1, wherein the substrate is a printed circuit board.
6. The device of claim 1, wherein the uppermost layer of graphene comprises a plurality of graphene monolayers.
7. The device of claim 1, wherein the plurality of electrical connections includes a plurality of solder joints.
8. The device of claim 1, wherein the plurality of substrate pads is a first plurality of substrate pads, wherein the memory die is a first memory die, wherein the plurality of electrical connections is a first plurality of electrical connections, and wherein the plurality of openings is a first plurality of openings, the device further comprising: a second memory die disposed on the substrate and having a second plurality of electrical connections, each of the second plurality of electrical connections connected to a corresponding one of a second plurality of substrate pads, wherein the uppermost layer of graphene includes a second plurality of openings corresponding to and exposing the second plurality of substrate pads.
9. The device of claim 1, wherein the device is a dual in-line memory module (DIMM), and wherein the memory die is a DRAM device.
10. A device, comprising: a printed circuit board (PCB) including a graphene layer, an edge connector, and a plurality of electrical pads operably coupled to the edge connector; and a plurality of memory devices, each of the memory devices including a plurality of electrical contacts, each of the electrical contacts operably coupled to a corresponding one of the plurality of electrical pads, wherein the graphene layer is in contact with the plurality of memory devices and is configured to transfer thermal energy from the memory devices, the graphene layer including a plurality of openings corresponding to and exposing the plurality of electrical pads for each of the plurality of memory devices.
11. The device of claim 10, wherein: the plurality of electrical pads is a first plurality of electrical pads located on a first side of the PCB; the plurality of memory devices is a first plurality of memory devices; the graphene layer is a first graphene layer; the PCB includes a second graphene layer on a second side of the PCB opposite the first side and a second plurality of electrical pads operably coupled to the edge connector; the device further comprises a second plurality of memory devices, each of the second plurality of memory devices including a plurality of electrical contacts, each of the electrical contacts operably coupled to a corresponding one of the second plurality of electrical pads; and the second graphene layer is in contact with the second plurality of memory devices and is configured to transfer thermal energy from the second plurality of memory devices, the second graphene layer including a plurality of openings corresponding to and exposing the second plurality of electrical pads.
12. The device of claim 10, wherein the graphene layer is electrically insulated from the plurality of electrical pads and the plurality of electrical contacts of the plurality of memory devices.
13. The device of claim 10, wherein the graphene layer includes a plurality of graphene monolayers.
14. The device of claim 10, wherein the plurality of electrical contacts of the plurality of memory devices are operably coupled to corresponding ones of the plurality of electrical pads through solder joints.
15. The device of claim 10, wherein the graphene layer is the uppermost layer of the PCB.
16. A semiconductor device package, comprising: a substrate including a plurality of substrate pads; a semiconductor die including a plurality of electrical contacts, each of the plurality of electrical contacts operably coupled to a corresponding one of the plurality of substrate pads; and a graphene layer located between the substrate and the semiconductor die and in contact with the semiconductor die, the graphene layer including a plurality of openings corresponding to and exposing the plurality of substrate pads, and the graphene layer configured to transfer thermal energy from the semiconductor die.
17. The semiconductor device package of claim 16, wherein the semiconductor die comprises one or more memory dies, controller dies, or some combination thereof.
18. The semiconductor device package of claim 16, further comprising a thermal structure in contact with the graphene layer and configured to extract heat from the semiconductor device package.
19. The semiconductor device package of claim 16, wherein the graphene layer is electrically insulated from the plurality of electrical contacts and the plurality of substrate pads.
20. The semiconductor device package of claim 16, wherein the graphene layer includes a plurality of graphene monolayers.
Memory module and memory package including graphene layer for thermal management

Technical field
The present disclosure relates generally to memory modules and memory packages, and more specifically to memory modules and memory packages including graphene layers for thermal management.

Background
Semiconductor memories are generally provided in memory modules or memory packages for use in system applications. As memory devices are provided with greater capacity and faster performance, the amount of heat generated poses challenges to memory module and package design. Heat must be transferred from the memory devices to a heat dissipation structure, which can be cooled by, for example, forced air.

Summary of the invention
In one aspect, the present application provides a device including: a substrate having an uppermost layer of graphene, the uppermost layer of graphene including a plurality of openings corresponding to and exposing a plurality of substrate pads; and a memory die disposed on the uppermost layer of graphene above the substrate and having a plurality of electrical connections, each of the plurality of electrical connections soldered to a corresponding one of the plurality of substrate pads.
In another aspect, the present application provides an apparatus including: a printed circuit board (PCB) including a graphene layer, an edge connector, and a plurality of electrical pads operably coupled to the edge connector; and a plurality of memory devices, each of the memory devices including a plurality of electrical contacts, each of the electrical contacts operably coupled to a corresponding one of the plurality of electrical pads, wherein the graphene layer is in contact with the plurality of memory devices and is configured to transfer thermal energy from the memory devices, the graphene layer including a plurality of openings corresponding to and exposing the plurality of electrical pads of each of the plurality of memory devices.
In yet another aspect, the present application provides a semiconductor device package including: a substrate including a plurality of substrate pads; a semiconductor die including a plurality of electrical contacts, each of the plurality of electrical contacts operably coupled to a corresponding one of the plurality of substrate pads; and a graphene layer located between the substrate and the semiconductor die and in contact with the semiconductor die, the graphene layer including a plurality of openings corresponding to and exposing the plurality of substrate pads, and the graphene layer configured to transfer thermal energy from the semiconductor die.

Description of the drawings
FIGS. 1A and 1B are simplified side and cross-sectional views of a memory module having a graphene layer for thermal management according to an embodiment of the present technology.
FIG. 2 is a more detailed cross-sectional view of a memory module having a graphene layer for thermal management according to an embodiment of the present technology.
FIG. 3 is a simplified cross-sectional view of a semiconductor device package having a graphene layer for thermal management according to an embodiment of the present technology.

Detailed description
As discussed above, the thermal management of memory packages and modules poses a variety of challenges, especially in view of the heat generated by higher capacity and higher bandwidth memory devices. For example, a memory module (e.g., a DDR4 DIMM) may include a printed circuit board (PCB) with an edge connector, a plurality of memory devices (e.g., DRAM devices), and a registered clock driver (RCD). In order to transfer thermal energy from the memory devices and the RCD during operation, conventional approaches have used a heat conduction structure (e.g., a heat sink) attached to the memory devices and/or RCD.
The heat sink may comprise a metal or other thermally conductive structure configured to increase the surface area available for heat dissipation and heat exchange with a cooling gas (e.g., a surface area larger than the outer surfaces of the memory devices and other heat-generating components).
The aforementioned approaches to thermal management suffer from several shortcomings that limit their performance and applicability. Heat dissipation structures can be expensive, can consume too much space (e.g., reducing airflow between adjacent memory modules), and tend to provide a direct thermal connection only to the "back" side of the heat-generating components (i.e., the active circuitry of the memory devices and RCD is usually located on the same side as their electrical contacts, so that the side of the heat-generating device facing the PCB is the side where heat is generated).
Several embodiments of the present technology can provide improved thermal management for memory modules and memory packages by providing a graphene heat transfer layer between a heat-generating semiconductor device (for example, a memory device, RCD, controller, etc.) and the substrate or PCB to which it is attached. For example, some embodiments of the present technology relate to a memory module including: a substrate having a plurality of substrate pads; and a memory die disposed on the substrate and having a plurality of electrical connections. Each of the plurality of electrical connections is in contact with a corresponding one of the plurality of substrate pads. The substrate has an uppermost layer of graphene, and the uppermost layer of graphene includes a plurality of openings corresponding to and exposing the plurality of substrate pads.
FIGS. 1A and 1B are simplified side and cross-sectional views of a memory module having a graphene layer for thermal management according to an embodiment of the present technology. As can be seen with reference to FIG.
1A, the memory module 100 includes a printed circuit board (PCB) 101 having an edge connector 102, a plurality of memory devices 103 (for example, DRAM devices), and a registered clock driver (RCD) 104. In order to transfer thermal energy from the memory devices 103 and the RCD 104 during operation, a graphene layer (shown in dotted shading) is provided as the uppermost layer of the substrate 101, located below and in contact with the memory devices 103 and the RCD 104. This arrangement can be more easily understood with reference to the cross-sectional view of the memory module 100 illustrated in FIG. 1B, in which the graphene layer 106 is shown as the uppermost layer of the substrate 101, extending between the substrate 101 and the memory devices 103. As can be further seen with reference to FIG. 1B, the memory module 100 is shown as a DIMM with memory devices 103 on both sides thereof. Accordingly, the graphene layer 106 may be provided on both sides of the substrate 101 to provide thermal management for the semiconductor dies on both sides thereof. Because the graphene layer 106 extends between the memory devices 103 and the substrate 101, the graphene layer can be placed in contact with the front side of the memory devices 103 (and other semiconductor dies). Since the front side of a die is usually where the most heat is generated, this provides improved heat transfer efficiency. Graphene has very high thermal conductivity, even when provided in very thin layers. For example, a single monolayer of graphene can provide a thermal conductivity of 300 to 1500 W/mK. In some embodiments of the present technology, the graphene layer 106 may be correspondingly thin, e.g., a single monolayer (e.g., about 25 μm thick).
In other embodiments, the graphene layer 106 may include more than one monolayer and, in some embodiments, may be up to 1000 μm thick. According to an aspect of the present technology, because graphene is also highly electrically conductive, electrical insulation of the graphene layer from the circuit elements of the memory module is provided. In this regard, FIG. 2 is a more detailed cross-sectional view of the memory module 200 according to an embodiment of the present technology. As can be seen with reference to FIG. 2, the graphene layer 206 of the memory module 200 includes a plurality of openings that are aligned with and expose a corresponding plurality of electrical pads (e.g., substrate pads) 208 in the PCB 201. The electrical pads 208 provide electrical connections to corresponding electrical contacts 209 on the memory die 203 via solder joints 207 and, through traces and vias, can provide connections to the edge connector 202 of the module 200. The openings in the graphene layer 206 provide electrical insulation from the pads, contacts, and solder joints (e.g., preventing inadvertent electrical contact by being set back a sufficient distance from these circuit elements). The openings in the graphene layer 206 may further include a dielectric liner or other insulator (not shown) between the graphene and the circuit elements. According to one aspect of the present technology, the graphene layer 206 may be applied to the PCB 201 in any of a variety of ways known to those skilled in the art, for example, physical vapor deposition (PVD), chemical vapor deposition (CVD), lamination of a previously grown graphene film, etc.
In one embodiment of the present technology, the graphene layer 206 may be applied before forming the substrate pads 208, so that the same masking and etching or drilling operations used to form the pads may also be used to form the openings.

The memory module 200 may further include a thermal structure 205 configured to radiate or otherwise dissipate (e.g., through heat exchange with a cooling gas) heat generated by the semiconductor components of the memory module 200. Unlike the heat sink 105 shown in the memory module 100 of FIGS. 1A and 1B, the thermal structure 205 of the memory module 200 is disposed on the graphene layer 206 and is accordingly laterally offset from the memory devices 203. This arrangement allows the use of thermal structures 205 that might otherwise be too large for systems with limited space.

Although in the foregoing example embodiments the memory module has been shown and described with reference to a dual in-line memory module (DIMM) having DRAM memory devices, various other embodiments of the present technology are also applicable to other semiconductor device packages having different formats and involving different semiconductor devices. For example, FIG. 3 is a simplified cross-sectional view of a semiconductor device package having a graphene layer for thermal management according to an embodiment of the present technology. The semiconductor device package 300 includes a substrate 301 and at least one semiconductor device 303 (for example, a memory device, a controller, a processor, or any other integrated circuit device).
To transfer thermal energy away from the semiconductor device 303 during operation, a graphene layer 306 is provided between the substrate 301 and the semiconductor device 303, located under and in contact with the semiconductor device 303.

The graphene layer 306 of the semiconductor device package 300 includes a plurality of openings that are aligned with, and expose, a corresponding plurality of electrical pads (for example, substrate pads) 308 in the substrate 301. The electrical pads 308 provide electrical connections to corresponding electrical contacts 309 on the semiconductor device 303 via interconnects 307 (e.g., solder joints) and, through traces and vias, connections may be provided to the package contacts 310 of the package 300. The openings in the graphene layer 306 provide electrical insulation from the pads, contacts, and solder joints (e.g., by being set back a sufficient distance from these circuit elements to prevent inadvertent electrical contact). The openings in the graphene layer 306 may further include a dielectric liner or other insulator between the graphene and the circuit elements (not shown).

In the operation of the package 300, the graphene layer 306 is configured to transfer thermal energy from the semiconductor device 303 to a thermal structure 305 (e.g., a package lid). The thermal structure 305 may comprise a metal or other thermally conductive structure configured to increase the surface area available for heat dissipation and/or to allow heat exchange with a cooling gas (e.g., over a surface area larger than the outer surface of the semiconductor device 303).

It should be noted that the above methods describe possible implementations; the operations and steps may be rearranged or otherwise modified, and other implementations are possible.
In addition, embodiments from two or more of these methods may be combined.

This document describes specific details of several embodiments of semiconductor devices. The term "semiconductor device" generally refers to a solid-state device that includes semiconductor material. For example, a semiconductor device may include a semiconductor substrate, a wafer, or a die singulated from a wafer or substrate. Throughout this disclosure, semiconductor devices are generally described in the context of semiconductor dies; however, semiconductor devices are not limited to semiconductor dies.

The term "semiconductor device package" may refer to an arrangement in which one or more semiconductor devices are incorporated into a common package. A semiconductor package may include a housing or casing that partially or completely encloses at least one semiconductor device. A semiconductor device package may also include an interposer substrate that carries one or more semiconductor devices and is attached to or otherwise incorporated into the housing. The term "semiconductor device assembly" can refer to an assembly of one or more semiconductor devices, semiconductor device packages, and/or substrates (e.g., interposers, supports, or other suitable substrates). Semiconductor device assemblies can be manufactured, for example, in discrete package form, in strip or matrix form, and/or in wafer panel form.

As used herein, the terms "vertical," "lateral," "upper," and "lower" may refer to the relative direction or position of features in a semiconductor device or device assembly in view of the orientation shown in the drawings. For example, "upper" or "uppermost" can refer to a feature positioned closer to, or closest to, the top of the page than another feature or part of the same feature, respectively.
However, these terms should be interpreted broadly to include semiconductor devices having other orientations, such as inverted or inclined orientations, in which top/bottom, over/under, above/below, up/down, and left/right can be interchanged depending on the orientation.

The devices discussed herein, including memory devices, may be formed on semiconductor substrates or dies, such as silicon, germanium, silicon-germanium alloys, gallium arsenide, gallium nitride, and so forth. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or an epitaxial layer of semiconductor material on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, may be controlled through doping with various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping means.

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. Other examples and implementations are within the scope of this disclosure and the appended claims. Features implementing the functions may also be physically located at various positions, including being distributed such that portions of the functions are implemented at different physical locations.

As used herein, including in the claims, "or" as used in a list of items (for example, a list of items beginning with a phrase such as "at least one of" or "one or more of") indicates an inclusive list, such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions.
For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without departing from the scope of the invention. In the foregoing description, numerous specific details are discussed to provide a thorough and enabling description of embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations typically associated with memory systems and devices are not shown or described in detail to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
A method for providing a safety margin for a reverse data link encapsulation packet in a mobile display digital interface (MDDI) communication system, the method comprising the steps of calculating a round trip delay between a host and a client with a round trip delay measurement packet; providing a first predetermined time period for a host driver to enable; providing a second predetermined time period for a client driver to disable; and introducing a turnaround 2 field length in the reverse data link encapsulation packet that is greater than the sum of the calculated round trip delay, the first predetermined time period and the second predetermined time period. A corresponding system is also disclosed.
A method for providing a safety margin for a reverse data link encapsulation packet in a mobile display digital interface (MDDI) communication system, the method comprising the steps of:
calculating a round trip delay between a host and a client with a round trip delay measurement packet;
providing a first predetermined time period for a host driver to enable;
providing a second predetermined time period for a client driver to disable; and
introducing a turnaround 2 field length in the reverse data link encapsulation packet that is greater than the sum of the calculated round trip delay, the first predetermined time period and the second predetermined time period.

A system for providing a safety margin for a reverse data link encapsulation packet in a mobile display digital interface (MDDI) communication system, the system comprising:
means for calculating a round trip delay between a host and a client with a round trip delay measurement packet;
means for providing a first predetermined time period for a host driver to enable;
means for providing a second predetermined time period for a client driver to disable; and
means for introducing a turnaround 2 field length in the reverse data link encapsulation packet that is greater than the sum of the calculated round trip delay, the first predetermined time period and the second predetermined time period.

A computer program product, comprising:
a computer readable medium comprising code for providing a safety margin for a reverse data link encapsulation packet in a mobile display digital interface (MDDI) communication system, the code comprising:
code for causing a round trip delay between a host and a client to be calculated with a round trip delay measurement packet;
code for causing a first predetermined time period for a host driver to enable to be provided;
code for causing a second predetermined time period for a client driver to disable to be provided; and
code for causing a turnaround 2 field length that is greater than the sum of the calculated round trip delay, the first predetermined time period and the second predetermined time period to be introduced in the reverse data link encapsulation packet.
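The claimed constraint reduces to a simple inequality on the turnaround 2 field length. A minimal sketch of that computation, where the function name, the unit of measure (e.g., bit times), and the extra allowance are illustrative assumptions rather than anything specified by the claims, which only require the length to exceed the sum:

```python
def turnaround2_length(round_trip_delay, host_enable_time, client_disable_time,
                       extra_margin=1):
    """Smallest turnaround 2 field length satisfying the claimed constraint:
    strictly greater than the sum of the measured round-trip delay, the
    host-driver enable time, and the client-driver disable time (all in the
    same units, e.g. bit times). extra_margin (> 0) is an assumed allowance.
    """
    if extra_margin <= 0:
        raise ValueError("extra_margin must be positive to exceed the sum")
    return round_trip_delay + host_enable_time + client_disable_time + extra_margin

# e.g. a measured 50-bit-time round trip with 8-bit-time driver
# enable and disable windows yields a turnaround 2 length of at least 67.
length = turnaround2_length(50, 8, 8)
```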
The present Application for Patent claims priority to Provisional Application No. 60/527,996, entitled "Switchable Threshold Differential Interface," filed December 8, 2003.

BACKGROUND

Field

Embodiments of the invention in this disclosure relate to a digital signal protocol and process for communicating or transferring signals between a host device and a client device at high data rates. More specifically, the disclosure relates to a technique for transferring multimedia and other types of digital signals from a host or controller device to a client device for presentation or display to an end user, using a low-power, high-data-rate transfer mechanism having internal and external device applications.

Background

Computers, electronic game related products, and various video technologies (for example, DVDs and high-definition VCRs) have advanced significantly over the last few years to provide for presentation of increasingly high-resolution still, video, video-on-demand, and graphics images, even when including some types of text, to end users of such equipment. These advances in turn mandated the use of higher resolution electronic viewing devices such as high-definition video monitors, HDTV monitors, or specialized image projection elements. Combining such visual images with high-definition or high-quality audio data, such as when using CD-type sound reproduction, DVDs, surround sound, and other devices also having associated audio signal outputs, is used to create a more realistic, content-rich, or true multimedia experience for an end user. In addition, highly mobile, high-quality sound systems and music transport mechanisms, such as MP3 players, have been developed for audio-only presentations to end users.
This has resulted in increased expectations among typical users of commercial electronic devices, from computers to televisions and even telephones, who are now accustomed to and expect high or premium quality output.

In a typical video presentation scenario involving an electronics product, video data is typically transferred using current techniques at a rate that could best be termed slow or medium, being on the order of one to tens of kilobits per second. This data is then either buffered or stored in transient or longer-term memory devices for delayed (later) playout on a desired viewing device. For example, images may be transferred "across" or using the Internet using a program resident on a computer having a modem or other type of Internet connection device, to receive or transmit data useful in digitally representing an image. A similar transfer can take place using wireless devices such as portable computers equipped with wireless modems, wireless Personal Data Assistants (PDAs), or wireless telephones.

Once received, the data is stored locally in memory elements, circuits, or devices, such as RAM or flash memory, including internal or external storage devices such as small hard drives, for playback. Depending on the amount of data and the image resolution, the playback might begin relatively quickly, or be presented with a longer-term delay. That is, in some instances, image presentation allows for a certain degree of real-time playback for very small or low-resolution images not requiring much data, or uses some type of buffering so that, after a small delay, some material is presented while more material is being transferred. Provided there are no interruptions in the transfer link, or interference from other systems or users relative to the transfer channel being used, once the presentation begins the transfer is reasonably transparent to the end user of the viewing device.
Naturally, where multiple users share a single communication path, such as a wired Internet connection, transfers can be interrupted or slower than desired.

The data used to create either still images or motion video are often compressed using one of several well-known techniques, such as those specified by the Joint Photographic Experts Group (JPEG), the Motion Picture Experts Group (MPEG), and other well-known standards organizations or companies in the media, computer, and communications industries, to speed the transfer of data over a communication link. This allows transferring images or data faster by using a smaller number of bits to transfer a given amount of information.

Once the data is transferred to a "local" device such as a computer having a storage mechanism such as memory, or magnetic or optical storage elements, or to other recipient devices, the resulting information is uncompressed (or played using special decoding players), decoded if needed, and prepared for appropriate presentation based on the corresponding available presentation resolution and control elements. For example, a typical computer video resolution, in terms of a screen resolution of X by Y pixels, typically ranges from as low as 480x640 pixels, through 600x800, to 1024x1024, although a variety of other resolutions are generally possible, either as desired or needed.

Image presentation is also affected by the image content and the ability of given video controllers to manipulate the image in terms of certain predefined color levels or color depth (bits per pixel used to generate colors) and intensities, and any additional overhead bits being employed.
For example, a typical computer presentation would anticipate anywhere from around 8 to 32, or more, bits per pixel to represent various colors (shades and hues), although other values are encountered.

From the above values, one can see that a given screen image is going to require the transfer of anywhere from 2.45 Megabits (Mb) to around 33.55 Mb of data over the range from the lowest to highest typical resolutions and depths, respectively. When viewing video or motion-type images at a rate of 30 frames per second, the amount of data required is around 73.7 to 1,006 Megabits per second (Mbps), or around 9.21 to 125.75 Megabytes per second (MBps). In addition, one may desire to present audio data in conjunction with images, such as for a multimedia presentation, or as a separate high-resolution audio presentation, such as CD-quality music. Additional signals dealing with interactive commands, controls, or signaling may also be employed, and each of these options adds even more data to be transferred. Furthermore, newer transmission techniques involving High Definition (HD) television and movie recordings may add even more data and control information. In any case, when one desires to transfer high-quality or high-resolution image data and high-quality audio information or data signals to an end user to create a content-rich experience, a high-data-rate link is required between presentation elements and the source or host device that is configured to provide such types of data.

Data rates of around 115 kilobytes per second (KBps) or 920 kilobits per second (Kbps) can be routinely handled by some modern serial interfaces. Other interfaces such as USB serial interfaces can accommodate data transfers at rates as high as 12 MBps, and specialized high-speed transfers such as those configured using the Institute of Electrical and Electronics Engineers (IEEE) 1394 standard can occur at rates on the order of 100 to 400 MBps.
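The bandwidth figures above follow directly from pixels × bits per pixel × frame rate. A short sketch of that arithmetic (the function names are illustrative, not from the disclosure):

```python
def frame_bits(width_px, height_px, bits_per_pixel):
    """Bits needed to represent one uncompressed frame."""
    return width_px * height_px * bits_per_pixel

def stream_rate_mbps(width_px, height_px, bits_per_pixel, frames_per_sec=30):
    """Raw (uncompressed, no overhead) video stream rate in megabits/second."""
    return frame_bits(width_px, height_px, bits_per_pixel) * frames_per_sec / 1e6

# The document's low and high ends of the typical range:
low_frame = frame_bits(640, 480, 8)            # ~2.45 Mb per frame
high_frame = frame_bits(1024, 1024, 32)        # ~33.55 Mb per frame
low_rate = stream_rate_mbps(640, 480, 8)       # ~73.7 Mbps at 30 fps
high_rate = stream_rate_mbps(1024, 1024, 32)   # ~1,006 Mbps at 30 fps
```

Dividing the rates by 8 gives the corresponding byte rates of roughly 9.21 to 125.75 MBps quoted above.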
Unfortunately, these rates fall short of the desired high data rates discussed above, which are contemplated for use with future wireless data devices and other services for providing high-resolution, content-rich output signals for driving portable video displays or audio devices. This includes computers for business and other presentations, gaming devices, and so forth. In addition, these interfaces require the use of a significant amount of host or system and client software to operate. Their software protocol stacks also create an undesirably large amount of overhead, especially where mobile wireless devices or telephone applications are contemplated. Such devices have severe memory and power consumption limitations, as well as already taxed computational capacity. Furthermore, some of these interfaces utilize bulky cables which are too heavy and unsatisfactory for highly aesthetic-oriented mobile applications, complex connectors which add cost, or simply consume too much power.

There are other known interfaces such as the Analog Video Graphics Adapter (AVGA), Digital Video Interactive (DVI), or Gigabit Video Interface (GVIF) interfaces. The first two of these are parallel-type interfaces which process data at higher transfer rates, but also employ heavy cables and consume large amounts of power, on the order of several watts. Neither of these characteristics is amenable to use with portable consumer electronic devices. Even the third interface consumes too much power and uses expensive or bulky connectors.

For some of the above interfaces, and other very high rate data systems/protocols or transfer mechanisms associated with data transfers for fixed-installation computer equipment, there is another major drawback. Accommodating the desired data transfer rates also requires substantial amounts of power and/or operation at high current levels.
This greatly reduces the usefulness of such techniques for highly mobile, consumer-oriented products.

Generally, accommodating such data transfer rates using alternatives such as optical fiber type connections and transfer elements also requires a number of additional converters and elements that introduce much more complexity and cost than desired for a truly commercial, consumer-oriented product. Aside from the generally expensive nature of optical systems as yet, their power requirements and complexity prevent general use for lightweight, low-power, portable applications.

What has been lacking in the industry for portable, wireless, or mobile applications is a technique to provide a high-quality presentation experience, whether it be audio, video, or multimedia based, for highly mobile end users. That is, when using portable computers, wireless phones, PDAs, or other highly mobile communication devices or equipment, the current video and audio presentation systems or devices being used simply cannot deliver output at the desired high quality level. Often, the perceived quality that is lacking is the result of the unobtainable high data rates needed to transfer the high-quality presentation data. This can include both transfer to more efficient, advanced, or feature-laden external devices for presentation to an end user, and transfer between hosts and clients internal to portable devices such as computers, gaming machines, and wireless devices such as mobile telephones.

In this latter case, there have been great strides made in adding higher and higher resolution internal video screens, and other specialty input and/or output devices and connections, to wireless devices like so-called third-generation telephones, and to so-called laptop computers.
However, internal data busses and connections may include bridging across rotating or sliding hinge or hinge-like structures which mount or connect video screens or other elements to the main housing, where the host and/or various other control elements and output components reside. These generally must be high-bandwidth or high-throughput interfaces. It is very difficult to construct high-throughput data transfer interfaces using prior techniques, which can require up to 90 conductors, or more, to achieve the desired throughput on, say, a wireless telephone, as one example. Current solutions typically involve employing parallel-type interfaces with relatively high signal levels, which can cause the interconnection to be more costly, less reliable, and a potential source of radiated emissions which could interfere with device functions. This presents many manufacturing, cost, and reliability challenges to overcome.

Such issues and requirements are also being seen in fixed-location installations where communication or computing type devices, as one example, are added to appliances and other consumer devices to provide advanced data capabilities, Internet and data transfer connections, or built-in entertainment. Another example would be airplanes and busses where individual video and audio presentation screens are mounted in seat backs. However, in these situations it is often more convenient, efficient, and easily serviceable to have the main storage, processing, or communication control elements located a distance from the visible screens or audio outputs, with an interconnecting link or channel for the presentation of information.
This link will need to handle a significant amount of data to achieve the desired throughput, as discussed above.

Therefore, a new transfer mechanism is needed to increase data throughput between host devices providing the data and client display devices or elements presenting an output to end users.

Applicants have proposed such new transfer mechanisms in U.S. Patent Application Serial No. 10/020,520, filed December 14, 2001, now U.S. Patent No. 6,760,772, issued July 6, 2004 to Zou et al., and U.S. Patent Application Serial No. 10/236,657, filed September 6, 2002, both entitled "Generating And Implementing A Communication Protocol And Interface For High Data Rate Signal Transfer," as well as in U.S. Application Serial No. 10/860,116, filed June 2, 2004, entitled "Generating and Implementing a Signal Protocol and Interface for Higher Data Rates." The techniques discussed in those applications can greatly improve the transfer rate for large quantities of data in high-speed data signals. However, the demands for ever increasing data rates, especially as related to video presentations, continue to grow. Even with other ongoing developments in data signal technology, there is still a need to strive for even faster transfer rates, improved communication link efficiencies, and more powerful communication links. Therefore, there is a continuing need for a new or improved transfer mechanism to increase data throughput between host and client devices.
SUMMARY

The above drawbacks, and others existent in the art, are addressed by embodiments of the invention, in which a new protocol and data transfer means, method, and mechanism have been developed for transferring data between a host device and a recipient client device at high data rates.

Embodiments of the invention are directed to a Mobile Data Digital Interface (MDDI) for transferring digital data at a high rate between a host device and a client device over a communication path, which employs a plurality or series of packet structures to form a communication protocol for communicating a pre-selected set of digital control and presentation data between the host and client devices. The signal communications protocol or link layer is used by a physical layer of host or client link controllers, receivers, or drivers. At least one link controller or driver residing in the host device is coupled to the client device through the communications path or link, and is configured to generate, transmit, and receive packets forming the communications protocol, and to form digital presentation data into one or more types of data packets. The interface provides for bi-directional transfer of information between the host and client, which can reside within a common overall housing or support structure.

The implementation is generally all digital in nature, with the exception of the differential drivers and receivers, which can be easily implemented on a digital CMOS chip; requires as few as six signals; and operates at almost any data rate that is convenient for the system designer.
The simple physical and link layer protocol makes it easy to integrate, and this simplicity, plus a hibernation state, enables the portable system to have very low system power consumption.

To aid in use and acceptance, the interface will add very little to the cost of a device, will allow for consumption of very little power while able to power displays through the interface using standard battery voltages, and can accommodate devices having a pocketable form factor. The interface is scalable to support resolutions beyond HDTV, supports simultaneous stereo video and 7.1 audio to a display device, performs conditional updates to any screen region, and supports multiple data types in both directions.

In further aspects of embodiments of the invention, at least one client link controller, receiver, device, or driver is disposed in the client device and is coupled to the host device through the communications path or link. The client link controller is also configured to generate, transmit, and receive packets forming the communications protocol, and to form digital presentation data into one or more types of data packets. Generally, the host or link controller employs a state machine for processing data packets used in commands or certain types of signal preparation and inquiry processing, but can use a slower general purpose processor to manipulate data and some of the less complex packets used in the communication protocol. The host controller comprises one or more differential line drivers, while the client receiver comprises one or more differential line receivers coupled to the communication path.

The packets are grouped together within media frames that are communicated between the host and client devices, the frames having a pre-defined fixed length with a pre-determined number of packets having different variable lengths. The packets each comprise a packet length field, one or more packet data fields, and a cyclic redundancy check field.
A Sub-frame Header Packet is transferred or positioned at the beginning of transfers of other packets from the host link controller. One or more Video Stream type packets and Audio Stream type packets are used by the communications protocol to transfer video type data and audio type data, respectively, from the host to the client over a forward link for presentation to a client device user. One or more Reverse Link Encapsulation type packets are used by the communications protocol to transfer data from the client device to the host link controller. These transfers, in some embodiments, include the transfer of data from internal controllers having at least one MDDI device to internal video screens. Other embodiments include transfers to internal sound systems, and transfers from various input devices, including joysticks and complex keyboards, to internal host devices.

Filler type packets are generated by the host link controller to occupy periods of forward link transmission that do not have data. A plurality of other packets are used by the communications protocol to transfer video information. Such packets include Color Map, Bit Block Transfer, Bitmap Area Fill, Bitmap Pattern Fill, and Transparent Color Enable type packets. User-Defined Stream type packets are used by the communications protocol to transfer interface-user defined data. Keyboard Data and Pointing Device Data type packets are used by the communications protocol to transfer data to or from user input devices associated with said client device. A Link Shutdown type packet is used by the communications protocol to terminate the transfer of data in either direction over said communication path.

The communication path generally comprises or employs a cable having a series of four or more conductors and a shield.
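The general packet shape described above (a packet length field, one or more data fields, and a cyclic redundancy check field) can be sketched as follows. This is a hedged illustration only: the field widths, byte order, and CRC polynomial chosen here (16-bit little-endian fields and CRC-32 via zlib) are assumptions for the sketch, not the MDDI wire format.

```python
import struct
import zlib

def build_packet(packet_type, payload):
    """Assemble a packet: length field, type field, payload, trailing CRC.

    The length field counts everything after itself (type + payload + CRC),
    and the CRC covers the length field, type field, and payload.
    """
    body = struct.pack("<H", packet_type) + payload
    length = len(body) + 4                     # type + payload + 4-byte CRC
    crc = zlib.crc32(struct.pack("<H", length) + body) & 0xFFFFFFFF
    return struct.pack("<H", length) + body + struct.pack("<I", crc)

def parse_packet(buf):
    """Return (packet_type, payload, crc_ok) for a packet built above."""
    (length,) = struct.unpack_from("<H", buf, 0)
    body = buf[2:2 + length - 4]               # type field + payload
    (crc,) = struct.unpack_from("<I", buf, 2 + length - 4)
    crc_ok = crc == (zlib.crc32(buf[:2 + length - 4]) & 0xFFFFFFFF)
    (packet_type,) = struct.unpack_from("<H", body, 0)
    return packet_type, body[2:], crc_ok
```

Because every packet leads with its own length, a receiver can skip packet types it does not understand and stay aligned with the stream, which is what lets the protocol mix many packet types within a media frame.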
In addition, printed wires or conductors can be used, as desired, with some residing on flexible substrates.

The host link controller requests display capabilities information from the client device in order to determine what types of data, and what data rates, said client is capable of accommodating through said interface. The client link controller communicates display or presentation capabilities to the host link controller using at least one Client Capability type packet. Multiple transfer modes are used by the communications protocol, each allowing the transfer of a different maximum number of bits of data in parallel over a given time period, with each mode selectable by negotiation between the host and client link controllers. These transfer modes are dynamically adjustable during the transfer of data, and the same mode need not be used on the reverse link as is used on the forward link.

In other aspects of some embodiments of the invention, the host device comprises a wireless communications device, such as a wireless telephone, a wireless PDA, or a portable computer having a wireless modem disposed therein. A typical client device comprises a portable video display, such as a micro-display device, and/or a portable audio presentation system. Furthermore, the host may use storage means or elements to store presentation or multimedia data to be transferred for presentation to a client device user.

In still other aspects of some embodiments, the host device comprises a controller or communication link control device, with drivers as described below, residing within a portable electronic device such as a wireless communications device, for example a wireless telephone, a wireless PDA, or a portable computer.
A typical client device in this configuration comprises a client circuit or integrated circuit or module coupled to the host and residing within the same device, and to an internal video display such as a high resolution screen for a mobile phone, and/or a portable audio presentation system, or in the alternative some type of input system or device. These and other objects and advantages are provided by a method, system and computer program product according to the appended claims 1, 2 and 3.

BRIEF DESCRIPTION OF THE DRAWINGS

Further features and advantages, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements or processing steps, and the drawing in which an element first appears is indicated by the leftmost digit(s) in the reference number.

FIG. 1A illustrates a basic environment in which embodiments of the invention might operate, including the use of a micro-display device, or a projector, in conjunction with a portable computer or other data processing device. FIG. 1B illustrates a basic environment in which embodiments of the invention might operate, including the use of a micro-display device or a projector, and audio presentation elements used in conjunction with a wireless transceiver. FIG. 1C illustrates a basic environment in which embodiments of the invention might operate, including the use of internal display or audio presentation devices used in a portable computer. FIG. 1D illustrates a basic environment in which embodiments of the invention might operate, including the use of internal display or audio presentation elements used in a wireless transceiver. FIG. 2 illustrates the overall concept of a Mobile Digital Data Interface with a host and client interconnection. FIG.
3 illustrates the structure of a packet useful for realizing data transfers from a client device to a host device. FIG. 4 illustrates the use of an MDDI link controller and the types of signals passed between a host and a client over the physical data link conductors for a Type 1 interface. FIG. 5 illustrates the use of an MDDI link controller and the types of signals passed between a host and a client over the physical data link conductors for Types 2, 3, and 4 interfaces. FIG. 6 illustrates the structure of frames and sub-frames used to implement the interface protocol. FIG. 7 illustrates the general structure of packets used to implement the interface protocol. FIG. 8 illustrates the format of a Sub-frame Header Packet. FIG. 9 illustrates the format and contents of a Filler Packet. FIG. 10 illustrates the format of a Video Stream Packet. FIGS. 11A-11E illustrate the format and contents for the Video Data Format Descriptor used in FIG. 10. FIG. 12 illustrates the use of packed and unpacked formats for data. FIG. 13 illustrates the format of an Audio Stream Packet. FIG. 14 illustrates the use of byte-aligned and packed PCM formats for data. FIG. 15 illustrates the format of a User-Defined Stream Packet. FIG. 16 illustrates the format of a Color Map Packet. FIG. 17 illustrates the format of a Reverse Link Encapsulation Packet. FIG. 18 illustrates the format of a Client Capability Packet. FIG. 19 illustrates the format of a Keyboard Data Packet. FIG. 20 illustrates the format of a Pointing Device Data Packet. FIG. 21 illustrates the format of a Link Shutdown Packet. FIG. 22 illustrates the format of a Client Request and Status Packet. FIG. 23 illustrates the format of a Bit Block Transfer Packet. FIG. 24 illustrates the format of a Bitmap Area Fill Packet. FIG. 25 illustrates the format of a Bitmap Pattern Fill Packet. FIG. 26 illustrates the format of a Communication Link Data Channel Packet. FIG. 27 illustrates the format of an Interface Type Handoff Request Packet. FIG.
28 illustrates the format of an Interface Type Acknowledge Packet. FIG. 29 illustrates the format of a Perform Type Handoff Packet. FIG. 30 illustrates the format of a Forward Audio Channel Enable Packet. FIG. 31 illustrates the format of a Reverse Audio Sample Rate Packet. FIG. 32 illustrates the format of a Digital Content Protection Overhead Packet. FIG. 33 illustrates the format of a Transparent Color Enable Packet. FIG. 34 illustrates the format of a Round Trip Delay Measurement Packet. FIG. 35 illustrates the timing of events during the Round Trip Delay Measurement Packet. FIG. 36 illustrates a sample implementation of a CRC generator and checker useful for implementing the invention. FIG. 37A illustrates the timing of CRC signals for the apparatus of FIG. 36 when sending data packets. FIG. 37B illustrates the timing of CRC signals for the apparatus of FIG. 36 when receiving data packets. FIG. 38 illustrates processing steps for a typical service request with no contention. FIG. 39 illustrates processing steps for a typical service request asserted after the link restart sequence has begun, contending with link start. FIG. 40 illustrates how a data sequence can be transmitted using DATA-STB encoding. FIG. 41 illustrates circuitry useful for generating the DATA and STB signals from input data at the host, and then recovering the data at the client. FIG. 42 illustrates drivers and terminating resistors useful for implementing one embodiment. FIG. 43 illustrates steps and signal levels employed by a client to secure service from the host and by the host to provide such service. FIG. 44 illustrates relative spacing between transitions on the Data0, other data lines (DataX), and the strobe lines (Stb). FIG. 45 illustrates the presence of a delay in response that can occur when a host disables the host driver after transferring a packet. FIG. 46 illustrates the presence of a delay in response that can occur when a host enables the host driver to transfer a packet. FIG.
47 illustrates leakage current analysis. FIG. 48 illustrates switching characteristics and relative timing relationships for host and client output enable and disable times. FIG. 49 illustrates a high level diagram of signal processing steps and conditions by which synchronization can be implemented using a state machine. FIG. 50 illustrates typical amounts of delay encountered for signal processing on the forward and reverse paths in a system employing the MDDI. FIG. 51 illustrates marginal round trip delay measurement. FIG. 52A illustrates Reverse Link data rate changes. FIG. 52B illustrates an example of advanced reverse data sampling. FIG. 53 illustrates a graphical representation of values of the Reverse Rate Divisor versus forward link data rate. FIGS. 54A and 54B illustrate steps undertaken in the operation of an interface. FIG. 55 illustrates an overview of the interface apparatus processing packets. FIG. 56 illustrates the format of a Forward Link Packet. FIG. 57 illustrates typical values for propagation delay and skew in a Type 1 Link interface. FIG. 58 illustrates Data, Stb, and Clock Recovery Timing on a Type 1 Link for exemplary signal processing through the interface. FIG. 59 illustrates typical values for propagation delay and skew in Type 2, Type 3, or Type 4 Link interfaces. FIGS. 60A, 60B, and 60C illustrate different possibilities for the timing of two data signals and MDDI_Stb with respect to each other, being ideal, early, and late, respectively. FIG. 61 illustrates interface pin assignments for exemplary connectors used with Type 1/Type 2 interfaces. FIGS. 62A and 62B illustrate possible MDDI_Data and MDDI_Stb waveforms for both Type 1 and Type 2 interfaces, respectively. FIG. 63 illustrates a high level diagram of alternative signal processing steps and conditions by which synchronization can be implemented using a state machine. FIG.
64 illustrates exemplary relative timing between a series of clock cycles and the timing of various reverse link packet bits and divisor values. FIG. 65 illustrates exemplary error code transfer processing. FIG. 66 illustrates apparatus useful for error code transfer processing. FIG. 67A illustrates error code transfer processing for code overloading. FIG. 67B illustrates error code transfer processing for code reception. FIG. 68A illustrates processing steps for a host initiated wake-up. FIG. 68B illustrates processing steps for a client initiated wake-up. FIG. 68C illustrates processing steps for host and client initiated wake-up with contention. FIG. 69 illustrates the format of a Request VCP Feature Packet. FIG. 70 illustrates the format of a VCP Feature Reply Packet. FIG. 71 illustrates the format of a VCP Feature Reply List. FIG. 72 illustrates the format of a Set VCP Feature Packet. FIG. 73 illustrates the format of a Request Valid Parameter Packet. FIG. 74 illustrates the format of a Valid Parameter Reply Packet. FIG. 75 illustrates the format of a Scaled Video Stream Capability Packet. FIG. 76 illustrates the format of a Scaled Video Stream Setup Packet. FIG. 77 illustrates the format of a Scaled Video Stream Acknowledgement Packet. FIG. 78 illustrates the format of a Scaled Video Stream Packet. FIG. 79 illustrates the format of a Request Specific Status Packet. FIG. 80 illustrates the format of a Valid Status Reply List Packet. FIG. 81A illustrates the format of a Packet Processing Delay Parameters Packet. FIG. 81B illustrates the format of a Delay Parameters List item. FIG. 82 illustrates the format of a Personal Display Capability Packet. FIG. 83 illustrates elements in the Points of Field Curvature List. FIG. 84A illustrates the format of a Client Error Report Packet. FIG. 84B illustrates the format of an Error Report List item. FIG. 85 illustrates the format of a Client Identification Packet. FIG. 86 illustrates the format of an Alternate Display Capability Packet. FIG.
87 illustrates the format of a Register Access Packet. FIGS. 88A-88C illustrate use of two display buffers to reduce visible artifacts. FIG. 89 illustrates two buffers with display refresh faster than image transfer. FIG. 90 illustrates two buffers with display refresh slower than image transfer. FIG. 91 illustrates two buffers with display refresh much faster than image transfer. FIG. 92 illustrates three buffers with display refresh faster than image transfer. FIG. 93 illustrates three buffers with display refresh slower than image transfer. FIG. 94 illustrates one buffer with display refresh faster than image transfer. FIG. 95 illustrates host-client connection via daisy-chain and hub. FIG. 96 illustrates client devices connected via a combination of hubs and daisy chains. FIG. 97 illustrates a color map.

DETAILED DESCRIPTION

I. Overview

A general intent of the invention is to provide a Mobile Display Digital Interface (MDDI), as discussed below, which results in or provides a cost-effective, low power consumption transfer mechanism that enables high- or very-high-speed data transfer over a short-range communication link between a host device and a client device, such as a display element, using a "serial" type of data link or channel. This mechanism lends itself to implementation with miniature connectors and thin flexible cables, which are especially useful in connecting internal (interior to a housing or support frame) display or output elements or devices, or input devices, to a central controller or communication element or device.
In addition, this connection mechanism is very useful for connecting external display elements or devices such as wearable micro-displays (goggles or projectors) or other types of visual, audible, or tactile information presentation devices to portable computers, wireless communication devices, or entertainment devices.

Although the terms Mobile and Display are associated with the naming of the protocol, it is to be understood that this is for convenience only, in terms of having a standard name easily understood by those skilled in the art working with the interface and protocol, as it will relate to a VESA standard and various applications of that standard. However, it will be readily understood after a review of the embodiments presented below that many non-mobile and non-display related applications will benefit from application of this protocol, the resulting interface structure, or the transfer mechanism, and the MDDI label is not intended to imply any limitations on the nature or usefulness of the invention or its various embodiments.

An advantage of embodiments of the invention is that a technique is provided for data transfer that is low in complexity, low in cost, high in reliability, fits well within the environment of use, and is very robust, while remaining very flexible.

Embodiments of the invention can be used in a variety of situations to communicate or transfer large quantities of data at a high rate, generally for audio, video, or multimedia applications, from a host or source device, where such data is generated, manipulated (such as for transfer to specific devices), or otherwise processed or stored, to a client or receiving device, such as a video display or projection element, audio speakers, or other presentation device.
A typical application, which is discussed below, is the transfer of data from either a portable computer or a wireless telephone or modem to a visual display device such as a small video screen or a wearable micro-display appliance, such as in the form of goggles or helmets containing small projection lenses and screens, or from a host to a client device within such components. That is, from a processor or controller to an internal screen or other presentation element, as well as from various internal or external input devices employing a client to an internally located (collocated within the same device housing or support structure) host, or connected thereto by a cable or conductors.

The characteristics or attributes of the MDDI are such that they are independent of specific display or presentation technology. This is a highly flexible mechanism for transferring data at a high rate without regard to the internal structure of that data or the functional aspects of the data or commands it implements. This allows the timing of data packets being transferred to be adjusted to adapt to the idiosyncrasies of particular client devices, such as unique display requirements for certain devices, or to meet the requirements of combined audio and video for some A-V systems, or for certain input devices such as joysticks, touch pads, and so forth. The interface is largely display element or client device agnostic, as long as the selected protocol is followed. In addition, the aggregate serial link data, or data rate, can vary over several orders of magnitude, which allows a communication system or host device designer to optimize the cost, power requirements, client device complexity, and client device update rates.

The data interface is presented primarily for use in transferring large amounts of high rate data over a "wired" signal link or small cable.
However, some applications may take advantage of a wireless link as well, including optical based links, provided it is configured to use the same packet and data structures developed for the interface protocol, and can sustain the desired level of transfer at low enough power consumption or complexity to remain practical.

II. Environment

A typical application can be seen in FIGS. 1A and 1B, where a portable or laptop computer 100 and wireless telephone or PDA device 102 are shown communicating data with display devices 104 and 106, respectively, along with audio reproduction systems 108 and 112. In addition, FIG. 1A shows potential connections to a larger display or screen 114 or an image projector 116, which are only shown in one figure for clarity, but are connectable to wireless device 102 as well. The wireless device can be currently receiving data or have previously stored a certain amount of multimedia type data in a memory element or device for later presentation for viewing and/or hearing by an end user of the wireless device. Since a typical wireless device is used for voice and simple text communications most of the time, it has a rather small display screen and simple audio system (speakers) for communicating information to the device 102 user.

Computer 100 has a much larger screen, but a still inadequate external sound system, and still falls short of other multimedia presentation devices such as a high definition television, or movie screens. Computer 100 is used for purposes of illustration, and other types of processors, interactive video games, or consumer electronics devices can also be used with the invention. Computer 100 can employ, but is not limited to or by, a wireless modem or other built-in device for wireless communications, or be connected to such devices using a cable or wireless link, as desired. These display and sound limitations make presentation of more complex or "rich" data less than a useful or enjoyable experience.
Therefore, the industry is developing other mechanisms and devices to present the information to end users and provide a minimum level of desired enjoyment or positive experience.

As previously discussed above, several types of display devices have been, or are currently being, developed for presenting information to end users of device 100. For example, one or more companies have developed sets of wearable goggles that project an image in front of the eyes of a device user to present a visual display. When correctly positioned, such devices effectively "project" a virtual image, as perceived by a user's eyes, that is much larger than the element providing the visual output. That is, a very small projection element allows the eye(s) of the user to "see" images on a much larger scale than possible with typical LCD screens and the like. The use of larger virtual screen images also allows the use of much higher resolution images than possible with more limited LCD screen displays. Other display devices could include, but are not limited to, small LCD screens or various flat panel display elements, projection lenses and display drivers for projecting images on a surface, and so forth.

There may also be additional elements connected to or associated with the use of wireless device 102 or computer 100 for presenting an output to another user, or to another device which in turn transfers the signals elsewhere or stores them. For example, data may be stored in flash memory, in optical form, for example using a writeable CD media, or on magnetic media such as in a magnetic tape recorder and similar devices, for later use.

In addition, many wireless devices and computers now have built-in MP3 music decoding capabilities, as well as other advanced sound decoders and systems. Portable computers utilize CD and DVD playback capabilities as a general rule, and some have small dedicated flash memory readers for receiving pre-recorded audio files.
The issue with having such capabilities is that digital music files promise a highly increased, feature-rich experience, but only if the decoding and playback process can keep pace. The same holds true for digital video files.

To assist with sound reproduction, external speakers 114 are shown in FIG. 1A, which could also be accompanied by additional elements such as sub-woofers, or "surround-sound" speakers for front and rear sound projection. At the same time, speakers or earphones 108 are indicated as built in to the support frame or mechanism of micro-display device 106 of FIG. 1B. As would be known, other audio or sound reproduction elements can be used, including power amplification or sound shaping devices.

In any case, as discussed above, when one desires to transfer high quality or high resolution image data and high quality audio information or data signals from a data source to an end user over one or more communication links 110, a high data rate is required. That is, transfer link 110 is clearly a potential bottleneck in the communication of data, as discussed earlier, and is limiting system performance, since current transfer mechanisms do not achieve the high data rates typically desired. As discussed above, for example, for higher image resolutions such as 1024 by 1024 pixels, with color depths of 24-32 bits per pixel and at data rates of 30 fps, the data rates can exceed 755 Mbps. In addition, such images may be presented as part of a multimedia presentation which includes audio data and potentially additional signals dealing with interactive gaming or communications, or various commands, controls, or signals, further increasing the quantity of data and the data rate.

It is also clear that requiring fewer cables or interconnections to establish a data link means that mobile devices associated with a display are easier to use, and more likely to be adopted by a larger user base.
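The bandwidth figure quoted above follows directly from the pixel arithmetic. A quick check, taking the low end (24 bits per pixel) of the stated color-depth range:

```python
def uncompressed_video_rate(width, height, bits_per_pixel, fps):
    """Raw bit rate of an uncompressed video stream, in bits per second."""
    return width * height * bits_per_pixel * fps

# 1024 x 1024 pixels at 24 bits/pixel and 30 frames/s gives roughly
# 755 Mbps, matching the figure quoted above; 32 bits/pixel pushes
# the raw rate past 1 Gbps.
rate_24 = uncompressed_video_rate(1024, 1024, 24, 30)
rate_32 = uncompressed_video_rate(1024, 1024, 32, 30)
print(rate_24 / 1e6)  # ~754.97 (Mbps)
print(rate_32 / 1e6)  # ~1006.63 (Mbps)
```

This raw-rate arithmetic excludes any packet framing or protocol overhead, which would only increase the required link rate.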
This is especially true where multiple devices are commonly used to establish a full audio-visual experience, and more especially as the quality level of the displays and audio output devices increases.

Another typical application related to many of the above and other improvements in video screens and other output or input devices can be seen in FIGS. 1C and 1D, where a portable or laptop computer 130 and wireless telephone or PDA device 140 are shown communicating data with "internal" display devices 134 and 144, respectively, along with audio reproduction systems 136 and 146.

In FIGS. 1C and 1D, small cut-away sections of the overall electronic devices or products are used to show the location of one or more internal hosts and controllers in one portion of the device, with a generalized communication link, here 138 and 148, respectively, connecting them to the video display elements or screens having the corresponding clients, across a rotating joint of some known type used throughout the electronics industry today. One can see that the amount of data involved in these transfers requires a large number of conductors to comprise links 138 and 148. It is estimated that such communication links are approaching 90 or more conductors in order to satisfy today's growing need for advanced color displays and graphical interfaces on such devices, given the parallel and other known interface techniques available for transferring such data.

Unfortunately, the higher data rates exceed current technology available for transferring data.
This is true both in terms of the raw amount of data needing to be transferred per unit time, and in terms of manufacturing reliable, cost-effective physical transfer mechanisms.

What is needed is a technique, a structure, means, or method for transferring data at higher rates over the data transfer link or communication path between presentation elements and the data source, which allows for consistently lower power, light weight, and as simple and economical a cabling structure as possible. Applicants have developed a new technique, or method and apparatus, to achieve these and other goals, allowing an array of mobile, portable, or even fixed location devices to transfer data to desired displays, micro-displays, or audio transfer elements at very high data rates, while maintaining desired low power consumption and complexity.

III. High Rate Digital Data Interface System Architecture

In order to create and efficiently utilize a new device interface, a signal protocol and system architecture have been formulated that provide a very high data transfer rate using low power signals. The protocol is based on a packet and common frame structure, or structures linked together to form a protocol for communicating a pre-selected set of data or data types along with a command or operational structure imposed on the interface.

A. Overview

The devices connected by or communicating over the MDDI link are called the host and client, with the client typically being a display device of some type, although other output and input devices are contemplated. Data from the host to the display travels in the forward direction (referred to as forward traffic or link), and data from the client to the host travels in the reverse direction (reverse traffic or link), as enabled by the host. This is illustrated in the basic configuration shown in FIG. 2. In FIG.
2, a host 202 is connected to a client 204 using a bi-directional communication channel 206, which is illustrated as comprising a forward link 208 and a reverse link 210. However, these channels are formed by a common set of conductors whose data transfer is effectively switched between the forward and reverse link operations. This allows for greatly reduced numbers of conductors, immediately addressing one of the many problems faced with current approaches to high speed data transfer in low power environments such as mobile electronic devices.

As discussed elsewhere, the host comprises one of several types of devices that can benefit from using the present invention. For example, host 202 could be a portable computer in the form of a handheld, laptop, or similar mobile computing device. It could also be a Personal Data Assistant (PDA), a paging device, or one of many wireless telephones or modems. Alternatively, host 202 could be a portable entertainment or presentation device such as a portable DVD or CD player, or a game playing device.

Furthermore, the host can reside as a host device or control element in a variety of other widely used or planned commercial products for which a high speed communication link with a client is desired. For example, a host could be used to transfer data at high rates from a video recording device to a storage-based client for improved response, or to a high resolution larger screen for presentations. An appliance such as a refrigerator that incorporates an onboard inventory or computing system and/or Bluetooth connections to other household devices can have improved display capabilities when operating in an internet or Bluetooth connected mode, or have reduced wiring needs for in-the-door displays (a client) and keypads or scanners (a client), while the electronic computer or control systems (host) reside elsewhere in the cabinet.
In general, those skilled in the art will appreciate the wide variety of modern electronic devices and appliances that may benefit from the use of this interface, as well as the ability to retrofit older devices with higher data rate transport of information utilizing limited numbers of conductors available in either newly added or existing connectors or cables.

At the same time, client 204 could comprise a variety of devices useful for presenting information to an end user, or presenting information from a user to the host. For example, the client could be a micro-display incorporated in goggles or glasses, a projection device built into a hat or helmet, a small screen or even a holographic element built into a vehicle, such as in a window or windshield, or various speaker, headphone, or sound systems for presenting high quality sound or music. Other presentation devices include projectors or projection devices used to present information for meetings, or for movies and television images. Another example would be the use of touch pads or touch-sensitive devices, voice recognition input devices, security scanners, and so forth, that may be called upon to transfer a significant amount of information from a device or system user with little actual "input" other than touch or sound from the user. In addition, docking stations for computers and car kits or desk-top kits and holders for wireless telephones may act as interface devices to end users or to other devices and equipment, and employ either clients (output or input devices such as mice) or hosts to assist in the transfer of data, especially where high speed networks are involved.

However, those skilled in the art will readily recognize that the present invention is not limited to these devices, there being many other devices on the market, and proposed for use, that are intended to provide end users with high quality images and sound, either in terms of storage and transport or in terms of presentation at playback.
The present invention is useful in increasing the data throughput between various elements or devices to accommodate the high data rates needed for realizing the desired user experience.

The inventive MDDI and communication signal protocol may be used to simplify the interconnect between a host processor, controller, or circuit component (for example) and a display within a device or device housing or structure (referred to as an internal mode) in order to reduce the cost or complexity and associated power and control requirements or constraints of these connections, and to improve reliability, not just for connection to or for external elements, devices, or equipment (referred to as an external mode).

The aggregate serial link data rate on each signal pair used by this interface structure can vary over many orders of magnitude, which allows a system or device designer to easily optimize cost, power, implementation complexity, and the display update rate for a given application or purpose. The attributes of MDDI are independent of display or other presentation device (target client) technology. The timing of data packets transferred through the interface can be easily adjusted to adapt to idiosyncrasies of particular clients such as display devices, sound systems, memory and control elements, or combined timing requirements of audio-video systems. While this makes it possible to have a system with very small power consumption, the various clients are not required to have frame buffers in order to make use of the MDDI protocol at least at some level.

B. Interface Types

The MDDI is contemplated as addressing at least four, and potentially more, somewhat distinct physical types of interfaces found in the communications and computer industries.
These are labeled simply as Type 1, Type 2, Type 3, and Type 4, although other labels or designations may be applied by those skilled in the art depending upon the specific applications they are used for or the industry they are associated with. For example, simple audio systems use fewer connections than more complex multimedia systems, and may reference features such as "channels" differently, and so forth.

The Type 1 interface is configured as a 6-wire (or other type of conductor or conductive element) interface, which makes it suitable for mobile or wireless telephones, PDAs, electronic games, and portable media players, such as CD players or MP3 players, and similar devices or devices used on similar types of electronic consumer technology. In one embodiment, an interface can be configured as an 8-wire (conductor) interface which is more suitable for laptop, notebook, or desktop personal computers and similar devices or applications that do not require rapid data updates and do not have a built-in MDDI link controller. This interface type is also distinguishable by the use of an additional two-wire Universal Serial Bus (USB) interface, which is extremely useful in accommodating existing operating systems or software support found on most personal computers.

Type 2, Type 3, and Type 4 interfaces are suitable for high performance clients or devices and use larger, more complex cabling with additional twisted-pair type conductors to provide the appropriate shielding and low loss transfers for data signals.

The Type 1 interface passes signals which can comprise display, audio, control, and limited signaling information, and is typically used for mobile clients or client devices that do not require high-resolution full-rate video data. A Type 1 interface can easily support SVGA resolution at 30 fps plus 5.1 channel audio, and in a minimum configuration might use only three wire pairs total, two pairs for data transmission and one pair for power transfer.
This type of interface is primarily intended for devices, such as mobile wireless devices, where a USB host is typically not available within such a device for connection and transfer of signals. In this configuration, the mobile wireless device is an MDDI host device, and acts as the "master" that controls the communication link from the host, which generally sends data to the client (forward traffic or link) for presentation, display, or playback.

In this interface, a host enables receipt of communication data at the host from the client (reverse traffic or link) by sending a special command or packet type to the client that allows it to take over the bus (link) for a specified duration and send data to the host as reverse packets. This is illustrated in FIG. 3, where a type of packet referred to as an encapsulation packet (discussed below) is used to accommodate the transfer of reverse packets over the transfer link, creating the reverse link. The time interval allocated for the host to poll the client for data is pre-determined by the host, and is based on the requirements of each specified application. This type of half-duplex bi-directional data transfer is especially advantageous where a USB port is not available for transfer of information or data from the client.

High-performance displays capable of HDTV type or similar high resolutions require data streams at rates of around 1.5 Gbps in order to support full-motion video. The Type 2 interface supports high data rates by transmitting 2 bits in parallel, the Type 3 by transmitting 4 bits in parallel, and the Type 4 interface transfers 8 bits in parallel. Type 2 and Type 3 interfaces use the same cable and connector as Type 1, but can operate at twice and four times the data rate to support higher-performance video applications on portable devices.
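The half-duplex reverse-link poll described above (host grants the bus for a fixed window; the client sends reverse packets until the window closes) can be modeled compactly. This is a behavioral sketch only, with hypothetical names and time units; the actual link controller performs this turnaround on the wire, governed by the encapsulation packet fields:

```python
def poll_client(host_send, client_pending_packets, reverse_duration):
    """Model of the reverse-link encapsulation turnaround described above.

    The host sends an encapsulation packet granting the client the bus for
    `reverse_duration` time units; the client then sends as many of its
    pending reverse packets as fit, after which the host resumes forward
    traffic. All names and units here are illustrative assumptions, not
    part of the MDDI specification.
    """
    host_send("REVERSE_LINK_ENCAPSULATION")  # host turns the link around
    delivered = []
    elapsed = 0
    for pkt_name, pkt_time in client_pending_packets:
        if elapsed + pkt_time > reverse_duration:
            break                            # window closed: client yields the bus
        delivered.append(pkt_name)
        elapsed += pkt_time
    return delivered                         # host resumes forward traffic next

# Example: three packets queued at the client, but only two fit in the window.
sent = poll_client(lambda p: None,
                   [("status", 3), ("keyboard", 4), ("pointer", 5)],
                   reverse_duration=8)
print(sent)  # ['status', 'keyboard']
```

The key design point the sketch captures is that the reverse window is sized by the host in advance, based on application requirements, rather than being requested by the client.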
A Type 4 interface is suited for very high performance clients or displays and requires a slightly larger cable that contains additional twisted-pair data signals.

The protocol used by the MDDI allows each Type 1, 2, 3, or 4 host to generally communicate with any Type 1, 2, 3, or 4 client by negotiating the highest data rate that can be used. The capabilities or available features of what can be referred to as the least capable device are used to set the performance of the link. As a rule, even for systems where the host and client are both capable of using Type 2, Type 3, or Type 4 interfaces, both begin operation as a Type 1 interface. The host then determines the capability of the target client and negotiates a hand-off or reconfiguration operation to either Type 2, Type 3, or Type 4 mode, as appropriate for the particular application.

It is generally possible for the host to use the proper link-layer protocol (discussed further below) and step down, or reconfigure operation at generally any time, to a slower mode to save power, or to step up to a faster mode to support higher-speed transfers, such as for higher-resolution display content. For example, a host may change interface types when the system switches from a power source such as a battery to AC power, or when the source of the display media switches to a lower or higher resolution format; a combination of these or other conditions or events may also be considered as a basis for changing an interface type or transfer mode.

It is also possible for a system to communicate data using one mode in one direction and another mode in another direction. For example, a Type 4 interface mode could be used to transfer data to a display at a high rate, while a Type 1 mode is used when transferring data to a host device from peripheral devices such as a keyboard or a pointing device.
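The fallback behavior described above, where both sides start in Type 1 and then settle on what the least capable device supports, can be sketched as follows. The function name and the list-based type encoding are illustrative, not part of the protocol:

```python
# Interface types in increasing capability order, per the description above.
TYPE_ORDER = ["Type 1", "Type 2", "Type 3", "Type 4"]

def negotiate_mode(host_max: str, client_max: str) -> str:
    """Both sides begin in Type 1; the host then negotiates a hand-off up to
    the highest type that both the host and the client support."""
    return TYPE_ORDER[min(TYPE_ORDER.index(host_max),
                          TYPE_ORDER.index(client_max))]
```

A Type 4 capable host paired with a Type 2 client ends up on a Type 2 link, reflecting the "least capable device sets the performance" rule.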
It will be appreciated by one skilled in the art that hosts and clients may communicate outgoing data at different rates.

Often, users of the MDDI protocol distinguish between an "external" mode and an "internal" mode. An external mode describes the use of the protocol and interface to connect a host in one device to a client outside of that device that is up to about 2 meters or so from the host. In this situation, the host may also send power to the external client so that both devices can easily operate in a mobile environment. An internal mode describes when the host is connected to a client contained inside the same device, such as within a common housing or support frame or structure of some kind. An example would be applications within a wireless phone or other wireless device, or a portable computer or gaming device, where the client is a display or display driver, or an input device such as a keypad or touch-pad, or a sound system, and the host is a central controller, graphics engine, or CPU element. Since a client is located much closer to the host in internal mode applications than in external mode applications, there are generally no requirements discussed for the power connection to the client in such configurations.

C. Physical Interface Structure

The general disposition of a device or link controller for establishing communications between host and client devices is shown in FIGS. 4 and 5. In FIGS. 4 and 5, MDDI link controllers 402 and 502 are shown installed in a host device 202, and MDDI link controllers 404 and 504 are shown installed in a client device 204. As before, host 202 is connected to a client 204 using a bi-directional communication channel 406 comprising a series of conductors.
As discussed below, both the host and client link controllers can be manufactured as an integrated circuit using a single circuit design that can be set, adjusted, or programmed to respond as either a host controller (driver) or a client controller (receiver). This provides for lower costs due to larger-scale manufacturing of a single circuit device.

In FIG. 5, an MDDI link controller 502 is shown installed in a host device 202' and an MDDI link controller 504 is shown installed in a client device 204'. As before, host 202' is connected to a client 204' using a bi-directional communication channel 506 comprising a series of conductors. As discussed before, both the host and client link controllers can be manufactured using a single circuit design.

Signals passed between a host and a client, such as a display device, over the MDDI link, or the physical conductors used, are also illustrated in FIGS. 4 and 5. As seen in FIGS. 4 and 5, the primary path or mechanism for transferring data through the MDDI uses data signals labeled as MDDI_Data0+/- and MDDI_Stb+/-. Each of these is a low-voltage data signal that is transferred over a differential pair of wires in a cable. There is only one transition on either the MDDI_Data0 pair or the MDDI_Stb pair for each bit sent over the interface. This is a voltage-based transfer mechanism, not current-based, so static current consumption is nearly zero. The host drives the MDDI_Stb signals to the client display.

While data can flow in both the forward and reverse directions over the MDDI_Data pairs, that is, it is a bi-directional transfer path, the host is the master or controller of the data link. The MDDI_Data0 and MDDI_Stb signal paths are operated in a differential mode to maximize noise immunity.
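The one-transition-per-bit behavior described above can be modeled as data-strobe style signaling: if the next bit differs from the current data level, the data line toggles; otherwise the strobe line toggles, so the XOR of the two lines changes exactly once per bit and can serve as a recovered clock. The sketch below illustrates that property only; the function name and the assumed initial line states are illustrative, not details taken from the MDDI specification:

```python
def encode_data_strobe(bits):
    """Model one-transition-per-bit signaling: for each bit sent, exactly one
    of the (data, strobe) line pair toggles. Both lines assumed to start at 0."""
    data, strobe = 0, 0
    out = []
    for b in bits:
        if b != data:
            data = b        # data line toggles to carry the new bit level
        else:
            strobe ^= 1     # data unchanged, so the strobe line toggles instead
        out.append((data, strobe))
    return out
```

Each step changes exactly one line, and data XOR strobe toggles on every bit, which is consistent with the later statement that no PLL is needed for bit recovery.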
The data rate for signals on these lines is determined by the rate of the clock sent by the host, and is variable over a range of about 1 kbps up to 400 Mbps or more.

The Type 2 interface contains one additional data pair, or set of conductors or paths, beyond that of the Type 1, referred to as MDDI_Data1+/-. The Type 3 interface contains two additional data pairs or signal paths beyond that of the Type 2 interface, referred to as MDDI_Data2+/- and MDDI_Data3+/-. The Type 4 interface contains four more data pairs or signal paths beyond that of the Type 3 interface, referred to as MDDI_Data4+/-, MDDI_Data5+/-, MDDI_Data6+/-, and MDDI_Data7+/-, respectively.

In each of the above interface configurations, a host can send power to the client or display using the wire pair or signals designated as HOST_Pwr and HOST_Gnd. As discussed further below, power transfer can also be accommodated, if desired, in some configurations on the MDDI_Data4+/-, MDDI_Data5+/-, MDDI_Data6+/-, or MDDI_Data7+/- conductors when an interface "Type" is being used that employs fewer conductors than are available or present for the other modes.
This power transfer is generally employed for external modes; there is generally no need for it in internal modes, although some applications may differ.

A summary of the signals passed between the host and client (display) over the MDDI link for the various modes is illustrated in Table I, below, in accordance with the interface type.

Table I

    Signal        Type 1          Type 2          Type 3          Type 4
    Power         HOST_Pwr/Gnd    HOST_Pwr/Gnd    HOST_Pwr/Gnd    HOST_Pwr/Gnd
    Strobe        MDDI_Stb+/-     MDDI_Stb+/-     MDDI_Stb+/-     MDDI_Stb+/-
    Data pair 0   MDDI_Data0+/-   MDDI_Data0+/-   MDDI_Data0+/-   MDDI_Data0+/-
    Data pair 1   -               MDDI_Data1+/-   MDDI_Data1+/-   MDDI_Data1+/-
    Data pair 2   -               -               MDDI_Data2+/-   MDDI_Data2+/-
    Data pair 3   -               -               MDDI_Data3+/-   MDDI_Data3+/-
    Data pair 4   Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data4+/-
    Data pair 5   Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data5+/-
    Data pair 6   Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data6+/-
    Data pair 7   Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data7+/-

Also note that the HOST_Pwr/Gnd connections for transfer from the host are provided generally for external modes. Internal applications or modes of operation generally have clients that draw power directly from other internal resources and do not use the MDDI to control power distribution, as would be apparent to one skilled in the art, so such distribution is not discussed in further detail here. However, it is certainly possible to allow power to be distributed through the MDDI to allow for certain kinds of power control, synchronization, or interconnection convenience, for example, as would be understood by one skilled in the art.

Cabling generally used to implement the above structure and operation is nominally on the order of 1.5 meters in length, generally 2 meters or less, and contains three twisted pairs of conductors, each in turn being multi-strand 30 AWG wire. A foil shield covering is wrapped or otherwise formed over the three twisted pairs, along with an additional drain wire. The twisted pairs and shield drain conductor terminate in the client connector, with the shield connected to the shield of the client, and there is an insulating layer covering the entire cable, as would be well known in the art.
The wires are paired as: HOST_Gnd with HOST_Pwr; MDDI_Stb+ with MDDI_Stb-; MDDI_Data0+ with MDDI_Data0-; MDDI_Data1+ with MDDI_Data1-; and so forth. However, a variety of conductors and cabling can be used, as would be understood in the art, to implement the embodiments of the invention, depending upon specific applications. For example, heavier outside coatings or even metallic layers may be used to protect the cable in some applications, while thinner, flatter conductive ribbon-type structures may be well suited to other applications.

D. Data Types and Rates

To achieve a useful interface for a full range of user experiences and applications, the Mobile Digital Data Interface (MDDI) provides support for a variety of clients and display information, audio transducers, keyboards, pointing devices, and many other input or output devices that might be integrated into or working in concert with a mobile display device, along with control information, and combinations thereof. The MDDI is designed to be able to accommodate a variety of potential types of streams of data traversing between the host and client in either the forward or reverse link directions using a minimum number of cables or conductors. Both isochronous streams and asynchronous streams (updates) are supported. Many combinations of data types are possible as long as the aggregate data rate is less than or equal to the maximum desired MDDI link rate, which is limited by the maximum serial rate and the number of data pairs employed.
These could include, but are not limited to, those items listed in Tables II and III below.

Table II

    isochronous video data          720x480, 12 bit, 30 f/s           ~124.5 Mbps
    isochronous stereo audio data   44.1 kHz, 16 bit, stereo          ~1.4 Mbps
    asynchronous graphics data      800x600, 12 bit, 10 f/s, stereo   ~115.2 Mbps
    asynchronous control            minimum                           <<1.0 Mbps

Table III

    isochronous voice data                  8 kHz, 8 bit              <<1.0 Mbps
    isochronous video data                  640x480, 12 bit, 24 f/s   ~88.5 Mbps
    asynchronous status, user input, etc.   minimum                   <<1.0 Mbps

The interface is not fixed but extensible, so that it can support the transfer of a variety of information "types", including user-defined data, for future system flexibility. Specific examples of data to be accommodated are: full-motion video, either in the form of full or partial screen bitmap fields or compressed video; static bitmaps at low rates to conserve power and reduce implementation costs; PCM or compressed audio data at a variety of resolutions or rates; pointing device position and selection; and user-definable data for capabilities yet to be defined.
Such data may also be transferred along with control or status information to detect device capability or set operating parameters.

Embodiments of the invention advance the art for use in data transfers that include, but are not limited to: watching a movie (video display and audio); using a personal computer with limited personal viewing (graphics display, sometimes combined with video and audio); playing a video game on a PC, console, or personal device (motion graphics display, or synthetic video and audio); "surfing" the Internet, using devices in the form of a video phone (bi-directional low-rate video and audio), a camera for still digital pictures, or a camcorder for capturing digital video images; using a phone, computer, or PDA docked with a projector to give a presentation, or docked with a desktop docking station connected to a video monitor, keyboard, and mouse; and productivity enhancement or entertainment use with cell phones, smart phones, or PDAs, including wireless pointing devices and keyboard data.

The high-speed data interface as discussed below is presented in terms of providing large amounts of A-V type data over a communication or transfer link which is generally configured as a wire-line or cable type link. However, it will be readily apparent that the signal structure, protocols, timing, or transfer mechanism could be adjusted to provide a link in the form of an optical or wireless medium, if it can sustain the desired level of data transfer.

The MDDI signals use a concept known as the Common Frame Rate (CFR) for the basic signal protocol or structure. The idea behind using a Common Frame Rate is to provide a synchronization pulse for simultaneous isochronous data streams. A client device can use this common frame rate as a time reference. A low CF rate increases channel efficiency by decreasing the overhead used to transmit the sub-frame header.
On the other hand, a high CF rate decreases the latency and allows a smaller elastic data buffer for audio samples. The CF rate of the present inventive interface is dynamically programmable and may be set at one of many values that are appropriate for the isochronous streams used in a particular application. That is, the CF value is selected to best suit the given client and host configuration, as desired.

The number of bytes generally required per sub-frame, which is adjustable or programmable, for isochronous data streams that are most likely to be used with an application, such as for a video or micro-display, is shown in Table IV.

Table IV

    Computer Game       720x480, 24 bit, 30 f/s            248.832 Mbps   103,680 bytes/sub-frame
    Computer Graphics   800x600, 24 bit, 10 f/s            115.200 Mbps   48,000 bytes/sub-frame
    Video               640x480, 12 bit, 29.97 or 30 f/s   221.184 Mbps   92,160 bytes/sub-frame
    CD Audio            44,100 sps, 16 bit, stereo         1.4112 Mbps    588 bytes/sub-frame
    Voice               8,000 sps, 8 bit                   0.064 Mbps     26-2/3 bytes/sub-frame

Fractional counts of bytes per sub-frame are easily obtained using a simple programmable M/N counter structure. For example, a count of 26-2/3 bytes per CF is implemented by transferring two sub-frames of 27 bytes each followed by one sub-frame of 26 bytes. A smaller CF rate may be selected to produce an integer number of bytes per sub-frame. However, generally speaking, implementing a simple M/N counter in hardware should require less area within an integrated circuit chip or electronic module used to implement part or all of embodiments of the invention than the area needed for a larger audio sample FIFO buffer.

An exemplary application that illustrates the impact of different data transfer rates and data types is a Karaoke system: a system in which an end user, or users, sings along with a music video program. Lyrics of the song are displayed somewhere on the screen, typically at the bottom, so the user knows the words to be sung, and roughly the timing of the song.
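The M/N counter behavior for fractional byte counts described above can be sketched in software as an accumulator. For the voice example (8,000 sps, 8-bit, at a 300 Hz common frame rate), the accumulator yields sub-frames of 26 and 27 bytes that average out to exactly 26-2/3; the grouping order may differ from the 27-27-26 pattern given above, but the long-run average is identical. Function and parameter names here are illustrative:

```python
def subframe_byte_counts(m: int, n: int, whole: int, count: int):
    """Emit per-sub-frame byte counts averaging (whole + m/n) bytes,
    using an M/N accumulator: an extra byte is sent whenever the
    accumulator rolls over."""
    acc = 0
    out = []
    for _ in range(count):
        acc += m
        if acc >= n:
            acc -= n
            out.append(whole + 1)   # sub-frame carries the extra byte
        else:
            out.append(whole)
    return out
```

Over one second (300 sub-frames) the 26-2/3 average delivers exactly 8,000 bytes, matching the 8,000 sps, 8-bit voice stream.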
This application requires a video display with infrequent graphics updates, and mixing of the user's voice, or voices, with a stereo audio stream.

If one assumes a common frame rate of 300 Hz, then each sub-frame will consist of: 92,160 bytes of video content and 588 bytes of audio content (based on 147 16-bit samples, in stereo) over the forward link to the client display device, and an average of 26.67 (26-2/3) bytes of voice sent back from a microphone to the mobile Karaoke machine. Asynchronous packets are sent between the host and the display, possibly head mounted. These include at most 768 bytes of graphics data (quarter-screen height), and less than about 200 bytes for miscellaneous control and status commands.

Table V shows how data is allocated within a sub-frame for the Karaoke example. The total rate being used is selected to be about 279 Mbps. A slightly higher rate of 280 Mbps allows about another 400 bytes of data per sub-frame to be transferred, which allows the use of occasional control and status messages.

Table V

    Element                                                        Overhead Bytes               Media Bytes
    Music Video at 640x480 pixels and 30 fps                       2*28 = 56                    92,160
    Lyric Text at 640x120 pixels and 1 fps,
      updated in 10 sub-frames (1/30 sec)                          28                           768
    CD Audio at 44,100 sps, stereo, 16-bit                         2*16 = 32                    588
    Voice at 8,000 sps, mono, 8-bit                                28+8+8+(4*16)+(3*27) = 125   26.67
    Sub-frame Header                                               22                           -
    Total Bytes/CF                                                 263                          115,815
    Total Rate (Mbps)                                              (263+115815)*8*300 = 278.5872

III. (Continued) High Rate Digital Data Interface System Architecture

E. Link Layer

Data transferred using the MDDI high-speed serial data signals consists of a stream of time-multiplexed packets that are linked one after the other. Even when a transmitting device has no data to send, an MDDI link controller generally automatically sends filler packets, thus maintaining a stream of packets.
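The totals in Table V can be checked arithmetically. The overhead and media byte values below are taken from the table itself (the voice overhead is entered as the 125-byte total stated there), assuming the 300 Hz common frame rate of the example:

```python
# Per-sub-frame overhead bytes from Table V.
overhead = {
    "music_video": 2 * 28,     # 56
    "lyric_text": 28,
    "cd_audio": 2 * 16,        # 32
    "voice": 125,              # total stated in Table V for the reverse voice stream
    "subframe_header": 22,
}
total_overhead = sum(overhead.values())          # 263 bytes per sub-frame

# Total link rate: (overhead + media bytes) * 8 bits * 300 sub-frames per second.
total_rate_bps = (total_overhead + 115815) * 8 * 300
total_rate_mbps = total_rate_bps / 1e6           # 278.5872 Mbps, as in Table V
```

The result, about 279 Mbps, is the rate the example selects; the slightly higher 280 Mbps figure leaves roughly another 400 bytes per sub-frame for occasional control and status messages.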
The use of a simple packet structure ensures reliable isochronous timing for video and audio signals or data streams.

Groups of packets are contained within signal elements or structures referred to as sub-frames, and groups of sub-frames are contained within signal elements or structures referred to as a media-frame. A sub-frame contains one or more packets, depending on their respective size and data transfer uses, and a media-frame contains one or more sub-frames. The largest sub-frame provided by the protocol employed by the embodiments presented here is on the order of 2^32-1 or 4,294,967,295 bytes, and the largest media-frame size then becomes on the order of 2^16-1 or 65,535 sub-frames.

A special sub-frame header packet contains a unique identifier that appears at the beginning of each sub-frame, as is discussed below. That identifier is also used for acquiring the frame timing at the client device when communication between the host and client is initiated. Link timing acquisition is discussed in more detail below.

Typically, a display screen is updated once per media-frame when full-motion video is being displayed. The display frame rate is then the same as the media-frame rate. The link protocol supports full-motion video over an entire display, or just a small region of full-motion video content surrounded by a static image, depending on the desired application. In some low-power mobile applications, such as viewing web pages or email, the display screen may only need to be updated occasionally. In those situations, it is advantageous to transmit a single sub-frame and then shut down or inactivate the link to minimize power consumption. The interface also supports effects such as stereo vision, and handles graphics primitives.

Sub-frames allow a system to enable the transmission of high-priority packets on a periodic basis. This allows simultaneous isochronous streams to co-exist with a minimal amount of data buffering.
This is one advantage embodiments provide to the display process, allowing multiple data streams (high-speed communication of video, voice, control, status, pointing device data, etc.) to essentially share a common channel. The interface transfers this information using relatively few signals. It also enables display-technology-specific actions to exist, such as horizontal sync pulses and blanking intervals for a CRT monitor, or other client-technology-specific actions.

F. Link Controller

The MDDI link controller shown in FIGS. 4 and 5 is manufactured or assembled to be a completely digital implementation, with the exception of the differential line receivers which are used to receive MDDI data and strobe signals. However, even the differential line drivers and receivers can be implemented in the same digital integrated circuits as the link controller, such as when making a CMOS type IC. No analog functions or phase lock loops (PLLs) are required for bit recovery or to implement the hardware for the link controller. The host and client link controllers contain very similar functions, with the exception of the client interface, which contains a state machine for link synchronization. Therefore, the embodiments of the invention allow the practical advantage of being able to create a single controller design or circuit that can be configured as either a host or a client, which can reduce manufacturing costs for the link controllers as a whole.

IV. Interface Link Protocol

A. Frame Structure

The signal protocol or frame structure used to implement the forward link communication for packet transfer is illustrated in FIG. 6. As shown in FIG. 6, information or digital data is grouped into elements known as packets. Multiple packets are in turn grouped together to form what is referred to as a "sub-frame," and multiple sub-frames are in turn grouped together to form a "media" frame.
To control the formation of frames and transfer of sub-frames, each sub-frame begins with a specially predefined packet referred to as a Sub-frame Header Packet (SHP).

The host device selects the data rate to be used for a given transfer. This rate can be changed dynamically by the host device based on both the maximum transfer capability of the host, or the data being retrieved from a source by the host, and the maximum capability of the client, or other device the data is being transferred to.

A recipient client device designed for, or capable of, working with the MDDI or inventive signal protocol is able to be queried by the host to determine the maximum, or current maximum, data transfer rate it can use (or a default slower minimum rate may be used), as well as usable data types and features supported. This information could be transferred using a Client Capability Packet (DCP), as discussed further below. The client display device is capable of transferring data or communicating with other devices using the interface at a pre-selected minimum data rate, or within a minimum data rate range, and the host will perform a query using a data rate within this range to determine the full capabilities of the client devices.

Other status information defining the nature of the bitmap and video frame-rate capabilities of the client can be transferred in a status packet to the host so that the host can configure the interface to be as efficient or optimal as practical, or as desired within any system constraints.

The host sends filler packets when there are no (more) data packets to be transferred in the present sub-frame, or when the host cannot transfer at a rate sufficient to keep pace with the data transmission rate chosen for the forward link. Since each sub-frame begins with a sub-frame header packet, the end of the previous sub-frame contains a packet (most likely a filler packet) that exactly fills the previous sub-frame.
In the case of a lack of room for data-bearing packets per se, a filler packet will most likely be the last packet in a sub-frame, or at the end of a next previous sub-frame and before a sub-frame header packet. It is the task of the control operations in a host device to ensure that there is sufficient space remaining in a sub-frame for each packet to be transmitted within that sub-frame. At the same time, once a host device initiates the sending of a data packet, the host must be able to successfully complete a packet of that size within a frame without incurring a data under-run condition.

In one aspect of embodiments, sub-frame transmission has two modes. One mode is a periodic sub-frame mode, or periodic timing epochs, used to transmit live video and audio streams. In this mode, the sub-frame length is defined as being non-zero. The second mode is an asynchronous or non-periodic mode in which frames are used to provide bitmap data to a client only when new information is available. This mode is defined by setting the sub-frame length to zero in the Sub-frame Header Packet. When using the periodic mode, sub-frame packet reception may commence when the client has synchronized to the forward link frame structure. This corresponds to the "in sync" states defined according to the state diagrams discussed below with respect to FIG. 49 or FIG. 63. In the asynchronous non-periodic sub-frame mode, reception commences after the first Sub-frame Header Packet is received.

B. Overall Packet Structure

The format or structure of the packets used to formulate the communication or signal protocol, or method or means for transferring data, implemented by the embodiments is presented below, keeping in mind that the interface is extensible and additional packet structures can be added as desired.
The packets are labeled as, or divided into, different "packet types" in terms of their function in the interface, that is, the commands, information, values, or data they transfer or are associated with. Therefore, each packet type denotes a pre-defined packet structure for a given packet which is used in manipulating the packets and data being transferred. As will be readily apparent, the packets may have pre-selected lengths, or have variable or dynamically changeable lengths, depending on their respective functions. The packets could also bear differing names, although the same function is still realized, as can occur when protocols are changed during acceptance into a standard. The bytes or byte values used in the various packets are configured as multi-bit (8- or 16-bit) unsigned integers. A summary of the packets being employed, along with their "type" designations, listed in type order, is shown in Tables VI-1 through VI-4.

Each table represents a general "type" of packet within the overall packet structure for ease in illustration and understanding. There is no limitation or other impact implied or being expressed for the invention by these groupings, and the packets can be organized in many other fashions as desired.
The direction in which transfer of a packet is considered valid is also noted in each table.

Table VI-1

    Packet Name                                 Packet Type   Valid Direction
    Sub-frame Header Packet                     15359         Forward
    Filler Packet                               0             Forward and Reverse
    Reverse Link Encapsulation Packet           65            Forward
    Link Shutdown Packet                        69            Forward
    Interface Type Handoff Request Packet       75            Forward
    Interface Type Acknowledge Packet           76            Reverse
    Perform Type Handoff Packet                 77            Forward
    Round Trip Delay Measurement Packet         82            Forward
    Forward Link Skew Calibration Packet        83            Forward

Table VI-2

    Packet Name                                 Packet Type          Valid Direction
    Video Stream Packet                         16                   Forward and Reverse
    Audio Stream Packet                         32                   Forward and Reverse
    Reserved Stream Packets                     1-15, 18-31, 33-55   Forward and Reverse
    User-Defined Stream Packets                 56-63                Forward and Reverse
    Color Map Packet                            64                   Forward and Reverse
    Forward Audio Channel Enable Packet         78                   Forward
    Reverse Audio Sample Rate Packet            79                   Forward
    Transparent Color Enable Packet             81                   Forward

Table VI-3

    Packet Name                                 Packet Type   Valid Direction
    Client Capability Packet                    66            Reverse
    Keyboard Data Packet                        67            Forward and Reverse
    Pointing Device Data Packet                 68            Forward and Reverse
    Client Request and Status Packet            70            Reverse
    Digital Content Protection Overhead Packet  80            Forward and Reverse
    Request VCP Feature Packet                  128           Forward
    VCP Feature Reply Packet                    129           Reverse
    Set VCP Feature Packet                      130           Forward
    Request Valid Parameter Packet              131           Forward
    Valid Parameter Reply Packet                132           Reverse
    Request Specific Status Packet              138           Forward
    Valid Status Reply List Packet              139           Reverse
    Packet Processing Delay Parameters Packet   140           Reverse
    Personal Display Capability Packet          141           Reverse
    Client Error Report Packet                  142           Reverse
    Scaled Video Stream Capability Packet       143           Reverse
    Client Identification Packet                144           Reverse
    Alternate Display Capability Packet         145           Reverse
    Register Access Packet                      146           Forward and Reverse

Table VI-4

    Packet Name                                 Packet Type   Valid Direction
    Bit Block Transfer Packet                   71            Forward
    Bitmap Area Fill Packet                     72            Forward
    Bitmap Pattern Fill Packet                  73            Forward
    Read Frame Buffer Packet                    74            Forward
    Scaled Video Stream Capability Packet       143           Reverse
    Scaled Video Stream Setup Packet            136           Forward
    Scaled Video Stream Acknowledgement Packet  137           Reverse
    Scaled Video Stream Packet                  18            Forward

Something that is clear from other discussions within this text is that the Reverse Encapsulation Packet, Client Capability Packet, and Client Request and Status Packet are each considered very important to, or even required in, many embodiments of communication interfaces for External Mode operation, while they can be, or are more likely to be, considered optional for Internal Mode operation.
This creates yet another type of MDDI protocol which allows communication of data at very high speeds with a reduced set of communication packets, and a corresponding simplification of control and timing.

Packets have a common basic structure, or overall set of minimum fields, comprising a Packet Length field, a Packet Type field, Data Bytes field(s), and a CRC field, which is illustrated in FIG. 7. As shown in FIG. 7, the Packet Length field contains information, in the form of a multi-bit or -byte value, that specifies the length of the packet, such as the total number of bytes contained between the packet length field and the CRC field. In one embodiment, the packet length field contains a 16-bit or 2-byte wide unsigned integer that specifies the packet length. The Packet Type field is another multi-bit field which designates the type of information that is contained within the packet. In an exemplary embodiment, this is a 16-bit or 2-byte wide value, in the form of a 16-bit unsigned integer, and specifies such data types as display capabilities, handoff, video or audio streams, status, and so forth.

A third field is the Data Bytes field, which contains the bits or data being transferred or sent between the host and client devices as part of that packet. The format of the data is defined specifically for each packet type according to the specific type of data being transferred, and may be separated into a series of additional fields, each with its own format requirements. That is, each packet type will have a defined format for this portion or field. The last field is the CRC field, which contains the results of a 16-bit cyclic redundancy check calculated over the Data Bytes, Packet Type, and Packet Length fields, and is used to confirm the integrity of the information in the packet. In other words, it is calculated over the entire packet except for the CRC field itself.
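The four-field packet layout just described can be sketched as follows. This is a toy illustration: the CRC-16 polynomial and seed used here, and the exact set of bytes counted by the length field, are assumptions for the sketch and are not taken from the MDDI specification.

```python
import struct

def crc16(data: bytes, poly: int = 0x8005, init: int = 0xFFFF) -> int:
    """Generic bitwise CRC-16; the polynomial and seed are illustrative only."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_packet(packet_type: int, payload: bytes) -> bytes:
    """Assemble Packet Length | Packet Type | Data Bytes | CRC.
    Length here counts the type field plus the data bytes; both 16-bit
    fields are packed little-endian (least significant byte first)."""
    head = struct.pack("<HH", 2 + len(payload), packet_type)
    body = head + payload
    # CRC covers the Packet Length, Packet Type, and Data Bytes fields.
    return body + struct.pack("<H", crc16(body))
```

A receiver would recompute the CRC over everything except the trailing 2 bytes and compare, which is the integrity check the text describes.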
The client generally keeps a total count of the CRC errors detected, and reports this count back to the host in the Client Request and Status Packet (see further below).

Generally, these field widths and this organization are designed to keep 2-byte fields aligned on an even byte boundary and 4-byte fields aligned on 4-byte boundaries. This allows packet structures to be easily built in a main memory space of, or associated with, a host and a client without violating the data-type alignment rules encountered for most typically used processors or control circuits.

During transfer of the packets, fields are transmitted starting with the Least Significant Bit (LSB) first and ending with the Most Significant Bit (MSB) transmitted last. Parameters that are more than one byte in length are transmitted using the least significant byte first, which results in the same bit transmission pattern being used for a parameter greater than 8 bits in length as is used for a shorter parameter where the LSB is transmitted first. The data fields of each packet are generally transmitted in the order that they are defined in the subsequent sections below, with the first field listed being transmitted first, and the last field described being transmitted last. The data on the MDDI_Data0 signal path is aligned with bit '0' of bytes transmitted on the interface in any of the modes, Type 1, Type 2, Type 3, or Type 4.

When manipulating data for displays, the data for arrays of pixels are transmitted by rows first, then columns, as is traditionally done in the electronics arts. In other words, all pixels that appear in the same row in a bitmap are transmitted in order, with the left-most pixel transmitted first and the right-most pixel transmitted last. After the right-most pixel of a row is transmitted, the next pixel in the sequence is the left-most pixel of the following row.
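Both orderings just described (least significant byte first with LSB-first bits, and row-major pixel traversal from the upper-left origin) are easy to state in code. A small sketch with illustrative function names:

```python
def field_bits(value: int, num_bytes: int) -> list:
    """Transmission order for a multi-byte parameter: least significant
    byte first, and the least significant bit of each byte first."""
    bits = []
    for _ in range(num_bytes):
        byte = value & 0xFF
        value >>= 8
        bits.extend((byte >> i) & 1 for i in range(8))
    return bits

def pixel_transmit_order(width: int, height: int) -> list:
    """Pixels stream left to right within a row, rows top to bottom,
    starting from location (0, 0) at the upper-left corner."""
    return [(x, y) for y in range(height) for x in range(width)]
```

Note that field_bits(0x0102, 2) equals the plain LSB-first expansion of the 16-bit value, which is exactly the equivalence the text points out between long and short parameters.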
Rows of pixels are generally transmitted in order from top to bottom for most displays, although other configurations can be accommodated as needed. Furthermore, in handling bitmaps, the conventional approach, which is followed here, is to define a reference point by labeling the upper-left corner of a bitmap as location or offset "0,0." The X and Y coordinates used to define or determine a position in the bitmap increase in value as one approaches the right and bottom of the bitmap, respectively. The first row and first column (upper-left corner of an image) start with an index value of zero. The magnitude of the X coordinate increases toward the right side of the image, and the magnitude of the Y coordinate increases toward the bottom of the image, as viewed by the user of the display.

A display window is the visible portion of a bitmap, that is, the portion of the pixels in the bitmap that can be seen by the user on the physical display medium. It is often the case that the display window and the bitmap are the same size. The upper-left corner of a display window always displays bitmap pixel location '0,0'. The width of the display window corresponds to the X axis of the bitmap, and the display window width for this embodiment is less than or equal to the width of the corresponding bitmap. The height of the window corresponds to the Y axis of the bitmap, and the display window height for this embodiment is less than or equal to the height of the corresponding bitmap. The display window itself is not addressable in the protocol because it is only defined as the visible portion of a bitmap.

The relationship between bitmaps and display windows is well known in the computer, electronic, Internet communication, and other electronics-related arts. Therefore, no further discussion or illustration of these principles is provided here.

C. Packet Definitions

1.
Sub-Frame Header Packet

The Sub-Frame Header Packet is the first packet of every sub-frame, and has a basic structure as illustrated in FIG. 8. The Sub-Frame Header Packet is used for host-client synchronization; every host should be able to generate this packet, while every client should be able to receive and interpret this packet. As can be seen in one embodiment in FIG. 8, this type of packet is structured to have Packet Length, Packet Type, Unique Word, Reserved 1, Sub-Frame Length, Protocol Version, Sub-Frame Count, and Media-frame Count fields, generally in that order. In one embodiment, this type of packet is generally identified as a Type 15359 (0x3bff hexadecimal) packet and uses a pre-selected fixed length of 20 bytes, not including the packet length field.

The Packet Type field and the Unique Word field each use a 2-byte value (16-bit unsigned integer). The 4-byte combination of these two fields together forms a 32-bit unique word with good autocorrelation. In one embodiment, the actual unique word is 0x005a3bff, where the lower 16 bits are transmitted first as the Packet Type, and the most significant 16 bits are transmitted afterward.

The Reserved 1 field contains 2 bytes that are reserved space for future use, and is generally configured at this point with all bits set to zero. One purpose of this field is to cause subsequent 2-byte fields to align to a 16-bit word address and cause 4-byte fields to align to a 32-bit word address. The least significant byte is reserved to indicate whether or not a host is capable of addressing multiple client devices. A value of zero for this byte is reserved to indicate that the host is capable of operating only with a single client device.

The Sub-frame Length field contains 4 bytes of information, or values, that specify the number of bytes per sub-frame.
In one embodiment, the length of this field may be set equal to zero to indicate that only one sub-frame will be transmitted by the host before the link is shut down into an idle state. The value in this field can be dynamically changed "on-the-fly" when transitioning from one sub-frame to the next. This capability is useful in order to make minor timing adjustments in the sync pulses for accommodating isochronous data streams. If the CRC of the Sub-frame Header packet is not valid, then the link controller should use the Sub-frame Length of the previous known-good Sub-frame Header packet to estimate the length of the current sub-frame.

The Protocol Version field contains 2 bytes that specify the protocol version used by the host. The Protocol Version field may be set to '0' to specify the first or current version of the protocol as being used. This value will change over time as new versions are created, and is already being upgraded to a value of '1' for some version fields. Version values will probably or generally follow the current version number of an approved standards document covering interfaces such as MDDI, as would be known.

The Sub-frame Count field contains 2 bytes that specify a sequence number that indicates the number of sub-frames that have been transmitted since the beginning of the media-frame. The first sub-frame of the media-frame has a Sub-frame Count of zero. The last sub-frame of the media-frame has a value of n-1, where n is the number of sub-frames per media-frame. The value of the Sub-frame Count field is equal to the Sub-frame Count sent in the previous Sub-frame Header packet plus 1, except for the first sub-frame of a media-frame, which will have a count of zero.
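As a sketch (not part of the specification), the Sub-Frame Header Packet fields listed above can be assembled with little-endian packing. Note the listed fields total 18 bytes after the length field; the trailing 2-byte CRC appended here to reach the stated 20-byte length is an assumption, and the CRC value itself is a placeholder:

```python
import struct

def subframe_header(sub_frame_length: int, protocol_version: int,
                    sub_frame_count: int, media_frame_count: int,
                    crc: int = 0) -> bytes:
    """Sketch of the Sub-Frame Header Packet byte layout.

    Multi-byte fields are little-endian, matching the LSB-first byte
    order of the interface.  The trailing 2-byte CRC is an assumption
    used to reach the stated 20-byte length; a real implementation
    would compute it per the specification.
    """
    return struct.pack(
        "<HHHHIHHIH",
        20,                  # Packet Length (excludes this field itself)
        0x3BFF,              # Packet Type (lower 16 bits of unique word)
        0x005A,              # Unique Word (upper 16 bits)
        0,                   # Reserved 1
        sub_frame_length,
        protocol_version,
        sub_frame_count,
        media_frame_count & 0xFFFFFFFF,  # wraps after 2^32 - 1
        crc,
    )

pkt = subframe_header(0x10000, 0, 0, 0)
assert len(pkt) == 22                    # 20 bytes plus the 2-byte length field
assert pkt[:4] == b"\x14\x00\xff\x3b"    # length 20, then 0x3bff LSB first
```

The 0x005a3bff unique word appears on the wire as 0xff, 0x3b, 0x5a, 0x00, which is what gives the 4-byte pattern its synchronization value.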
Note that if the Sub-frame Length is set equal to zero (indicating a non-periodic sub-frame), then the Sub-frame Count is also set equal to zero.

The Media-frame Count field contains 4 bytes (32-bit unsigned integer) that specify a sequence number that indicates the number of media-frames that have been transmitted since the beginning of the present media item or data being transferred. The first media-frame of a media item has a Media-frame Count of zero. The Media-frame Count increments just prior to the first sub-frame of each media-frame and wraps back to zero after the maximum Media-frame Count (for example, media-frame number 2^32-1 = 4,294,967,295) is used. The Media-frame Count value may be reset generally at any time by the Host to suit the needs of an end application.

2. Filler Packet

A filler packet is a packet that is transferred to, or from, a client device when no other information is available to be sent on either the forward or reverse link. It is recommended that filler packets have a minimum length in order to allow maximum flexibility in sending other packets when required. At the very end of a sub-frame or a reverse link encapsulation packet (see below), a link controller sets the size of the filler packet to fill the remaining space to maintain packet integrity. The Filler Packet is useful to maintain timing on the link when the host or client has no information to send or exchange. Every host and client needs to be able to send and receive this packet to make effective use of the interface.

An exemplary embodiment of the format and contents of a Filler Packet is shown in FIG. 9. As shown in FIG. 9, this type of packet is structured to have Packet Length, Packet Type, Filler Bytes, and CRC fields. In one embodiment, this type of packet is generally identified as a Type 0, which is indicated in the 2-byte Type field.
The bits or bytes in the Filler Bytes field comprise a variable number of all-zero bit values to allow the filler packet to be the desired length. The smallest filler packet contains no bytes in this field. That is, the packet consists of only the packet length, packet type, and CRC, and in one embodiment uses a pre-selected fixed length of 6 bytes or a Packet Length value of 4. The CRC value is determined for all bytes in the packet including the Packet Length, which may be excluded in some other packet types.

3. Video Stream Packet

Video Stream Packets carry video data to update typically rectangular regions of a display device. The size of this region may be as small as a single pixel or as large as the entire display. There may be an almost unlimited number of streams displayed simultaneously, limited by system resources, because all context required to display a stream is contained within the Video Stream Packet. The format of one embodiment of a Video Stream Packet (Video Data Format Descriptor) is shown in FIG. 10. As seen in FIG. 10, in one embodiment, this type of packet is structured to have Packet Length (2 bytes), Packet Type, bClient ID, Video Data Descriptor, Pixel Display Attributes, X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X and Y Start, Pixel Count, Parameter CRC, Pixel Data, and Pixel Data CRC fields. This type of packet is generally identified as a Type 16, which is indicated in the 2-byte Type field. In one embodiment, a client indicates an ability to receive a Video Stream Packet using the RGB, Monochrome, and Y Cr Cb Capability fields of the Client Capability Packet.

In one embodiment, the bClient ID field contains 2 bytes of information that are reserved for a Client ID. Since this is a newly developed communications protocol, actual client IDs are not yet known or sufficiently communicable.
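Returning briefly to the Filler Packet described above, its layout can be sketched as follows (illustrative only; the CRC argument is a placeholder, since per the text the real CRC would be computed over every byte including the Packet Length field):

```python
import struct

def filler_packet(filler_bytes: int, crc: int = 0) -> bytes:
    """Sketch of a Filler Packet: Packet Length, Packet Type (0),
    zero-valued Filler Bytes, and a 2-byte CRC.

    The crc argument is a placeholder (assumption); a real
    implementation would compute it over the whole packet,
    including the Packet Length field.
    """
    packet_length = 2 + filler_bytes + 2   # type + filler + CRC
    return (struct.pack("<HH", packet_length, 0)
            + bytes(filler_bytes)          # all-zero filler
            + struct.pack("<H", crc))

# The smallest Filler Packet: no filler bytes, 6 bytes on the wire,
# and a Packet Length value of 4.
smallest = filler_packet(0)
assert len(smallest) == 6
assert smallest[:2] == b"\x04\x00"
```

A link controller would choose `filler_bytes` so the packet exactly fills the remaining space in the sub-frame or reverse link allocation.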
Therefore, the bits in this field are generally set equal to zero until such ID values are known, at which time the ID values can be inserted or used, as would be apparent to those skilled in the art. The same process will generally be true for the client ID fields discussed below.

The common frame concept discussed above is an effective way to minimize the audio buffer size and decrease latency. However, for video data it may be necessary to spread the pixels of one video frame across multiple Video Stream Packets within a media-frame. It is also very likely that the pixels in a single Video Stream Packet will not exactly correspond to a perfect rectangular window on the display. For the exemplary video frame rate of 30 frames per second, there are 300 sub-frames per second, which results in 10 sub-frames per media-frame. If there are 480 rows of pixels in each frame, each Video Stream Packet in each sub-frame will contain 48 rows of pixels. In other situations, the Video Stream Packet might not contain an integer number of rows of pixels. This is true for other video frame sizes where the number of sub-frames per media-frame does not divide evenly into the number of rows (also known as video lines) per video frame. For efficient operation, each Video Stream Packet generally must contain an integer number of pixels, even though it might not contain an integer number of rows of pixels. This is important if pixels are more than one byte each, or if they are in a packed format as shown in FIG. 12.

The format and contents employed for realizing the operation of an exemplary Video Data Descriptor field, as mentioned above, are shown in FIGS. 11A-11E. In FIGS. 11A-11E, the Video Data Format Descriptor field contains 2 bytes in the form of a 16-bit unsigned integer that specifies the format of each pixel in the Pixel Data in the present stream in the present packet.
It is possible that different Video Stream Packets may use different pixel data formats, that is, use a different value in the Video Data Format Descriptor, and similarly, a stream (region of the display) may change its data format on-the-fly. The pixel data format should comply with at least one of the valid formats for the client as defined in the Client Capability Packet. The Video Data Format Descriptor defines the pixel format for the present packet only; it does not imply that a constant format will continue to be used for the lifetime of a particular video stream.

FIGS. 11A through 11D illustrate how the Video Data Format Descriptor is coded. As used in these figures, and in this embodiment, when bits [15:13] are equal to '000', as shown in FIG. 11A, then the video data consists of an array of monochrome pixels where the number of bits per pixel is defined by bits 3 through 0 of the Video Data Format Descriptor word. Bits 11 through 4 are generally reserved for future use or applications and are set to zero in this situation. When bits [15:13] are instead equal to the values '001', as shown in FIG. 11B, then the video data consists of an array of color pixels that each specify a color through a color map (palette). In this situation, bits 5 through 0 of the Video Data Format Descriptor word define the number of bits per pixel, and bits 11 through 6 are generally reserved for future use or applications and set equal to zero. When bits [15:13] are instead equal to the values '010', as shown in FIG. 11C, then the video data consists of an array of color pixels where the number of bits per pixel of red is defined by bits 11 through 8, the number of bits per pixel of green is defined by bits 7 through 4, and the number of bits per pixel of blue is defined by bits 3 through 0.
In this situation, the total number of bits in each pixel is the sum of the number of bits used for red, green, and blue.

However, when bits [15:13] are instead equal to the values or string '011', as shown in FIG. 11D, then the video data consists of an array of video data in 4:2:2 YCbCr format with luminance and chrominance information, where the number of bits per pixel of luminance (Y) is defined by bits 11 through 8, the number of bits of the Cb component is defined by bits 7 through 4, and the number of bits of the Cr component is defined by bits 3 through 0. The total number of bits in each pixel is the sum of the number of bits used for Y, Cb, and Cr. The Cb and Cr components are sent at half the rate of Y. In addition, the video samples in the Pixel Data portion of this packet are organized as follows: Cbn, Yn, Crn, Yn+1, Cbn+2, Yn+2, Crn+2, Yn+3, ... where Cbn and Crn are associated with Yn and Yn+1, and Cbn+2 and Crn+2 are associated with Yn+2 and Yn+3, and so on.

Yn, Yn+1, Yn+2 and Yn+3 are luminance values of four consecutive pixels in a single row from left to right. If there is an odd number of pixels in a row (X Right Edge - X Left Edge + 1) in the window referenced by the Video Stream Packet, then the Y value corresponding to the last pixel in each row will be followed by the Cb value of the first pixel of the next row, and a Cr value is not sent for the last pixel in the row. It is recommended that windows using Y Cb Cr format have a width that is an even number of pixels. The Pixel Data in a packet should contain an even number of pixels. It may contain an odd or even number of pixels in the case where the last pixel of the Pixel Data corresponds to the last pixel of a row in the window specified in the Video Stream Packet header, i.e.
when the X location of the last pixel in the Pixel Data is equal to X Right Edge.

When bits [15:13] are instead equal to the values '100', then the video data consists of an array of Bayer pixels where the number of bits per pixel is defined by bits 3 through 0 of the Video Data Format Descriptor word. The Pixel Group Pattern is defined by bits 5 and 4 as shown in FIG. 11E. The order of pixel data may be horizontal or vertical, and the pixels in rows or columns may be sent in forward or backward order, as defined by bits 8 through 6. Bits 11 through 9 should be set to zero. The group of four pixels in the pixel group in the Bayer format resembles what is often referred to as a single pixel in some display technologies. However, one pixel in the Bayer format is only one of the four colored pixels of the pixel group mosaic pattern.

For all five formats shown in the figures, Bit 12, which is designated as "P," specifies whether the Pixel Data samples are packed or byte-aligned. A value of '0' in this field indicates that each pixel in the Pixel Data field is byte-aligned with an MDDI byte boundary. A value of '1' indicates that each pixel and each color within each pixel in the Pixel Data is packed up against the previous pixel or color within a pixel, leaving no unused bits. The difference between Byte-Aligned and Packed Pixel data formats is shown in more detail in FIG. 12, where one can clearly see that byte-aligned data may leave unused portions of the data sub-frame, as opposed to packed pixel format, which does not.

The first pixel in the first video stream packet of a media frame for a particular display window will go into the upper left corner of the stream window defined by an X Left Edge and a Y Top Edge, and the next pixel received is placed in the next pixel location in the same row, and so on.
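The descriptor bit fields described above (FIGS. 11A-11E) can be summarized in a small decoder. This is an illustrative sketch of the field layouts as stated in the text, without exhaustive validation of reserved bits:

```python
def decode_video_format(desc: int) -> dict:
    """Decode the 16-bit Video Data Format Descriptor per the field
    layouts described above (FIGS. 11A-11E)."""
    mode = (desc >> 13) & 0x7
    packed = bool(desc & (1 << 12))        # bit 12, the "P" bit
    if mode == 0b000:                      # monochrome
        return {"mode": "mono", "packed": packed, "bpp": desc & 0xF}
    if mode == 0b001:                      # color through a color map
        return {"mode": "palette", "packed": packed, "bpp": desc & 0x3F}
    if mode == 0b010:                      # RGB, per-component widths
        r, g, b = (desc >> 8) & 0xF, (desc >> 4) & 0xF, desc & 0xF
        return {"mode": "rgb", "packed": packed, "bpp": r + g + b,
                "r": r, "g": g, "b": b}
    if mode == 0b011:                      # 4:2:2 YCbCr
        y, cb, cr = (desc >> 8) & 0xF, (desc >> 4) & 0xF, desc & 0xF
        return {"mode": "ycbcr422", "packed": packed,
                "y": y, "cb": cb, "cr": cr}
    if mode == 0b100:                      # Bayer
        return {"mode": "bayer", "packed": packed, "bpp": desc & 0xF,
                "pattern": (desc >> 4) & 0x3, "order": (desc >> 6) & 0x7}
    raise ValueError("reserved mode in bits [15:13]")

# 16 bpp RGB 5-6-5, byte-aligned: mode '010', r=5, g=6, b=5
fmt = decode_video_format((0b010 << 13) | (5 << 8) | (6 << 4) | 5)
assert fmt["mode"] == "rgb" and fmt["bpp"] == 16 and not fmt["packed"]
```

A client would reject any descriptor whose decoded format is not among the valid formats it advertised in the Client Capability Packet.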
In this first packet of a media frame, the X start value will usually be equal to X Left Edge, and the Y start value will usually be equal to Y Top Edge. In subsequent packets corresponding to the same screen window, the X and Y start values will usually be set to the pixel location in the screen window that would normally follow after the last pixel sent in the Video Stream Packet that was transmitted in the previous sub-frame.

4. Audio Stream Packet

The audio stream packets carry audio data to be played through the audio system of the client, or for a stand-alone audio presentation device. Different audio data streams may be allocated for separate audio channels in a sound system, for example: left-front, right-front, center, left-rear, and right-rear, depending on the type of audio system being used. A full complement of audio channels is provided for headsets that contain enhanced spatial-acoustic signal processing. A client indicates an ability to receive an Audio Stream Packet using the Audio Channel Capability and Audio Sample Rate fields of the Client Capability Packet. The format of Audio Stream Packets is illustrated in FIG. 13.

As shown in FIG. 13, this type of packet is structured in one embodiment to have Packet Length, Packet Type, bClient ID, Audio Channel ID, Reserved 1, Audio Sample Count, Bits Per Sample and Packing, Audio Sample Rate, Parameter CRC, Digital Audio Data, and Audio Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 32 packet.

The bClient ID field contains 2 bytes of information that are reserved for a Client ID, as used previously. The Reserved 1 field contains 2 bytes that are reserved for future use, and is generally configured at this point with all bits set to zero.

The Bits Per Sample and Packing field contains 1 byte in the form of an 8-bit unsigned integer that specifies the packing format of audio data.
The format generally employed is for Bits 4 through 0 to define the number of bits per PCM audio sample. Bit 5 then specifies whether or not the Digital Audio Data samples are packed. The difference between packed and byte-aligned audio samples, here using 10-bit samples, is illustrated in FIG. 14. A value of '0' indicates that each PCM audio sample in the Digital Audio Data field is byte-aligned with an MDDI byte boundary, and a value of '1' indicates that each successive PCM audio sample is packed up against the previous audio sample. This bit is generally effective only when the value defined in bits 4 through 0 (the number of bits per PCM audio sample) is not a multiple of eight. Bits 7 through 6 are reserved for future use and are generally set at a value of zero.

5. Reserved Stream Packets

In one embodiment, packet types 1 to 15, 18 to 31, and 33 through 55 are reserved for stream packets to be defined for use in future versions or variations of the packet protocols, as desired for various applications encountered. Again, this is part of making the MDDI more flexible and useful in the face of ever-changing technology and system designs as compared to other techniques.

6. User-Defined Stream Packets

Eight data stream types, known as Types 56 through 63, are reserved for use in proprietary applications that may be defined by equipment manufacturers for use with a MDDI link. These are known as User-defined Stream Packets. Such packets may be used for any purpose, but the host and client should only employ such packets in situations where the result of such use is very well understood or known. The specific definition of the stream parameters and data for these packet types is left to the specific equipment manufacturers or interface designers implementing such packet types or seeking their use. Some exemplary uses of the User-defined Stream Packets are to convey test parameters and test results, factory calibration data, and proprietary special use data.
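The Bits Per Sample and Packing byte described above for the Audio Stream Packet, and the packed-versus-byte-aligned distinction of FIG. 14, can be sketched as follows. The LSB-first packing direction within the bit stream is an assumption consistent with the interface's LSB-first transmission order:

```python
def decode_bits_per_sample(field: int) -> tuple[int, bool]:
    """Bits 4:0 give bits per PCM sample; bit 5 is the packed flag."""
    return field & 0x1F, bool(field & 0x20)

def pack_samples(samples: list[int], bits_per_sample: int) -> bytes:
    """Pack PCM samples with no unused bits between them (the '1'
    case).  Packing direction (LSB first into the growing bit stream)
    is an assumption; byte-aligned mode would instead round each
    sample up to whole bytes."""
    acc = 0
    for i, s in enumerate(samples):
        acc |= (s & ((1 << bits_per_sample) - 1)) << (i * bits_per_sample)
    total_bits = len(samples) * bits_per_sample
    return acc.to_bytes((total_bits + 7) // 8, "little")

bps, packed = decode_bits_per_sample(0x2A)   # 0b10_1010: 10 bits, packed
assert (bps, packed) == (10, True)
# Four packed 10-bit samples occupy 5 bytes instead of 8 byte-aligned ones.
assert len(pack_samples([1, 2, 3, 4], 10)) == 5
```

This also shows why the packed flag matters only when the sample width is not a multiple of eight: 8- or 16-bit samples occupy the same space either way.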
The format of the user-defined stream packets as used in one embodiment is illustrated in FIG. 15. As shown in FIG. 15, this type of packet is structured to have Packet Length (2 bytes), Packet Type, bClient ID number, Stream Parameters, Parameter CRC, Stream Data, and Stream Data CRC fields.

7. Color Map Packets

The color map packets specify the contents of a color map look-up table used to present colors for a client. Some applications may require a color map that is larger than the amount of data that can be transmitted in a single packet. In these cases, multiple Color Map packets may be transferred, each with a different subset of the color map, by using the offset and length fields described below. The format of the Color Map Packet in one embodiment is illustrated in FIG. 16. As shown in FIG. 16, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Color Map Item Count, Color Map Offset, Parameter CRC, Color Map Data, and Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 64 packet (Video Data Format and Color Map Packet) as specified in the Packet Type field (2 bytes). A client indicates an ability to receive Color Map Packets using the Color Map Size and Color Map Width fields of the Client Capability Packet.

8. Reverse Link Encapsulation Packets

In an exemplary embodiment, data is transferred in the reverse direction using a Reverse Link Encapsulation Packet. A forward link packet is sent and the MDDI link operation (transfer direction) is changed or turned around in the middle of this packet so that packets can be sent in the reverse direction. The format of the Reverse Link Encapsulation packet in one embodiment is illustrated in FIG. 17. As shown in FIG.
17, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Reverse Link Flags, Reverse Rate Divisor, Turn-Around 1 Length, Turn-Around 2 Length, Parameter CRC, All Zero 1, Turn-Around 1, Reverse Data Packets, Turn-Around 2, and All Zero 2 fields. In one embodiment, this type of packet is generally identified as a Type 65 packet. For External Mode, every host must be able to generate this packet and receive data, and every client must be able to receive and send data to the host in order to efficiently make use of the desired protocol and resulting speed. Implementation of this packet is optional for Internal Mode, but the Reverse Link Encapsulation Packet is used for the host to receive data from the client.

The MDDI link controller behaves in a special manner while sending a Reverse Link Encapsulation Packet. The MDDI has a strobe signal that is generally always driven by the host as controller of the link. The host behaves as if it were transmitting a zero for each bit of the Turn-Around and Reverse Data Packets portions of the Reverse Link Encapsulation packet. The host toggles the MDDI_Strobe signal at each bit boundary during the two turn-around times and during the time allocated for reverse data packets. That is, the host toggles MDDI_Stb from the beginning of the All Zero 1 field to the end of the All Zero 2 field. (This is the same behavior as if it were transmitting all-zero data.)

The host disables its MDDI data signal line drivers and generally assures they have been completely disabled prior to the last bit of the Turn-Around 1 field, and then re-enables its line drivers during the Turn-Around 2 field, and generally assures that the drivers have been completely re-enabled prior to the last bit of the Turn-Around 2 field. The client reads the Turn-Around Length parameter and drives the data signals toward the host immediately after the last bit in the Turn-Around 1 field.
That is, the client clocks new data into the link on certain rising edges of the MDDI strobe as specified in the packet contents description below, and elsewhere. The client uses the Packet Length and Turn-Around Length parameters to know the length of time it has available to send packets to the host. The client may send filler packets or drive the data lines to a zero state when it has no data to send to the host. If the data lines are driven to zero, the host interprets this as a packet with a zero length (not a valid length), and the host does not accept any more packets from the client for the duration of the current Reverse Link Encapsulation Packet.

In one embodiment, the Reverse Link Request field of the Client Request and Status Packet may be used to inform the host of the number of bytes the client needs in the Reverse Link Encapsulation Packet to send data back to the host. The host attempts to grant the request by allocating at least that number of bytes in the Reverse Link Encapsulation Packet. The host may send more than one Reverse Link Encapsulation Packet in a sub-frame. The client may send a Client Request and Status Packet at almost any time, and the host will interpret the Reverse Link Request parameter as the total number of bytes requested in one sub-frame.

9. Client Capability Packets

A host needs to know the capability of the client (display) it is communicating with in order to configure the host-to-client link in a generally optimum or desired manner. It is recommended that a display send a Client Capability Packet to the host after forward link synchronization is acquired. The transmission of such a packet is considered required when requested by the host using the Reverse Link Flags in the Reverse Link Encapsulation Packet. The Client Capability Packet is used to inform the host of the capabilities of a client.
For External Mode, every host should be able to receive this packet, and every client should be able to send this packet to fully utilize this interface and protocol. Implementation of this packet is optional for Internal Mode, since the capabilities of the client, such as a display, keyboard, or other input/output device, in this situation should already be well defined and known to the host at the time of manufacture or assembly into a single component or unit of some type.

The format of the Client Capability packet in one embodiment is illustrated in FIG. 18. As shown in FIG. 18, for this embodiment, this type of packet is structured to have Packet Length, Packet Type, reserved cClient ID, Protocol Version, Min Protocol Version, Data Rate Capability, Interface Type Capability, Number of Alt Displays, Reserved 1, Bitmap Width, Bitmap Height, Display Window Width, Display Window Height, Color Map Size, Color Map RGB Width, RGB Capability, Monochrome Capability, Reserved 2, Y Cr Cb Capability, Bayer Capability, Alpha-Cursor Image Planes, Client Feature Capability, Max Video Frame Rate, Min Video Frame Rate, Min Sub-frame Rate, Audio Buffer Depth, Audio Channel Capability, Audio Sample Rate Capability, Audio Sample Resolution, Mic Audio Sample Resolution, Mic Sample Rate Capability, Keyboard Data Format, Pointing Device Data Format, Content Protection Type, Mfr. Name, Product Code, Reserved 3, Serial Number, Week of Mfr., Year of Mfr., and CRC fields. In an exemplary embodiment, this type of packet is generally identified as a Type 66 packet.

10. Keyboard Data Packets

A keyboard data packet is used to send keyboard data from the client device to the host. A wireless (or wired) keyboard may be used in conjunction with various displays or audio devices, including, but not limited to, a head mounted video display/audio presentation device. The Keyboard Data Packet relays keyboard data received from one of several known keyboard-like devices to the host.
This packet can also be used on the forward link to send data to the keyboard. A client indicates an ability to send and receive Keyboard Data Packets using the Keyboard Data Format field in the Client Capability Packet.

The format of a Keyboard Data Packet is shown in FIG. 19, and contains a variable number of bytes of information from or for a keyboard. As shown in FIG. 19, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Keyboard Data Format, Keyboard Data, and CRC fields. Here, this type of packet is generally identified as a Type 67 packet.

The bClient ID is a reserved field, as before, and the CRC is performed over all bytes of the packet. The Keyboard Data Format field contains a 2-byte value that describes the keyboard data format. Bits 6 through 0 should be identical to the Keyboard Data Format field in the Client Capability Packet. This value must not equal 127. Bits 15 through 7 are reserved for future use and are, therefore, currently set to zero.

11. Pointing Device Data Packets

A pointing device data packet is used as a method, structure, or means to send position information from a wireless mouse or other pointing device from the client to the host. Data can also be sent to the pointing device on the forward link using this packet. An exemplary format of a Pointing Device Data Packet is shown in FIG. 20, and contains a variable number of bytes of information from or for a pointing device. As shown in FIG. 20, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Pointing Device Format, Pointing Device Data, and CRC fields. In an exemplary embodiment, this type of packet is generally identified as a Type 68 packet in the 1-byte type field.

12. Link Shutdown Packets

A Link Shutdown Packet is sent from the host to the client as a method or means to indicate that the MDDI data and strobe will be shut down and go into a low-power consumption "hibernation" state.
This packet is useful to shut down the link and conserve power after static bitmaps are sent from a mobile communication device to the display, or when there is no further information to transfer from a host to a client for the time being. Normal operation is resumed when the host sends packets again. The first packet sent after hibernation is a Sub-Frame Header Packet. The format of a Link Shutdown Packet for one embodiment is shown in FIG. 21. As shown in FIG. 21, this type of packet is structured to have Packet Length, Packet Type, CRC, and All Zeros fields. In one embodiment, this type of packet is generally identified as a Type 69 packet in the 1-byte type field.

The Packet Length field uses 2 bytes to specify the total number of bytes in the packet, not including the Packet Length field. In one embodiment, the Packet Length of this packet is dependent on the Interface Type or link mode in effect at the time when the Link Shutdown Packet is sent. Therefore, the typical packet length becomes 20 bytes for Type 1 mode (22 bytes total in the packet), 36 bytes for Type 2 mode (38 bytes total in the packet), 68 bytes for a Type 3 mode link (70 bytes total in the packet), and 132 bytes for Type 4 mode (with 134 bytes total in the packet).

The All Zeros field uses a variable number of bytes to ensure that MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering clock using only MDDI_Stb prior to disabling the host's line drivers. The length of the All Zeros field is dependent on the Interface Type or link operating mode in effect at the time when the Link Shutdown Packet is sent. The length of the All Zeros field is intended to produce 64 pulses on MDDI_Stb for any Interface Type setting.
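As an illustrative sketch, the Link Shutdown Packet sizes follow a simple pattern: the All Zeros field doubles with each wider interface type (a rule inferred from the sizes stated in the text), and the Packet Length value is that size plus the 2-byte Packet Type and 2-byte CRC fields:

```python
def all_zeros_length(interface_type: int) -> int:
    """All Zeros field size (bytes) that yields 64 MDDI_Stb pulses:
    16 bytes for Type 1, doubling with each wider interface type.
    The doubling rule is inferred from the sizes stated in the text."""
    return 16 << (interface_type - 1)

def shutdown_packet_length(interface_type: int) -> int:
    """Packet Length value: All Zeros plus the 2-byte Packet Type and
    2-byte CRC fields (the Packet Length field itself is excluded)."""
    return all_zeros_length(interface_type) + 4

assert [all_zeros_length(t) for t in (1, 2, 3, 4)] == [16, 32, 64, 128]
assert [shutdown_packet_length(t) for t in (1, 2, 3, 4)] == [20, 36, 68, 132]
```

These computed values match the per-type Packet Length figures given for this packet (20, 36, 68, and 132 bytes, i.e. 22, 38, 70, and 134 bytes total).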
Therefore, the All Zeros length for each interface type becomes 16 bytes for Type 1, 32 bytes for Type 2, 64 bytes for Type 3, and 128 bytes for Type 4.

The CRC field uses 2 bytes that contain a 16-bit CRC of bytes from the Packet Length to the Packet Type.

In the low-power hibernation state, the MDDI_Data0 driver is disabled into a high-impedance state starting after the 16th to 48th MDDI_Stb cycle or pulse after the last bit of the All Zeros field. For Type 2, Type 3, or Type 4 links, the MDDI_Data1 through MDDI_Data7 signals are also placed in a high-impedance state at the same time that the MDDI_Data0 driver is disabled. Either the host or client may cause the MDDI link to "wake up" from the hibernation state as described elsewhere, which is a key advance for and advantage of the present invention.

As described in the definition of the All Zeros field, MDDI_Stb toggles for 64 cycles following the MSB of the CRC field of the Link Shutdown Packet to facilitate an orderly shutdown in the client controller. One cycle is a low-to-high transition followed by a high-to-low transition, or a high-to-low transition followed by a low-to-high transition. After the All Zeros field is sent, the MDDI_Stb driver in the host is disabled.

13. Client Request and Status Packets

The host needs a small amount of information from the client so it can configure the host-to-client link in a generally optimum manner. It is recommended that the client send one Client Request and Status Packet to the host each sub-frame. The client should send this packet as the first packet in the Reverse Link Encapsulation Packet to ensure that it is delivered reliably to the host. The forwarding of this packet is also accomplished when requested by a host using the Reverse Link Flags in the Reverse Link Encapsulation Packet. The Client Request and Status Packet is used to report errors and status to the host.
For external mode operation, every host should be able to receive this packet, and every client should be able to send this packet in order to properly or optimally employ the MDDI protocol. While it is also recommended that internal operations, that is, internal hosts and internal clients, support this packet, it is not required.

The format of a Client Request and Status Packet is shown in FIG. 22. As shown in FIG. 22, this type of packet is structured to have Packet Length, Packet Type, cClient ID, Reverse Link Request, Capability Change, Client Busy, CRC Error Count, and CRC fields. This type of packet is generally identified as a Type 70 packet in the 1-byte type field, and typically uses a pre-selected fixed length of 12 bytes.

The Reverse Link Request field may be used to inform the host of the number of bytes the client needs in the Reverse Link Encapsulation Packet to send data back to the host. The host should attempt to grant the request by allocating at least that number of bytes in the Reverse Link Encapsulation Packet. The host may send more than one Reverse Link Encapsulation Packet in a sub-frame in order to accommodate data. The client may send a Client Request and Status Packet at any time, and the host will interpret the Reverse Link Request parameter as the total number of bytes requested in one sub-frame. Additional details and specific examples of how reverse link data is sent back to the host are shown below.

14. Bit Block Transfer Packets

The Bit Block Transfer Packet provides a means, structure, or method to scroll regions of the display in any direction. Displays that have this capability will report the capability in bit 0 of the Display Feature Capability Indicators field of the Client Capability Packet. The format for one embodiment of a Bit Block Transfer Packet is shown in FIG. 23. As shown in FIG.
23, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Window X Movement, Window Y Movement, and CRC fields. This type of packet is generally identified as a Type 71 packet, and in one embodiment uses a pre-selected fixed length of 15 bytes.

The fields are used to specify the X and Y coordinates of the upper left corner of the window to be moved, the width and height of the window to be moved, and the number of pixels that the window is to be moved horizontally and vertically, respectively. Positive values for the latter two fields cause the window to be moved right and down, and negative values cause movement to the left and up, respectively.

15. Bitmap Area Fill Packets

The Bitmap Area Fill Packet provides a means, structure, or method to easily initialize a region of the display to a single color. Displays that have this capability will report the capability in bit 1 of the Client Feature Capability Indicators field of the Client Capability Packet. One embodiment for the format of a Bitmap Area Fill Packet is shown in FIG. 24. As shown in FIG. 24, in this case this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Data Format Descriptor, Pixel Area Fill Value, and CRC fields. This type of packet is generally identified as a Type 72 packet in the 1-byte type field, and uses a pre-selected fixed length of 17 bytes.

16. Bitmap Pattern Fill Packets

The Bitmap Pattern Fill Packet provides a means or structure to easily initialize a region of the display to a pre-selected pattern. Displays that have this capability will report the capability in bit 2 of the Client Feature Capability field of the Client Capability Packet.
The upper left corner of the fill pattern is aligned with the upper left corner of the window to be filled, unless the horizontal or vertical pattern offset is non-zero. If the window to be filled is wider or taller than the fill pattern, then the pattern may be repeated horizontally or vertically a number of times to fill the window. The right or bottom of the last repeated pattern is truncated as necessary. If the window is smaller than the fill pattern, then the right side or bottom of the fill pattern may be truncated to fit the window.

If the horizontal pattern offset is non-zero, then the pixels between the left side of the window and the left side plus the horizontal pattern offset are filled with the right-most pixels of the pattern. The horizontal pattern offset is to be less than the pattern width. Similarly, if the vertical pattern offset is non-zero, then the pixels between the top side of the window and the top side plus the vertical pattern offset are filled with the lower-most pixels of the pattern. The vertical pattern offset is to be less than the pattern height.

One embodiment for the format of a Bitmap Pattern Fill Packet is shown in FIG. 25. As shown in FIG. 25, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Pattern Width, Pattern Height, Horizontal Pattern Offset, Vertical Pattern Offset, Data Format Descriptor, Parameter CRC, Pattern Pixel Data, and Pixel Data CRC fields. In some embodiments, this type of packet is generally identified as a Type 73 packet in the 1-byte type field.

17. Communication Link Data Channel Packets

The Communication Link Data Channel Packet provides a structure, means, or method for a client with high-level computing capability, such as a PDA, to communicate with a wireless transceiver such as a cell phone or wireless data port device.
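The Bitmap Pattern Fill tiling rules above (repeat the pattern across the window, with the region before each offset filled from the far edge of the pattern) reduce to a modular mapping from each window-relative pixel to a pattern pixel. A minimal sketch, with illustrative function and parameter names:

```python
def pattern_source_pixel(x, y, pattern_width, pattern_height,
                         h_offset, v_offset):
    """Map a window-relative pixel (x, y) to the fill-pattern pixel
    that colors it. Offsets must be less than the pattern dimensions,
    per the Bitmap Pattern Fill rules described above."""
    assert 0 <= h_offset < pattern_width
    assert 0 <= v_offset < pattern_height
    # For x < h_offset this wraps to the right-most columns of the
    # pattern; beyond that the pattern repeats and is truncated at
    # the window edge, which the modulo handles uniformly.
    src_x = (x - h_offset) % pattern_width
    src_y = (y - v_offset) % pattern_height
    return src_x, src_y
```

For example, with a pattern 8 pixels wide and a horizontal offset of 3, window columns 0 through 2 draw from pattern columns 5 through 7, and column 3 restarts at pattern column 0.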
In this situation, the MDDI link is acting as a convenient high-speed interface between the communication device and the computing device with the mobile display, where this packet transports data at a Data Link Layer of an operating system for the device. For example, this packet could be used if a web browser, email client, or an entire PDA were built into a mobile display. Displays that have this capability will report the capability in bit 3 of the Client Feature Capability field of the Client Capability Packet.

The format of an embodiment for a Communication Link Data Channel Packet is shown in FIG. 26. As shown in FIG. 26, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Parameter CRC, Communication Link Data, and Communication Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 74 packet in the type field.

18. Interface Type Handoff Request Packets

The Interface Type Handoff Request Packet provides a means, method or structure that enables the host to request that the client or display shift from an existing or current mode to the Type 1 (serial), Type 2 (2-bit parallel), Type 3 (4-bit parallel), or Type 4 (8-bit parallel) modes. Before the host requests a particular mode, it should confirm that the client is capable of operating in the desired mode by examining bits 6 and 7 of the Display Feature Capability Indicators field of the Client Capability Packet. One embodiment for the format of an Interface Type Handoff Request Packet is shown in FIG. 27. As shown in FIG. 27, this type of packet is structured to have Packet Length, Packet Type, Interface Type, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 75 packet, and uses a pre-selected fixed length of 4 bytes.

19.
Interface Type Acknowledge Packets

The Interface Type Acknowledge Packet is sent by a client and provides a means, method or structure that enables a client to confirm receipt of the Interface Type Handoff Packet. The requested mode, the Type 1 (serial), Type 2 (2-bit parallel), Type 3 (4-bit parallel), or Type 4 (8-bit parallel) mode, is echoed back to the host as a parameter in this packet. The format of one embodiment for an Interface Type Acknowledge Packet is shown in FIG. 28. As shown in FIG. 28, this type of packet is structured to have Packet Length, Packet Type, cClient ID, Interface Type, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 76 packet, and uses a pre-selected fixed length of 4 bytes.

20. Perform Type Handoff Packets

The Perform Type Handoff Packet is a means, structure, or method for the host to command the client to handoff to the mode specified in this packet. This is to be the same mode that was previously requested and acknowledged by the Interface Type Handoff Request Packet and Interface Type Acknowledge Packet. The host and client should switch to the agreed upon mode after this packet is sent. The client may lose and re-gain link synchronization during the mode change. The format of one embodiment for a Perform Type Handoff Packet is shown in FIG. 29. As shown in FIG. 29, this type of packet is structured to have Packet Length, Packet Type, Interface Type, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 77 packet in the 1-byte type field, and uses a pre-selected fixed length of 4 bytes.

21. Forward Audio Channel Enable Packets

This packet provides a structure, method, or means that allows a host to enable or disable audio channels in a client. This capability is useful so that a client (a display for example) can power off audio amplifiers or similar circuit elements to save power when there is no audio to be output by the host.
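The three-packet mode handoff described above (Interface Type Handoff Request, Interface Type Acknowledge, Perform Type Handoff) can be sketched as a simple exchange. The function name and return structure are illustrative only; the capability check stands in for examining bits 6 and 7 of the Display Feature Capability Indicators field.

```python
def negotiate_handoff(requested_type: int, client_supported: set) -> list:
    """Sketch of the handoff sequence: the host requests a mode
    (Type 75 packet), the client echoes it back (Type 76), and the
    host commands the switch (Type 77). Returns the exchange as
    (sender, packet_type, interface_type) tuples; an unsupported
    mode should not be requested, so the exchange is empty."""
    if requested_type not in client_supported:
        return []
    exchange = [
        ("host",   75, requested_type),   # Interface Type Handoff Request
        ("client", 76, requested_type),   # Interface Type Acknowledge
        ("host",   77, requested_type),   # Perform Type Handoff
    ]
    # Both sides switch to the agreed mode after the Type 77 packet;
    # the client may lose and re-gain link synchronization here.
    return exchange
```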
Implementing this implicitly, simply by using the presence or absence of audio streams as an indicator, is significantly more difficult. The default state when the client system is powered-up is that all audio channels are enabled. The format of one embodiment of a Forward Audio Channel Enable Packet is shown in FIG. 30. As shown in FIG. 30, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Audio Channel Enable Mask, and CRC fields. This type of packet is generally identified as a Type 78 packet in the 1-byte type field, and uses a pre-selected fixed length of 4 bytes.

22. Reverse Audio Sample Rate Packets

This packet allows the host to enable or disable the reverse-link audio channel, and to set the audio data sample rate of this stream. The host selects a sample rate that is defined to be valid in the Client Capability Packet. If the host selects an invalid sample rate, then the client will not send an audio stream to the host, and an appropriate error, error value, or error signal may be sent to the host in the Client Error Report Packet. The host may disable the reverse-link audio stream by setting the sample rate to a value of 255. The default state assumed when the client system is initially powered-up or connected is with the reverse-link audio stream disabled. The format of one embodiment for a Reverse Audio Sample Rate Packet is shown in FIG. 31. As shown in FIG. 31, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Audio Sample Rate, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 79 packet, and uses a pre-selected fixed length of 4 bytes.

23. Digital Content Protection Overhead Packets

This packet provides a structure, method, or means that allows a host and a client to exchange messages related to the digital content protection method being used.
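The client-side rule for the Reverse Audio Sample Rate Packet above has three cases: 255 disables the reverse-link stream, a rate listed as valid in the Client Capability Packet enables it, and anything else is an error reported via the Client Error Report Packet. A sketch, with an illustrative function name and with the valid-rate set standing in for the capability advertisement:

```python
def handle_reverse_audio_sample_rate(requested_rate: int,
                                     valid_rates: set) -> tuple:
    """Apply the Reverse Audio Sample Rate rules described above.
    Returns a (state, rate) pair; names are illustrative only."""
    if requested_rate == 255:
        return ("disabled", None)         # host turned the stream off
    if requested_rate in valid_rates:
        return ("enabled", requested_rate)
    # Invalid rate: no audio stream is sent, and an error may be
    # reported to the host in a Client Error Report Packet.
    return ("error", requested_rate)
```

The default state at power-up or connection corresponds to the "disabled" case, matching the text above.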
Presently two types of content protection are contemplated, Digital Transmission Content Protection (DTCP) or High-bandwidth Digital Content Protection System (HDCP), with room reserved for future alternative protection scheme designations. The method being used is specified by a Content Protection Type parameter in this packet. The format of an embodiment of a Digital Content Protection Overhead Packet is shown in FIG. 32. As shown in FIG. 32, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Content Protection Type, Content Protection Overhead Messages, and CRC fields. This type of packet is generally identified as a Type 80 packet.

24. Transparent Color Enable Packets

The Transparent Color Enable Packet is a structure, method, or means that is used to specify which color is transparent in a display and to enable or disable the use of a transparent color for displaying images. Displays that have this capability will report that capability in bit 4 of the Client Feature Capability field of the Client Capability Packet. When a pixel with the value for transparent color is written to the bitmap, the color does not change from the previous value. The format of a Transparent Color Enable Packet is shown in FIG. 33. As shown in FIG. 33, in one embodiment this type of packet is structured to have Packet Length, Packet Type, hClient ID, Transparent Color Enable, Reserved 1, Alpha-Cursor Identifier, Data Format Descriptor, Transparent Pixel Value, and CRC fields. This type of packet is generally identified as a Type 81 packet in the 1-byte type field, and uses a pre-selected fixed length of 10 bytes.

25. Round Trip Delay Measurement Packets

The Round Trip Delay Measurement Packet provides a structure, method, or means that is used to measure the propagation delay from the host to a client (display) plus the delay from the client (display) back to the host.
This measurement inherently includes the delays that exist in the line drivers and receivers, and an interconnect subsystem. This measurement is used to set the turn around delay and reverse link rate divisor parameters in the Reverse Link Encapsulation Packet, described generally above. This packet is most useful when the MDDI link is running at the maximum speed intended for a particular application. The packet may be sent in Type 1 mode and at a lower data rate in order to increase the range of the round trip delay measurement. The MDDI_Stb signal behaves as though all zero data is being sent during the following fields: both Guard Times, All Zero, and the Measurement Period. This causes MDDI_Stb to toggle at half the data rate so it can be used as a periodic clock in the client during the Measurement Period.

In one embodiment, a client generally indicates an ability to support the Round Trip Delay Measurement Packet through use of bit 18 of the Client Feature Capability Indicators field of the Client Capability Packet. It is recommended that all clients support round trip delay measurement, but it is possible for the host to know the worst-case round trip delay based on a maximum cable delay, and on maximum driver and receiver delays. The host may also know the round-trip delay in advance for an MDDI link used in internal mode, since this is an aspect of known design elements (conductor lengths, circuitry type, features, and so forth) of the device in which the interface is being used.

The format of a Round Trip Delay Measurement Packet is shown in FIG. 34. As shown in FIG. 34, in one embodiment this type of packet is structured to have Packet Length, Packet Type, hClient ID, Parameter CRC, Guard Time 1, Measurement Period, All Zero, and Guard Time 2 fields.
This type of packet is generally identified as a Type 82 packet, and uses a pre-selected fixed length of 159 bits.

The timing of events that take place during the Round Trip Delay Measurement Packet is illustrated in FIG. 35. In FIG. 35, the host transmits the Round Trip Delay Measurement Packet, shown by the presence of the Parameter CRC and Strobe Alignment fields followed by the All Zero 1 and Guard Time 1 fields. A delay 3502 occurs before the packet reaches the client display device or processing circuitry. As the client receives the packet, it transmits the 0xff, 0xff, and 30 bytes of 0x00 pattern as precisely as practical at the beginning of the Measurement Period as determined by the client. The actual time the client begins to transmit this sequence is delayed from the beginning of the Measurement Period from the point of view of the host. The amount of this delay is substantially the time it takes for the packet to propagate through the line drivers and receivers and the interconnect subsystem (cables, conductors). A similar amount of delay 3504 is incurred for the pattern to propagate from the client back to the host.

In order to accurately determine the round trip delay time for signals traversing to and from the client, the host counts the number of forward link bit time periods occurring after the start of the Measurement Period until the beginning of the 0xff, 0xff, and 30 bytes of 0x00 sequence is detected upon arrival. This information is used to determine the amount of time for a round trip signal to pass from the host to the client and back again. Then, about one half of this amount is attributed to a delay created for the one-way passage of a signal to the client.

The host and client both drive the line to a logic-zero level during both guard times to keep the MDDI_Data lines in a defined state.
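The host-side computation described above amounts to locating the start of the client's 0xff, 0xff marker in the data sampled during the Measurement Period, counting forward-link bit periods up to that point, and attributing roughly half of the total to each direction. A minimal sketch over byte-aligned samples (a simplification; the real measurement counts individual bit times, and the function name is illustrative):

```python
def round_trip_delay(sampled: bytes) -> tuple:
    """Sketch of the round-trip measurement: count forward-link bit
    periods from the start of the Measurement Period until the
    client's 0xff, 0xff marker is detected, then attribute about
    half of the round trip to the one-way path to the client."""
    marker = b"\xff\xff"
    idx = sampled.find(marker)
    if idx < 0:
        raise ValueError("marker not seen within the Measurement Period")
    round_trip_bits = idx * 8     # bytes before the marker, in bit times
    one_way_bits = round_trip_bits // 2
    return round_trip_bits, one_way_bits
```

The resulting figures would then feed the turn around delay and reverse link rate divisor parameters of the Reverse Link Encapsulation Packet mentioned above.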
The enable and disable times of the host and client during both guard times are such that the MDDI_Data signals are at a valid low level for any valid round-trip delay time.

26. Forward Link Skew Calibration Packet

The Forward Link Skew Calibration Packet allows a client or display to calibrate itself for differences in the propagation delay of the MDDI_Data signals with respect to the MDDI_Stb signal. Without delay skew compensation, the maximum data rate is generally limited to account for potential worst-case variation in these delays. Generally, this packet is only sent when the forward link data rate is configured to a rate of around 50 Mbps or lower. After sending this packet to calibrate the display, the data rate may be stepped up above 50 Mbps. If the data rate is set too high during the skew calibration process, the display might synchronize to an alias of the bit period, which could cause the delay skew compensation setting to be off by more than one bit time, resulting in erroneous data clocking. The highest data rate type of interface or greatest possible Interface Type is selected prior to sending the Forward Link Skew Calibration Packet so that all existing data bits are calibrated.

One embodiment of the format of a Forward Link Skew Calibration Packet is shown in FIG. 56. As shown in FIG. 56, this type of packet is structured to have Packet Length (2 bytes), Packet Type, hClient ID, Parameter CRC, All Zero, Calibration Data Sequence, and CRC fields. This type of packet is generally identified as a Type 83 packet in the type field, and in one embodiment has a pre-selected length of 515.

Virtual Control Panel

The use of a Virtual Control Panel (VCP) allows a host to set certain user controls in a client.
By allowing these parameters to be adjusted by the host, the user interface in the client can be simplified, because screens that allow a user to adjust parameters such as audio volume or display brightness can be generated by host software rather than by one or more microprocessors in the client. The host has the ability to read the parameter settings in the client and to determine the range of valid values for each control. The client generally has the capability to report back to the host which control parameters can be adjusted.

The control codes (VCP Codes) and associated data values generally specified are utilized to specify controls and settings in the client. The VCP Codes in the MDDI specification are expanded to 16 bits to preserve proper data field alignment in the packet definitions, and in the future to support supplementary values that are unique to this interface or future enhancements.

27. Request VCP Feature Packet

The Request VCP Feature Packet provides a means, mechanism, or method for the host to request the current setting of a specific control parameter or all valid control parameters. Generally, a client responds to a Request VCP Feature Packet with the appropriate information in a VCP Feature Reply Packet. In one embodiment, the client indicates an ability to support the Request VCP Feature Packet using bit 13 of the Client Feature Capability Indicators field of the Client Capability Packet.

The format of the Request VCP Feature Packet in one embodiment is shown in FIG. 69. As seen in FIG. 69, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, and CRC fields. This type of packet is generally identified in one embodiment as a Type 128, which is indicated in the 2-byte type field.
The packet length, which specifies the total number of bytes in the packet not including the packet length field, is typically fixed for this type of packet at a length of 8 bytes.

The hClient ID field is reserved for use as a Client ID in future implementations and is typically set to zero. The MCCS VCP Code field comprises 2 bytes of information that specifies the MCCS VCP Control Code Parameter. A value in the range of 0 to 255 causes a VCP Feature Reply Packet to be returned with a single item in the VCP Feature Reply List corresponding to the specified MCCS code. An MCCS VCP Code of 65535 (0xffff) requests a VCP Feature Reply Packet with a VCP Feature Reply List containing a Feature Reply List Item for each control supported by the client. The values 256 through 65534 for this field are reserved for future use and are presently not in use.

28. VCP Feature Reply Packet

The VCP Feature Reply Packet provides a means, mechanism, or method for a client to respond to a host request with the current setting of a specific control parameter or all valid control parameters. Generally, a client sends the VCP Feature Reply Packet in response to a Request VCP Feature Packet. This packet is useful to determine the current setting of a specific parameter, to determine the valid range for a specific control, to determine whether a specific control is supported by the client, or to determine the set of controls that are supported by the client. If a Request VCP Feature Packet is sent that references a specific control that is not implemented in the client, then a VCP Feature Reply Packet is returned with a single VCP Feature Reply List item corresponding to the unimplemented control that contains the appropriate error code. In one embodiment, the client indicates an ability to support the VCP Feature Reply Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet.

The format of the VCP Feature Reply Packet in one embodiment is shown in FIG. 70.
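The MCCS VCP Code ranges for the Request VCP Feature Packet described above (0 to 255 for a single control, 0xffff for all supported controls, 256 through 65534 reserved) reduce to a small dispatch on the client side. A sketch, with an illustrative function name:

```python
def classify_vcp_request(mccs_vcp_code: int) -> str:
    """Classify the MCCS VCP Code of a Request VCP Feature Packet
    per the ranges described above."""
    if 0 <= mccs_vcp_code <= 255:
        return "single"    # reply with one list item for this code
    if mccs_vcp_code == 0xFFFF:
        return "all"       # reply with an item per supported control
    if 256 <= mccs_vcp_code <= 65534:
        return "reserved"  # reserved for future use, presently unused
    raise ValueError("MCCS VCP Code must fit in 16 bits")
```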
As seen in FIG. 70, this type of packet is structured to have Packet Length, Packet Type, cClient ID, MCCS Version, Reply Sequence Number, VCP Feature Reply List, and CRC fields. This type of packet is generally identified in one embodiment as a Type 129, as indicated in the 2-byte type field.

The cClient ID field contains information reserved for a Client ID. This field is reserved for future use and is generally set to zero. The MCCS Version field contains 2 bytes of information that specifies the version of the VESA MCCS Specification implemented by the client.

The 2-byte Reply Sequence Number field contains information or data that specifies the sequence number of the VCP Feature Reply Packets returned by the client. The client returns one or more VCP Feature Reply Packets in response to a Request VCP Feature Packet with an MCCS Control Code value of 65535. The client may spread the feature reply list over multiple VCP Feature Reply Packets. In this case, the client assigns a sequence number to each successive packet, and the sequence numbers of the VCP Feature Reply Packets sent in response to a single Request VCP Feature Packet start at zero and increment by one. The last VCP Feature List Item in the last VCP Feature Reply Packet should contain an MCCS VCP Control Code value equal to 0xffff to identify that the packet is the last one and contains the highest sequence number of the group of packets returned. If only one VCP Feature Reply Packet is sent in response to a Request VCP Feature Packet, then the Reply Sequence Number in that single packet is zero and the VCP Feature Reply List contains a record having an MCCS VCP Control Code equal to 0xffff.

The Number of Features in List field contains 2 bytes that specify the number of VCP Feature List Items that are in the VCP Feature Reply List in this packet, while the VCP Feature Reply List field is a group of bytes that contain one or more VCP Feature Reply List Items.
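A host parsing the reply list can walk it in fixed-size steps, using the 12-byte list-item layout detailed below (2-byte MCCS VCP Code, 2-byte Result Code, 4-byte Maximum Value, 4-byte Present Value). Little-endian byte order is an assumption here, based on the least-significant-byte-first transmission convention noted elsewhere in this description; the function name is illustrative.

```python
import struct

def parse_vcp_reply_item(item: bytes) -> dict:
    """Parse one 12-byte VCP Feature Reply List Item, assuming
    little-endian fields per the LSB-first convention."""
    if len(item) != 12:
        raise ValueError("a VCP Feature Reply List Item is 12 bytes")
    code, result, max_val, present = struct.unpack("<HHII", item)
    return {"mccs_vcp_code": code,
            "result_code": result,      # 0 = no error, 1 = not implemented
            "maximum_value": max_val,   # zero if control not implemented
            "present_value": present}   # zero for table (T) controls
```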
The format of a single VCP Feature Reply List Item in one embodiment is shown in FIG. 71.

As shown in FIG. 71, each VCP Feature Reply List Item is 12 bytes in length, and comprises the MCCS VCP Code, Result Code, Maximum Value, and Present Value fields. The 2-byte MCCS VCP Code field contains data or information that specifies the MCCS VCP Control Code Parameter associated with this list item. Only the Control Code values defined in the VESA MCCS Specification version 2 and later are considered valid for this embodiment. The 2-byte Result Code field contains information that specifies an error code related to the request for information regarding the specified MCCS VCP Control. A value of '0' in this field means there is no error, while a value of '1' means the specified control is not implemented in the client. Further values for this field of 2 through 65535 are currently reserved for future use and implementation of other applications contemplated by the art, but are not to be used for now.

The 4-byte Maximum Value field contains a 32-bit unsigned integer that specifies the largest possible value to which the specified MCCS Control can be set. If the requested control is not implemented in the client, this value is set to zero. If the value returned is less than 32 bits (4 bytes) in length, then the value is cast into a 32-bit integer leaving the most significant (unused) bytes set to zero. The 4-byte Present Value field contains information that specifies the present value of the specified MCCS VCP Continuous (C) or non-continuous (NC) control. If the requested control is not implemented in the client, or if the control is implemented but is a table (T) data type, then this value is set to zero. If the value returned is less than 32 bits (4 bytes) in length per the VESA MCCS Specification, then the value is cast into a 32-bit integer leaving the most significant (unused) bytes set to zero.

29.
Set VCP Feature Packet

The Set VCP Feature Packet provides a means, mechanism, or method for a host to set VCP control values for both continuous and non-continuous controls in a client. In one embodiment, the client indicates the ability to support the Set VCP Feature Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet.

The format of the Set VCP Feature Packet in one embodiment is shown in FIG. 72. As seen in FIG. 72, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, Number of Values in List, Control Value List, and CRC fields. This type of packet is generally identified as a Type 130, as indicated in the 2-byte type field, and is 20 bytes long exclusive of the Packet Length field.

The hClient ID field again uses a 2-byte value to specify or act as a Client ID. This field is reserved for future use and is currently set to zero. The MCCS VCP Code field uses 2 bytes of information or values to specify the MCCS VCP Control Code Parameter to be adjusted. The 2-byte Number of Values in List field contains information or values that specify the number of 16-bit values that exist in the Control Value List. The Control Value List will usually contain one item unless the MCCS Control Code relates to a table in the client. In the case of non-table-related controls, the Control Value List will contain a value that specifies the new value to be written to the control parameter specified by the MCCS VCP Code field. For table-related controls, the format of the data in the Control Value List is specified by the parameter description of the specified MCCS VCP Code. If the list contains values that are larger than one byte, then the least-significant byte is transmitted first, consistent with the method defined elsewhere. Finally, the 2-byte CRC field contains a 16-bit CRC of all bytes in the packet including the Packet Length.

30.
Request Valid Parameter Packet

The Request Valid Parameter Packet is used as a means or structure useful to request that a client return a Valid Parameter Reply Packet containing a list of parameters supported by the specified non-continuous (NC) or table (T) control. This packet should only specify non-continuous controls or controls that relate to a table in the client, and should not specify an MCCS VCP Code value of 65535 (0xffff) to specify all controls. If a non-supported or invalid MCCS VCP Code is specified, then an appropriate error value is returned in the Valid Parameter Reply Packet. In one embodiment, the client indicates an ability to support the Request Valid Parameter Packet using bit 13 of the Client Feature Capability field of the Display Capability Packet.

The format of the Request Valid Parameter Packet in one embodiment is shown in FIG. 73. As seen in FIG. 73, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, and CRC fields. This type of packet is generally identified in one embodiment as a Type 131, as indicated in the 2-byte type field.

The packet length, as indicated in the 2-byte Packet Length field, is generally set to 8, the total number of bytes in the packet not including the packet length field. The hClient ID again specifies the Client ID, but is currently reserved for future use, as would be apparent to one skilled in the art, and is set to zero. The 2-byte MCCS VCP Code field contains a value that specifies the non-continuous MCCS VCP Control Code Parameter to be queried. The value in this field should correspond to a non-continuous control that is implemented in the client. The values 256 through 65535 (0xffff) are typically reserved or considered invalid, and are treated as an unimplemented control in the error response.

31. Valid Parameter Reply Packet

A Valid Parameter Reply Packet is sent in response to a Request Valid Parameter Packet.
It is used as a means, method, or structure to identify the valid settings for a non-continuous MCCS VCP control or a control that returns the contents of a table. If the control relates to a table in the client, then the VCP Parameter Reply List simply contains the specific list of sequential table values that were requested. If the contents of the table cannot fit into a single Valid Parameter Reply Packet, then multiple packets with sequential Reply Sequence Numbers can be sent by the client. In one embodiment, a client indicates an ability to support a Valid Parameter Reply Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet.

A host may request the contents of a table in the following manner: the host sends a Set VCP Feature Packet containing the necessary or desired parameters such as read/write parameter, LUT offset, and RGB selection; then a Request Valid Parameter Packet that specifies the desired control is sent by the host; then the client returns one or more Valid Parameter Reply Packets containing the table data. This sequence of operations performs a function similar to the table reading functions described in the MCCS operation model.

If a specific client parameter is not supported by the client, then in one embodiment the corresponding field of this packet will contain a value of 255. For parameters that are used in the client, the corresponding field should contain a value of the parameter in the client.

The format of the Valid Parameter Reply Packet for one embodiment is shown in FIG. 74. As seen in FIG. 74, this type of packet is structured to have Packet Length, Packet Type, cClient ID, MCCS VCP Code, Response Code, Reply Sequence Number, Number of Values in List, VCP Parameter Reply List, and CRC fields.
This type of packet is generally identified for one embodiment as a Type 132, as indicated in the 2-byte type field.

The cClient ID field is reserved for the future Client ID, as is known from the above discussions, while the 2-byte MCCS VCP Code field contains a value that specifies a non-continuous MCCS VCP Control Code Parameter that is described by this packet. If an invalid MCCS VCP Control Code is specified by a Request Valid Parameter Packet, then the same invalid parameter value will be specified in this field with the appropriate value in the Response Code field. If the MCCS Control Code is invalid, then the VCP Parameter Reply List will have zero length.

The Response Code field contains 2 bytes of information or values that specify the nature of the response related to the request for information regarding the specified MCCS VCP Control. If the value in this field is equal to 0, then no error is considered as being present for this data type, and the last Valid Parameter Reply Packet in the sequence is sent, it having the highest Reply Sequence Number. If the value in this field is equal to 1, then no error is considered as being present, but other Valid Parameter Reply Packets will be sent that have higher sequence numbers. If the value in this field is equal to 2, then the specified control is not considered as being implemented in the client. If the value in this field is equal to 3, then the specified control is not a non-continuous control (it is a continuous control that always has a valid set of all values from zero to its maximum value). Values for this field equal to 4 through 65535 are reserved for future use and generally not to be used.

The 2-byte Reply Sequence Number field specifies the sequence number of the Valid Parameter Reply Packets returned by the client. The client returns one or more Valid Parameter Reply Packets in response to a Request Valid Parameter Packet.
The client may spread the VCP Parameter Reply List over multiple Valid Parameter Reply Packets. In this latter case, the client will assign a sequence number to each successive packet, and set the Response Code to 1 in all but the last packet in the sequence. The last Valid Parameter Reply Packet in the sequence will have the highest Reply Sequence Number, and its Response Code contains a value of 0.

The 2-byte Number of Values in List field specifies the number of 16-bit values that exist in the VCP Parameter Reply List. If the Response Code is not equal to zero, then the Number of Values in List parameter is zero. The VCP Parameter Reply List field contains a list of 0 to 32760 2-byte values that indicate the set of valid values for the non-continuous control specified by the MCCS Control Code field. The definitions of the non-continuous control codes are specified in the VESA MCCS Specification. Finally, in this embodiment, the CRC field contains a 16-bit CRC of all bytes in the packet including the Packet Length.

Scaled Video Stream Images

The MDDI or protocol mechanism, structure, means, or method provides support for scaled video stream images that allow the host to send an image to the client that is scaled larger or smaller than the original image, with the scaled image copied to a main image buffer. An overview of the Scaled Video Stream functionality and associated protocol support is provided elsewhere. An ability to support scaled video streams is defined by or within the Scaled Video Stream Capability Packet, which is sent in response to a Request Specific Status Packet.

32. Scaled Video Stream Capability Packet

The Scaled Video Stream Capability Packet defines the characteristics of the scaled video stream source image in or used by a client. The format of the Scaled Video Stream Capability Packet is shown generally in FIG. 75. As seen in FIG.
75 , in one embodiment, a Scaled Video Stream Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Max Number of Streams, Source Max X Size, Source Max Y Size, RGB Capability, Monochrome Capability, Reserved 1, Y Cb Cr Capability, Reserved 2, and CRC fields. The packet length, in one embodiment, is selected to be a fixed 20 bytes, as shown in the length field, including the 2-byte cClient ID field, which is reserved for use for a Client ID and is otherwise set to zero, and the CRC field. In one embodiment, the client indicates an ability to support the Scaled Video Stream Capability Packet using a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The 2-byte Maximum Number of Streams field contains a value to identify the maximum number of simultaneous scaled video streams that may be allocated at one time. In one embodiment, a client should deny a request to allocate a scaled video stream if the maximum number of scaled video streams is already allocated. If less than the maximum number of scaled video streams are allocated, the client may also deny an allocation request based on other resource limitations in the client. The Source Maximum X Size and Y Size fields (2 bytes each) specify values for the maximum width and height, respectively, of the scaled video stream source image expressed as a number of pixels. The RGB Capability field uses values to specify the number of bits of resolution that can be displayed in RGB format. If the scaled video stream cannot use the RGB format then this value is set equal to zero.
The RGB Capability word is composed of three separate unsigned values with: Bits 3 through 0 defining a maximum number of bits of blue (the blue intensity) in each pixel, Bits 7 through 4 defining the maximum number of bits of green (the green intensity) in each pixel, and Bits 11 through 8 defining the maximum number of bits of red (the red intensity) in each pixel, while Bits 15 through 12 are reserved for future use in future capability definitions, and are generally set to zero. The 1-byte Monochrome Capability field contains a value that specifies the number of bits of resolution that can be displayed in monochrome format. If the scaled video stream cannot use the monochrome format then this value is set to zero. Bits 7 through 4 are reserved for future use and should, therefore, be set to zero ('0') for current applications, although this may change over time, as will be appreciated by those skilled in the art. Bits 3 through 0 define the maximum number of bits of grayscale that can exist in each pixel. These four bits make it possible to specify that each pixel consists of 1 to 15 bits. If the value is zero, then the monochrome format is not supported by the scaled video stream. The Reserved 1 field (here 1 byte) is reserved for future use in providing values related to the Scaled Video Stream Packet information or data. Therefore, currently, all bits in this field are set to a logic '0'. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and cause 4-byte fields to align to a 32-bit word address. The 2-byte Y Cb Cr Capability field contains values that specify the number of bits of resolution that can be displayed in Y Cb Cr format. If the scaled video stream cannot use the Y Cb Cr format then this value is zero.
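As a non-limiting sketch, the RGB Capability word bit layout described above (blue in bits 3 through 0, green in bits 7 through 4, red in bits 11 through 8) can be unpacked as follows; the function name is illustrative:

```python
def unpack_rgb_capability(word):
    """Split the 16-bit RGB Capability word into per-color bit depths."""
    blue = word & 0x000F           # bits 3..0: blue intensity bits per pixel
    green = (word >> 4) & 0x000F   # bits 7..4: green intensity bits per pixel
    red = (word >> 8) & 0x000F     # bits 11..8: red intensity bits per pixel
    # bits 15..12 are reserved and expected to be zero
    return red, green, blue
```

For example, a word of 0x0565 describes a 5-6-5 RGB capability, and a word of zero indicates that the RGB format is not usable by the scaled video stream.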
The Y Cb Cr Capability word is composed of three separate unsigned values with: Bits 3 through 0 defining the maximum number of bits that specify the Cr sample; Bits 7 through 4 defining the maximum number of bits that specify the Cb sample; Bits 11 through 8 defining the maximum number of bits that specify the Y sample; and with Bits 15 through 12 being reserved for future use and generally set to zero. The 1-byte Capability Bits field contains a set of flags that specify capabilities associated with the scaled video stream. The flags are defined as follows: Bit 0 indicates whether pixel data in the Scaled Video Stream Packet can be in a packed format. An example of packed and byte-aligned pixel data is shown earlier in FIG. 12 . Bit 1 is reserved for future use and is generally set to zero; Bit 2 is also reserved for future use and is set to zero; Bit 3 indicates whether scaled video streams can be specified in the color map data format. The same color map table is used for the scaled video streams as is used for the main image buffer and the alpha-cursor image planes. The color map is configured using the Color Map Packet described elsewhere; and Bits 7 through 4 are reserved for future use and are generally set to be zero. The Reserved 2 field (here 1 byte) is reserved for future use in providing values related to the Scaled Video Stream Packet information or data. Therefore, currently, all bits in this field are set to a logic '0'. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and cause 4-byte fields to align to a 32-bit word address. 33. Scaled Video Stream Setup Packet The Scaled Video Stream Setup Packet is used to define the parameters of the scaled video stream, and the client uses the information to allocate internal storage for buffering and scaling of the image. A stream may be de-allocated by sending this packet with the X Image Size and Y Image Size fields equal to zero.
Scaled video streams that have been de-allocated may be reallocated later with the same or different stream parameters. In one embodiment a client indicates an ability to support the Scaled Video Stream Setup Packet using a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet, and by using a non-zero value in the Maximum Number of Streams field of the Scaled Video Stream Capability Packet. The format of the Scaled Video Stream Setup Packet is shown generally in FIG. 76 . As seen in FIG. 76 , in one embodiment, a Scaled Video Stream Setup Packet is structured to have Packet Length, Packet Type, hClient ID, Stream ID, Video Data Format Descriptor, Pixel Data Attributes, X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X Image Size, Y Image Size, and CRC fields. The 2-byte Packet Length field specifies the total number of bytes in the packet not including the packet length field. In one embodiment, this packet length is fixed at 24. The 2-byte Packet Type field employs a value of 136 to identify the packet as a Scaled Video Stream Setup Packet. The 2-byte hClient ID field is reserved for future use as a Client ID, and is generally set to an all-bits-at-logic-zero value for the moment, or until a protocol user determines what ID values are to be used, as would be known. The Stream ID field uses 2 bytes to specify a unique identifier for the Stream ID. This value is assigned by the host and ranges in value from zero to the maximum Stream ID value specified in the Client Capability Packet. The host must manage the use of Stream ID values carefully to ensure that each active stream is assigned a unique value, and that streams that are no longer active are de-allocated or reassigned. In one embodiment, the Video Data Format Descriptor field uses 2 bytes to specify the format of each pixel in the Pixel Data in the present stream in the present packet.
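The host-side Stream ID management obligation described above (each active stream assigned a unique value, with inactive IDs available for reassignment) can be sketched, purely as an illustration and not as the specified implementation, as:

```python
class StreamIdAllocator:
    """Toy host-side manager ensuring each active stream has a unique ID
    in the range 0 .. max_stream_id; de-allocated IDs may be reassigned."""
    def __init__(self, max_stream_id):
        self.max_stream_id = max_stream_id
        self.active = set()

    def allocate(self):
        # Assign the lowest Stream ID not currently in use.
        for sid in range(self.max_stream_id + 1):
            if sid not in self.active:
                self.active.add(sid)
                return sid
        raise RuntimeError("maximum number of streams already allocated")

    def deallocate(self, sid):
        self.active.discard(sid)  # freed IDs become available for reuse
```

A de-allocated ID is simply returned to the pool, matching the statement that de-allocated streams may be reallocated later.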
The pixel data format should comply with at least one of the valid formats for the alpha-cursor image plane as defined in the Alpha-Cursor Image Capability Packet. The Video Data Format Descriptor defines the pixel format for the current packet only and does not imply that a constant format will continue to be used for the lifetime of a particular video stream. FIG. 11 illustrates an embodiment of how the Video Data Format Descriptor is coded, as discussed above for other packets. In one embodiment, the 2-byte Pixel Data Attributes field has values that are interpreted as follows, with Bits 1 and 0 selecting the display where the pixel data is to be routed. For bit values of '11' or '00', pixel data is displayed to or for both eyes; for bit values '10', pixel data is routed only to the left eye; and for bit values '01', pixel data is routed only to the right eye. Bit 2 indicates whether or not the Pixel Data is in interlace format. When Bit 2 is 0, then the Pixel Data is in the standard progressive format. The row number (pixel Y coordinate) is incremented by 1 when advancing from one row to the next. When Bit 2 is 1, then the Pixel Data is in interlace format. The row number (pixel Y coordinate) is incremented by 2 when advancing from one row to the next. Bit 3 indicates whether or not the Pixel Data is in alternate pixel format. This is similar to the standard interlace mode enabled by Bit 2, but the interlacing is vertical instead of horizontal. When Bit 3 is 0, the Pixel Data is in the standard progressive format. The column number (pixel X coordinate) is incremented by 1 as each successive pixel is received. When Bit 3 is 1, then the Pixel Data is in alternate pixel format. The column number (pixel X coordinate) is incremented by 2 as each pixel is received. Bit 4 indicates whether the Pixel Data is related to the display or the camera. When Bit 4 is 0, the Pixel Data is to or from the display frame buffer.
When Bit 4 is 1, the Pixel Data is to or from the camera. Bit 5 is reserved for future use and is, therefore, generally set to zero. Bits 7 and 6 are the Display Update Bits that specify the frame buffer where the pixel data is to be written. The effects of the Frame Update Bits are described in more detail elsewhere. When Bits [7:6] are '01', the Pixel data is written to the offline image buffer. When Bits [7:6] are '00', the Pixel data is written to the image buffer used to refresh the display. When Bits [7:6] are '11', the Pixel data is written to all image buffers. If Bits [7:6] are '10', this is treated as an invalid value. These bits are currently reserved for future use. In this situation, Pixel data would be ignored and not written to any of the image buffers. Bits 8 through 15 are reserved for future use and are generally set to logic-zero levels or values. 34. Scaled Video Stream Acknowledgement Packet The Scaled Video Stream Acknowledgement Packet allows a client to acknowledge the receipt of a Scaled Video Stream Setup Packet. The client can indicate an ability to support the Scaled Video Stream Acknowledgement Packet via a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet and via a non-zero value in the Maximum Number of Streams field of the Scaled Video Stream Capability Packet. The format of the Scaled Video Stream Acknowledgement Packet is shown generally in FIG. 77 . As seen in FIG. 77 , in one embodiment, a Scaled Video Stream Acknowledgement Packet is structured to have Packet Length, Packet Type, cClient ID, Stream ID, Ack Code, and CRC fields. The 2-byte Packet Length field is used to specify the total number of bytes, excluding the packet length field, with a value of 10 for this packet type, while a Packet Type of 137 identifies a packet as a Scaled Video Stream Acknowledgement Packet. The 2-byte cClient ID field is reserved for future use for the Client ID, and is generally set to zero.
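As a non-authoritative sketch, the Pixel Data Attributes bit assignments described above for the Scaled Video Stream Setup Packet may be decoded as follows; the function name and return ordering are illustrative:

```python
def decode_pixel_data_attributes(attr):
    """Interpret the 16-bit Pixel Data Attributes field per the bit
    assignments above: bits 1..0 eye routing, bit 2 interlace, bit 3
    alternate pixel, bit 4 display/camera, bits 7..6 display update."""
    eye_bits = attr & 0x3
    route = {0b00: "both", 0b11: "both", 0b10: "left", 0b01: "right"}[eye_bits]
    interlace = bool(attr & (1 << 2))        # rows advance by 2 when set
    alternate_pixel = bool(attr & (1 << 3))  # columns advance by 2 when set
    from_camera = bool(attr & (1 << 4))      # 0: display frame buffer, 1: camera
    update_bits = (attr >> 6) & 0x3          # 00 refresh, 01 offline, 11 all, 10 invalid
    return route, interlace, alternate_pixel, from_camera, update_bits
```

For example, an attribute word with bits [1:0] = '01' and bit 2 set routes interlaced pixel data to the right eye only.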
The 2-byte Stream ID field specifies a unique identifier for the Stream ID. This is the same value assigned by the host in the Scaled Video Stream Setup Packet. The 2-byte Ack Code field provides values containing a code that describes the outcome of an attempt to update the specified scaled video stream. In one embodiment, the codes are defined as follows:
0 - the stream allocation attempt was successful.
1 - the stream de-allocation attempt was successful.
2 - invalid attempt to allocate a stream ID that has already been allocated.
3 - invalid attempt to de-allocate a stream ID that is already de-allocated.
4 - the client does not support scaled video streams.
5 - the stream parameters are inconsistent with the capability of the client.
6 - stream ID value larger than the maximum value allowed by the client.
7 - insufficient resources available in the client to allocate the specified stream.
The 2-byte CRC field contains the CRC of all bytes in the packet including the Packet Length. 35. Scaled Video Stream Packet The Scaled Video Stream Packet is used to transmit the pixel data associated with a specific scaled video stream. The size of the region referenced by this packet is defined by the Scaled Video Stream Setup Packet. The client can indicate an ability to support the Scaled Video Stream Packet via a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet and using a successful scaled video stream allocation response in the Ack Code field of the Scaled Video Stream Acknowledgement Packet. The format of one embodiment of the Scaled Video Stream Packet is shown generally in FIG. 78 . As seen in FIG. 78 , a Scaled Video Stream Packet is structured to have Packet Length, Packet Type, hClient ID, Stream ID, Parameter CRC, Pixel Count, Pixel Data, and Pixel Data CRC fields. The 2-byte Packet Type field uses a value of 18 to identify a packet as a Scaled Video Stream Packet.
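Purely as an illustration, the Ack Code values listed above for the Scaled Video Stream Acknowledgement Packet can be tabulated on the host side; the dictionary and helper names are not part of the specification:

```python
# Ack Code values from the Scaled Video Stream Acknowledgement Packet above.
ACK_CODES = {
    0: "stream allocation attempt was successful",
    1: "stream de-allocation attempt was successful",
    2: "invalid attempt to allocate an already-allocated stream ID",
    3: "invalid attempt to de-allocate an already de-allocated stream ID",
    4: "client does not support scaled video streams",
    5: "stream parameters inconsistent with the capability of the client",
    6: "stream ID value larger than the maximum allowed by the client",
    7: "insufficient resources in the client to allocate the stream",
}

def ack_succeeded(code):
    """Only codes 0 and 1 indicate a successful outcome."""
    return code in (0, 1)
```

A host would typically log the descriptive string and retry or abandon the stream based on whether the code indicates success.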
The hClient ID field is reserved for the Client ID, and generally set to zero. As before, the 2-byte Stream ID field specifies a unique identifier for the Stream ID. This value is specified by the host in the Scaled Video Stream Setup Packet and confirmed in the Scaled Video Stream Acknowledgement Packet. The 2-byte Pixel Count field specifies the number of pixels in the Pixel Data field below. The 2-byte Parameter CRC field has the CRC of all bytes from the Packet Length to the Pixel Count. If this CRC fails to check then the entire packet is discarded. The Pixel Data field contains the raw video information that is to be scaled and then displayed. Data is formatted in the manner described by the Video Data Format Descriptor field. The data is transmitted a row at a time as defined previously. The 2-byte Pixel Data CRC field contains a CRC of only the Pixel Data. If this CRC fails to check then the Pixel Data can still be used, but the CRC error count is incremented. 36. Request Specific Status Packet The Request Specific Status Packet provides a means, mechanism, or method for a host to request that the client send a capability or status packet back to the host as specified in this packet. The client returns the packet of the specified type in the next Reverse Link Encapsulation Packet. The client will generally set bit 17 in the Client Feature Capability field of the Client Capability Packet if the client has the capability to respond to the Request Specific Status Packet. A convenient method for the host to use to determine all of the types of status packets to which a client can respond is to use the Valid Status Reply List Packet described elsewhere. The client can indicate an ability to respond with the Valid Status Reply List Packet using bit 21 of the Client Feature Capability field of the Client Capability Packet. The format of one embodiment of a Request Specific Status Packet is shown generally in FIG. 79 . As seen in FIG.
79 , a Request Specific Status Packet is structured to have Packet Length, Packet Type, hClient ID, Status Packet ID, and CRC fields. The Packet Length field specifies the total number of bytes in the packet not including the packet length field, and is generally fixed at a value of 10 for this packet type. A Packet Type of 138 identifies the packet as a Request Specific Status Packet. The hClient ID field (2 bytes) is reserved for future use for a Client ID, and is set to zero for now, while a 2-byte Status Packet ID field specifies the type of capability or status packet that the client is going to send to the host. Typical packet types are:
66 - Client Capability Packet is sent by the client.
133 - Alpha-Cursor Image Capability Packet is sent by the client.
139 - Valid Status Reply List Packet is sent that identifies the exact types of capability and status packets that the client can send.
140 - Packet Processing Delay Parameters Packet is sent by the client.
141 - Personal Display Capability Packet is sent by the client.
142 - Client Error Report Packet is sent by the client.
143 - Scaled Video Stream Capability Packet is sent by the client.
144 - Client Identification Packet is sent by the client.
Packet Types 56 through 63 can be used for manufacturer-specific capability and status identifiers. The CRC field again contains a CRC of all bytes in the packet including the Packet Length. 37. Valid Status Reply List Packet The Valid Status Reply List Packet provides the host with a structure, means, or method to have a list of status and capability packets to which the client has the capability to respond. A client can indicate an ability to support the Valid Status Reply List Packet using bit 21 of the Client Feature Capability field of the Client Capability Packet. The format of one embodiment of a Valid Status Reply List Packet is shown generally in FIG. 80 . As seen in FIG.
80 , a Valid Status Reply List Packet is structured to have Packet Length, Packet Type, cClient ID, Number of Values in List, Valid Parameter Reply List, and CRC fields. The packet length for this type of packet is generally fixed at a value of 10, and a type value of 139 identifies the packet as a Valid Status Reply List Packet. The cClient ID field is reserved for future use as the Client ID, and is generally set to zero. The 2-byte Number of Values in List field specifies the number of items in the following Valid Parameter Reply List. The Valid Parameter Reply List field contains a list of 2-byte parameters that specify the types of capability or status packets that the client can send to the host. If the client has indicated that it can respond to the Request Specific Status Packet (using bit 21 of the Client Feature Capability field in the Client Capability Packet) then it is capable of sending at least the Client Capability Packet (Packet Type = 66) and the Valid Status Reply List Packet (Packet Type = 139). The Packet Types that can be sent by the client and may be included in this list, along with their respective assignments for purposes of the one embodiment, are:
66 - Client Capability Packet.
133 - Alpha-Cursor Image Capability Packet.
139 - Valid Status Reply List Packet, that identifies the exact types of capability and status packets that the client can send.
140 - Packet Processing Delay Parameters Packet.
141 - Personal Display Capability Packet.
142 - Client Error Report Packet.
143 - Scaled Video Stream Capability Packet.
144 - Client Identification Packet.
145 - Alternate Display Capability Packet.
Packet Types 56 through 63 can be used for manufacturer-specific capability and status identifiers. The CRC field contains a CRC of all bytes in the packet including the Packet Length. 38.
Packet Processing Delay Parameters Packet The Packet Processing Delay Parameters Packet provides a set of parameters to allow the host to compute the time required to complete the processing associated with the reception of a specific packet type. Some commands sent by the host cannot be completed by the client in zero time. The host may poll the status bits in the Client Request and Status Packet to determine if certain functions have been completed by the client, or the host may compute the completion time using the parameters returned by the client in the Packet Processing Delay Parameters Packet. The client can indicate an ability to support the Packet Processing Delay Parameters Packet using a parameter value of 140 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Packet Processing Delay Parameters Packet is shown generally in FIG. 81A . As seen in FIG. 81A , a Packet Processing Delay Parameters Packet is structured to have Packet Length, Packet Type, cClient ID, Number of List Items, Delay Parameters List, and CRC fields. The packet length for this type of packet is generally fixed at a value of 10, and a type value of 140 identifies the packet as a Packet Processing Delay Parameters Packet. The cClient ID field is reserved for future use as the Client ID, and is generally set to zero. The 2-byte Number of List Items field specifies the number of items in the following Delay Parameters List. The Delay Parameters List field is a list containing one or more Delay Parameters List items. The format for one embodiment of a single Delay Parameters List item is shown in FIG. 81B , where Packet Type for Delay, Pixel Delay, Horizontal Pixel Delay, Vertical Pixel Delay, and Fixed Delay fields are shown. Each Delay Parameters List item is generally restricted to be 6 bytes in length, and is further defined as follows.
The 2-byte Packet Type for Delay field specifies the Packet Type for which the following delay parameters apply. The Pixel Delay field (1 byte) comprises an index into a delay value table. The value read from the table is multiplied by the total number of pixels in the destination field of the packet. The total number of pixels is the width times the height of the destination area of the bitmap referenced by the packet. The 1-byte Horizontal Pixel Delay field contains a value that is an index to a delay value table (same table as DPVL). The value read from the table is multiplied by the width (in pixels) of the destination field of the packet. The 1-byte Vertical Pixel Delay field contains a value that is an index to a delay value table (generally uses the same table as DPVL). The value read from the table is multiplied by the height (in pixels) of the destination field of the packet. The Fixed Delay field uses 1 byte as an index to a delay value table (same table as DPVL). The value read from the table is a fixed delay parameter that represents a time required to process the packet that is unrelated to any parameter values specified in the packet. The total delay, or packet processing completion time delay, is determined according to the relationship:

Delay = PacketProcessingDelay(PixelDelay) · TotalPixels + PacketProcessingDelay(HorizontalPixelDelay) · Width + PacketProcessingDelay(VerticalPixelDelay) · Height + PacketProcessingDelay(FixedDelay)

where PacketProcessingDelay(·) denotes the delay value read from the table for the corresponding field. For some packets, the Total Pixels, Width, or Height do not apply because those parameters are not referenced in the corresponding packet. In those cases, the corresponding Pixel Delay parameter is generally set to zero. 39. Personal Display Capability Packet The Personal Display Capability Packet provides a set of parameters that describe the capabilities of a personal display device, such as a head-mounted display or display glasses.
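Returning briefly to the delay relationship above, it can be sketched in Python purely as an illustration; the dictionary keys and function name are assumptions, not specification names:

```python
def packet_processing_delay(entry, total_pixels, width, height, delay_table):
    """Compute the total packet-processing completion delay for one Delay
    Parameters List item. Each 1-byte field in `entry` is an index into the
    client's delay value table, per the relationship given above."""
    return (delay_table[entry["pixel_delay"]] * total_pixels
            + delay_table[entry["horizontal_pixel_delay"]] * width
            + delay_table[entry["vertical_pixel_delay"]] * height
            + delay_table[entry["fixed_delay"]])
```

For packets where total pixels, width, or height do not apply, the corresponding index would point at a zero-valued table entry, so that term contributes nothing.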
This enables the host to customize the display information according to the specific capabilities of a client. A client, on the other hand, indicates an ability to send the Personal Display Capability Packet by using a corresponding parameter in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Personal Display Capability Packet is shown generally in FIG. 82 . As seen in FIG. 82 , a Personal Display Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Sub-Pixel Layout, Pixel Shape, Horizontal Field of View, Vertical Field of View, Visual Axis Crossing, Left/Right Image Overlap, See Through, Maximum Brightness, Optical Capability, Minimum IPD, Maximum IPD, Points of Field Curvature List, and CRC fields. In one embodiment, the Packet Length field value is fixed at 68. A Packet Type value of 141 identifies a packet as a Personal Display Capability Packet. The cClient ID field is reserved for future use and is generally set to zero for now. The Sub-Pixel Layout field specifies the physical layout of a sub-pixel from top to bottom and left to right, using values of: 0 to indicate that a sub-pixel layout is not defined; 1 to indicate red, green, blue stripe; 2 to indicate blue, green, red stripe; 3 to indicate a quad-pixel, having a 2-by-2 sub-pixel arrangement of red at the top left, blue at the bottom right, and two green sub-pixels, one at the bottom left and the other at the top right; 4 to indicate a quad-pixel, with a 2-by-2 sub-pixel arrangement of red at the bottom left, blue at the top right, and two green sub-pixels, one at the top left and the other at the bottom right; 5 to indicate a Delta (Triad); 6 to indicate a mosaic with red, green, and blue overlaid (e.g.
LCOS display with field-sequential color); and with values 7 through 255 being generally reserved for future use. The Pixel Shape field specifies the shape of each pixel that is composed of a specific configuration of sub-pixels, using a value of: 0 to indicate that a sub-pixel shape is not defined; 1 to indicate round; 2 to indicate square; 3 to indicate rectangular; 4 to indicate oval; 5 to indicate elliptical; and with the values 6 through 255 being reserved for future use in indicating desired shapes, as can be appreciated by one skilled in the art. A 1-byte Horizontal Field of View (HFOV) field specifies the horizontal field of view in 0.5-degree increments (e.g. if the HFOV is 30 degrees, this value is 60). If this value is zero then the HFOV is not specified. A 1-byte Vertical Field of View (VFOV) field specifies the vertical field of view in 0.5-degree increments (e.g. if the VFOV is 30 degrees, this value is 60). If this value is zero then the VFOV is not specified. A 1-byte Visual Axis Crossing field specifies the visual axis crossing in 0.01-diopter (1/m) increments (e.g. if the visual axis crossing is 2.22 meters, this value is 45). If this value is zero then the Visual Axis Crossing is not specified. A 1-byte Left/Right Image Overlap field specifies the percentage of overlap of the left and right image. The allowable range of the image overlap in percent is 1 to 100. Values of 101 to 255 are invalid and are generally not to be used. If this value is zero then the image overlap is not specified. A 1-byte See Through field specifies the see-through percentage of the image. The allowable range of see-through in percent is 0 to 100. Values of 101 to 254 are invalid and are not to be used. If this value is 255 then the see-through percentage is not specified. A 1-byte Maximum Brightness field specifies the maximum brightness in increments of 20 nits (e.g. if the maximum brightness is 100 nits, this value is 5).
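The unit encodings described above (0.5-degree increments for the fields of view, 0.01-diopter increments for the visual axis crossing, and 20-nit increments for maximum brightness) can be sketched as simple conversions; the function names are illustrative only:

```python
def encode_hfov(degrees):
    """Horizontal (or vertical) field of view in 0.5-degree increments,
    so 30 degrees encodes as 60."""
    return int(degrees * 2)

def encode_visual_axis_crossing(meters):
    """Visual axis crossing in 0.01-diopter (1/m) increments,
    so 2.22 meters (about 0.45 diopters) encodes as 45."""
    return round((1.0 / meters) / 0.01)

def encode_max_brightness(nits):
    """Maximum brightness in 20-nit increments, so 100 nits encodes as 5."""
    return nits // 20
```

In each case an encoded value of zero means the corresponding quantity is not specified.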
If this value is zero then the maximum brightness is not specified. A 2-byte Optical Capability Flags field contains various fields that specify optical capabilities of the display. These bit values are generally assigned according to the following. Bits 15 through 5 are reserved for future use and are generally set to a logic-zero state. Bit 4 selects Eye Glass Focus Adjustment, with a value of '0' meaning the display has no eye glass focus adjustment, and a value of '1' meaning the display has an eye glass focus adjustment. Bits 3 through 2 select a Binocular Function according to: a value of 0 means the display is binocular and can display 2-dimensional (2D) images only; 1 means the display is binocular and can display 3-dimensional (3D) images; 2 means the display is monocular; and 3 is reserved for future use. Bits 1 through 0 select Left-Right Field Curvature Symmetry, with a value of 0 meaning field curvature is not defined. If this field is zero then all field curvature values from A1 through E5 are set to zero except for point C3, which specifies a focal distance of the display or is to be set to zero to indicate the focal distance is not specified. A value of 1 means the left and right displays have the same symmetry; 2 means the left and right displays are mirrored on the vertical axis (column C); and 3 is reserved for future use. The 1-byte Inter-Pupillary Distance (IPD) Minimum field specifies the minimum inter-pupillary distance in millimeters (mm). If this value is zero then the minimum inter-pupillary distance is not specified. The 1-byte Inter-Pupillary Distance (IPD) Maximum field specifies the maximum inter-pupillary distance in millimeters (mm). If this value is zero then the maximum inter-pupillary distance is not specified. The Points of Field Curvature List field contains a list of 25 2-byte parameters that specify the focal distance in thousandths of a diopter (1/m) with a range of 1 to 65535 (e.g. 1 is 0.001 diopters and 65535 is 65.535 diopters).
The 25 elements in the Points of Field Curvature List are labeled A1 through E5 as shown in FIG. 83 . The points are to be evenly distributed over the active area of the display. Column C corresponds to the vertical axis of the display and row 3 corresponds to the horizontal axis of the display. Columns A and E correspond to the left and right edges of the display, respectively, and rows 1 and 5 correspond to the top and bottom edges of the display, respectively. The order of the 25 points in the list is: A1, B1, C1, D1, E1, A2, B2, C2, D2, E2, A3, B3, C3, D3, E3, A4, B4, C4, D4, E4, A5, B5, C5, D5, E5. The CRC field contains a CRC of all bytes in the packet including the Packet Length. 40. Client Error Report Packet The Client Error Report Packet acts as a mechanism or means for allowing a client to provide a list of operating errors to the host. The client may detect a wide range of errors in the course of its normal operation as a result of receiving certain commands from the host. Examples of these errors include: the client may have been commanded to operate in a mode that it does not support; the client may have received a packet containing certain parameters that are out of range or are beyond the capability of the client; or the client may have been commanded to enter a mode in an improper sequence. The Client Error Report Packet may be used to detect errors during normal operation, but is most useful to the system designer and integrator to diagnose problems in development and integration of host and client systems. A client indicates its ability to send a Client Error Report Packet using a parameter value of 142 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Client Error Report Packet is shown generally in FIG. 84A . As seen in FIG. 84A , a Client Error Report Packet is structured to have Packet Length, Packet Type, cClient ID, Number of List Items, Error Code List, and CRC fields.
A Packet Type value of 142 identifies a packet as a Client Error Report Packet. The cClient ID field is reserved for future use and is generally set to zero for now. The Number of List Items field (2 bytes) specifies the number of items in the following Error Code List. The Error Code List field (here 8 bytes) is a list containing one or more Error Report List items. The format of a single Error Report List item is shown in FIG. 84B . In one embodiment, as shown in FIG. 84B , each Error Report List item is exactly 4 bytes in length, and has a structure in one embodiment comprising: a 2-byte Display Error Code field that specifies the type of error being reported, and a 2-byte Error Sub-code field that specifies a greater level of detail regarding the error defined by the Display Error Code. The specific definition of each Display Error Code is defined by the manufacturer of the client. An Error Sub-code does not have to be defined for every Display Error Code, and in those cases where the Error Sub-code is not defined the value is set to zero. The specific definition of each Error Sub-code is defined by the manufacturer of the client. 41. Client Identification Packet The Client Identification Packet allows a client to return identifying data in response to a Request Specific Status Packet. In one embodiment, a client indicates an ability to send the Client Identification Packet using a parameter value of 144 in the Valid Parameter Reply List of the Valid Status Reply List Packet. It is useful for the host to be able to determine the client device manufacturer name and model number by reading this data from the client. The information may be used to determine if the client has special capabilities that cannot be described in the Client Capability Packet. There are potentially two methods, means, or mechanisms for reading identification information from the client.
One is through use of the Client Capability Packet, which contains fields similar to those in the base EDID structure. The other method is through use of the Client Identification Packet, which contains a richer set of information compared to the similar fields in the Client Capability Packet. This allows a host to identify manufacturers that have not been assigned a 3-character EISA code, and allows serial numbers to contain alphanumeric characters. The format of one embodiment of a Client Identification Packet is shown generally in FIG. 85. As seen in FIG. 85, a Client Identification Packet is structured to have Packet Length, Packet Type, cClient ID, Week of Mfr, Year of Mfr, Length of Mfr Name, Length of Product Name, Length of Serial Number, Manufacturer Name String, Product Name String, Serial Number String, and CRC fields. The 2-byte Packet Type field contains a value that identifies the packet as a Client Identification Packet. This value is selected to be 144 in one embodiment. The cClient ID field (2 bytes) again is reserved for future use for the Client ID, and is generally set to zero. The CRC field (2 bytes) contains a 16-bit CRC of all bytes in the packet including the Packet Length. A 1-byte Week of Manufacture field contains a value that defines the week of manufacture of the display. In at least one embodiment, this value is in the range of 1 to 53 if it is supported by the client. If this field is not supported by the client, then it is generally set to zero. A 1-byte Year of Manufacture field contains a value that defines the year of manufacture of the client (display). This value is an offset from the year 1990 as a starting point, although other base years could be used. Years in the range of 1991 to 2245 can be expressed by this field. For example, the year 2003 corresponds to a Year of Manufacture value of 13.
If this field is not supported by the client it should be set to a value of zero. The Length of Mfr Name, Length of Product Name, and Length of Serial Number fields each contain 2-byte values that specify the length of the Manufacturer Name String field, the Product Name String field, and the Serial Number String field, respectively, including any null termination or null pad characters. The Manufacturer Name String, Product Name String, and Serial Number String fields each contain the variable number of bytes specified by the Length of Mfr Name, Length of Product Name, and Length of Serial Number fields, respectively, holding an ASCII string that specifies the manufacturer, product name, and alphanumeric serial number of the display, respectively. Each of these strings is terminated by at least one null character.

42. Alternate Display Capability Packet

The Alternate Display Capability Packet indicates the capability of the alternate displays attached to the MDDI client controller. It is sent in response to a Request Specific Status Packet. When prompted, a client device sends an Alternate Display Capability Packet for each alternate display that is supported. The client can indicate an ability to send the Alternate Display Capability Packet via a parameter value of 145 in the Valid Parameter Reply List of the Valid Status Reply List Packet. For MDDI systems operated in internal mode it may be common to have more than one display connected to an MDDI client controller. An example application is a mobile phone with a large display on the inside of the flip and a smaller display on the outside. It is not necessary for an internal mode client to return an Alternate Display Capability Packet, for two potential reasons.
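The length-prefixed string layout described above can be illustrated with a small extraction sketch. The function below is hypothetical (the specification defines only the field layout, not an API); it assumes `payload` begins at the Length of Mfr Name field and that the 2-byte length fields are little-endian:

```python
import struct

def parse_id_strings(payload: bytes):
    """Extract the three length-prefixed ASCII strings of a Client
    Identification Packet body.

    Assumes `payload` starts at the Length of Mfr Name field and that
    2-byte lengths are little-endian (illustrative assumptions).
    """
    len_mfr, len_prod, len_serial = struct.unpack_from("<HHH", payload, 0)
    pos = 6  # strings follow the three 2-byte length fields
    out = []
    for n in (len_mfr, len_prod, len_serial):
        raw = payload[pos:pos + n]
        # Lengths include null termination / pad characters, so strip
        # everything from the first null onward.
        out.append(raw.split(b"\x00", 1)[0].decode("ascii"))
        pos += n
    return out  # [manufacturer, product name, serial number]
```

Because each length includes its null termination, the parser truncates at the first null rather than trusting the raw byte count.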
First, the host may already be programmed or otherwise informed of the capabilities during manufacture, since host and client are used in a common device or housing. Second, due to the assembly of the two, the client cannot easily be disconnected or separated from a connection to the host, and the host may contain a hard-coded copy of the client capabilities, or at least know that they do not change with a change in client, as otherwise might occur. The Number of Alt Displays field of the Client Capability Packet is used to report that more than one display is attached, and the Alternate Display Capability Packet reports the capability of each alternate display. The Video Stream Packet contains 4 bits in the Pixel Data Attributes field to address each alternate display in the client device. The format of one embodiment of an Alternate Display Capability Packet is shown generally in FIG. 86. As seen in FIG. 86, an Alternate Display Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Alt Display Number, Reserved 1, Bitmap Width, Bitmap Height, Display Window Width, Display Window Height, Color Map RGB Width, RGB Capability, Monochrome Capability, Reserved 2, Y Cb Cr Capability, Display Feature Capability, Reserved 3, and CRC fields. A Packet Type value of 145 identifies a packet as an Alternate Display Capability Packet. The cClient ID field is reserved for a Client ID for future use and is generally set to zero. The Alt Display Number field uses 1 byte to indicate the identity of the alternate display with an integer in the range of 0 to 15. The first alternate display is typically designated as number 0, and the other alternate displays are identified with unique Alt Display Number values, with the largest value used being the total number of alternate displays minus 1. Values larger than the total number of alternate displays minus 1 are not used.
For example, a mobile phone having a primary display and a caller-ID display connected to an MDDI client has one alternate display, so the Alt Display Number of the caller-ID display is zero and the Number of Alt Displays field of the Client Capability Packet has a value of 1. The Reserved 1 field (1 byte) is reserved for future use. All bits in this field are set to zero. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. The Bitmap Width field uses 2 bytes that specify the width of the bitmap expressed as a number of pixels. The Bitmap Height field uses 2 bytes that specify the height of the bitmap expressed as a number of pixels. The Display Window Width field uses 2 bytes that specify the width of the display window expressed as a number of pixels. The Display Window Height field uses 2 bytes that specify the height of the display window expressed as a number of pixels. The Color Map RGB Width field uses 2 bytes that specify the number of bits of the red, green, and blue color components that can be displayed in the color map (palette) display mode. A maximum of 8 bits for each color component (red, green, and blue) can be used. Even though 8 bits of each color component are sent in the Color Map Packet, only the number of least significant bits of each color component defined in this field are used. If the display client cannot use the color map (palette) format then this value is zero. The Color Map RGB Width word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits of blue in each pixel, with values of 0 to 8 being considered valid. Bits 7 through 4 define the maximum number of bits of green in each pixel, with values of 0 to 8 being considered valid. Bits 11 through 8 define the maximum number of bits of red in each pixel, with values of 0 to 8 being considered valid.
Bits 14 through 12 are reserved for future use and are generally set to zero. Bit 15 is used to indicate the ability of a client to accept Color Map pixel data in packed or unpacked format. When Bit 15 is set to a logic-one level, this indicates that the client can accept Color Map pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero level, this indicates that the client can accept Color Map pixel data only in unpacked format. The RGB Capability field uses 2 bytes to specify the number of bits of resolution that can be displayed in RGB format. In one embodiment, if the client cannot use the RGB format then this value is set equal to zero. The RGB Capability word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits of blue (the blue intensity) in each pixel, Bits 7 through 4 define the maximum number of bits of green (the green intensity) in each pixel, and Bits 11 through 8 define the maximum number of bits of red (the red intensity) in each pixel. Bits 14 through 12 are reserved for future use and are set to zero. Bit 15 is used to indicate the ability of a client to accept RGB pixel data in packed or unpacked format. When Bit 15 is set to a logic-one level, this indicates that the client can accept RGB pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero level, this indicates that the client can accept RGB pixel data only in unpacked format. The 1-byte Monochrome Capability field contains a value or information to specify the number of bits of resolution that can be displayed in monochrome format. If the client cannot use the monochrome format then this value is set equal to zero. Bits 6 through 4 are reserved for future use and are generally set to zero. Bits 3 through 0 define the maximum number of bits of grayscale that can exist in each pixel. These four bits make it possible to specify that each pixel consists of 1 to 15 bits.
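The bit layout of the RGB Capability word described above can be decoded with simple shifts and masks. This is an illustrative sketch, not part of the specification:

```python
def unpack_rgb_capability(word: int):
    """Decode the 16-bit RGB Capability word into its sub-fields.

    Bit layout per the text above: bits 3:0 blue, bits 7:4 green,
    bits 11:8 red, bits 14:12 reserved, bit 15 = packed-format capable.
    """
    return {
        "blue_bits":  word & 0x000F,
        "green_bits": (word >> 4) & 0x000F,
        "red_bits":   (word >> 8) & 0x000F,
        "packed_ok":  bool(word >> 15),
    }
```

The same shift-and-mask pattern applies to the Color Map RGB Width, Y Cb Cr Capability, and Bayer Capability words, using the field boundaries given in the text for each.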
If the value is zero then the monochrome format is not supported by the client. Bit 7, when set to one, indicates that the client can accept monochrome pixel data in either packed or unpacked format. If Bit 7 is set to zero, this indicates that the client can accept monochrome pixel data only in unpacked format. The Reserved 2 field is a 1-byte field reserved for future use, and generally has all bits set to a logic-zero level. In one embodiment, one purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. A 2-byte Y Cb Cr Capability field specifies the number of bits of resolution that can be displayed in Y Cb Cr format. If the client cannot use the Y Cb Cr format then this value is zero. The Y Cb Cr Capability word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits that specify the Cb sample, Bits 7 through 4 define the maximum number of bits that specify the Cr sample, Bits 11 through 8 define the maximum number of bits that specify the Y sample, and Bits 14 through 12 are reserved for future use and are set to zero. Bit 15, when set to one, indicates that the client can accept Y Cb Cr pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Y Cb Cr pixel data only in unpacked format. A 2-byte Bayer Capability field specifies the number of bits of resolution, pixel group, and pixel order that can be transferred in Bayer format. If the client cannot use the Bayer format then this value is set to zero. The Bayer Capability field is composed of the following values: Bits 3 through 0 define the maximum number of bits of intensity that exist in each pixel, Bits 5 through 4 define the pixel group pattern that may be required, Bits 8 through 6 define a pixel order that is required, and Bits 14 through 9 are reserved for future use and are set to zero.
Bit 15, when set to one, indicates that the client can accept Bayer pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Bayer pixel data only in unpacked format. The 2-byte CRC field contains a 16-bit CRC of all bytes in the packet including the Packet Length.

43. Register Access Packet

The Register Access Packet provides either a host or a client with a means, mechanism, or method to access configuration and status registers at the opposite end of the MDDI link. These registers are likely to be unique for each display or device controller. Such registers already exist in many displays that require setting configurations and modes of operation, and have other useful and necessary settings. The Register Access Packet allows the MDDI host or client both to write to a register and to request to read a register using the MDDI link. When the host or client requests to read a register, the opposite end should respond by sending the register data in the same packet type, but also by indicating that this is the data read from a particular register with the use of the Read/Write Info field. The Register Access Packet may be used to read or write multiple registers by specifying a register count greater than 1. A client indicates an ability to support the Register Access Packet using bit 22 of the Client Feature Capability field of the Client Capability Packet. The format of one embodiment of a Register Access Packet is shown generally in FIG. 87. As seen in FIG. 87, a Register Access Packet is structured to have Packet Length, Packet Type, bClient ID, Read/Write Flags, Register Address, Parameter CRC, Register Data List, and Register Data CRC fields. A Packet Type value of 146 identifies a packet as a Register Access Packet.
The bClient ID field is reserved for future use and is generally set to zero for now. The 2-byte Read/Write Flags field specifies the specific packet as either a write, a read, or a response to a read, and provides a count of the data values. Bits 15 through 14 act as Read/Write Flags. If Bits[15:14] are '00' then this packet contains data to be written to a register addressed by the Register Address field. The data to be written to the specified registers is contained in the Register Data List field. If Bits[15:14] are '10' then this is a request for data from one or more registers addressed by the Register Address field. If Bits[15:14] are '11' then that packet contains data that was requested in response to a Register Access Packet having bits 15:14 of the Read/Write Flags set to '10'. In this case the Register Address field contains the address of the register corresponding to the first Register Data List item, and the Register Data List field contains data that was read from the address or addresses. If Bits[15:14] are '01' this is treated as an invalid value; this value is reserved for future use and is not used at this time, but those skilled in the art will understand how to employ it for future applications. Bits 13:0 use a 14-bit unsigned integer to specify the number of 32-bit Register Data items to be transferred in the Register Data List field. If bits 15:14 equal '00' then bits 13:0 specify the number of 32-bit register data items that are contained in the Register Data List field to be written to registers starting at the register specified by the Register Address field. If bits 15:14 equal '10' then bits 13:0 specify the number of 32-bit register data items that the receiving device sends to the device requesting that the registers be read. The Register Data List field in this packet contains no items and is of zero length.
If bits 15:14 equal '11' then bits 13:0 specify the number of 32-bit register data items that have been read from registers and are contained in the Register Data List field. Bits 15:14 are not currently set equal to '01', which is considered an invalid value, and otherwise reserved for future designations or use. The Register Address field uses 4 bytes to indicate the register address that is to be written to or read from. For addressing registers whose addressing is less than 32 bits, the upper bits are set to zero. The 2-byte Parameter CRC field contains a CRC of all bytes from the Packet Length to the Register Address. If this CRC fails to check then the entire packet is discarded. The Register Data List field contains a list of 4-byte register data values to be written to client registers or values that were read from client device registers. The 2-byte Register Data CRC field contains a CRC of only the Register Data List. If this CRC fails to check then the Register Data may still be used, but the CRC error count is incremented.

D. Packet CRC

The CRC fields appear at the end of the packets, and sometimes after certain more critical parameters in packets that may have a significantly large data field and, thus, an increased likelihood of errors during transfer. In packets that have two CRC fields, the CRC generator, when only one is used, is re-initialized after the first CRC so that the CRC computations following a long data field are not affected by the parameters at the beginning of the packet. In an exemplary embodiment, the polynomial used for the CRC calculation is known as the CRC-16, or X^16 + X^15 + X^2 + X^0. A sample implementation of a CRC generator and checker 3600 useful for implementing the invention is shown in FIG.
36, a CRC register 3602 is initialized to a value of 0x0001 just prior to transfer of the first bit of a packet, which is input on the Tx_MDDI_Data_Before_CRC line; then the bytes of the packet are shifted into the register starting with the LSB first. Note that the register bit numbers in this figure correspond to the order of the polynomial being used, and not the bit positions used by the MDDI. It is more efficient to shift the CRC register in a single direction, and this results in having CRC register bit 15 appear in bit position 0 of the MDDI CRC field, CRC register bit 14 in MDDI CRC field bit position 1, and so forth until MDDI bit position 14 is reached. As an example, if the packet contents for the Client Request and Status Packet are: 0x000c, 0x0046, 0x0000, 0x0400, 0x00, 0x00, 0x0000 (or represented as a sequence of bytes as: 0x0c, 0x00, 0x46, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00), and are submitted using the inputs of the multiplexors 3604 and 3606 and NAND gate 3608, the resulting CRC output on the Tx_MDDI_Data_With_CRC line is 0xd9aa (or represented as a sequence as 0xaa, 0xd9). When CRC generator and checker 3600 is configured as a CRC checker, the CRC that is received on the Rx_MDDI_Data line is input to multiplexor 3604 and exclusive-OR (XOR) gate 3612, and is compared bit by bit with the value found in the CRC register using NOR gate 3610, AND gate 3608, and AND gate 3614. If there are any errors, as output by AND gate 3614, the CRC error count is incremented once for every packet that contains a CRC error, by connecting the output of gate 3614 to the input of register 3602. Note that the example circuit shown in the diagram of FIG. 36 can output more than one CRC error signal within a given CHECK_CRC_NOW window (see FIG. 37B). Therefore, the CRC error counter generally only counts the first CRC error instance within each interval where CHECK_CRC_NOW is active.
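A software equivalent of the shift-register computation described for FIG. 36 can be sketched as follows. The 0xA001 constant is the bit-reversed form of the 0x8005 (X^16 + X^15 + X^2 + 1) polynomial, which is what an LSB-first, right-shifting implementation uses. Note this sketch returns the final register value only; the additional bit-order reversal the text describes for placing register bits into the on-the-wire MDDI CRC field is not modeled here, so the result is not directly comparable to the transmitted byte sequence:

```python
def mddi_crc16(data: bytes, crc: int = 0x0001) -> int:
    """Bitwise CRC-16 (polynomial X^16 + X^15 + X^2 + 1), register
    seeded with 0x0001, bytes shifted in LSB first, as described for
    the circuit of FIG. 36.
    """
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Right shift for LSB-first processing; XOR in the
            # reflected polynomial when a one falls off the end.
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc
```

With no data the function simply returns the 0x0001 seed, and a single zero byte produces 0xC0C1, the familiar first non-trivial entry of the reflected CRC-16 lookup table.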
If configured as a CRC generator, the CRC is clocked out of the CRC register at the time coinciding with the end of the packet. The timing for the input and output signals, and the enabling signals, is illustrated graphically in FIGS. 37A and 37B. The generation of a CRC and transmission of a packet of data are shown in FIG. 37A with the state (0 or 1) of the Gen_Reset, Check_CRC_Now, Generate_CRC_Now, and Sending_MDDI_Data signals, along with the Tx_MDDI_Data_Before_CRC and Tx_MDDI_Data_With_CRC signals. The reception of a packet of data and checking of the CRC value are shown in FIG. 37B, with the state of the Gen_Reset, Check_CRC_Now, Generate_CRC_Now, and Sending_MDDI_Data signals, along with the Rx_MDDI_Data and CRC error signals.

E. Error Code Overload for Packet CRC

Whenever only data packets and CRC are being transferred between the host and client, there are no error codes being accommodated. The only error is a loss of synchronization. Otherwise, one has to wait for the link to time out from a lack of a good data transfer path or pipeline, and then reset the link and proceed. Unfortunately, this is time consuming and somewhat inefficient. For use in one embodiment, a new technique has been developed in which the CRC portion of packets is used to transfer error code information. This is generally shown in FIG. 65. That is, one or more error codes are generated by the processors or devices handling the data transfer which indicate specific predefined errors or flaws that might occur within the communication processing or link. When an error is encountered, the appropriate error code is generated and transferred using the bits for the CRC of a packet. That is, the CRC value is overloaded, or overwritten, with the desired error code, which can be detected on the receiving end by an error monitor or checker that monitors the values of the CRC field.
For those cases in which the error code matches the CRC value for some reason, the complement of the error code is transferred to prevent confusion. In one embodiment, to provide a robust error warning and detection system, the error code may be transferred several times, using a series of packets, generally all packets that are transferred or sent after the error has been detected. This occurs until the point at which the condition creating the error is cleared from the system, at which point the regular CRC bits are transferred without being overloaded by another value. This technique of overloading the CRC value provides a much quicker response to system errors while using a minimal amount of extra bits or fields. As shown in FIG. 66, a CRC overwriting mechanism or apparatus 6600 uses an error detector or detection means 6602, which can form part of other circuitry previously described or known, to detect the presence or existence of errors within the communication link or process. An error code generator or means 6604, which can be formed as part of other circuitry or use techniques such as look-up tables to store pre-selected error messages, generates one or more error codes to indicate specific predefined errors or flaws that have been detected as occurring. It is readily understood that devices 6602 and 6604 can be formed as a single circuit or device as desired, or as part of a programmed sequence of steps for other known processors and elements. A CRC value comparator or comparison means 6606 is shown for checking to see if the selected error code or codes are the same as the CRC value being transferred. If that is the case, then a code complement generator or generation means or device is used to provide the complement of the error codes, so as not to be mistaken for the original CRC pattern or value and confuse or complicate the detection scheme.
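The comparator-and-complement scheme just described can be sketched in a few lines. These are hypothetical helpers modeling the behavior of elements 6606 and 6608 and the receiving-end monitor; the actual error code values are implementation-defined:

```python
def overload_crc(crc_value: int, error_code: int) -> int:
    """Choose the value to place in a packet's 16-bit CRC field when
    reporting an error: if the code happens to equal the CRC that
    would normally be sent, transmit its one's complement instead so
    it cannot be mistaken for a valid CRC.
    """
    if error_code == crc_value:
        return (~error_code) & 0xFFFF
    return error_code

def decode_overloaded(field: int, known_codes):
    """Receiving end: recover an error code, or the code whose
    complement was sent, from the received CRC field.  Returns None
    if no known code is present.
    """
    if field in known_codes:
        return field
    complement = (~field) & 0xFFFF
    if complement in known_codes:
        return complement
    return None
```

In practice the receiver would apply this check only after the normal CRC comparison has failed, since a CRC field carrying an error code will not match the locally computed CRC.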
An error code selector or selection means element or device 6610 then selects the error code or value it is desired to insert or overwrite, or their respective complements as appropriate. An error code CRC over-writer or overwriting mechanism or means 6612 is a device that receives the data stream, packets, and the desired codes to be inserted, and overwrites the corresponding or appropriate CRC values in order to transfer the desired error codes to a receiving device. As mentioned, the error code may be transferred several times, using a series of packets, so the over-writer 6612 may utilize memory storage elements in order to maintain copies of the codes during processing, or may recall these codes from previous elements or other known storage locations which can be used to store or hold their values as needed, or as desired. The general processing implemented by the overwriting mechanism of FIG. 66 is shown in additional detail in FIGS. 67A and 67B. In FIG. 67A, an error, or more than one, is detected in step 6702 in the communication data or process, and an error code is selected in step 6704 to indicate this condition. At the same time, or at an appropriate point, the CRC value to be replaced is checked in a step 6706 and compared to the desired error code in step 6708. The result of this comparison, as discussed earlier, is a determination as to whether or not the desired code, or other representative values, will be the same as the CRC value present. If this is the case, then processing proceeds to a step 6712 where the complement, or in some cases another representative value, as desired, is selected as the code to insert. Once it has been determined in steps 6710 and 6714 what error codes or values are to be inserted, the appropriate code is selected for insertion. These steps are illustrated as separate for purposes of clarity but generally represent a single choice based on the output of the step 6708 decision.
Finally, in step 6716 the appropriate values are overwritten in the CRC location for transfer with the packets being targeted by the process. On the packet reception side, as shown in FIG. 67B, the packet CRC values are monitored in a step 6722. Generally, the CRC values are monitored by one or more processes within the system to determine if an error in data transfer has occurred and whether or not to request a retransmission of the packet or packets, or to inhibit further operations, and so forth, some of which is discussed above. As part of such monitoring, the information can also be used to compare values to known or pre-selected error codes, or representative values, and detect the presence of errors. Alternatively, a separate error detection process and monitor can be implemented. If a code appears to be present, it is extracted or otherwise noted in step 6724 for further processing. A determination can be made in step 6726 as to whether or not this is the actual code or a complement, in which case an additional step 6728 is used to translate the value to the desired code value. In either case the resulting extracted code, complement, or other recovered values are then used in step 6730 to detect what error has occurred from the transmitted code.

V. Link Hibernation

The MDDI link can enter the hibernation state quickly and wake up from hibernation quickly. This responsiveness allows a communicating system or device to force the MDDI link into hibernation frequently to reduce power consumption, since it can wake up again for use very quickly. In one embodiment, when an external mode client wakes up from hibernation for the first time, it does so at a data rate and with strobe pulse timing that is consistent with a 1 Mbps rate; that is, the MDDI_Stb pair should toggle at a 500 kHz rate.
Once characteristics of the client have been discovered by or communicated to the host, the host may wake up the link at generally any rate from 1 Mbps to the maximum rate at which the client can operate. Internal mode clients may wake up at any rate at which both the host and client can operate, and this is also generally applicable to the first time an internal mode client wakes up. In one embodiment, when the link wakes up from hibernation the host and client exchange a sequence of pulses. These pulses can be detected using low-speed line receivers that consume only a fraction of the current of the differential receivers required to receive the signals at the maximum link operating speed. Either the host or client can wake up the link, so the wake-up protocol is designed to handle possible contention that can occur if both host and client attempt to wake up simultaneously. During the hibernation state the MDDI_Data and MDDI_Stb differential drivers are disabled, and the differential voltage across all differential pairs is zero volts. The differential line receivers used to detect the sequence of pulses during the wake-up from hibernation have an intentional voltage offset. In one embodiment, the threshold between a logic-one and logic-zero level in these receivers is approximately 125 mV. This causes an un-driven differential pair to be seen as a logic-zero level during the link wake-up sequence. In order to enter a Hibernation State, the host sends 64 MDDI_Stb cycles after the CRC of the Link Shutdown Packet. The host disables the MDDI_Data0 output of the host in the range of 16 to 56 MDDI_Stb cycles (including output disable propagation delays) after the CRC. The host finishes sending the 64 MDDI_Stb cycles after the CRC of the Link Shutdown Packet before it initiates the wake-up sequence.
In one embodiment, the host-initiated wake-up is defined as the host having to wait at least 100 nsec after MDDI_Data0 reaches a valid logic-one level before driving pulses on MDDI_Stb. In one embodiment, the client waits at least 60 MDDI_Stb cycles after the CRC of the Link Shutdown Packet before it drives MDDI_Data0 to a logic-one level to attempt to wake up the host. In order to "wake up" from a Hibernation State, several actions or processes are undertaken. When the client, here a display, needs data or communication service from the host, it drives the MDDI_Data0 line to a logic-one state for around 70 to 1000 µsec while MDDI_Stb is inactive, and keeps MDDI_Data0 driven to a logic-one level for about 70 MDDI_Stb cycles (over a range of 60 to 80) after MDDI_Stb becomes active, although other periods can be used as desired. The client then disables the MDDI_Data0 driver by placing it into a high-impedance state. If MDDI_Stb is active during hibernation, although unlikely, then the client might only drive MDDI_Data0 to a logic-one state for about 70 MDDI_Stb cycles (over a range of 60 to 80). This action causes the host to start or restart data traffic on the forward link (208) and to poll the client for its status. The host must detect the presence of the request pulse, and it begins the startup sequence by first driving MDDI_Stb to a logic-zero level and MDDI_Data0 to a logic-high level for at least around 200 nsec. Then, while toggling MDDI_Stb, it continues to drive MDDI_Data0 to a logic-one level for about 150 MDDI_Stb cycles (a range of 140 to 160) and then to logic-zero for about 50 MDDI_Stb cycles. The client should not send a service request pulse if it detects MDDI_Data0 in the logic-one state for more than 80 MDDI_Stb cycles. When the client has detected MDDI_Data0 at a logic-one level for 60 to 80 MDDI_Stb cycles, it begins to search for the interval where the host drives MDDI_Data0 to a logic-zero level for 50 MDDI_Stb cycles.
After the host drives MDDI_Data0 to a logic-zero level for a duration of 50 MDDI_Stb cycles, the host starts sending packets on the link. The first packet sent is a Sub-frame Header Packet. The client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles of the 50-cycle interval. The nature of the selection of the times and tolerances of time intervals related to the hibernation processing and start-up sequence is discussed further below. (See FIGS. 68A-C below.) The host may initiate the wake-up by first enabling MDDI_Stb and simultaneously driving it to a logic-zero level. MDDI_Stb should not be driven to a logic-one level until pulses are output as described below. After MDDI_Stb reaches a logic-zero level, the host enables MDDI_Data0 and simultaneously drives it to a logic-one level. MDDI_Data0 should not be driven to a logic-zero level during the wake-up process until the interval where it is driven to a logic-zero level for 50 MDDI_Stb pulses as described below. The host should wait at least 200 nsec after MDDI_Data0 reaches a valid logic-one level before driving pulses on MDDI_Stb. This timing relationship accounts for the worst-case output enable delays. This substantially guarantees that a client has sufficient time to fully enable its MDDI_Stb receiver after being awakened by a logic-one level on MDDI_Data0 that was driven by the host. An example of the processing steps for a typical client service request event 3800 with no contention is illustrated in FIG. 38, where the events are labeled for convenience in illustration using the letters A, B, C, D, E, F, and G. The process commences at point A when the host sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state.
In a next step, the host enters the low-power hibernation state by disabling the MDDI_Data0 driver and setting the MDDI_Stb driver to a logic-zero level, as shown at point B. MDDI_Data0 is driven to a logic-zero level by a high-impedance bias network. After some period of time, the client sends a service request pulse to the host by driving MDDI_Data0 to a logic-one level, as seen at point C. The host still asserts the logic-zero level using the high-impedance bias network, but the driver in the client forces the line to a logic-one level. Within 50 µsec, the host recognizes the service request pulse and asserts a logic-one level on MDDI_Data0 by enabling its driver, as seen at point D. The client then ceases attempting to assert the service request pulse and places its driver into a high-impedance state, as seen at point E. The host drives MDDI_Data0 to a logic-zero level for 50 µsec, as shown at point F, and also begins to generate MDDI_Stb in a manner consistent with the logic-zero level on MDDI_Data0. The client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. After asserting MDDI_Data0 to a logic-zero level and driving MDDI_Stb for 50 µsec, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

A similar example is illustrated in FIG. 39, where a service request is asserted after the link restart sequence has begun, and the events are again labeled using the letters A, B, C, D, E, F, and G. This represents a worst-case scenario where a request pulse or signal from the client comes closest to corrupting the Sub-frame Header Packet. The process commences at point A when the host again sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state.
In a next step, the host enters the low-power hibernation state by disabling the MDDI_Data0 driver and setting the MDDI_Stb driver to a logic-zero level, as shown at point B. As before, MDDI_Data0 is driven to a logic-zero level by a high-impedance bias network. After a period of time, the host begins the link restart sequence by driving MDDI_Data0 to a logic-one level for 150 µsec, as seen at point C. Before 50 µsec have passed after the link restart sequence begins, the display also asserts MDDI_Data0 for a duration of 70 µsec, as seen at point D. This happens because the display has a need to request service from the host and does not recognize that the host has already begun the link restart sequence. The client then ceases attempting to assert the service request pulse and places its driver into a high-impedance state, as seen at point E. The host continues to drive MDDI_Data0 to a logic-one level. The host drives MDDI_Data0 to a logic-zero level for 50 µsec, as shown at point F, and also begins to generate MDDI_Stb in a manner consistent with the logic-zero level on MDDI_Data0. After asserting MDDI_Data0 to a logic-zero level and driving MDDI_Stb for 50 µsec, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

From the above discussion, one sees that the prior solution involved having the host go through two states as part of a wake-up sequence. For the first state, the host drives the MDDI_Data0 signal high for 150 µs, then drives the MDDI_Data0 signal low for 50 µs while activating the MDDI_Stb line, and then begins to transmit MDDI packets. This process works well to advance the state of the art in terms of data rates achievable using the MDDI apparatus and methods.
However, as stated earlier, improvements such as reduced response time to conditions, the ability to more quickly select the next step or process, and the ability to simplify processing or elements are always in demand.

Applicants have discovered a new inventive approach to wake-up processing and timing in which the host uses clock-cycle-based timing for the signal toggling. In this configuration, the host starts toggling MDDI_Stb from 0 to 10 µsec after the host drives the MDDI_Data0 signal high at the beginning of the wake-up sequence, and does not wait until the signal is driven low. During a wake-up sequence, the host toggles MDDI_Stb as though the MDDI_Data0 signal were always at a logic-zero level. This effectively removes the concept of time from the client side, and the host changes from the prior 150 µs and 50 µs periods for the first two states to 150 clock cycles and 50 clock cycles for these periods.

The host now becomes responsible for driving the data line high and, within 10 clock cycles, starting to transmit a strobe signal as if the data line were zero. After the host has driven the data line high for 150 clock cycles, it drives the data line low for 50 clock cycles while continuing to transmit the strobe signal. After it has completed both of these processes, the host can begin to transmit the first Sub-frame Header Packet.

On the client side, the client implementation can now use the generated clock to calculate the number of clock cycles that the data line is first high, and then low. The required counts are 150 clock cycles in the data-line-driven-high state and 50 clock cycles in the data-line-driven-low state. This means that for a proper wake-up sequence, the client should be able to count at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low.
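As a rough illustration of this clock-cycle-based scheme, the host wake-up waveform and the client counting rule described above can be sketched as follows. This is a minimal sketch only: the function names and the list-of-levels representation (one data-line sample per strobe cycle) are illustrative assumptions, not part of the interface specification.

```python
# Illustrative sketch (not from the specification) of the clock-cycle-based
# wake-up: the host drives the data line high for 150 strobe cycles and then
# low for 50, and the client counts continuous cycles to validate the pattern.

DATA_HIGH_CYCLES = 150  # data line driven high for 150 strobe cycles
DATA_LOW_CYCLES = 50    # then low for 50 cycles before the first packet

def host_wakeup_waveform():
    """Return the per-strobe-cycle level of the data line during wake-up."""
    return [1] * DATA_HIGH_CYCLES + [0] * DATA_LOW_CYCLES

def client_sees_valid_wakeup(samples):
    """Client-side check: at least 150 continuous high cycles followed by
    at least 50 continuous low cycles; any break restarts the counters."""
    high = low = 0
    for level in samples:
        if level == 1:
            if low:              # pattern broken after lows began: restart
                high = low = 0
            high += 1
        else:
            if high >= DATA_HIGH_CYCLES:
                low += 1
                if low >= DATA_LOW_CYCLES:
                    return True  # may begin searching for the unique word
            else:
                high = low = 0   # break in pattern: return to initial state
    return False
```

A waveform of 149 high cycles, or one with only 49 low cycles, fails the check, mirroring the restart behavior described in the text.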
Once these two conditions are met, the client can begin to search for the unique word of the first sub-frame. A break in this pattern is used as a basis to return the counters to an initial state in which the client again looks for the first 150 continuous clock cycles of the data line being high.

A client implementation of the invention for host-based wake-up from hibernation is very similar to the initial start-up case except that the clock rate is not forced to start at 1 Mbps, as discussed earlier. Instead, the clock rate can be set to resume at whatever rate was active when the communication link went into hibernation. If the host begins transmission of a strobe signal as described above, the client should again be able to count at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low. Once these two conditions have been met, the client can begin the search for the unique word.

A client implementation of the invention for client-based wake-up from hibernation is similar to the host-based wake-up except that it starts with the client driving the data line. The client can asynchronously drive the data line without a clock to wake up the host device. Once the host recognizes that the data line is being driven high by the client, it can begin its wake-up sequence. The client can count the number of clock cycles generated by the host starting or during its wake-up process. Once the client counts 70 continuous clock cycles of the data line being high, it can stop driving the data line high. At this point, the host should already be driving the data line high as well. The client can then count another 80 continuous clock cycles of the data line being high to reach the 150 clock cycles of the data line being high, and can then look for 50 clock cycles of the data line being low.
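The client-initiated handoff just described (70 client-driven high cycles, then 80 further host-driven high cycles, then 50 low cycles) can likewise be sketched as a simple three-phase counter. This is an illustrative sketch under the same assumptions as above; the specification does not prescribe this particular structure.

```python
# Illustrative three-phase counter (an assumption, not a spec-mandated design)
# for the client-initiated wake-up: 70 high cycles while the client drives,
# 80 more high cycles driven by the host, then 50 low cycles.

def client_wakeup_progress(levels):
    """Return True once 70 high, then 80 high, then 50 low continuous
    clock cycles have been observed; any break restarts the pattern."""
    targets = [70, 80, 50]    # cycle counts for the three phases
    expected = [1, 1, 0]      # data-line level expected in each phase
    phase, count = 0, 0
    for level in levels:
        if level == expected[phase]:
            count += 1
            if count == targets[phase]:
                phase, count = phase + 1, 0
                if phase == 3:
                    return True          # begin searching for the unique word
        else:
            # break in the pattern: restart, crediting a stray high sample
            phase, count = 0, (1 if level == 1 else 0)
    return False
```

Because the first two phases both expect a high level, the 70-cycle and 80-cycle counts together cover the same 150 high cycles the host-initiated case requires.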
Once these three conditions have been met, the client can begin to look for the unique word.

An advantage of this new implementation of wake-up processing is that it removes the need for a time measuring device. Whether this is an oscillator, a capacitor discharge circuit, or another such known device, the client no longer needs such external devices to determine the start-up conditions. This saves money and circuit area when implementing controllers, counters, and so forth on a client device board. While this may not be as advantageous to the client, for the host this technique should also potentially simplify the host in terms of very high density logic (VHDL) being used for core circuitry. The power consumption of using the data and strobe lines as the wake-up notification and measurement source will also be lower, since no external circuitry will need to be running for the core elements to be waiting for a host-based wake-up. The number of cycles or clock periods used are exemplary, and other periods can be used as will be apparent to one skilled in the art.

To clarify and illustrate the operation of this new technique, the timing of MDDI_Data0, MDDI_Stb, and various operations relative to the clock cycles are shown in FIGS. 68A, 68B, and 68C.

An example of the processing steps for a typical Host-initiated Wake-up with no contention is illustrated in FIG. 68A, where the events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, and G. The process commences at point A when the host sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state. In a next step, point B, the host toggles MDDI_Stb for about 64 cycles (or as desired for system design) to allow processing by the client to be completed prior to stopping MDDI_Stb from toggling, which stops the recovered clock in the client device. The host also initially sets MDDI_Data0 to a logic-zero level and then disables the MDDI_Data0 output in the range of 16 to 48 cycles (generally including output disable propagation delays) after the CRC. It may be desirable to place high-speed receivers for MDDI_Data0 and MDDI_Stb in the client in a low power state some time after the 48 cycles after the CRC and prior to the next stage (C). The client places its high-speed receivers for MDDI_Data0 and MDDI_Stb into hibernation any time after the rising edge of the 48th MDDI_Stb cycle after the CRC of the Link Shutdown Packet.
It is recommended that the client place its high-speed receivers for MDDI_Data0 and MDDI_Stb into hibernation before the rising edge of the 64th MDDI_Stb cycle after the CRC of the Link Shutdown Packet.

The host enters the low-power hibernation state at point or step C by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. One can also set the MDDI_Stb driver to a logic-zero level (using a high-impedance bias network) or have it continue toggling during hibernation, as desired. The client is also in a low-power hibernation state.

After some period of time, the host commences the link restart sequence at point D by enabling the MDDI_Data0 and MDDI_Stb driver outputs. The host drives MDDI_Data0 to a logic-one level and MDDI_Stb to a logic-zero level for as long as it should take for the drivers to fully enable their respective outputs. The host typically waits around 200 nanoseconds after these outputs reach the desired logic levels before driving pulses on MDDI_Stb. This allows the client time to prepare to receive.

With the host drivers enabled and MDDI_Data0 being driven to a logic-one level, the host begins to toggle MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point E. The host drives MDDI_Data0 to a logic-zero level for 50 cycles, as shown at point F, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. The host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

An example of the processing steps for a typical Client-initiated Wake-up with no contention is illustrated in FIG. 68B, where the events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, G, H, and I.
As before, the process commences at point A when the host sends a Link Shutdown Packet to inform the client that the link will transition to the low power state.

At point B, the host toggles MDDI_Stb for about 64 cycles (or as desired for system design) to allow processing by the client to be completed prior to stopping MDDI_Stb from toggling, which stops the recovered clock in the client device. The host also initially sets MDDI_Data0 to a logic-zero level and then disables the MDDI_Data0 output in the range of 16 to 48 cycles (generally including output disable propagation delays) after the CRC. It may be desirable to place high-speed receivers for MDDI_Data0 and MDDI_Stb in the client in a low power state some time after the 48 cycles after the CRC and prior to the next stage (C).

The host enters the low-power hibernation state at point or step C by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. One can also set the MDDI_Stb driver to a logic-zero level (using a high-impedance bias network) or have it continue toggling during hibernation, as desired. The client is also in a low-power hibernation state.

After some period of time, the client commences the link restart sequence at point D by enabling the MDDI_Stb receiver, and also enabling an offset in the MDDI_Stb receiver to guarantee that the state of the received version of MDDI_Stb is a logic-zero level in the client before the host enables its MDDI_Stb driver. It may be desirable for the client to enable the offset slightly ahead of enabling the receiver to ensure the reception of a valid differential signal and inhibit erroneous signals, as desired. The client enables the MDDI_Data0 driver while driving the MDDI_Data0 line to a logic-one level.
MDDI_Data0 and MDDI_Stb are allowed to be enabled simultaneously if the time to enable the offset and enable the standard MDDI_Stb differential receiver is less than 200 nsec.

Within about 1 msec, at point E, the host recognizes the service request pulse from the client, and the host begins the link restart sequence by enabling the MDDI_Data0 and MDDI_Stb driver outputs. The host drives MDDI_Data0 to a logic-one level and MDDI_Stb to a logic-zero level for as long as it should take for the drivers to enable their respective outputs. The host typically waits around 200 nanoseconds after these outputs reach the desired logic levels before driving pulses on MDDI_Stb. This allows the client time to prepare to receive.

With the host drivers enabled and MDDI_Data0 being driven to a logic-one level, the host begins outputting pulses on MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point F. When the client recognizes the first pulse on MDDI_Stb, it disables the offset in its MDDI_Stb receiver. The client continues to drive MDDI_Data0 to a logic-one level for 70 MDDI_Stb cycles, and disables its MDDI_Data0 driver at point G. The host continues to drive MDDI_Data0 to a logic-one level for a duration of 80 additional MDDI_Stb pulses, and at point H drives MDDI_Data0 to a logic-zero level.

As seen at points G and H, the host drives MDDI_Data0 to a logic-zero level for 50 cycles, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. After driving MDDI_Stb for a duration of 50 cycles, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point I.

An example of the processing steps for a typical Host-initiated Wake-up with contention from the client, that is, where the client also wants to wake up the link, is illustrated in FIG. 68C. The events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, G, H, and I.
As before, the process commences at point A when the host sends a Link Shutdown Packet to inform the client that the link will transition to the low power state, proceeds to point B where MDDI_Stb is toggled for about 64 cycles (or as desired for system design) to allow processing by the client to be completed, and then to point C, where the host enters the low-power hibernation state by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. After some period of time, the host commences the link restart sequence at point D by enabling the MDDI_Data0 and MDDI_Stb driver outputs, and begins to toggle MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point E.

At up to 70 MDDI_Stb cycles after point E, here point F, the client has not yet recognized that the host is driving MDDI_Data0 to a logic-one level, so the client also drives MDDI_Data0 to a logic-one level. This occurs because the client has a desire to request service but does not recognize that the host it is trying to communicate with has already begun the link restart sequence. At point G, the client ceases to drive MDDI_Data0 and places its driver into a high-impedance state by disabling its output. The host continues to drive MDDI_Data0 to a logic-one level for 80 additional cycles.

The host drives MDDI_Data0 to a logic-zero level for 50 cycles, as shown at point H, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. The host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point I.

VI. Interface Electrical Specifications

In the example embodiments, data in a Non-Return-to-Zero (NRZ) format is encoded using a data-strobe signal or DATA-STB format, which allows clock information to be embedded in the data and strobe signals. The clock can be recovered without complex phase lock loop circuitry.
Data is carried over a bi-directional differential link, generally implemented using a wire-line cable, although other conductors, printed wires, or transfer elements can be used, as stated earlier. The strobe signal (STB) is carried over a uni-directional link which is driven only by the host. The strobe signal toggles value (0 or 1) whenever two back-to-back states, 0 or 1, remain the same on the Data line or signal.

An example of how a data sequence such as bits "1110001011" can be transmitted using DATA-STB encoding is shown in graphical form in FIG. 40. In FIG. 40, a DATA signal 4002 is shown on the top line of a signal timing chart and a STB signal 4004 is shown on a second line, each time-aligned as appropriate (common starting point). As time passes, when there is a change of state occurring on the DATA line 4002 (signal), the STB line 4004 (signal) maintains its previous state; thus, the first '1' state of the DATA signal correlates with the first '0' state for the STB signal, its starting value. However, if or when the state, or level, of the DATA signal does not change, the STB signal toggles to the opposite state, or '1' in the present example, as is the case in FIG. 40 where the DATA is providing another '1' value. That is, there is one and only one transition per bit cycle between DATA and STB. Therefore, the STB signal transitions again, this time to '0', as the DATA signal stays at '1', and holds this level or value as the DATA signal changes level to '0'. When the DATA signal stays at '1', the STB signal toggles to the opposite state, or '1' in the present example, and so forth, as the DATA signal changes or holds levels or values.

Upon receiving these signals, an exclusive-OR (XOR) operation is performed on the DATA and STB signals to produce a clock signal 4006, which is shown on the bottom of the timing chart for relative comparison with the desired data and strobe signals.
An example of circuitry useful for generating the DATA and STB outputs or signals from input data at the host, and then recovering or recapturing the data from the DATA and STB signals at the client, is shown in FIG. 41.

In FIG. 41, a transmission portion 4100 is used to generate and transmit the original DATA and STB signals over an intermediary signal path 4102, while a reception portion 4120 is used to receive the signals and recover the data. As shown in FIG. 41, in order to transfer data from a host to a client, the DATA signal is input to two D-type flip-flop circuit elements 4104 and 4106 along with a clock signal for triggering the circuits. The two flip-flop circuit outputs (Q) are then split into a differential pair of signals MDDI_Data0+, MDDI_Data0- and MDDI_Stb+, MDDI_Stb-, respectively, using two differential line drivers 4108 and 4110 (voltage mode). A three-input exclusive-NOR (XNOR) gate, circuit, or logic element 4112 is connected to receive the DATA signal and the outputs of both flip-flops, and generates an output that provides the data input for the second flip-flop, which in turn generates the MDDI_Stb+, MDDI_Stb- signals. For convenience, the XNOR gate has the inversion bubble placed to indicate that it is effectively inverting the Q output of the flip-flop that generates the Strobe.

In reception portion 4120 of FIG. 41, the MDDI_Data0+, MDDI_Data0- and MDDI_Stb+, MDDI_Stb- signals are received by each of two differential line receivers 4122 and 4124, which generate single outputs from the differential signals. The outputs of the amplifiers are then input to each of the inputs of a two-input exclusive-OR (XOR) gate, circuit, or logic element 4126, which produces the clock signal. The clock signal is used to trigger each of two D-type flip-flop circuits 4128 and 4130 which receive a delayed version of the DATA signal, through delay element 4132, one of which (4128) generates data '0' values and the other (4130) data '1' values.
The clock has an independent output from the XOR logic as well. Since the clock information is distributed between the DATA and STB lines, neither signal transitions between states faster than half of the clock rate. Since the clock is reproduced using the exclusive-OR processing of the DATA and STB signals, the system effectively tolerates twice the amount of skew between the input data and clock compared to the situation when a clock signal is sent directly over a single dedicated data line.

The MDDI Data pairs and the MDDI_Stb+ and MDDI_Stb- signals are operated in a differential mode to maximize immunity from the negative effects of noise. Each differential pair is parallel-terminated with the characteristic impedance of the cable or conductor being used to transfer signals. Generally, all parallel terminations reside in the client device. This is near the differential receiver for forward traffic (data sent from the host to the client), but it is at the driving end of the cable or other conductors or transfer elements for reverse traffic (data sent from the client to the host). For reverse traffic the signal is driven by the client, reflected by the high-impedance receiver at the host, and is terminated at the client. This avoids the need for a double termination that would increase current consumption. It also functions at data rates greater than the reciprocal of the round-trip delay in the cable. The MDDI_Stb+ and MDDI_Stb- conductors or signals are only driven by the host.

An exemplary configuration of elements useful for achieving the drivers, receivers, and terminations for transferring signals as part of the inventive MDDI is shown in FIG. 42. This exemplary interface uses low voltage sensing, here 200 mV, with less than 1 volt power swings and low power drain. The driver of each signal pair has a differential current output.
While receiving MDDI packets, the MDDI_Data and MDDI_Stb pairs use a conventional differential receiver with a voltage threshold of zero volts. In the hibernation state the driver outputs are disabled and the parallel-termination resistors pull the voltage on each signal pair to zero volts. During hibernation a special receiver on the MDDI_Data0 pair has an offset input threshold of positive 125 mV, which causes the hibernation line receiver to interpret the un-driven signal pair as a logic-zero level.

Sometimes the host or client simultaneously drives the differential pair to a logic-one level or a logic-zero level to guarantee a valid logic level on the pair when the direction of data flow changes (from host-to-client or client-to-host). The output voltage range and output specifications are still met with simultaneously driven outputs driven to the same logic level. In some systems it may be necessary to drive a small current into the terminated differential pair to create a small offset voltage at certain times during hibernation and when the link is waking up from the hibernation state. In those situations, the enabled offset-current bias circuits drive against the following leakage current levels:

IESD-and-Rx - internal ESD diode and differential receiver input, with IESD-and-Rx ≤ 1 µA typically;

ITx-Hi-Z - differential driver output in the high-impedance state, with ITx-Hi-Z ≤ 1 µA typically; and

Iexternal-ESD - the leakage through the external ESD protection diodes, with Iexternal-ESD ≤ 3 µA typically.

Each of these leakage currents is illustrated in FIG. 47. The pull-up and pull-down circuits must achieve the minimum differential voltage under the worst-case leakage conditions described above when all occur simultaneously.
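A back-of-the-envelope check shows how the per-source leakage limits above combine into the total leakage budgets. The decomposition sketched here (each end of the pair contributing receiver/ESD leakage plus high-impedance driver leakage, with External Mode adding the external ESD diode leakage at both ends) is an assumption for illustration; the text states only the per-source limits and the totals.

```python
# Hypothetical decomposition (an assumption, not stated in the text) of the
# worst-case leakage budget for a terminated differential pair.

I_ESD_AND_RX = 1  # µA, internal ESD diode + differential receiver input
I_TX_HI_Z    = 1  # µA, differential driver output in high-impedance state
I_EXT_ESD    = 3  # µA, leakage through external ESD protection diodes

per_end_internal = I_ESD_AND_RX + I_TX_HI_Z        # 2 µA at each end
total_internal   = 2 * per_end_internal            # 4 µA (Internal Mode)
total_external   = total_internal + 2 * I_EXT_ESD  # 10 µA (External Mode)
```

Under this assumed decomposition the totals come out to 4 µA and 10 µA, matching the Internal Mode and External Mode figures quoted in the text.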
The total leakage is ≤ 4 µA for Internal Mode, without external ESD protection diodes, and ≤ 10 µA for External Mode, with external ESD protection.

The electrical parameters and characteristics of the differential line drivers and line receivers are described for one exemplary embodiment in Tables VIIa-VIId. Functionally, the driver transfers the logic level on the input directly to a positive output, and the inverse of the input to a negative output. The delay from input to outputs is well matched to the differential line, which is driven differentially. In most implementations, the voltage swing on the outputs is less than the swing on the input to minimize power consumption and electromagnetic emissions. In one embodiment, there is a minimum voltage swing of around 0.5 V. However, other values can be used, as would be known by those skilled in the art, and the inventors contemplate a smaller value in some embodiments, depending on design constraints.

The differential line receivers have the same characteristic as a high-speed voltage comparator. In FIG. 41, the input without the bubble is the positive input and the input with the bubble is the negative input. The output is a logic one if (Vinput+) - (Vinput-) is greater than zero.
Another way to describe this is as a differential amplifier with very large (virtually infinite) gain with the output clipped at logic-0 and logic-1 voltage levels.

The delay skew between different pairs should be minimized to operate the differential transmission system at the highest potential speed.

Table VIIa (Host Driver)
Parameter | Description | Min | Typ | Max | Units
Voutput-Range | Allowable host driver output voltage range with respect to host ground | 0.35 | | 1.60 | V
IOD+ | Driver differential output high current (while driving the terminated transmission line) | 2.5 | | 4.5 | mA
IOD- | Driver differential output low current (while driving the terminated transmission line) | -4.5 | | -2.5 | mA
TRise-Fall | Rise and fall time (between 20% and 80% amplitude) of driver output, measured in differential mode | | | 425 (Note 1) | psec
Tskew-pair | Skew between positive and negative outputs of the same differential pair (intra-pair skew) | | | 125 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | 0 | | TB - 283 | psec
TB-TP0-DRVR | Jitter, bit boundary to minimum output level | 0 | | See above | psec
Note 1: The maximum rise and fall time is either 30% of the interval to transmit one bit on one differential pair or 100 nsec, whichever is smaller.

Table VIIb (Client Driver)
Parameter | Description | Min | Typ | Max | Units
Voutput-Range-Ext | Allowable client driver output voltage range with respect to client ground (External Mode) | 0 | | 1.25 | V
Voutput-Range-Int | Allowable client driver output voltage range with respect to client ground (Internal Mode) | 0.35 | | 1.60 | V
IOD+ | Driver differential output high current (while driving the equivalent of the pull-up and pull-down circuits that exist at the host and client) | 2.5 | | 4.5 | mA
IOD- | Driver differential output low current (while driving the equivalent of the pull-up and pull-down circuits that exist at the host and client) | -4.5 | | -2.5 | mA
TRise-Fall | Rise and fall time (between 20% and 80% amplitude) of driver output, measured in differential mode | | | 425 (Note 1) | psec
Tskew-pair | Skew between positive and negative outputs of the same differential pair (intra-pair skew) | | | 125 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | | | TB - 283 | psec
TB-TP4-DRVR | Jitter, bit boundary to minimum output level | | | See above | psec
Note 1: The maximum rise and fall time is 30% of the interval to transmit one bit on one differential pair or 100 nsec, whichever is smaller.

Table VIIc (Client Receiver)
Parameter | Description | Min | Typ | Max | Units
VIT+ | Receiver differential input high threshold voltage | 0 | | 50 | mV
VIT- | Receiver differential input low threshold voltage | -50 | | 0 | mV
VIT+ | Receiver differential input high threshold voltage (offset for hibernation wake-up) | 125 | | 175 | mV
VIT- | Receiver differential input low threshold voltage (offset for hibernation wake-up) | 75 | | 125 | mV
VInput-Range | Allowable receiver input voltage range with respect to client ground | 0 | | 1.65 | V
Rterm | Parallel termination resistance value | 98 | 100 | 102 | Ω
Iin | Input leakage current | -10 | | 10 | µA
Cpad | Capacitance of pad to client ground (Note 1) | | | 5 | pF
Cdiff | Capacitance between the two signals of a differential pair (Note 1) | | | 1 | pF
Tskew-pair-INT | Skew caused by the differential receiver between positive and negative inputs of the same differential pair (intra-pair skew), Internal Mode | | | 250 | psec
Tskew-pair-EXT | Intra-pair skew, External Mode | | | 50 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | | | TB - 38.5 | psec
TB-TP4-RCVR-INT | Jitter, bit boundary to minimum input level (Internal Mode) | 0 | | See above | psec
TB-TP4-RCVR-EXT | Jitter, bit boundary to minimum input level (External Mode) | 0 | | See above | psec

Table VIId (Host Receiver)
Parameter | Description | Min | Typ | Max | Units
VIT+ | Receiver differential input high threshold voltage (non-offset) | 0 | | 50 | mV
VIT- | Receiver differential input low threshold voltage (non-offset) | -50 | | 0 | mV
VIT+ | Receiver differential input high threshold voltage (offset for hibernation wake-up) | 125 | | 175 | mV
VIT- | Receiver differential input low threshold voltage (offset for hibernation wake-up) | 75 | | 125 | mV
VInput-Range | Allowable receiver input voltage range with respect to host ground | 0 | | 1.65 | V
Iin | Input leakage current (excluding hibernate bias) | -10 | | 10 | µA
Cpad | Capacitance of pad to host ground | | | 5 | pF
Cdiff | Capacitance between the two signals of a differential pair | | | 1 | pF
Tskew-pair | Skew caused by the differential receiver between positive and negative inputs of the same differential pair (intra-pair skew) | | | 250 | psec
Tskew-pair-EXT | Intra-pair skew, External Mode | | | 50 | psec
TA | Jitter, bit boundary to center crossing | | | TB - 38.5 | psec
TB-TP0-RCVR-INT | Jitter, bit boundary to minimum output level (Internal Mode) | | | See above | psec
TB-TP0-RCVR-EXT | Jitter, bit boundary to minimum output level (External Mode) | | | See above | psec

In FIG. 42, a host controller 4202 and a client or display controller 4204 are shown transferring packets over the communication link 4206. The host controller employs a series of three drivers 4210, 4212, and 4214 to receive the host DATA and STB signals to be transferred, as well as to receive the client DATA signals to be transferred, while the client employs the three drivers 4230, 4232, and 4234.
The driver responsible for passage of the host DATA (4212) employs an enable signal input to allow activation of the communication link generally only when transfer from the host to the client is desired. Since the STB signal is formed as part of the transfer of data, no additional enable signal is employed for that driver. The inputs of the client DATA and STB receivers (4230, 4232) have termination impedances or resistors 4218 and 4220, respectively, placed across them. Driver 4234 in the client controller is used to prepare the data signals being transferred from the client to the host, while driver 4214 on the input side processes the data. The special receivers (drivers) 4216 and 4236 are coupled or connected to the DATA lines, and generate or use the 125 mV voltage offset previously discussed, as part of the hibernation control discussed elsewhere. The offsets cause the hibernation line receivers to interpret un-driven signal pairs as a logic-zero level. The above drivers and impedances can be formed as discrete components or as part of a circuit module, or an application specific integrated circuit (ASIC), which acts as a more cost effective encoder or decoder solution. It can be easily seen that power is transferred to the client device, or display, from the host device using the signals labeled HOST_Pwr and HOST_Gnd over a pair of conductors. The HOST_Gnd portion of the signal acts as the reference ground and the power supply return path or signal for the client device. The HOST_Pwr signal acts as the client device power supply, which is driven by the host device. In an exemplary configuration, for low power applications, the client device is allowed to draw up to 500 mA. The HOST_Pwr signal can be provided from portable power sources, such as, but not limited to, a lithium-ion type battery or battery pack residing at the host device, and may range from 3.2 to 4.3 volts with respect to HOST_Gnd. VII. Timing Characteristics A.
Overview The steps and signal levels employed to enter a hibernation state (no service requested, desired, or required), and to secure service for a client from the host, either by host or client initiation, are illustrated in FIGS. 43a, 43b, and 43c , respectively. In FIGS. 43a, 43b, and 43c , the first part of the signals being illustrated shows a Link Shutdown Packet being transferred from the host, and the data line is then driven to a logic zero state using the high-impedance bias circuit. No data is being transmitted by the client, or host, which has its driver disabled. A series of strobe pulses for the MDDI_Stb signal line can be seen at the bottom, since MDDI_Stb is active during the Link Shutdown Packet. Once this packet ends, the logic level changes to zero as the host drives the bias circuit and logic to zero. This represents the termination of the last signal transfer or service from the host, and could have occurred at any time in the past; it is included to show the prior cessation of service, and the state of the signals prior to service commencement. If desired, such a signal can be sent just to reset the communication link to the proper state without a 'known' prior communication having been undertaken by this host device. As shown in FIG. 43a , and discussed for the Link Shutdown Packet above, in the low-power hibernation state, the MDDI_Data0 driver is disabled into a high-impedance state starting after the 16th to 48th MDDI_Stb cycles or pulses after the last bit of the All Zeros field in the Link Shutdown Packet. For Type-2, Type-3, or Type-4 links, the MDDI_Data1 through MDDI_Data7 signals are also placed in a high-impedance state at the same time that the MDDI_Data0 driver is disabled.
As described in the definition of the All Zeros field, MDDI_Stb toggles for 64 cycles (or as desired for system design) following the MSB of the CRC field of the Link Shutdown Packet to allow processing by the client to be completed and facilitate an orderly shutdown in a client controller. One cycle is a low-to-high transition followed by a high-to-low transition, or a high-to-low transition followed by a low-to-high transition. After the All Zeros field is sent, the MDDI_Stb and MDDI_Data0 drivers in the host are disabled, and the host enters the low-power hibernation state. After some period of time, the host commences the link restart sequence as shown in FIGS. 43b and 43c , by enabling the MDDI_Data0 and MDDI_Stb lines or driver outputs, and begins to toggle MDDI_Stb, as part of either a host- or client-initiated wake-up request. As shown in FIG. 43b , after some time passes with the signal output from drivers for MDDI_Data0 and MDDI_Stb disabled, a host initiates service or wake-up from hibernation by enabling its MDDI_Stb driver for a period of time designated tstb-data-enbl, during which the line is driven to a logic zero level until it is completely enabled, and then enabling its MDDI_Data0 driver. The host holds MDDI_Stb at a logic-zero level after MDDI_Data0 reaches a high or logic-one level, which occurs over a period of time designated tclient-startup. At the end of the tclient-startup period, the host then toggles the MDDI_Stb signal or line. The host drives the MDDI_Data0 line high, a logic-one level, while the client does not drive MDDI_Data0, for a period designated trestart-high, and then drives the MDDI_Data0 line low, or to a logic-zero level, for a period designated trestart-low. After this, the first forward traffic begins with a Sub-Frame Header Packet, and the forward traffic packets are then transferred. The MDDI_Stb signal is active during the trestart-low period and the subsequent Sub-Frame Header Packet. As shown in FIG.
43c , after some time passes with the signal output from drivers for MDDI_Data0 and MDDI_Stb disabled, a client initiates a service request or wake-up from hibernation by enabling an offset in the MDDI_Stb receiver or output signal for a period of time designated tstb-data-enbl, as discussed above, before the host enables its MDDI_Stb driver. The client then enables its MDDI_Data0 driver for a period of time designated thost-detect, during which the line is driven to a logic zero level, before the host begins MDDI_Stb toggling. A certain amount of time passes or may be needed before the host detects the request during thost-detect, after which the host responds by holding MDDI_Stb at a logic-zero level for the period designated tstb-startup before the host begins toggling MDDI_Stb with a link startup sequence by driving MDDI_Data0 to a logic-one or high level during the trestart-high period. When the client recognizes the first pulse on MDDI_Stb, it disables the offset in its MDDI_Stb receiver. The client continues to drive MDDI_Data0 to a logic-one level for a period designated tclient-detect until it detects the host driving the line. At this point, the client de-asserts the request, and disables its MDDI_Data0 driver so that the output from the client goes to a logic-zero level again, and the host is driving MDDI_Data0. As before, the host continues to drive MDDI_Data0 to a logic-one level for the trestart-high period, and then drives the MDDI_Data0 line low for the trestart-low period, after which the first forward traffic begins with a Sub-Frame Header Packet.
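The client-initiated restart sequence above can be summarized as an ordered list of phases. The following is an illustrative sketch only; the `Phase` record, the phase names, and the line-state labels are assumptions used to condense the text, while the timing designators in the comments (thost-detect, tstb-startup, trestart-high, trestart-low) are the ones defined above.

```python
# Hedged summary of the client-initiated wake-up handshake; names are
# illustrative, only the timing designators come from the text.
from collections import namedtuple

Phase = namedtuple("Phase", ["name", "data0_driven_by", "stb_state"])

CLIENT_INITIATED_WAKEUP = [
    # Client enables its MDDI_Stb receiver offset, then drives MDDI_Data0
    # while waiting for the host to notice (thost-detect).
    Phase("client_request", "client", "idle"),
    # Host detects the request and holds MDDI_Stb at logic zero
    # (tstb-startup) before it begins toggling.
    Phase("host_stb_hold", "client", "held_low"),
    # Host drives MDDI_Data0 high (trestart-high); the client sees the first
    # strobe pulse, disables its offset, and stops driving the line.
    Phase("restart_high", "host", "toggling"),
    # Host drives MDDI_Data0 low (trestart-low) ...
    Phase("restart_low", "host", "toggling"),
    # ... and forward traffic begins with a Sub-Frame Header Packet.
    Phase("subframe_header", "host", "toggling"),
]
```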
The MDDI_Stb signal is active during the trestart-low period and the subsequent Sub-Frame Header Packet. Table VIII shows representative times or processing periods for the length of the various periods discussed above, and the relationship to exemplary minimum and maximum data rates, where: t_bit = 1 / Link_Data_Rate, and Link_Data_Rate is the bit rate of a single data pair.

Table VIII
Symbol | Description | Min | Typ | Max | Units
1/tBIT-min-perf | Link data rate for a minimum performance device | 0.001 | | 1.1 | Mbps
1/tBIT-max-perf | Maximum link data rate range for a device, external | 0.001 | | 400 | Mbps
1/tBIT-max-perf | Maximum link data rate range for a device, internal | 0.001 | | 550 | Mbps
 | Reverse Link data rate | 0.0005 | | 50 | Mbps
tBIT | Period of one forward link data bit, external mode | 2.5 | | 10^6 | nsec
tBIT | Period of one forward link data bit, internal mode | 1.8 | | 10^6 | nsec
trestart-high | Duration of host link restart high pulse | 140 | 150 | 160 | Stb clks
trestart-low | Duration of host link restart low pulse | 50 | 50 | 50 | Stb clks
tstb-data-enbl | MDDI_Stb completely enabled to MDDI_Data0 enabled in the link restart sequence | 0 | | | µsec
tclient-startup | Time for host to hold MDDI_Stb at logic-zero level after MDDI_Data0 reaches logic-high level | 200 | | | nsec
thost-detect | Time from MDDI_Data0 high to MDDI_Stb toggling | 0 | | 1000 | µsec
tclient-detect | Time for client to detect MDDI_Data0 at logic-high level | 60 | | 80 | Stb clks
tstb-startup | Time for host to hold MDDI_Stb at logic-zero level before host begins toggling MDDI_Stb | 200 | | | nsec

Those skilled in the art will readily understand that the functions of the individual elements illustrated in FIGS. 41 and 42 are well known, and the function of the elements in FIG. 42 is confirmed by the timing diagrams in FIGS. 43a, 43b, and 43c . Details about the series terminations and hibernation resistors that are shown in FIG. 42 were omitted from FIG. 41 because that information is unnecessary for a description of how to perform the Data-Strobe encoding and recover the clock from it. B.
Data-Strobe Timing Forward Link The switching characteristics for the transfer of data on the forward link from the host driver output are shown in Table IX-1, which presents, in tabular form, the minimum and maximum desired, versus typical, times for certain signal transitions to occur. For example, the typical length of time for a transition to occur from the start to the end of a data value (output of a '0' or '1'), a Data0 to Data0 transition, termed ttdd-(host-output), is ttbit, while the minimum time is about ttbit - 0.5 nsec, and the maximum is about ttbit + 0.5 nsec. The relative spacing between transitions on the Data0, other data lines (DataX), and the strobe lines (Stb) is illustrated in FIG. 44 , where the Data0 to Strobe, Strobe to Strobe, Strobe to Data0, Data0 to non-Data0, non-Data0 to non-Data0, non-Data0 to Strobe, and Strobe to non-Data0 transitions are shown, which are referred to as ttds-(host-output), ttss-(host-output), ttsd-(host-output), ttddx-(host-output), ttdxdx-(host-output), ttdxs-(host-output), and ttsdx-(host-output), respectively.

Table IX-1
Symbol | Description | Min | Typ | Max | Units
ttdd-(host-output) | Data0 to Data0 transition | ttbit - 0.5 | ttbit | ttbit + 0.5 | nsec
ttds-(host-output) | Data0 to Strobe transition | ttbit - 0.8 | ttbit | ttbit + 0.8 | nsec
ttss-(host-output) | Strobe to Strobe transition | ttbit - 0.5 | ttbit | ttbit + 0.5 | nsec
ttsd-(host-output) | Strobe to Data0 transition | ttbit - 0.8 | ttbit | ttbit + 0.8 | nsec
ttddx-(host-output) | Data0 to non-Data0 transition | | ttbit | | nsec
ttdxdx-(host-output) | non-Data0 to non-Data0 transition | ttbit - 0.5 | ttbit | ttbit + 0.5 | nsec
ttdxs-(host-output) | non-Data0 to Strobe transition | | ttbit | | nsec
ttsdx-(host-output) | Strobe to non-Data0 transition | | ttbit | | nsec

The typical MDDI timing requirements for the client receiver input for the same signals transferring data on the forward link are shown in Table IX-2.
Since the same signals are being discussed, but time delayed, no new figure is needed to illustrate the signal characteristics or the meaning of the respective labels, as would be understood by those skilled in the art.

Table IX-2
Symbol | Description | Min | Typ | Max | Units
ttdd-(client-input) | Data0 to Data0 transition | ttbit - 1.0 | ttbit | ttbit + 1.0 | nsec
ttds-(client-input) | Data0 to Strobe transition | ttbit - 1.5 | ttbit | ttbit + 1.5 | nsec
ttss-(client-input) | Strobe to Strobe transition | ttbit - 1.0 | ttbit | ttbit + 1.0 | nsec
ttsd-(client-input) | Strobe to Data0 transition | ttbit - 1.5 | ttbit | ttbit + 1.5 | nsec
ttddx-(client-input) | Data0 to non-Data0 transition | | ttbit | | nsec
ttdxdx-(client-input) | non-Data0 to non-Data0 transition | | ttbit | | nsec
ttdxs-(client-input) | non-Data0 to Strobe transition | | ttbit | | nsec
ttsdx-(client-input) | Strobe to non-Data0 transition | | ttbit | | nsec

FIGS. 45 and 46 illustrate the delay in response that can occur when the host disables or enables the host driver, respectively. In the case of a host forwarding certain packets, such as the Reverse Link Encapsulation Packet or the Round Trip Delay Measurement Packet, the host disables the line driver after the desired packets are forwarded, such as the Parameter CRC, Strobe Alignment, and All Zero packets illustrated in FIG. 45 as having been transferred. However, as shown in FIG. 45 , the state of the line does not necessarily switch from '0' to a desired higher value instantaneously, although this is potentially achievable with certain control or circuit elements present, but takes a period of time termed the Host Driver Disable Delay period to respond. While it could occur virtually instantly, such that this time period is 0 nanoseconds (nsec) in length, it could more readily extend over some longer period, with 10 nsec being a desired maximum period length, which occurs during the Guard Time 1 or Turn Around 1 packet periods. Looking at FIG.
46 , one sees the signal level change undergone when the host driver is enabled for transferring a packet such as the Reverse Link Encapsulation Packet or the Round Trip Delay Measurement Packet. Here, after the Guard Time 2 or Turn Around 2 packet periods, the host driver is enabled and begins to drive a level, here '0', which value is approached or reached over a period of time termed the Host Driver Enable Delay period, which occurs during the Driver Re-enable period, prior to the first packet being sent. A similar process occurs for the drivers and signal transfers for the client device, here a display. The general guidelines for the length of these periods, and their respective relationships, are shown in Table X, below.

Table X
Description | Min | Max | Units
Host Driver Disable Delay | 0 | 10 | nsec
Host Driver Enable Delay | 0 | 2.0 | nsec
Display Driver Disable Delay | 0 | 10 | nsec
Display Driver Enable Delay | 0 | 2.0 | nsec

C. Host And Client Output Enable And Disable Times The switching characteristics and relative timing relationships for Host and Client output enable and disable operations, relative to the Reverse Link Encapsulation Packet structure and period, are shown in FIG. 48 . The driver output functions or operations are labeled as: thost-enable for the Host output enable time, thost-disable for the Host output disable time, tclient-enable for the Client output enable time, and tclient-disable for the Client output disable time. Typical times for certain signal transitions are discussed below.
The minimum period for these operations would be zero nanoseconds, with typical or maximum values determined from the system design employing the interface, possibly on the order of 8 nanoseconds or more. The general guidelines for the length of these periods (host and client enable/disable times) and their respective relationships are shown in Table XI, below.

Table XI
Symbol | Description | Min | Max | Units
thost-enable | Host output enable time | 0 | 24·tBIT | nsec
thost-disable | Host output disable time, entire length of the Turn-Around 1 field | 0 | 24·tBIT | nsec
tclient-enable | Client output enable time, entire length of the Turn-Around 1 field | 0 | 24·tBIT | nsec
tclient-disable | Client output disable time, measured from the end of the last bit of the Turn-Around 2 field | 0 | 24·tBIT | nsec

VIII. Implementation of Link Control (Link Controller Operation) A. State Machine Packet Processor Packets being transferred over an MDDI link are dispatched very rapidly, typically at a rate on the order of 300 Mbps or more, such as 400 Mbps, although lower rates are certainly accommodated, as desired. This type of bus or transfer link speed is too great for currently commercially available (economical) general-purpose microprocessors or the like to control. Therefore, a practical implementation to accomplish this type of signal transfer is to utilize a programmable state machine to parse the input packet stream and produce packets that are transferred or redirected to the appropriate audio-visual subsystem for which they are intended. Such devices are well known and use circuits generally dedicated to a limited number of operations, functions, or states to achieve a desired high speed or very high speed operation. General purpose controllers, processors, or processing elements can be used to more appropriately act upon or manipulate some information, such as control or status packets, which have lower speed demands.
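The fast-path/slow-path split described above can be sketched as follows. This is an illustrative sketch only: the packet-type names, the sink callables, and the queue-based hand-off to the general-purpose processor are assumptions, not part of the interface definition.

```python
# Sketch of the state-machine dispatcher: audio-visual packets take the fast
# path straight to their dedicated subsystems, while lower-rate control and
# status packets are buffered for a general-purpose processor.
from queue import Queue

# Hypothetical packet-type names used only for illustration.
AV_PACKET_TYPES = {"video_stream", "audio_stream"}

def dispatch_packet(packet_type, payload, av_sinks, cpu_queue):
    """Route one parsed packet either to an A/V subsystem or to the CPU."""
    if packet_type in AV_PACKET_TYPES:
        # Fast path: hand the payload directly to the dedicated subsystem,
        # with no general-purpose processor involvement.
        av_sinks[packet_type](payload)
    else:
        # Slow path: buffer control/status packets so the general-purpose
        # processor can act on them at its own pace.
        cpu_queue.put((packet_type, payload))
```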
When those packets (control, status, or other pre-defined packets) are received, the state machine should pass them through a data buffer or similar processing element to the general-purpose processor so the packets can be acted upon to provide a desired result (effect), while the audio and visual packets are transferred to their appropriate destination for action. If, in the future, microprocessors or other general purpose controllers, processors, or processing elements are manufactured to achieve higher data rate processing capabilities, then the states or state machine discussed below might also be implemented using software control of such devices, typically as programs stored on a storage element or media. The general purpose processor function can be realized in some embodiments by taking advantage of the processing power, or excess cycles available from, microprocessors (CPUs) in computer applications, or controllers, processors, digital signal processors (DSPs), specialized circuits, or ASICs found in wireless devices, in much the same manner as some modems or graphics processors utilize the processing power of CPUs found in computers to perform some functions and reduce hardware complexity and costs. However, this cycle sharing or usage can negatively impact the processing speed, timing, or overall operation of such elements, so in many applications, dedicated circuits or elements are preferred for this general processing. In order for image data to be viewed on a display (micro-display), or to reliably receive all packets sent by the host device, the client signal processing is synchronized with the forward link channel timing. That is, signals arriving at the client and the client circuits need to be substantially time synchronized for proper signal processing to occur. A high-level diagram of the states achieved by this signal processing for one embodiment is presented in the illustration of FIG. 49 . In FIG.
49 , the possible forward link synchronization "states" for a state machine 4900 are shown, categorized as one ASYNC FRAMES STATE 4904, two ACQUIRING SYNC STATES 4902 and 4906, and three IN-SYNC STATES 4908, 4910, and 4912. As shown by starting step or state 4902, the display or client, such as a presentation device, starts in a pre-selected "no sync" state, and searches for a unique word in the first sub-frame header packet that is detected. It is to be noted that this no sync state represents the minimum communication setting or "fall-back" setting in which a Type 1 interface is selected. When the unique word is found during the search, the client saves the sub-frame length field. There is no checking of the CRC bits for processing on this first frame, or until synchronization is obtained. If this sub-frame length is zero, then sync state processing proceeds to a state 4904, labeled here as the "async frames" state, which indicates that synchronization has not yet been achieved. This step in the processing is labeled as having encountered cond 3, or condition 3, in FIG. 49 . Otherwise, if the frame length is greater than zero, then the sync state processing proceeds to a state 4906 where the interface state is set as "found one sync frame." This step in the processing is labeled as encountering cond 5, or condition 5, in FIG. 49 . In addition, if the state machine sees a frame header packet and a good CRC determination for a frame length greater than zero, processing proceeds to the "found one sync frame" state. This is labeled as meeting cond 6, or condition 6, in FIG. 49 . In each situation in which the system is in a state other than "no sync," if a packet with a good CRC result is detected, then the interface state is changed to the "in-sync" state 4908. This step in the processing is labeled as having encountered cond 1, or condition 1, in FIG. 49 .
On the other hand, if the CRC in any packet is not correct, then the sync state processing proceeds or returns to the "NO SYNC FRAME" interface state 4902. This portion of the processing is labeled as encountering cond 2, or condition 2, in the state diagram of FIG. 49 . B. Acquisition Time for Sync The interface can be configured to accommodate a certain number of "sync errors" prior to deciding that synchronization is lost and returning to the "NO SYNC FRAME" state. In FIG. 49 , once the state machine has reached the "IN-SYNC STATE" and no errors are found, it is continuously encountering a cond 1 result, and remains in the "IN-SYNC" state. However, once one cond 2 result is detected, processing changes the state to a "one-sync-error" state 4910. At this point, if processing results in detecting another cond 1 result, then the state machine returns to the "in-sync" state; otherwise, it encounters another cond 2 result and moves to a "TWO-SYNC-ERRORS" state 4912. Again, if a cond 1 occurs, processing returns the state machine to the "IN-SYNC" state. Otherwise, another cond 2 is encountered and the state machine returns to the "no-sync" state. It is also understandable that should the interface encounter a "link shutdown packet," then this will cause the link to terminate data transfers and return to the "no-sync frame" state, as there is nothing to synchronize with, which is referred to as meeting cond 4, or condition 4, in the state diagram of FIG. 49 . It is understood that it is possible for there to be a repeating "false copy" of the unique word which may appear at some fixed location within the sub-frame.
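The state transitions described above can be sketched as a small state machine. The state and condition numbers follow the text and FIG. 49; the class, its method, and the tie-breaking for unlisted combinations are assumptions made for illustration, not a definitive implementation.

```python
# A minimal sketch of the FIG. 49 synchronization state machine.
NO_SYNC = "no_sync"                      # state 4902
ASYNC_FRAMES = "async_frames"            # state 4904
ONE_SYNC_FRAME = "found_one_sync_frame"  # state 4906
IN_SYNC = "in_sync"                      # state 4908
ONE_SYNC_ERROR = "one_sync_error"        # state 4910
TWO_SYNC_ERRORS = "two_sync_errors"      # state 4912

class SyncStateMachine:
    def __init__(self):
        self.state = NO_SYNC

    def on_condition(self, cond):
        if cond == 4:                    # Link Shutdown Packet: nothing to sync with
            self.state = NO_SYNC
        elif self.state == NO_SYNC:
            if cond == 3:                # unique word found, sub-frame length == 0
                self.state = ASYNC_FRAMES
            elif cond in (5, 6):         # length > 0 (cond 6 adds a good CRC)
                self.state = ONE_SYNC_FRAME
        elif cond == 1:                  # any packet with a good CRC result
            self.state = IN_SYNC
        elif cond == 2:                  # bad CRC: step toward loss of sync
            self.state = {IN_SYNC: ONE_SYNC_ERROR,
                          ONE_SYNC_ERROR: TWO_SYNC_ERRORS}.get(self.state, NO_SYNC)
```

A third consecutive cond 2 from the "in-sync" state thus walks through states 4910 and 4912 back to "no sync," matching the error-tolerance behavior described above.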
In that situation, it is highly unlikely that the state machine will synchronize to the sub-frame, because the CRC on the sub-frame Header Packet must also be valid when processed in order for the MDDI processing to proceed to the "IN SYNC" state. The sub-frame length in the sub-frame Header Packet may be set to zero to indicate that the host will transmit only one sub-frame before the link is shut down, and the MDDI link is placed in or configured into an idle hibernation state. In this case, the client must immediately receive packets over the forward link after detecting the sub-frame Header Packet, because only a single sub-frame is sent before the link transitions to the idle state. In normal or typical operations, the sub-frame length is non-zero and the client only processes forward link packets while the interface is in those states collectively shown as "IN-SYNC" states in FIG. 49 . An external mode client device may be attached to the host while the host is already transmitting a forward link data sequence. In this situation, the client must synchronize to the host. The time required for a client to synchronize to the forward link signal is variable, depending on the sub-frame size and the forward link data rate. The likelihood of detecting a "false copy" of the unique word as part of the random, or more random, data in the forward link is greater when the sub-frame size is larger.
At the same time, the ability to recover from a false detection is lower, and the time taken to do so is longer, when the forward link data rate is slower. For one or more embodiments, it is recommended or understood that an MDDI host should perform certain additional steps to ensure that the MDDI reverse link is stable before it stops forward link transmission to go to a low power mode or to shut down the link completely. One problem that can occur is that if a host uses an incorrect measurement of the round-trip delay value, this can cause all subsequently received reverse data transmissions from the client to fail, even though the forward link appears to be fine. This could happen if the host tries to send a Round Trip Delay Measurement Packet when the client is not in sync with the forward link, or due to an extreme ambient temperature change that causes a corresponding large change in the propagation delay of the differential drivers and receivers, which affects the round trip delay. An intermittent cable or connector contact failure could also cause the client to temporarily lose synchronization and then regain sync, during which time it may miss receiving a Round Trip Delay Measurement Packet; subsequent reverse link packets would then not be able to be decoded properly by the host. Another type of problem that can occur is that the client temporarily loses sync and the host sends a Link Shutdown Packet before the client is able to regain sync. The host will then be in hibernation, while the client is unable to enter the hibernation state because it did not receive the Link Shutdown Packet and does not have a clock, since the link is in hibernation. One technique or embodiment useful for overcoming such problems is to have the host ensure that the client is in sync with the forward link before putting the link into the hibernation state.
If the MDDI host is unable to do this or does not have such an opportunity, such as when it loses power or the link is abruptly broken or fails due to a cable, conductor, or connector separation, break, or disconnection occurring during operation, then the host should first try to ensure that the client is in sync before starting a round-trip delay measurement process or sending a Reverse Link Encapsulation Packet. A host can observe the CRC Error Count field in a Client Request and Status Packet sent by the client to determine the forward link integrity. This packet is requested by the host from the client. However, in the event of a major link failure or disruption, this request will most likely go unanswered, since a client will not be able to properly decode the packet, or perhaps even receive it at all. The request for the CRC Error Count using the Client Request and Status Packet sent in a Reverse Link Encapsulation Packet acts as a first integrity check, a sort of first line of defense. In addition, a host can send a Round Trip Delay Measurement Packet to confirm whether or not the assumption that the client has fallen out of sync is a valid one. If the client does not respond to a Round Trip Delay Measurement Packet, the host will conclude that the client is out of sync and can then start the process of getting it back in sync. Once the host concludes that the client has more than likely lost synchronization with the forward link, it waits until the next sub-frame header before attempting to send any packets other than filler packets. This is done in order to allow a client enough time to detect or look for the unique word contained in the sub-frame header packet. Following this, the host may assume that the client would have reset itself, since it would not have found the unique word at the correct location. At this point, the host may follow the sub-frame header packet with a Round Trip Delay Measurement Packet.
If the client still does not respond correctly to the Round Trip Delay Measurement Packet, the host may repeat the resynchronization process. A correct response is one in which the client sends the specified sequence back to the host in the Measurement Period of the Round Trip Delay Measurement Packet. If this sequence is not received, then attempts to receive reverse data in a Reverse Link Encapsulation Packet will fail. Continued failure of this nature may indicate some other system error, which will have to be addressed in some other manner, and is not part of the link synchronization at this point. However, if after a successful Round Trip Delay Measurement Packet the host still sees corrupted data or no response in the Reverse Link Encapsulation Packets, it should confirm that the reverse data sampling is correct by re-sending a Round Trip Delay Measurement Packet. If this is not successful after a number of attempts, it is recommended for one embodiment that the host reduce the reverse data rate by increasing the reverse rate divisor value. The host should perform the Link Failure Detection, and possibly the Link Resynchronization steps described above, before placing the MDDI link into the hibernation state. This will generally ensure that the Round Trip Delay Measurement Packet performed when the link is restarted later on is successful. If the host has no reason to suspect a link failure, and a correct response to a Reverse Link Encapsulation Packet and zero forward link CRC errors are being reported by the client, the host may assume that everything is operating or functioning appropriately (no link failure, for example) and proceed with the power down/hibernation process. Another manner in which a host can test for synchronization is for the host to send the Round Trip Delay Measurement Packet and confirm the proper response from the client.
If the proper response is received by the host, it can reasonably be assumed that the client is successfully interpreting forward link packets. C. Initialization As stated earlier, at the time of "start-up," the host configures the forward link to operate at or below a minimum required, or desired, data rate of 1 Mbps, and configures the sub-frame length and media-frame rate appropriately for a given application. That is, both the forward and reverse links begin operation using the Type 1 interface. These parameters are generally only going to be used temporarily while the host determines the capability or desired configuration for the client display (or other type of client device). The host sends or transfers a sub-frame Header Packet over the forward link followed by a Reverse Link Encapsulation Packet which has bit '0' of the Request Flags set to a value of one (1), in order to request that the display or client responds with a Client Capability Packet. Once the display acquires synchronization on (or with) the forward link, it sends a Client Capability Packet and a Client Request and Status Packet over the reverse link or channel.The host examines the contents of the Client Capability Packet in order to determine how to reconfigure the link for optimal or a desired level of performance. The host examines the Protocol Version and Minimum Protocol Version fields to confirm that the host and client use versions of the protocol that are compatible with each other. The protocol versions generally remain as the first two parameters of the client capability Packet so that compatibility can be determined even when other elements of the protocol might not be compatible or completely understood as being compatible.In internal mode the host can know the parameters of the client in advance without having to receive a Client Capability Packet. The link may start up at any data rate at which the host and client can both operate. 
In many embodiments, a system designer will most likely choose to start the link at the maximum achievable data rate to hasten data transfer; however, this is not required and may not be used in many situations. For internal mode operation, the frequency of the strobe pulses used during the link restart from hibernation sequence will usually be consistent with this desired rate. D. CRC Processing For all packet types, the packet processor state machine ensures that the CRC check is handled appropriately or properly. It also increments a CRC error counter when a CRC comparison results in one or more errors being detected, and it resets the CRC counter at the beginning of each sub-frame being processed. E. Alternative Loss Of Synchronization Check While the above series of steps or states works to produce higher data rates or throughput speed, Applicants have discovered that an alternative arrangement, or a change in the conditions the client uses to declare that there is a loss of synchronization with the host, can be used effectively to achieve even higher data rates or throughput. The new inventive embodiment has the same basic structure, but with the conditions for changing states changed. Additionally, a new counter is implemented to aid in making checks for sub-frame synchronization. These steps and conditions are presented relative to FIG. 63 , which illustrates a series of states and conditions useful in establishing the operations of the method or state machine. Only the "ACQUIRING-SYNC STATES" and "IN-SYNC STATES" portions are shown for clarity. In addition, since the resulting states are substantially the same, as is the state machine itself, they use the same numbering. However, the conditions for changing states (and the state machine operation) vary somewhat, so all are renumbered for clarity between the two figures (1, 2, 3, 4, 5, and 6, versus 61, 62, 63, 64, and 65), as a convenience in identifying differences.
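The CRC processing rule described in section D above (increment an error counter on any CRC mismatch, reset the counter at each sub-frame boundary) can be sketched as follows. The class and method names are assumptions for illustration; only the increment/reset behavior comes from the text.

```python
# Sketch of the packet processor's CRC bookkeeping: the counter accumulates
# CRC mismatches within one sub-frame and is cleared when the next sub-frame
# begins.
class CrcErrorCounter:
    def __init__(self):
        self.count = 0

    def start_subframe(self):
        # Reset at the beginning of each sub-frame being processed.
        self.count = 0

    def check(self, computed_crc, received_crc):
        """Compare CRCs for one packet; count a mismatch as an error."""
        ok = computed_crc == received_crc
        if not ok:
            self.count += 1
        return ok
```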
Since the ASYNC FRAME state is not considered in this discussion, one state (4904) and condition (6) are no longer used in the figure. In FIG. 63, the system or client (for display or presentation) starts with state machine 5000 in the pre-selected "no sync" state 4902, as in FIG. 49. The first condition for changing states from the no-sync state 4902 is condition 64, which is the discovery of the sync pattern. Assuming that the CRC of the sub-frame header also passes on this packet (meets condition 61), the state of the packet processor state machine can be changed to the in-sync state 4908. A sync error, condition 62, will cause the state machine to shift to state 4910, and a second occurrence to state 4912. However, it has been discovered that any CRC failure of an MDDI packet will cause the state machine to move out of in-sync state 4908 to the one-sync-error state 4910. Another CRC failure of any MDDI packet will cause a move to the two-sync-failure state 4912. A packet decoded with a correct CRC value will cause the state machine to return to the in-sync state 4908. What has been changed is to utilize the CRC value or determination for 'every' packet. That is, the state machine looks at the CRC value of every packet to determine a loss of synchronization, instead of just observing sub-frame header packets. In this configuration or process, a loss of synchronization is not determined using the unique word and just sub-frame header CRC values. This new interface implementation allows the MDDI link to recognize synchronization failures much more quickly, and, therefore, to recover from them more quickly as well. To make this system more robust, the client should also add or utilize a sub-frame counter. The client then checks for the presence of the unique word at the time it is expected to arrive or occur in a signal.
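The per-packet CRC transitions described so far can be sketched as a small transition function. State numbers follow FIG. 63; the transition from the two-error state back to no-sync on a further CRC failure, and the simplification that a good CRC in the no-sync state implies the sync pattern was also found (conditions 64 and 61 together), are my assumptions for illustration.

```python
# Hedged sketch of the per-packet CRC synchronization states of FIG. 63.
# Assumptions (not stated verbatim in the text): a third consecutive CRC
# failure returns to no-sync, and a good CRC while in no-sync stands in
# for conditions 64 + 61 together.

NO_SYNC, IN_SYNC, ONE_ERR, TWO_ERR = 4902, 4908, 4910, 4912

def next_state(state: int, crc_ok: bool, unique_word_ok: bool = True) -> int:
    if not unique_word_ok:          # condition 65: unique word missing/mistimed
        return NO_SYNC              # immediate loss of synchronization
    if state == NO_SYNC:
        return IN_SYNC if crc_ok else NO_SYNC
    if crc_ok:                      # condition 61: any correctly decoded packet
        return IN_SYNC
    # condition 62: CRC failure on any MDDI packet
    return {IN_SYNC: ONE_ERR, ONE_ERR: TWO_ERR, TWO_ERR: NO_SYNC}[state]
```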
If the unique word does not occur at the correct time, the client can recognize that a synchronization failure has occurred much more quickly than if it had to wait several (here three) packet times or periods that were greater than a sub-frame length. If the test for the unique word indicates it is not present, in other words that the timing is incorrect, then the client can immediately declare a link loss of synchronization and move to the no-sync state. The process of checking for the proper unique word presence adds a condition 65 (cond 65) to the state machine, indicating that the unique word is incorrect. If a sub-frame packet is expected to be received on the client and doesn't match up, the client can immediately go to the no-sync state 4902, saving the additional time normally spent waiting for multiple sync errors (condition 62) while traversing through states 4910 and 4912. This change uses an additional counter or counting function in the client core to count sub-frame length. In one embodiment, a count-down function is used, and the transfer of any packet that is currently being processed is interrupted to check for the sub-frame unique word if the counter has expired. Alternatively, the counter can count up, with the count being compared to a desired maximum or particular desired value, at which point the current packet is checked. This process protects the client from decoding packets that are incorrectly received on the client with extraordinarily long packet lengths. If the sub-frame length counter needed to interrupt some other packet that was being decoded, a loss of synchronization can be determined, since no packet should cross a sub-frame boundary. IX. Packet Processing For each type of packet discussed above that the state machine receives, it undertakes a particular processing step or series of steps to implement operation of the interface.
Forward link packets are generally processed according to the exemplary processing listed in Table XII below.

Table XII
Sub-Frame Header (SH): Confirms good packet, captures the sub-frame length field, and sends packet parameters to a general-purpose processor.
Filler (F): Ignores data.
Video Stream (VS): Interprets the Video Data Format Descriptor and other parameters, unpacks packed pixel data when necessary, translates pixels through the color map if necessary, and writes pixel data to appropriate locations in the bitmap.
Audio Stream (AS): Sends the audio sample rate setting to the audio sample clock generator, separates audio samples of the specified size, unpacks audio sample data when necessary, and routes audio samples to the appropriate audio sample FIFO.
Color Map (CM): Reads the color map size and offset parameters, and writes the color map data to a color map memory or storage location.
Reverse Link Encapsulation (REL): Facilitates sending packets in the reverse direction at the appropriate time. Reverse link flags are examined, and Client Capability Packets are sent as necessary. Client Request and Status Packets are also sent as appropriate.
Client Capability (CC): Sends this type of packet when requested by a host using the Reverse Link Flags field of the Reverse Link Encapsulation Packet.
Keyboard (K): Passes these packets to and from a general-purpose processor that communicates with a keyboard-type device, if one is present and use is desired.
Pointing Device (PD): Passes these packets to and from a general-purpose processor that communicates with a pointing-type device, if one is present and use is desired.
Link Shutdown (LS): Records the fact that the link is shut down and informs a general-purpose processor.
Client Service Request and Status (CSRS): Sends this packet as the first packet in the Reverse Link Encapsulation Packet.
Bit Block Transfer (BPT): Interprets packet parameters, such as the Video Data Format Descriptor, determines which pixels to move first, and moves pixels in the bitmap as required.
Bitmap Area Fill (BAF): Interprets packet parameters, translates pixels through the color map if necessary, and writes pixel data to appropriate locations in the bitmap.
Bitmap Pattern Fill (BPF): Interprets packet parameters, unpacks packed pixel data if necessary, translates pixels through the color map if necessary, and writes pixel data to appropriate locations in the bitmap.
Communication Link Channel (CLC): Sends this data directly to a general-purpose processor.
Client Service Request (CSR) during hibernation: A general-purpose processor controls the low-level functions of sending the request and detects contention with the link restarting on its own.
Interface Type Handoff Request (ITHR) and Interface Type Acknowledge (ITA): May pass these packets to and from the general-purpose processor. The logic to receive this type of packet and formulate a response with an acknowledgment is substantially minimal. Therefore, this operation could also be implemented within the packet processor state machine.
The resulting handoff occurs as a low-level physical layer action and is not likely to affect the functionality or functioning of the general-purpose processor.
Perform Type Handoff (PTH): May act on such packets either directly or by transferring them to the general-purpose processor, also commanding hardware to undergo a mode change.
X. Reducing the Reverse Link Data Rate It has been observed by the inventors that certain parameters used for the host link controller can be adjusted or configured in a certain manner in order to achieve a maximum or more optimized reverse link data rate, which is very desirable. For example, during the time used to transfer the Reverse Data Packets field of the Reverse Link Encapsulation Packet, the MDDI_Stb signal pair toggles to create a periodic data clock at half the forward link data rate. This occurs because the host link controller generates the MDDI_Stb signal that corresponds to the MDDI_Data0 signal as if it were sending all zeroes. The MDDI_Stb signal is transferred from the host to a client, where it is used to generate a clock signal for transferring reverse link data from the client, with which reverse data is sent back to the host. An illustration of typical amounts of delay encountered for the signal transfer and processing on the forward and reverse paths in a system employing the MDDI is shown in FIG. 50. In FIG. 50, a series of delay values of 1.5 nsec, 8.0 nsec, 2.5 nsec, 2.0 nsec, 1.0 nsec, 1.5 nsec, 8.0 nsec, and 2.5 nsec are shown near the processing portions for the Stb+/- generation, cable transfer-to-client, client receiver, clock generation, signal clocking, Data0+/- generation, cable transfer-to-host, and host receiver stages, respectively. Depending on the forward link data rate and signal processing delays encountered, it may require more time than one cycle on the MDDI_Stb signal for this "round trip" effect or set of events to be completed, which results in the consumption of undesirable amounts of time or cycles. To circumvent this problem, the Reverse Rate Divisor makes it possible for one bit time on the reverse link to span multiple cycles of the MDDI_Stb signal. This means that the reverse link data rate is less than the forward link rate. It should be noted that the actual length of signal delays through the interface may differ depending on each specific host-client system or hardware being used. Although not required, each system can generally be made to perform better by using the Round Trip Delay Measurement Packet to measure the actual delay in the system so that the Reverse Rate Divisor can be set to an optimum value. The host may support either basic data sampling, which is simpler but operates at a slower speed, or advanced data sampling, which is more complex but supports higher reverse data rates. The client capability to support both methods is considered the same. A round-trip delay is measured by having the host send a Round Trip Delay Measurement Packet to the client. The client responds to this packet by sending a sequence of ones back to the host inside of, or during, a pre-selected measurement window in that packet called the Measurement Period field. The detailed timing of this measurement was described previously.
The round-trip delay is used to determine the rate at which the reverse link data can be safely sampled.The round-trip delay measurement consists of determining, detecting, or counting the number of forward link data clock intervals occurring between the beginning of the Measurement Period field and the beginning of the time period when the 0xff, 0xff, 0x00 response sequence is received back at the host from the client. Note that it is possible that the response from the client could be received a small fraction of a forward link clock period before the measurement count was about to increment. If this unmodified value is used to calculate the Reverse Rate Divisor it could cause bit errors on the reverse link due to unreliable data sampling. An example of this situation is illustrated in FIG. 51 , where signals representing MDDI_Data at host, MDDI_Stb at host, forward link data clock inside the host, and a Delay Count are illustrated in graphical form. In FIG. 51 , the response sequence was received from the client a fraction of a forward link clock period before the Delay Count was about to increment from 6 to 7. If the delay is assumed to be 6, then the host will sample the reverse data just after a bit transition or possibly in the middle of a bit transition. This could result in erroneous sampling at the host. For this reason, the measured delay should typically be incremented by one before it is used to calculate the Reverse Rate Divisor.The Reverse Rate Divisor is the number of MDDI_Stb cycles the host should wait before sampling the reverse link data. Since MDDI_Stb is cycled at a rate that is one half of the forward link rate, the corrected round-trip delay measurement needs to be divided by 2 and then rounded up to the next integer. 
Expressed as a formula, this relationship is:

reverse_rate_divisor = RoundUpToNextInteger((round_trip_delay + 1) / 2)

For the example given, this becomes:

reverse_rate_divisor = RoundUpToNextInteger((6 + 1) / 2) = 4

If the round trip delay measurement used in this example were 7 as opposed to 6, then the Reverse Rate Divisor would also be equal to 4. The reverse link data is sampled by the host on the rising edge of the Reverse Link Clock. There is a counter or similar known circuit or device present in both the host and client (display) to generate the Reverse Link Clock. The counters are initialized so that the first rising edge of the Reverse Link Clock occurs at the beginning of the first bit in the Reverse Link Packets field of the Reverse Link Encapsulation Packet. This is illustrated, for the example given below, in FIG. 52A. The counters increment at each rising edge of the MDDI_Stb signal, and the number of counts occurring until they wrap around is set by the Reverse Rate Divisor parameter in the Reverse Link Encapsulation Packet. Since the MDDI_Stb signal toggles at one half of the forward link rate, the reverse link rate is one half of the forward link rate divided by the Reverse Rate Divisor. For example, if the forward link rate is 200 Mbps and the Reverse Rate Divisor is 4, then the reverse link data rate is expressed as:

(1/2) · (200 Mbps / 4) = 25 Mbps

An example showing the timing of the MDDI_Data0 and MDDI_Stb signal lines in a Reverse Link Encapsulation Packet is shown in FIG. 52, where the packet parameters used for illustration have the values:

Packet Length = 1024 (0x0400)
Packet Type = 65 (0x41)
Reverse Link Flags = 0
Parameter CRC = 0xdb43
Turn Around 1 Length = 1
Turn Around 2 Length = 1
Reverse Rate Divisor = 2
All Zero is 0x00

Packet data between the Packet Length and Parameter CRC fields is:

0x00, 0x04, 0x41, 0x00, 0x02, 0x01, 0x01, 0x43, 0xdb, 0x00, ...

The first reverse link packet returned from the client is the Client Request and Status Packet, having a Packet Length of 7 and a Packet Type of 70. This packet begins with the byte values 0x07, 0x00, 0x46, ... and so forth. However, only the first byte (0x07) is visible in FIG. 52. This first reverse link packet is time-shifted by nearly one reverse link clock period in the figure to illustrate an actual reverse link delay. An ideal waveform with zero host-to-client round-trip delay is shown as a dotted-line trace. The MS byte of the Parameter CRC field is transferred, preceded by the Packet Type, then the All Zero field. The strobe from the host switches from one to zero and back to one as the data from the host changes level, forming wider pulses. As the data goes to zero, the strobe switches at the higher rate; only the change in data on the data line causes a change near the end of the Alignment field. The strobe switches at the higher rate for the remainder of the figure due to the fixed 0 or 1 levels of the data signal for extended periods of time, and the transitions falling on the pulse pattern (edge). The reverse link clock for the host is at zero until the end of the Turn Around 1 period, when the clock is started to accommodate the reverse link packets. The arrows in the lower portion of the figure indicate when the data is sampled, as would be apparent from the remainder of the disclosure. The first byte of the packet field being transferred (here 11000000) is shown commencing after Turn Around 1, and the line level has stabilized from the host driver being disabled.
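The divisor and reverse-rate arithmetic described above can be checked with a short sketch. The function names are mine; the formulas and the 200 Mbps / divisor-4 example come directly from the text.

```python
import math

# Sketch of the Reverse Rate Divisor arithmetic from the text: the measured
# round-trip delay is incremented by one, halved (MDDI_Stb toggles at half
# the forward link rate), and rounded up to the next integer.

def reverse_rate_divisor(round_trip_delay: int) -> int:
    return math.ceil((round_trip_delay + 1) / 2)

def reverse_link_rate_mbps(forward_rate_mbps: float, divisor: int) -> float:
    # Reverse rate = one half of the forward rate, divided by the divisor.
    return forward_rate_mbps / 2 / divisor
```

As in the worked example, a measured delay of 6 (or 7) yields a divisor of 4, and a 200 Mbps forward link then gives a 25 Mbps reverse link.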
The delay in the passage of the first bit, as also seen for bit three, can be seen in the dotted lines for the Data signal. In FIG. 53, one can observe typical values of the Reverse Rate Divisor based on the forward link data rate. The actual Reverse Rate Divisor is determined as a result of a round-trip link measurement to guarantee proper reverse link operation. A first region 5302 corresponds to an area of safe operation, a second region 5304 corresponds to an area of marginal performance, while a third region 5306 indicates settings that are unlikely to function properly. The round-trip delay measurement and Reverse Rate Divisor setting are the same while operating with any of the Interface Type settings on either the forward or reverse link, because they are expressed and operated on in terms of units of actual clock periods rather than numbers of bits transmitted or received. Typically, the largest possible Reverse Rate Divisor is half the number of bits that can be sent in the measurement window of the Round Trip Delay Measurement Packet using a Type 1 interface, or for this example:

(512 bytes · 8 bits/byte) / 2 = 2048

An advanced reverse data sampling method can also be employed as an alternative that allows the reverse bit time to be smaller than the round-trip delay. For this technique a host not only measures the round-trip delay, but can also determine the phase of the response from the client with respect to an 'ideal' bit boundary of a client and link with zero delay. By knowing the phase of the client device response, a host can determine a relatively safe time to sample the reverse data bits from the client. The round-trip delay measurement indicates to a host the location of the first bit of reverse data with respect to the beginning of the Reverse Data Packets field. One embodiment of an example of advanced reverse data sampling is illustrated in graphical form in FIG. 52B.
An ideal reverse data signal with zero round-trip delay is shown as a dotted-line waveform. The actual round-trip delay, between 3.5 and 4 MDDI_Stb cycles, can be observed as the difference in delay between the solid waveform and the ideal. This is the same delay that would be measured using the Round Trip Delay Measurement Packet, and would be a measured round-trip delay value equal to 7 forward-link bit times. In this embodiment, reverse data bits are 2 MDDI_Stb pulses long, which is 4 forward-link bit times, which corresponds to a reverse rate divisor equal to 2. For advanced reverse data sampling it is convenient to use a pre-selected reverse rate divisor of 2 instead of computing it as described elsewhere. This appears to be a substantially optimum choice for advanced reverse data sampling because the ideal sampling point can easily be determined using the conventional measurements described above. The ideal sampling point for reverse data can be easily computed by taking the remainder of the total round-trip delay divided by the number of forward link clocks per reverse bit, that is, the round-trip delay modulo the forward link clocks per reverse bit. Then subtract either 1 or 2 to get to a safe point away from the data transition. In this example, 7 mod 4 = 3, then 3 - 1 = 2, or 3 - 2 = 1. The safe sampling point is either 1 or 2 forward link bit times from the edge of the "ideal" bit boundary for zero round-trip delay. The figure shows the sampling point at 2 forward link bit times from the ideal bit boundary, as indicated by the series of vertical arrows at the bottom of the timing diagram. The first sampling point is the first ideal bit boundary after the measured round-trip delay, plus the offset for safe sampling.
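The modulo arithmetic above can be sketched as follows. The function name is mine, and the sketch assumes a nonzero phase remainder, as in the worked example; the offsets of 1 and 2 bit times are taken directly from the text.

```python
# Sketch of the advanced-sampling arithmetic described above. Assumes a
# nonzero phase remainder, as in the worked example (7 mod 4 = 3).

def safe_sampling_points(round_trip_delay: int, fwd_clocks_per_rev_bit: int):
    phase = round_trip_delay % fwd_clocks_per_rev_bit   # e.g. 7 mod 4 = 3
    offsets = [phase - 1, phase - 2]                    # back off 1 or 2 from the edge
    # Next ideal bit boundary after the measured round-trip delay:
    boundary = round_trip_delay + (fwd_clocks_per_rev_bit - phase)
    first_samples = [boundary + off for off in offsets]
    return offsets, first_samples
```

For the example in the text (delay 7, 4 forward-link clocks per reverse bit), the safe offsets are 2 or 1 and the first bit is sampled at 10 or 9 forward-link bit times.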
In this example, the round trip delay measurement is 7, so the next ideal bit boundary is at the 8th bit time; adding either 1 or 2 for the safe sampling point, the first bit is sampled at either 9 or 10 forward link bit times after the beginning of the Reverse Data Packets field. XI. Turn-Around and Guard Times The Turn-Around 1 field in the Reverse Link Encapsulation Packet allows time for the host drivers to disable and the client drivers to enable simultaneously. The Guard Time 1 field in the Round Trip Delay Measurement Packet allows overlap of the host and client, so the client drivers can enable before the host interface drivers are disabled. The Turn-Around 2 field in the Reverse Link Encapsulation Packet allows data in the previous field from the client to be fully transmitted before the host drivers are enabled. The Guard Time 2 field provides a time value or period which allows the client and host drivers to drive simultaneously at a logic-zero level. The Guard Time 1 and Guard Time 2 fields are generally filled with pre-set or pre-selected values for lengths that are not meant to be adjusted. Depending on the interface hardware being used, these values may be developed using empirical data and adjusted in some instances to improve operation. Turn-Around 1. Several factors contribute to a determination of the length of Turn-Around 1: the forward link data rate, the maximum disable time of the MDDI_Data drivers in the host, and the enable time of the client driver, which is generally the same as the host disable time.
The length of the Turn-Around 1 field is selected to be 24·tBIT (Table XI). The length in the number of forward link bytes of the Turn-Around 1 field is determined using the Interface Type Factor, and is computed using the relationship:

Length(TurnAround1) = (24 / (8 bits/byte)) · InterfaceTypeFactor(FWD) = 3 · InterfaceTypeFactor(FWD)

where the Interface Type Factor is 1 for Type 1, 2 for Type 2, 4 for Type 3, and 8 for Type 4. Turn-Around 2. The factors that determine the length of time generally used for Turn-Around 2 are the round-trip delay of the communication link, the maximum disable time of the MDDI_Data drivers in the client, and the enable time of the host driver, which is specified to be the same as the client driver disable time. The maximum host driver enable time and client driver disable time are specified in Table XI. The round-trip delay is measured in units of tBIT. The minimum length, specified in the number of forward link bytes, of the Turn-Around 2 field is computed according to the relationship:

Length(TurnAround2) ≥ RoundUpToNextInteger(((RoundTripDelay + 24) / (8 bits/byte)) · InterfaceTypeFactor(FWD))

For example, a Type 3 forward link with a round-trip delay of 10 forward link clocks typically uses a Turn-Around 2 delay on the order of:

Length(TurnAround2) ≥ RoundUpToNextInteger(((11 + 24) / 8) · 4) = 18 bytes

XII. Alternative Reverse Link Timing While the use of the timing and guard bands discussed above works to achieve a high data transfer rate interface, the inventors have discovered a technique to allow for reverse bit lengths that are shorter than the round trip time, by changing the reverse timing discovery. As presented above, the previous approach to the timing of the reverse link is configured such that the number of clock cycles is counted from the last bit of Guard Time 1 of a reverse timing packet until the first bit is sampled on the rising edge of an IO clock.
That is the clock signal(s) used to time the inputs and outputs for the MDDI. The calculation for the reverse rate divisor is then given by:

reverse_rate_divisor = RoundUpToNextInteger((round_trip_delay + 1) / 2)

This provides a bit width equal to the round trip delay, which results in a very reliable reverse link. However, the reverse link has been shown to be capable of running faster, or at a higher data transfer rate, which the inventors want to take advantage of. A new inventive technique allows utilizing additional capabilities of the interface to reach higher speeds. This is accomplished by having the host count the number of clock cycles until a one is sampled, but with the host sampling the data line on both the rising and falling edges during the reverse timing packet. This allows the host to pick the most useful or even optimal sampling point within the reverse bit to ensure that the bit is stable. That is, to find the most useful or optimal rising edge to sample data on for reverse traffic reverse encapsulation packets. The optimal sampling point depends on both the reverse link divisor and whether the first one was detected on a rising edge or a falling edge. The new timing method allows the host to just look for the first edge of the 0xFF 0xFF 0x00 pattern sent by the client for reverse link timing to determine where to sample in a reverse encapsulation packet. Examples of the arriving reverse bit, and how that bit would look for various reverse rate divisors, are illustrated in FIG. 64, along with the number of clock cycles that have occurred since the last bit of Guard Time 1. In FIG. 64, one can see that if the first edge occurs between a rising and falling edge (labeled as rise/fall), the optimal sampling point for a reverse rate divisor of one is the clock cycle edge labeled 'b', as that is the only rising edge occurring within the period of the reverse bit.
For a reverse rate divisor of two, the optimal sampling point is probably still clock cycle leading edge 'b' as cycle edge 'c' is closer to a bit edge than 'b'. For a reverse rate divisor of four, the optimal sampling point is probably clock cycle edge 'd', as it is closer to the back edge of the reverse bit where the value has probably stabilized.Returning to FIG. 64 , if, however, the first edge occurs between a falling and rising edge (labeled as fall/rise), the optimal sampling point for a reverse rate divisor of one is sampling point clock cycle edge 'a', as that is the only rising edge within the reverse bit time period. For a reverse rate divisor of two, the optimal sampling point is edge 'b', and for a reverse rate divisor of four the optimal sampling point is edge 'c'.One can see that as the reverse rate divisors get larger and larger, the optimal sampling point becomes easier to ascertain or select, as it should be the rising edge that is closest to the middle.The host can use this technique to find the number of rising clock edges before the rising data edge of the timing packet data is observed on the data line. It can then decide, based on whether the edge occurs between a rising and falling edge or between a falling and rising edge, and what the reverse rate divisor is, how many additional clock cycles to add to a number counter, to reasonably ensure that the bit is always sampled as close to the middle as possible.Once the host has selected or determined the number of clock cycles, it can "explore" various reverse rate divisors with the client to determine if a particular reverse rate divisor will work. The host (and client) can start with a divisor of one and check the CRC of the reverse status packet received from the client to determine if this reverse rate functions appropriately to transfer data. If the CRC is corrupt, there is probably a sampling error, and the host can increase the reverse rate divisor and try to request a status packet again. 
If the second requested packet is corrupt, the divisor can be increased again and the request made again. If this packet is decoded correctly, this reverse rate divisor can be used for all future reverse packets. This method is effective and useful because the reverse timing should not change from the initial round trip timing estimate. If the forward link is stable, the client should continue to decode forward link packets even if there are reverse link failures. Of course, it is still the responsibility of the host to set a reverse link divisor for the link, since this method does not guarantee a perfect reverse link. In addition, the divisor will depend primarily on the quality of the clock that is used to generate an IO clock. If that clock has a significant amount of jitter, there is a greater possibility of a sampling error. This error probability increases with the number of clock cycles in the round trip delay. This implementation appears to work best for Type 1 reverse data, but may present problems for Type 2 through Type 4 reverse data, due to the skew between data lines potentially being too great to run the link at the rate that works best for just one data pair. However, even with Type 2 through Type 4 operation, the data rate probably does not need to be reduced to that of the previous method. This method may also work best if duplicated on each data line to select the ideal or an optimal clock sample location. If they are at the same sample time for each data pair, this method would continue to work. If they are at different sample periods, two different approaches may be used. The first is to select a desired or more optimized sample location for each data point, even if it is not the same for each data pair. The host can then reconstruct the data stream after sampling all of the bits from the set of data pairs: two bits for Type 2, four bits for Type 3, and eight bits for Type 4.
The other option is for the host to increase the reverse rate divisor such that the data bits for every data pair can be sampled at the same clock edge. XIII. Effects of Link Delay and Skew Delay skew on the forward link between the MDDI_Data pairs and MDDI_Stb can limit the maximum possible data rate unless delay skew compensation is used. The differences in delay that cause timing skew are due to the controller logic, the line drivers and receivers, and the cable and connectors, as outlined below. A. Link Timing Analysis Limited by Skew (MDDI Type 1) 1. Delay and Skew Example of a Type 1 Link A typical interface circuit, similar to that shown in FIG. 41, is shown in FIG. 57 for accommodating a Type 1 interface link. In FIG. 57, exemplary or typical values for propagation delay and skew are shown for each of several processing or interface stages of an MDDI Type 1 forward link. Skew in the delay between MDDI_Stb and MDDI_Data0 causes the duty-cycle of the output clock to be distorted. Data at the D input of the receiver flip-flop (RXFF) stage, using flip-flops 5728, 5732, changes slightly after the clock edge so that it can be sampled reliably. The figure shows two cascaded delay lines 5732a and 5732b being used to solve two different problems with creating these timing relationships. In the actual implementation these may be combined into a single delay element. Data, Stb, and Clock Recovery Timing on a Type 1 Link for exemplary signal processing through the interface are illustrated in FIG. 58. The total delay skew that is significant generally arises or comes from the sum of the skew in the following stages: transmitter flip-flop (TXFF) with flip-flops 5704, 5706; transmitter driver (TXDRVR) with drivers 5708, 5710; the CABLE 5702; receiver line receiver (RXRCVR) with receivers 5722, 5724; and receiver XOR logic (RXXOR).
Delay1 (5732a) should match or exceed the delay of the XOR gate 5736 in the RXXOR stage, which is determined by the relationship:

tPD-min(Delay1) ≥ tPD-max(XOR)

It is desirable to meet this requirement so that the D input of receiver flip-flops 5728, 5732 does not change before its clock input. This is valid if the hold-time of RXFF is zero. The purpose or function of Delay2 is to compensate for the hold-time of the RXFF flip-flop according to the relationship:

tPD-min(Delay2) = tH(RXFF)

In many systems this will be zero because the hold time is zero, and of course in that case the maximum delay of Delay2 can also be zero. The worst-case contribution to skew in the receiver XOR stage is in the data-late/strobe-early case, where Delay1 is at a maximum value and the clock output from the XOR gate comes as early as possible, according to the relationship:

tSKEW-max(RXXOR) = tPD-max(Delay1) - tPD-min(XOR)

In this situation, the data may change between two bit periods, n and n+1, very close to the time where bit n+1 is clocked into the receiver flip-flop. The maximum data rate (minimum bit period) of an MDDI Type 1 link is a function of the maximum skew encountered through all the drivers, cable, and receivers in the MDDI link, plus the total data setup into the RXFF stage. The total delay skew in the link up to the output of the RXRCVR stage can be expressed as:

tSKEW-max(LINK) = tSKEW-max(TXFF) + tSKEW-max(TXDRVR) + tSKEW-max(CABLE) + tSKEW-max(RXRCVR)

with the "cable" representing a variety of conductors or interconnections or wires and the corresponding delay, and the minimum bit period is given by:

tBIT-min = tSKEW-max(LINK) + 2·tB-TP4 + tAsymmetry + tSKEW-max(RXXOR) + tjitter-host + tPD-max(Delay2) + tSU(RXFF)

In the example shown in FIG. 57 for external mode, tSKEW-max(LINK) = 1000 psec and the minimum bit period can be expressed as:

tBIT-min = 1000 + 2·125 + 625 + 125 + 200 + 0 + 100 = 2300 psec,

or stated as approximately 434 Mbps.
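The bit-period sums in these examples can be verified with a short script. The function below simply adds the component delays named in the tBIT-min relationship above (a sketch for checking the arithmetic, not part of the specification); the numeric values are the FIG. 57 external and internal mode figures quoted in the text.

```python
# Check of the Type 1 minimum-bit-period arithmetic (all times in picoseconds).

def min_bit_period_ps(skew_link, b_tp4, asymmetry, skew_rxxor,
                      jitter_host, delay2_max, setup_rxff):
    # Sum of the terms in the tBIT-min relationship above.
    return (skew_link + 2 * b_tp4 + asymmetry + skew_rxxor +
            jitter_host + delay2_max + setup_rxff)

external = min_bit_period_ps(1000, 125, 625, 125, 200, 0, 100)  # 2300 ps
internal = min_bit_period_ps(500, 125, 625, 125, 200, 0, 100)   # 1800 ps
# Rate in Mbps = 1e6 / (bit period in ps): ~434 and ~555 Mbps respectively.
```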
In the example shown in FIG. 57 for internal mode, t_SKEW-max(LINK) = 500 psec and the minimum bit period can be expressed as:

t_BIT-min = 500 + 2·125 + 625 + 125 + 200 + 0 + 100 = 1800 psec,

or stated as approximately 555 Mbps.

B. Link Timing Analysis for MDDI Type 2, 3, and 4

A typical interface circuit similar to that shown in FIGS. 41 and 57 is shown in FIG. 59 for accommodating Type 2, 3, and 4 interface links. Additional elements are used in the TXFF (5904), TXDRVR (5908), RXRCVR (5922), and RXFF (5932, 5928, 5930) stages to accommodate the additional signal processing. In FIG. 59, exemplary or typical values for propagation delay and skew are shown for each of several processing or interface stages of an MDDI Type 2 forward link. In addition to skew in the delay between MDDI_Stb and MDDI_Data0 affecting the duty-cycle of the output clock, there is also skew between both of these two signals and the other MDDI_Data signals. Data at the D input of the receiver flip-flop B (RXFFB) stage, consisting of flip-flops 5928 and 5930, is changed slightly after the clock edge so it can be sampled reliably. If MDDI_Data1 arrives earlier than MDDI_Stb or MDDI_Data0, then MDDI_Data1 should be delayed by at least the amount of the delay skew before it is sampled. To accomplish this, data is delayed using the Delay3 delay line. If MDDI_Data1 arrives later than MDDI_Stb and MDDI_Data0 and it is also delayed by Delay3, then the point where MDDI_Data1 changes is moved closer to the next clock edge. This process determines an upper limit of the data rate of an MDDI Type 2, 3, or 4 link. Some exemplary different possibilities for the timing or skew relationship of two data signals and MDDI_Stb with respect to each other are illustrated in FIGS.
60A, 60B, and 60C. In order to sample data reliably in RXFFB when MDDI_DataX arrives as early as possible, Delay3 is set according to the relationship:

t_PD-min(Delay3) ≥ t_SKEW-max(LINK) + t_H(RXFFB) + t_PD-max(XOR)

The maximum link speed is determined by the minimum allowable bit period. This is most affected when MDDI_DataX arrives as late as possible. In that case, the minimum allowable cycle time is given by:

t_BIT-min = t_SKEW-max(LINK) + t_PD-max(Delay3) + t_SU(RXFFB) - t_PD-min(XOR)

The upper bound of link speed is found by assuming:

t_PD-max(Delay3) = t_PD-min(Delay3)

and given that assumption:

t_BIT-min(lower-bound) = 2·t_SKEW-max(LINK) + t_PD-max(XOR) + t_SU(RXFFB) + t_H(RXFFB)

In the example given above, the lower bound of the minimum bit period is given by the relationship:

t_BIT-min(lower-bound) = 2·1000 + 2·125 + 625 + 200 + 1500 + 100 + 0 = 5750 psec,

which is approximately 174 Mbps. This is much slower than the maximum data rate that can be used with a Type 1 link. The automatic delay skew compensation capability of MDDI significantly reduces the effect that delay skew has on the maximum link rate, so that the remaining skew factor is just on the edge of valid data setup. The calibrated skew between MDDI_Data0 and MDDI_Stb is:

t_SKEW-max(Calibrated) = 2·t_TAP-SPACING-max,

and the minimum bit period is:

t_BIT-min-Calibrated = t_SKEW-max(Calibrated) + 2·t_B-TP4 + t_Asymmetry + t_jitter-host + t_SKEW-max(RXAND-RXXOR) + t_SU(RXFF)

where "TB" or t_B represents signal jitter from a bit boundary to minimum output level. Asymmetry simply refers to the asymmetrical nature of internal delay through or of the differential receivers. "TP4" is associated with, or is effectively defined for electrical characterization and testing purposes as, the connection or interface (pins of the MDDI controller device in the client) for the differential line drivers and receivers for the client.
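The Delay3 sizing and the resulting uncalibrated bit-period lower bound above can be sketched as follows (helper names are illustrative, and the values in the usage note below are round numbers for demonstration, not the worked FIG. 59 figures, which include additional terms):

```python
def delay3_min_ps(skew_link_max, hold_rxffb, xor_max):
    """Minimum Delay3 so RXFFB reliably samples the earliest-arriving data:
    tPD-min(Delay3) >= tSKEW-max(LINK) + tH(RXFFB) + tPD-max(XOR)."""
    return skew_link_max + hold_rxffb + xor_max

def type2_bit_period_lower_bound_ps(skew_link_max, xor_max, setup_rxffb, hold_rxffb):
    """Lower bound on tBIT assuming tPD-max(Delay3) == tPD-min(Delay3):
    2*tSKEW-max(LINK) + tPD-max(XOR) + tSU(RXFFB) + tH(RXFFB)."""
    return 2 * skew_link_max + xor_max + setup_rxffb + hold_rxffb
```

For example, with a 1000 ps link skew, a 625 ps XOR delay, and zero RXFFB hold time, the minimum Delay3 would be 1625 ps.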
It represents a convenient or predetermined point from which signal delay is measured and characterized for the link throughout the rest of a system. In one embodiment, a maximum value of the parameter t_B at TP4 is defined by the relationships:

t_Differential-Skew-TP4-DRVR-EXT = 0.3·t_BIT for the external mode, and
t_Differential-Skew-TP4-DRVR-INT = 0.6·t_BIT for the internal mode, for the client transmitters; and
t_B-TP4-RCVR-EXT = 0.051·t_BIT + 175 ps for the external mode for the client receivers.

The label TP4 is simply useful in numbering various test points (TP) in the interface and links. In one embodiment, this test point location is defined to be the same for both internal and external modes. There is a corresponding "TP0" test point for, or associated with, the connection or interface pins of the MDDI controller device in the host that contains the differential line drivers and receivers. In this embodiment, a maximum value of the parameter t_B at TP0 is defined by the relationships:

t_B-TP0-RCVR-INT = 0.051·t_BIT + 50 ps for the internal mode, and
t_B-TP0-RCVR-EXT = 0.051·t_BIT + 175 ps for the external mode, for the host receivers; and
t_B-TP0 = 0.102·t_BIT for the host transmitters.

In the example shown in FIG. 59, t_SKEW-max(Data0-Stb-Calibrated) = 300 psec and the minimum bit period is:

t_BIT-min-Calibrated = 300 + 2·125 + 625 + 200 + 175 + 100 = 1650 psec,

approximately 606 Mbps. In order to sample data reliably in RXFFB when MDDI_Data1 arrives as early as possible, the associated programmable delay is adjusted to the optimal setting with an accuracy of one tap, and an additional tap delay is added for safety. The maximum link speed is determined by the minimum allowable bit period. This is most affected when MDDI_Data1 arrives as late as possible. In that case the minimum allowable cycle time is:

t_BIT-min-Data1-Calibrated = 2·t_TAP-Spacing-max + 2·t_TA-TP4,

where "TA" or t_A represents signal jitter from a bit boundary to center crossing. In the example given in FIG.
59, the lower bound of the minimum bit period based on sampling MDDI_Data1 is:

t_BIT-min-Data1-Calibrated = 2·150 + 2·125 = 550 psec

In one embodiment, a typical total delay time for delay skew, delay asymmetry, and clock jitter in the host transmitter for internal mode would be defined as:

t_Asymmetry-TXFF + t_Asymmetry-TXDRVR + t_Skew-TXFF + t_Skew-TXDRVR + t_jitter-host = 0.467·t_BIT - 150 ps,

and for the external mode as:

t_Asymmetry-TXFF + t_Asymmetry-TXDRVR + t_Skew-TXFF + t_Skew-TXDRVR + t_jitter-host = 0.TBD·t_BIT - TBD ps,

while a typical total delay time for delay skew, delay asymmetry, and setup time in the client device (t_B-TP4) for internal mode is:

t_Asymmetry-RXRCVR + t_Asymmetry-RXXOR + t_Skew-RXRCVR + t_Skew-RXXOR + t_setup-RXFF = 0.307·t_BIT - 150 ps,

and for the external mode:

t_Asymmetry-RXRCVR + t_Asymmetry-RXXOR + t_Skew-RXRCVR + t_Skew-RXXOR + t_setup-RXFF = 0.TBD·t_BIT - TBD ps,

where the term TBD is a flexible placeholder label for future, to-be-determined values which will depend on a variety of well understood characteristics and operational requirements for the external mode connections.

XIV. Physical Layer Interconnection Description

Physical connections useful for implementing an interface according to the present invention can be realized using commercially available parts such as part number 3260-8S2(01) as manufactured by Hirose Electric Company Ltd. on the host side, and part number 3240-8P-C as manufactured by Hirose Electric Company Ltd. on the client device side. An exemplary interface pin assignment or "pinout" for such connectors used with Type 1/Type 2 interfaces is listed in Table XIII, and illustrated in FIG.
61.

Table XIII
Signal         Pin      Signal         Pin
MDDI_Pwr        1       MDDI_Gnd        11
MDDI_Stb+       2       MDDI_Stb-       12
MDDI_Data0+     4       MDDI_Data0-     14
MDDI_Data1+     6       MDDI_Data1-     16
MDDI_Data2+     8       MDDI_Data2-     18
MDDI_Data3+    10       MDDI_Data3-     20
MDDI_Data4+     9       MDDI_Data4-     19
MDDI_Data5+     7       MDDI_Data5-     17
MDDI_Data6+     5       MDDI_Data6-     15
MDDI_Data7+     3       MDDI_Data7-     13
Shield

The shield is connected to the HOST_Gnd in the host interface, and a shield drain wire in the cable is connected to the shield of the client connector. However, the shield and drain wire are not connected to the circuit ground inside of a client. Interconnection elements or devices are chosen or designed in order to be small enough for use with mobile communication and computing devices, such as PDAs and wireless telephones, or portable game devices, without being obtrusive or unaesthetic in comparison to relative device size. Any connectors and cabling should be durable enough for use in the typical consumer environment and allow for small size, especially for the cabling, and relatively low cost. The transfer elements should accommodate data and strobe signals that are differential NRZ data having a transfer rate up to around 450 Mbps for Type 1 and Type 2 and up to 3.6 Gbps for the 8-bit parallel Type 4 version. For internal mode applications there are either no connectors in the same sense for the conductors being used, or such connection elements tend to be very miniaturized. One example is zero insertion force "sockets" for receiving integrated circuits or elements housing either the host or client device. Another example is where the host and client reside on printed circuit boards with various interconnecting conductors, and have "pins" or contacts extending from housings which are soldered to contacts on the conductors for interconnection of integrated circuits.

XV. Operation

A summary of the general steps undertaken in processing data and packets during operation of an interface using embodiments of the invention is shown in FIGS.
54A and 54B, along with an overview of the interface apparatus processing the packets in FIG. 55. In these figures, the process starts in a step 5402 with a determination as to whether or not the client and host are connected using a communication path, here a cable. This can occur through the use of periodic polling by the host, using software or hardware that detects the presence of connectors or cables or signals at the inputs to the host (such as is seen for USB interfaces), or other known techniques. If there is no client connected to the host, then it can simply enter a wait state of some predetermined length, depending upon the application, go into a hibernation mode, or be inactivated to await future use, which might require a user to take action to reactivate the host. For example, when a host resides on a computer type device, a user might have to click on a screen icon or request a program that activates the host processing to look for the client. Again, simple plug-in of a USB type connection could activate host processing, depending on the capabilities and configuration of the host or resident host software. Once a client is connected to the host, or vice versa, or detected as being present, either the client or the host sends appropriate packets requesting service in steps 5404 and 5406. The client could send either Client Service Request or Status packets in step 5404. It is noted that the link, as discussed above, could have been previously shut down or be in hibernation mode, so this may not be a complete initialization of the communication link that follows. Once the communication link is synchronized and the host is trying to communicate with the client, the client also provides a Client Capabilities packet to the host, as in step 5408.
The host can now begin to determine the type of support, including transfer rates, the client can accommodate. Generally, the host and client also negotiate the type (rate/speed) of service mode to be used, for example Type 1, Type 2, and so forth, in a step 5410. Once the service type is established the host can begin to transfer information. In addition, the host may use Round Trip Delay Measurement Packets to optimize the timing of the communication links in parallel with other signal processing, as shown in step 5411. As stated earlier, all transfers begin with a Sub-Frame Header Packet, shown being transferred in step 5412, followed by the type of data, here video and audio stream packets, and filler packets, shown being transferred in step 5414. The audio and video data will have been previously prepared or mapped into packets, and filler packets are inserted as needed or desired to fill out a required number of bits for the media frames. The host can send packets such as the Forward Audio Channel Enable Packets to activate sound devices. In addition, the host can transfer commands and information using other packet types discussed above, here shown as the transfer of Color Map, Bit Block Transfer, or other packets in step 5416. Furthermore, the host and client can exchange data relating to a keyboard or pointing devices using the appropriate packets. During operation, one of several different events can occur which lead to the host or client desiring a different data rate or type of interface mode. For example, a computer or other device communicating data could encounter loading conditions in processing data that cause a slowdown in the preparation or presentation of packets.
A client device receiving the data could change from a dedicated AC power source to a more limited battery power source, and either not be able to transfer data as quickly, process commands as readily, or not be able to use the same degree of resolution or color depth under the more limited power settings. Alternatively, a restrictive condition could be abated or disappear, allowing either device to transfer data at higher rates. Since this is more desirable, a request can be made to change to a higher transfer rate mode. If these or other types of known conditions occur or change, either the host or client may detect them and try to renegotiate the interface mode. This is shown in step 5420, where the host sends Interface Type Handoff Request Packets to the client requesting a handoff to another mode, the client sends Interface Type Acknowledge Packets confirming a change is sought, and the host sends Perform Type Handoff Packets to make the change to the specified mode. Although a particular order of processing is not required, the client and host can also exchange packets relating to data intended for or received from pointing devices, keyboards, or other user type input devices associated primarily with the client, although such elements may also be present on the host side. These packets are typically processed using a general processor type element and not the state machine (5502). In addition, some of the commands discussed above will also be processed by the general processor (5504, 5508). After data and commands have been exchanged between the host and client, at some point a decision is made as to whether or not additional data is to be transferred or the host or client is going to cease servicing the transfer. This is shown in step 5422.
If the link is to enter either a hibernation state or be shut down completely, the host sends a Link Shutdown packet to the client, and both sides terminate the transfer of data. The packets being transferred in the above operations will be transferred using the drivers and receivers previously discussed in relation to the host and client controllers. These line drivers and other logic elements are connected to the state machine and general processors discussed above, as illustrated in the overview of FIG. 55. In FIG. 55, a state machine 5502 and general processors 5504 and 5508 may further be connected to other elements not shown, such as a dedicated USB interface, memory elements, or other components residing outside of the link controller with which they interact, including, but not limited to, the data source and video control chips for view display devices. The processors and state machine provide control over the enabling and disabling of the drivers, as discussed above in relation to guard times and so forth, to assure efficient establishment or termination of the communication link and transfer of packets.

XVI. Display Frame Buffers

Video data buffering requirements are different for moving video images compared to computer graphics. Pixel data is most often stored in a local frame buffer in the client so the image on the client can be refreshed locally. When full-motion video is being displayed (nearly every pixel in the display changes each Media Frame) it is usually preferred to store the incoming pixel data in one frame buffer while the image on the display is being refreshed from a second frame buffer. More than two display buffers may be used to eliminate visible artifacts as described below. When an entire image has been received in one frame buffer, the roles of the buffers can be swapped: the newly received image is then used to refresh the display, and the other buffer is filled with the next frame of the image.
This concept is illustrated in FIG. 88A, where pixel data is written to the offline image buffer by setting the Display Update bits to '01.' In other applications the host needs to update only a small portion of the image without having to repaint the entire image. In this situation it is desired to write the new pixels directly to the buffer being used to refresh the display, as illustrated in detail in FIG. 88B. In applications that have a fixed image with a small video window it is easiest to write the fixed image to both buffers (display update bits equal to '11') as shown in FIG. 88C, and subsequently write the pixels of the moving image to the offline buffer by setting the display update bits to '01.' The following rules describe the useful manipulation of buffer pointers while simultaneously writing new information to the client and refreshing the display. Three buffer pointers exist: current_fill points to the buffer currently being filled from data over the MDDI link; just_filled points to the buffer that was most recently filled; and being_displayed points to the buffer currently being used to refresh the display. All three buffer pointers may contain values from 0 to N-1, where N is the number of display buffers and N ≥ 2. Arithmetic on buffer pointers is mod N; e.g., when N=3 and current_fill=2, incrementing current_fill causes current_fill to be set to 0. In the simple case where N=2, just_filled is always the complement of current_fill.
On every MDDI Media Frame boundary (Sub-frame Header Packet with the Sub-frame Count field equal to zero) perform the following operations in the order specified: set just_filled equal to current_fill, and set current_fill equal to current_fill + 1. MDDI Video Stream Packets update the buffers according to the structure or methodology of: when the Display Update Bits equal '01,' pixel data is written to the buffer specified by current_fill; when the Display Update Bits equal '00,' pixel data is written to the buffer specified by just_filled; and when the Display Update Bits equal '11,' pixel data is written to all buffers. The display is refreshed from the buffer specified by the being_displayed pointer. After the display refreshes the last pixel in one frame refresh epoch, and before it begins to refresh the first pixel in the next frame refresh epoch, the display update process performs the operation of setting being_displayed equal to just_filled. The Video Stream Packet contains a pair of Display Update Bits that specify the frame buffer where the pixel data is to be written. The Client Capability Packet has three additional bits that indicate which combinations of the Display Update Bits are supported in the client. In many cases, computer-generated images need to be incrementally updated based on user input or derived from information received from a computer network. Display Update Bit combinations '00' and '11' support this mode of operation by causing the pixel data to be written to the frame buffer being displayed or to both frame buffers. When accommodating video images, FIG. 89 illustrates how video images are displayed using a pair of frame buffers when video data is transmitted over the MDDI link with the Display Update Bits equal to '01.'
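The pointer rules above can be modeled in a small sketch (the class and method names are mine, not the specification's; N ≥ 2 as stated):

```python
class ClientFrameBuffers:
    """Model of the MDDI display-buffer pointers; all arithmetic is mod N."""

    def __init__(self, n=2):
        assert n >= 2
        self.n = n
        self.current_fill = 0          # buffer being filled from the MDDI link
        self.just_filled = n - 1       # buffer most recently filled
        self.being_displayed = n - 1   # buffer refreshing the display

    def on_media_frame_boundary(self):
        # Sub-frame Header Packet with Sub-frame Count field equal to zero.
        self.just_filled = self.current_fill
        self.current_fill = (self.current_fill + 1) % self.n

    def write_targets(self, display_update_bits):
        # Which buffer(s) a Video Stream Packet's pixel data is written to.
        if display_update_bits == 0b01:
            return [self.current_fill]
        if display_update_bits == 0b00:
            return [self.just_filled]
        if display_update_bits == 0b11:
            return list(range(self.n))
        return []  # '10' is invalid; pixel data is ignored

    def on_refresh_epoch_end(self):
        # Between the last pixel of one refresh epoch and the first of the next.
        self.being_displayed = self.just_filled
```

With N=3 and current_fill=2, a media-frame boundary wraps current_fill to 0, matching the mod-N example in the text.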
After a media-frame boundary is detected on the MDDI link, the display refresh process will begin refreshing from the next frame buffer when the refresh process for the frame currently being refreshed is completed. An important assumption related to FIG. 89 is that the image is received from the host as a continuous stream of pixels that are transmitted in the same order that the client uses to read the pixels from the frame buffer to refresh the display (usually upper-left, reading row by row, to the bottom-right corner of the screen). This is an important detail in the cases where the Display Refresh and Image Transfer operations reference the same frame buffer. It is necessary for the display refresh frame rate to be greater than the image transfer frame rate to avoid displaying partial images. FIG. 90 shows how image fragmentation can occur with a slow display refresh rate, that is, where the display refresh is slower than the image transfer. In an image that contains a combination of computer graphic images and moving video pictures, the video pixel data might occupy a small portion of a media-frame. This could be significant in situations where the display refresh operation and the image transfer reference the same frame buffer. These situations are shown by a cross-hatched shading in FIG. 91, where the pixels read from the buffer to refresh the display might be the pixels written to the buffer two frames ago, or they may correspond to the frame immediately being written to the same frame buffer. The use of three frame buffers in the client will resolve the problem of the small window of contention for access to a frame buffer, as shown in FIG. 92. However, there is still a problem if the display refresh rate is less than the media-frame rate over the MDDI link, as shown in FIG. 93. The use of a single buffer for moving video images is somewhat problematic, as shown in FIG. 94.
With the display refresh faster than the image transfer into the buffer, the image being refreshed sometimes will show the upper portion of the frame being written, and the lower portion of the image will be the frame previously transferred. With the display refresh faster than the image transfer (the preferred mode of operation), there will be more frequent instances of frames showing a similar split image.

XVII. Delay Value Table

The Packet Processing Delay Parameters Packet uses a table-lookup function to calculate the predicted delay to process certain commands in the client. Values in the table increase in a logarithmic fashion to provide a very wide dynamic range of delay values. An exemplary table of delay values useful for implementing embodiments of the invention is found in Table XX below, with corresponding index values versus delay values.

Table XX
0 - no_delay    37 - 1.5ns    74 - 51ns     111 - 1.8us    148 - 62us     185 - 2.2ms    222 - 75ms
1 - 46ps        38 - 1.6ns    75 - 56ns     112 - 2.0us    149 - 68us     186 - 2.4ms    223 - 83ms
2 - 51ps        39 - 1.8ns    76 - 62ns     113 - 2.2us    150 - 75us     187 - 2.6ms    224 - 91ms
3 - 56ps        40 - 2.0ns    77 - 68ns     114 - 2.4us    151 - 83us     188 - 2.9ms    225 - 100ms
4 - 62ps        41 - 2.2ns    78 - 75ns     115 - 2.6us    152 - 91us     189 - 3.2ms    226 - 110ms
5 - 68ps        42 - 2.4ns    79 - 83ns     116 - 2.9us    153 - 100us    190 - 3.5ms    227 - 120ms
6 - 75ps        43 - 2.6ns    80 - 91ns     117 - 3.2us    154 - 110us    191 - 3.8ms    228 - 130ms
7 - 83ps        44 - 2.9ns    81 - 100ns    118 - 3.5us    155 - 120us    192 - 4.2ms    229 - 150ms
8 - 91ps        45 - 3.2ns    82 - 110ns    119 - 3.8us    156 - 130us    193 - 4.6ms    230 - 160ms
9 - 100ps       46 - 3.5ns    83 - 120ns    120 - 4.2us    157 - 150us    194 - 5.1ms    231 - 180ms
10 - 110ps      47 - 3.8ns    84 - 130ns    121 - 4.6us    158 - 160us    195 - 5.6ms    232 - 200ms
11 - 120ps      48 - 4.2ns    85 - 150ns    122 - 5.1us    159 - 180us    196 - 6.2ms    233 - 220ms
12 - 130ps      49 - 4.6ns    86 - 160ns    123 - 5.6us    160 - 200us    197 - 6.8ms    234 - 240ms
13 - 150ps      50 - 5.1ns    87 - 180ns    124 - 6.2us    161 - 220us    198 - 7.5ms    235 - 260ms
14 - 160ps      51 - 5.6ns    88 - 200ns    125 - 6.8us    162 - 240us    199 - 8.3ms    236 - 290ms
15 - 180ps      52 - 6.2ns    89 - 220ns    126 - 7.5us    163 - 260us    200 - 9.1ms    237 - 320ms
16 - 200ps      53 - 6.8ns    90 - 240ns    127 - 8.3us    164 - 290us    201 - 10ms     238 - 350ms
17 - 220ps      54 - 7.5ns    91 - 260ns    128 - 9.1us    165 - 320us    202 - 11ms     239 - 380ms
18 - 240ps      55 - 8.3ns    92 - 290ns    129 - 10us     166 - 350us    203 - 12ms     240 - 420ms
19 - 260ps      56 - 9.1ns    93 - 320ns    130 - 11us     167 - 380us    204 - 13ms     241 - 460ms
20 - 290ps      57 - 10ns     94 - 350ns    131 - 12us     168 - 420us    205 - 15ms     242 - 510ms
21 - 320ps      58 - 11ns     95 - 380ns    132 - 13us     169 - 460us    206 - 16ms     243 - 560ms
22 - 350ps      59 - 12ns     96 - 420ns    133 - 15us     170 - 510us    207 - 18ms     244 - 620ms
23 - 380ps      60 - 13ns     97 - 460ns    134 - 16us     171 - 560us    208 - 20ms     245 - 680ms
24 - 420ps      61 - 15ns     98 - 510ns    135 - 18us     172 - 620us    209 - 22ms     246 - 750ms
25 - 460ps      62 - 16ns     99 - 560ns    136 - 20us     173 - 680us    210 - 24ms     247 - 830ms
26 - 510ps      63 - 18ns     100 - 620ns   137 - 22us     174 - 750us    211 - 26ms     248 - 910ms
27 - 560ps      64 - 20ns     101 - 680ns   138 - 24us     175 - 830us    212 - 29ms     249 - 1.0sec
28 - 620ps      65 - 22ns     102 - 750ns   139 - 26us     176 - 910us    213 - 32ms     250 - 1.1sec
29 - 680ps      66 - 24ns     103 - 830ns   140 - 29us     177 - 1.0ms    214 - 35ms     251 - 1.2sec
30 - 750ps      67 - 26ns     104 - 910ns   141 - 32us     178 - 1.1ms    215 - 38ms     252 - 1.3sec
31 - 830ps      68 - 29ns     105 - 1.0us   142 - 35us     179 - 1.2ms    216 - 42ms     253 - 1.5sec
32 - 910ps      69 - 32ns     106 - 1.1us   143 - 38us     180 - 1.3ms    217 - 46ms     254 - 1.6sec
33 - 1.0ns      70 - 35ns     107 - 1.2us   144 - 42us     181 - 1.5ms    218 - 51ms     255 - indefinite
34 - 1.1ns      71 - 38ns     108 - 1.3us   145 - 46us     182 - 1.6ms    219 - 56ms
35 - 1.2ns      72 - 42ns     109 - 1.5us   146 - 51us     183 - 1.8ms    220 - 62ms
36 - 1.3ns      73 - 46ns     110 - 1.6us   147 - 56us     184 - 2.0ms    221 - 68ms

The delay is computed by performing a table lookup using the specified parameter as an index into the table. This means a delay is equal to PacketProcessingTable(index). For example: if one of the parameters from the Delay Parameters List Item is an 8-bit value equal to 134, then the delay is equal to PacketProcessingTable(134), which is 16 µsec.
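A real client would simply store Table XX verbatim, but for illustration the entries can be regenerated from the observation that the values step up by roughly a factor of ten every 24 indices starting at 46 ps, rounded to two significant figures. The function name and this closed form are my own sketch, not part of the protocol:

```python
import math

def packet_processing_delay(index):
    """Approximate Table XX: predicted processing delay in seconds for an
    8-bit index. Entries grow logarithmically, about 24 steps per decade
    starting at 46 ps; index 0 is no delay, 255 is indefinite."""
    if index == 0:
        return 0.0
    if index == 255:
        return math.inf
    raw = 46e-12 * 10 ** ((index - 1) / 24)
    # Table entries are rounded to two significant figures.
    exponent = math.floor(math.log10(raw))
    return round(raw / 10 ** exponent, 1) * 10 ** exponent
```

For example, index 134 yields 16 µsec, matching the table-lookup example above.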
The value 255 indicates that the command completion time cannot be determined by calculation, and that the host will check the Graphics Busy Flags in the Client Request and Status Packet or MCCS VCP Control Parameter B7h. In some cases this delay is multiplied by the height, width, or number of pixels in the destination image, and added to other delays, to compute the overall packet processing delay.

XVIII. Multiple Client Support

The current protocol version does not appear to directly support multiple client devices. However, most packets contain a reserved Client ID field that can be used to address specific client devices in a system with multiple clients. Currently, for many applications, this client ID or these client IDs are set to zero. The sub-frame header packet also contains a field to indicate whether or not the host supports a multiple client system. These fields provide a manner in which multiple client devices would likely be connected and addressed in future applications of the MDDI protocol, and they aid system designers in planning for future compatibility with multiple-client hosts and clients. In systems having multiple clients it is useful for clients to be connected to the host using a daisy-chain of clients, or using hubs, as shown in FIG. 95, or using a combination of these techniques as shown in FIG. 96.

XIX. Addendum

In addition to the formats, structures, and contents discussed above for the various packets used to implement the architecture and protocol for embodiments of the invention, more detailed field contents or operations are presented here for some of the packet types. These are presented here to further clarify their respective use or operations, to enable those skilled in the art to more readily understand and make use of the invention for a variety of applications. Only a few of the fields not already discussed are discussed further here.
In addition, these fields are presented with exemplary definitions and values in relation to the embodiments presented above. However, such values are not to be taken as limitations of the invention, but represent one or more embodiments useful for implementing the interface and protocol, and not all embodiments need be practiced together or at the same time. Other values can be used in other embodiments to achieve the desired presentation of data or data rate transfer results, as will be understood by those skilled in the art.

A. For Video Stream Packets

In one embodiment, the Pixel Data Attributes field (2 bytes) has a series of bit values that are interpreted as follows. Bits 1 and 0 select how the display pixel data is routed. For bit values of '11' pixel data is displayed to or for both eyes, for bit values of '10' pixel data is routed only to the left eye, for bit values of '01' pixel data is routed only to the right eye, and for bit values of '00' the pixel data is routed to an alternate display as may be specified by bits 8 through 11 discussed below. If the primary display in or being used or operated by a client does not support stereo images or imaging in some form, then these commands cannot effectively be implemented to have the desired impact on the display. In this situation or configuration the client should route pixel data to a primary display regardless of the bit values, for any of the bit combinations '01,' '10,' or '11,' since the resulting commands or control won't be implemented by the display. It is recommended, but not required by the embodiments, that the value '11' be used to address the primary display in those clients that do not support stereo display capability. Bit 2 indicates whether or not the Pixel Data is presented in an interlace format, with a value of '0' meaning the pixel data is in the standard progressive format, and that the row number (pixel Y coordinate) is incremented by 1 when advancing from one row to the next.
When this bit has a value of '1', the pixel data is in interlace format, and the row number is incremented by 2 when advancing from one row to the next. Bit 3 indicates that the Pixel Data is in alternate pixel format. This is similar to the standard interlace mode enabled by bit 2, but the interlacing is vertical instead of horizontal. When Bit 3 is '0' the Pixel Data is in the standard progressive format, and the column number (pixel X coordinate) is incremented by 1 as each successive pixel is received. When Bit 3 is '1' the Pixel Data is in alternate pixel format, and the column number is incremented by 2 as each pixel is received. Bit 4 indicates whether the Pixel Data is related to a display or a camera, as where data is being transferred to or from an internal display for a wireless phone or similar device, or even a portable computer or such other devices as discussed above, or the data is being transferred to or from a camera built into or directly coupled to the device. When Bit 4 is '0' the Pixel Data is being transferred to or from a display frame buffer. When Bit 4 is '1' the Pixel Data is being transferred to or from a camera or video device of some type, such devices being well known in the art. Bit 5 is used to indicate when the pixel data contains the next consecutive row of pixels in the display. This is considered the case when Bit 5 is set equal to '1'. When Bit 5 is set to '1', the X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X Start, and Y Start parameters are not defined and are ignored by the client. When Bit 15 is set at a logic-one level, this indicates that the pixel data in this packet is the last row of pixels in the image. Bit 8 of the Client Feature Capability Indicators field of the Client Capability Packet indicates whether this feature is supported. Bits 7 and 6 are Display Update Bits that specify a frame buffer where the pixel data is to be written. The more specific effects are discussed elsewhere.
For bit values of '01' Pixel data is written to the offline image buffer. For bit values of '00' Pixel data is written to the image buffer used to refresh the display. For bit values of '11' Pixel data is written to all image buffers. The bit values or combination of '10' is treated as an invalid value or designation and Pixel data is ignored and not written to any of the image buffers. This value may have use for future applications of the interface.Bits 8 through 11 form a 4-bit unsigned integer that specifies an alternate display or display location where pixel data is to be routed. Bits 0 and 1 are set equal to '00' in order for the display client to interpret bits 8 through 11 as an alternate display number. If bits 0 and 1 are not equal to '00' then bits 8 through 11 are set to logic-zero levels.Bits 12 through 14 are reserved for future use and are generally set to logic-zero levels. Bit 15, as discussed, is used in conjunction with bit 5, and setting bit 15 to logic-one indicates that the row of pixels in the Pixel Data field is the last row of pixels in a frame of data. The next Video Stream Packet having bit 5 set to logic-one will correspond to the first row of pixels of the next video frame.The 2-byte X Start and Y Start fields specify the absolute X and Y coordinates of the point (X Start, Y Start) for the first pixel in the Pixel Data field. The 2-byte X Left Edge and Y Top Edge fields specify the X coordinate of the left edge and Y coordinate of the top edge of the screen window filled by the Pixel Data field, while the X Right Edge and Y Bottom Edge fields specify the X coordinate of the right edge, and the Y coordinate of the bottom edge of the window being updated.The Pixel Count field (2 bytes) specifies the number of pixels in the Pixel Data field below.The Parameter CRC field (2 bytes) contains a CRC of all bytes from the Packet Length to the Pixel Count. 
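As an illustration of the bit layout just described, a hypothetical decoder might look like the following (the function name and dictionary keys are my own labels, not field names from the specification):

```python
def decode_pixel_data_attributes(attr):
    """Decode the 2-byte Pixel Data Attributes field (illustrative sketch)."""
    routing = attr & 0x3  # bits 1..0: pixel data routing
    return {
        # '11' both eyes, '10' left eye, '01' right eye, '00' alternate display
        "eye_routing": {0b11: "both", 0b10: "left",
                        0b01: "right", 0b00: "alternate"}[routing],
        "interlace": bool(attr & (1 << 2)),           # bit 2
        "alternate_pixel": bool(attr & (1 << 3)),     # bit 3: vertical interlace
        "camera_data": bool(attr & (1 << 4)),         # bit 4: camera vs display
        "next_consecutive_row": bool(attr & (1 << 5)),  # bit 5
        "display_update": (attr >> 6) & 0x3,          # bits 7..6; '10' is invalid
        # bits 11..8 are an alternate display number only when bits 1..0 == '00'
        "alternate_display": (attr >> 8) & 0xF if routing == 0 else None,
        "last_row_of_frame": bool(attr & (1 << 15)),  # bit 15
    }
```

For instance, an attribute word with bits 1..0 = '11' and Display Update Bits = '01' decodes to both-eye routing with pixel data destined for the offline image buffer.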
If this CRC fails to check then the entire packet is discarded.

The Pixel Data field contains the raw video information that is to be displayed, and which is formatted in the manner described by the Video Data Format Descriptor field. The data is transmitted one "row" at a time as discussed elsewhere. When Bit 5 of the Pixel Data Attributes field is set at logic level one, the Pixel Data field contains exactly one row of pixels, with the first pixel transmitted corresponding to the left-most pixel and the last pixel transmitted corresponding to the right-most pixel.

The Pixel Data CRC field (2 bytes) contains a 16-bit CRC of only the Pixel Data. If a CRC verification of this value fails then the Pixel Data can still be used, but the CRC error count is incremented.

B. For Audio Stream Packets

In one embodiment, the Audio Channel ID field (1 byte) uses an 8-bit unsigned integer value to identify a particular audio channel to which audio data is sent by the client device. The physical audio channels are specified in or mapped to physical channels by this field as values of 0, 1, 2, 3, 4, 5, 6, or 7, which indicate the left front, right front, left rear, right rear, front center, sub-woofer, surround left, and surround right channels, respectively. An audio channel ID value of 254 indicates that the single stream of digital audio samples is sent to both the left front and right front channels. This simplifies communications for applications such as where a stereo headset is used for voice communication, productivity enhancement apps are used on a PDA, or other applications where a simple user interface generates warning tones. Values for the ID field ranging from 8 through 253, and 255, are currently reserved for use where new designs desire additional designations, as anticipated by those skilled in the art.

The Reserved 1 field (1 byte) is generally reserved for future use, and has all bits in this field set to zero.
One function of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address.

The Audio Sample Count field (2 bytes) specifies the number of audio samples in this packet.

The Bits Per Sample and Packing field contains 1 byte that specifies the packing format of the audio data. In one embodiment, the format generally employed is for Bits 4 through 0 to define the number of bits per PCM audio sample. Bit 5 then specifies whether or not the Digital Audio Data samples are packed. As mentioned above, FIG. 12 illustrates the difference between packed and byte-aligned audio samples. A value of '0' for Bit 5 indicates that each PCM audio sample in the Digital Audio Data field is byte-aligned with the interface byte boundary, and a value of '1' indicates that each successive PCM audio sample is packed up against the previous audio sample. This bit is effective only when the value defined in Bits 4 through 0 (the number of bits per PCM audio sample) is not a multiple of eight. Bits 7 through 6 are reserved for use where system designs desire additional designations and are generally set at a value of zero.

The Audio Sample Rate field (1 byte) specifies the audio PCM sample rate. The format employed is for a value of 0 to indicate a rate of 8,000 samples per second (sps), a value of 1 to indicate 16,000 sps, a value of 2 for 24,000 sps, a value of 3 for 32,000 sps, a value of 4 for 40,000 sps, a value of 5 for 48,000 sps, a value of 6 for 11,025 sps, a value of 7 for 22,050 sps, and a value of 8 for 44,100 sps, respectively, with values of 9 through 255 being reserved for future use, so they are currently set to zero.

The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Audio Sample Rate. If this CRC fails to check appropriately, then the entire packet is discarded.
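The packed versus byte-aligned behavior selected by Bit 5 of the Bits Per Sample and Packing field can be illustrated with a short sketch. This is not code from the specification; the function name and the LSB-first bit order within each byte are assumptions made for illustration only.

```python
def pack_samples(samples, bits_per_sample, packed):
    """Serialize PCM samples per the Bits Per Sample and Packing semantics.

    packed=False: each sample starts on a byte boundary (Bit 5 = '0').
    packed=True: each sample abuts the previous one (Bit 5 = '1');
    this only matters when bits_per_sample is not a multiple of eight.
    """
    out = bytearray()
    acc = 0        # bit accumulator (assumed LSB-first within each byte)
    nbits = 0      # number of bits currently held in the accumulator
    for s in samples:
        acc |= (s & ((1 << bits_per_sample) - 1)) << nbits
        nbits += bits_per_sample
        if not packed and nbits % 8:       # pad out to the byte boundary
            nbits += 8 - (nbits % 8)
        while nbits >= 8:                  # flush whole bytes
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                              # flush any trailing partial byte
        out.append(acc & 0xFF)
    return bytes(out)
```

With 12-bit samples, byte-aligned mode consumes 2 bytes per sample, while packed mode consumes only 12 bits per sample, so four samples occupy 8 bytes unpacked but 6 bytes packed.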
The Digital Audio Data field contains the raw audio samples to be played, and is usually in the form of a linear format as unsigned integers. The Audio Data CRC field (2 bytes) contains a 16-bit CRC of only the Audio Data. If this CRC fails to check, then the Audio Data can still be used, but the CRC error count is incremented.

C. For User-Defined Stream Packets

In one embodiment, the 2-byte Stream ID Number field is used to identify a particular user-defined stream. The contents of the Stream Parameters and Stream Data fields are typically defined by the MDDI equipment manufacturer. The 2-byte Stream Parameter CRC field contains a 16-bit CRC of all bytes of the stream parameters starting from the Packet Length to the Audio Coding byte. If this CRC fails to check, then the entire packet is discarded. Both the Stream Parameters and Stream Parameter CRC fields may be discarded if not needed by an end application of the MDDI; that is, they are considered optional. The 2-byte Stream Data CRC field contains a CRC of only the Stream Data. If this CRC fails to check appropriately, then use of the Stream Data is optional, depending on the requirements of the application. Making use of the stream data contingent on the CRC being good generally requires that the stream data be buffered until the CRC is confirmed as being good. The CRC error count is incremented if the CRC does not check.

D. For Color Map Packets

The 2-byte hClient ID field contains information or values that are reserved for a Client ID, as used previously. Since this field is generally reserved for future use, the current value is set to zero by setting the bits to '0'.

The 2-byte Color Map Item Count field uses values to specify the total number of 3-byte color map items that are contained in the Color Map Data field, or the color map table entries that exist in the Color Map Data in this packet. In this embodiment, the number of bytes in the Color Map Data is 3 times the Color Map Item Count.
The Color Map Item Count is set equal to zero to send no color map data. If the Color Map Size is zero then a Color Map Offset value is generally still sent, but it is ignored by the display. The Color Map Offset field (4 bytes) specifies the offset of the Color Map Data in this packet from the beginning of the color map table in the client device.

A 2-byte Parameter CRC field contains a CRC of all bytes from the Packet Length to the Audio Coding byte. If this CRC fails to check then the entire packet is discarded.

For the Color Map Data field, the width of each color map location is specified by the Color Map Item Size field, where in one embodiment the first part specifies the magnitude of blue, the second part specifies the magnitude of green, and the third part specifies the magnitude of red. The Color Map Size field specifies the number of 3-byte color map table items that exist in the Color Map Data field. If a single color map cannot fit into one Video Data Format and Color Map Packet, then the entire color map may be specified by sending multiple packets with different Color Map Data and Color Map Offsets in each packet. The number of bits of blue, green, and red in each color map data item is generally the same as specified in the Color Map RGB Width field of the Display Capability Packet.

A 2-byte Color Map Data CRC field contains a CRC of only the Color Map Data. If this CRC fails to check then the Color Map Data can still be used, but the CRC error count is incremented.

Each color map data item is to be transmitted in the order blue, green, red, with the least significant bit of each component transmitted first. The individual red, green, and blue components of each color map item are packed, but each color map item (the least significant bit of the blue component) should be byte-aligned. FIG. 97 illustrates an example of color map data items with 6 bits of blue, 8 bits of green, and 7 bits of red.
For this example, the Color Map Item Size in the Color Map Packet is equal to 21, and the Color Map RGB Width field of the Client Capability Packet is equal to 0x0786.

E. For Reverse Link Encapsulation Packets

The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Turn-Around Length. If this CRC fails to check, then the entire packet is discarded.

In one embodiment, the Reverse Link Flags field (1 byte) contains a set of flags to request information from the client. If a bit (for example, Bit 0) is set to a logic-one level, then the host requests the specified information from the display using the Client Capability Packet. If the bit is set to a logic-zero level, then the host does not need the information from the client. The remaining bits (here Bits 1 through 7) are reserved for future use and are set to zero. However, more bits can be used as desired to set flags for the reverse link.

The Reverse Rate Divisor field (1 byte) specifies the number of MDDI_Stb cycles that occur in relation to the reverse link data clock. The reverse link data clock is equal to the forward link data clock divided by two times the Reverse Rate Divisor. The reverse link data rate is related to the reverse link data clock and the Interface Type on the reverse link. In this embodiment, for a Type 1 interface the reverse data rate equals the reverse link data clock; for Type 2, Type 3, and Type 4 interfaces the reverse data rates equal two times, four times, and eight times the reverse link data clock, respectively.

The All Zero 1 field contains a group of bytes, here 8, that is set equal to zero in value by setting the bits at a logic-zero level, and is used to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering clock using only MDDI_Stb prior to disabling the host's line drivers during the Turn-Around 1 field.
In one embodiment, the length of the All Zero 1 field is greater than or equal to the number of forward link byte transmission times in the round-trip delay of the cable.

The Turn-Around 1 Length field (1 byte) specifies the total number of bytes that are allocated for Turn-Around 1, establishing the first turn-around period. The Turn-Around 1 field consists of the number of bytes specified by the Turn-Around 1 Length parameter, which are allocated to allow the MDDI_Data line drivers in the client to enable before the line drivers in the host are disabled. The client enables its MDDI_Data line drivers during bit 0 of Turn-Around 1, and the host disables its outputs so as to be completely disabled prior to the last bit of Turn-Around 1. The MDDI_Stb signal behaves as though MDDI_Data0 were at a logic-zero level during the entire Turn-Around 1 period. A more complete description of the setting of Turn-Around 1 is given above.

The Reverse Data Packets field contains a series of data packets transferred from the client to the host. The client may send filler packets or drive the MDDI_Data lines to a logic-zero state or level when it has no data to send to the host. In this embodiment, if the MDDI_Data lines are driven to zero, the host will interpret this as a packet with a zero length (not a valid length) and the host will accept no additional packets from the client for the duration of the current Reverse Link Encapsulation Packet.

The Turn-Around 2 Length field (1 byte) specifies the total number of bytes that are allocated for Turn-Around 2, for establishing a second turn-around period. The recommended length of Turn-Around 2 is the number of bytes required for the round-trip delay plus the time required for the host to enable its MDDI_Data drivers.
Turn-Around 2 Length may also be a value larger than the minimum required or calculated value, to allow sufficient time to process reverse link packets in the host.

The Turn-Around 2 field consists of the number of bytes specified by the Turn-Around 2 Length parameter. The host waits for at least the round-trip delay time before it enables its MDDI_Data line drivers during Turn-Around 2. The host enables its MDDI_Data line drivers so that they are generally completely enabled prior to the last bit of Turn-Around 2, and the client disables its outputs so that they are generally completely disabled prior to the last bit of Turn-Around 2. The purpose of the Turn-Around 2 field is to allow the remaining amount of data from the Reverse Data Packets field to be transmitted or transferred from the client. Due to variations in different systems implementing the interface and the amount of safety margin allocated, it is possible that neither the host nor the client will be driving the MDDI_Data signals to a logic-zero level during some parts of the Turn-Around 2 field period, as seen by the line receivers in or at the host. The MDDI_Stb signal behaves as though MDDI_Data0 were at a logic-zero level during substantially the entire Turn-Around 2 period. A description of the setting of Turn-Around 2 is given above.

The Reverse Data Packets field contains a series of data packets being transferred from the client to a host. As stated earlier, Filler packets are sent to fill the remaining space that is not used by other packet types.

The All Zero 2 field contains a group of bytes (8 in this embodiment) that is set equal to zero in value by setting the bits at a logic-zero level, and is used to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering clock using both MDDI_Data0 and MDDI_Stb after enabling the host's line drivers following the Turn-Around 2 field.

F.
For Client Capability Packets

As illustrated for one embodiment, the Protocol Version field uses 2 bytes to specify a protocol version used by the client. The initial version is currently set equal to one, and will be changed over time as new versions are generated, as would be known, while the Minimum Protocol Version field uses 2 bytes to specify the minimum protocol version that the client can employ or interpret. In this case, a zero value is also a valid value. The Data Rate Capability field (2 bytes) specifies the maximum data rate the client can receive on each data pair on the forward link of the interface, and is specified in the form of megabits per second (Mbps).

The Interface Type Capability field (1 byte) specifies the interface types that are supported on the forward and reverse links. A bit set to '1' indicates that a specified interface type is supported, and a bit set to '0' indicates that the specified type is not supported. Hosts and clients should support at least Type 1 on the forward and reverse links. There is no requirement to support a contiguous range of interface types. For example, it would be perfectly valid to support only Type 1 and Type 3, but not Type 2 and Type 4, in an interface. It is also not necessary for the forward and reverse links to operate with the same interface type. However, when a link comes out of hibernation, both forward and reverse links should commence operating in Type 1 mode until other modes may be negotiated, selected, or otherwise approved for use by both the host and client.

The supported interfaces are indicated in one embodiment by selecting Bit 0, Bit 1, or Bit 2 to select a Type 2 (2-bit), Type 3 (4-bit), or Type 4 (8-bit) mode on the forward link, respectively; and Bit 3, Bit 4, or Bit 5 to select a Type 2, Type 3, or Type 4 mode on the reverse link, respectively; with Bits 6 and 7 being reserved and generally set to zero at this time.
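The bit assignments of the Interface Type Capability field described above can be sketched as a small decoder. The function name is a hypothetical illustration; the bit positions and the mandatory Type 1 support follow the description in the text.

```python
def decode_interface_types(cap):
    """Decode the 1-byte Interface Type Capability field.

    Bits 0-2 flag Type 2/3/4 support on the forward link, and
    Bits 3-5 flag Type 2/3/4 support on the reverse link.
    Type 1 support is mandatory, so it has no flag of its own.
    Returns (forward_types, reverse_types) as sets of type numbers.
    """
    forward = {1}   # Type 1 is always supported
    reverse = {1}
    for bit, itype in ((0, 2), (1, 3), (2, 4)):
        if cap & (1 << bit):
            forward.add(itype)
    for bit, itype in ((3, 2), (4, 3), (5, 4)):
        if cap & (1 << bit):
            reverse.add(itype)
    return forward, reverse
```

For example, a capability byte of 0b000101 advertises Type 1, Type 2, and Type 4 on the forward link, with only the mandatory Type 1 on the reverse link, illustrating that a non-contiguous range of types is valid.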
The Bitmap Width and Height fields, here each being 2 bytes, specify the width and height of the bitmap, respectively, in pixels.

The Monochrome Capability field (1 byte) is used to specify the number of bits of resolution that can be displayed in a monochrome format. If a display cannot use a monochrome format then this value is set at zero. Bits 7 through 4 are reserved for future use and are, thus, set to zero. Bits 3 through 0 define the maximum number of bits of grayscale that can exist for each pixel. These four bits make it possible to specify values of 1 to 15 for each pixel. If the value is zero then monochrome format is not supported by the display.

The Bayer Capability field uses 2 bytes to specify the number of bits of resolution, pixel group, and pixel order that can be transferred in Bayer format. If the client cannot use the Bayer format then this value is zero. The Bayer Capability field is composed of the following values: Bits 3 through 0 define the maximum number of bits of intensity that exist in each pixel, Bits 5 through 4 define the pixel group pattern that is required, and Bits 8 through 6 define the pixel order that is required; Bits 14 through 9 are reserved for future use and generally set to zero in the meantime. Bit 15, when set to a logic-one level, indicates that the client can accept Bayer pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Bayer pixel data only in unpacked format.

The Color Map Capability field (3 bytes) specifies the maximum number of table items that exist in the color map table in the display. If the display cannot use the color map format then this value is set at zero.

The RGB Capability field (2 bytes) specifies the number of bits of resolution that can be displayed in RGB format. If the display cannot use the RGB format then this value is equal to zero.
The RGB Capability word is composed of three separate unsigned values where: Bits 3 through 0 define the maximum number of bits of blue, Bits 7 through 4 define the maximum number of bits of green, and Bits 11 through 8 define the maximum number of bits of red in each pixel. Bits 14 through 12 are currently reserved for future use and are generally set to zero. Bit 15, when set to a logic-one level, indicates that the client can accept RGB pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero level, this indicates that the client can accept RGB pixel data only in unpacked format.

The Y Cr Cb Capability field (2 bytes) specifies the number of bits of resolution that can be displayed in Y Cr Cb format. If the display cannot use the Y Cr Cb format then this value is set equal to zero. The Y Cr Cb Capability word is composed of three separate unsigned values where: Bits 3 through 0 define the maximum number of bits in the Cb sample, Bits 7 through 4 define the maximum number of bits in the Cr sample, Bits 11 through 8 define the maximum number of bits in the Y sample, and Bits 15 through 12 are currently reserved for future use and are set to zero.

The Client Feature Capability field uses 4 bytes that contain a set of flags that indicate specific features in the client that are supported. A bit set to a logic-one level indicates the capability is supported, while a bit set to a logic-zero level indicates the capability is not supported. In one embodiment, the value for Bit 0 indicates whether or not the Bitmap Block Transfer Packet (packet type 71) is supported. The values for Bits 1, 2, and 3 indicate whether or not the Bitmap Area Fill Packet (packet type 72), Bitmap Pattern Fill Packet (packet type 73), or Communication Link Data Channel Packet (packet type 74), respectively, are supported.
The value for Bit 4 indicates whether or not the client has the capability to make one color transparent using the Transparent Color Enable Packet, while the values for Bits 5 and 6 indicate whether the client can accept video data or audio data in packed format, respectively, and the value for Bit 7 indicates whether or not the client can send a reverse-link video stream from a camera. The value for Bit 8 indicates whether or not the client has the ability to receive a full line of pixel data and ignore display addressing as specified by Bit 5 of the Pixel Data Attributes field of the Video Stream Packet, and also whether the client can detect frame sync or end-of-video-frame data using Bit 15 of the Pixel Data Attributes field.

The values for Bits 11 and 12 indicate when the client is communicating either with a pointing device and can send and receive Pointing Device Data Packets, or with a keyboard and can send and receive Keyboard Data Packets, respectively. The value for Bit 13 indicates whether or not the client has the ability to set one or more audio or video parameters by supporting the VCP Feature packets: Request VCP Feature Packet, VCP Feature Reply Packet, Set VCP Feature Packet, Request Valid Parameter Packet, and Valid Parameter Reply Packet. The value for Bit 14 indicates whether or not the client has the ability to write pixel data into the offline display frame buffer. If this bit is set to a logic-one level then the Display Update Bits (Bits 7 and 6 of the Pixel Data Attributes field of the Video Stream Packet) may be set to the values '01'.

The value for Bit 15 indicates when the client has the ability to write pixel data into only the display frame buffer currently being used to refresh the display image. If this bit is set to one then the Display Update Bits (Bits 7 and 6 of the Pixel Data Attributes field of the Video Stream Packet) may be set to the values '00'.
The value for Bit 16 indicates when the client has the ability to write pixel data from a single Video Stream Packet into all display frame buffers. If this bit is set to one then the Display Update Bits (Bits 7 and 6 of the Pixel Data Attributes field of the Video Stream Packet) may be set to the value '11'.

The value for Bit 17 indicates when a client has the ability to respond to the Request Specific Status Packet, the value for Bit 18 indicates when the client has the ability to respond to the Round Trip Delay Measurement Packet, and the value for Bit 19 indicates when the client has the ability to respond to the Forward Link Skew Calibration Packet.

The value for Bit 21 indicates when the client has the ability to interpret the Request Specific Status Packet and respond with the Valid Status Reply List Packet. The client indicates an ability to return additional status in the Valid Parameter Reply List field of the Valid Status Reply List Packet as described elsewhere.

The value for Bit 22 indicates whether or not the client has the ability to respond to the Register Access Packet. Bits 9 through 10, 20, and 23 through 31 are currently reserved for future use or alternative designations useful for system designers, and are generally set equal to zero.

The Display Video Frame Rate Capability field (1 byte) specifies the maximum video frame update capability of the display in frames per second. A host may choose to update the image at a slower rate than the value specified in this field.

The Audio Buffer Depth field (2 bytes) specifies the depth of the elastic buffer in a display that is dedicated to each audio stream.

The Audio Channel Capability field (2 bytes) contains a group of flags that indicate which audio channels are supported by the client or client-connected device. A bit set to one indicates the channel is supported, and a bit set to zero indicates that channel is not supported.
The bit positions are assigned to the different channels; for example, Bit positions 0, 1, 2, 3, 4, 5, 6, and 7 in one embodiment indicate the left front, right front, left rear, right rear, front center, sub-woofer, surround left, and surround right channels, respectively. Bits 8 through 14 are currently reserved for future use, and are generally set to zero. In one embodiment Bit 15 is used to indicate whether the client provides support for the Forward Audio Channel Enable Packet. If this is the case, Bit 15 is set to a logic-one level. If, however, the client is not capable of disabling audio channels as a result of the Forward Audio Channel Enable Packet, or if the client does not support any audio capability, then this bit is set to a logic-zero level or value.

A 2-byte Audio Sample Rate Capability field, for the forward link, contains a set of flags to indicate the audio sample rate capability of the client device. Bit positions are assigned to the different rates accordingly, such as Bits 0, 1, 2, 3, 4, 5, 6, 7, and 8 being assigned to 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (SPS), respectively, with Bits 9 through 15 being reserved for future or alternative rate uses, as desired, so they are currently set to '0'. Setting a bit value for one of these bits to '1' indicates that that particular sample rate is supported, and setting the bit to '0' indicates that that sample rate is not supported.

The Minimum Sub-frame Rate field (2 bytes) specifies the minimum sub-frame rate in frames per second. The minimum sub-frame rate keeps the client status update rate sufficient to read certain sensors or pointing devices in the client.

A 2-byte Mic Sample Rate Capability field, for the reverse link, contains a set of flags that indicate the audio sample rate capability of a microphone in the client device.
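The rate-flag scheme shared by the Audio Sample Rate Capability field above and the Mic Sample Rate Capability field can be sketched as follows. The helper name is a hypothetical illustration; the bit-to-rate mapping is taken directly from the text.

```python
# Bit positions 0-8 map to these fixed PCM rates in both the Audio Sample
# Rate Capability and Mic Sample Rate Capability fields; Bits 9-15 are
# reserved and currently set to '0'.
RATE_BITS = (8000, 16000, 24000, 32000, 40000, 48000, 11025, 22050, 44100)

def supported_rates(flags):
    """Return the sample rates (in SPS) whose capability bits are set."""
    return [rate for bit, rate in enumerate(RATE_BITS) if flags & (1 << bit)]
```

A field value of 0x0101, for instance, would advertise support for 8,000 and 44,100 samples per second, while a value of zero (as for an absent microphone) advertises no rates at all.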
For purposes of the MDDI, a client device microphone is configured to minimally support at least an 8,000 samples per second rate. Bit positions for this field are assigned to the different rates, with bit positions 0, 1, 2, 3, 4, 5, 6, 7, and 8, for example, being used to represent 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (SPS), respectively, with Bits 9 through 15 being reserved for future or alternative rate uses, as desired, so they are currently set to '0'. Setting a bit value for one of these bits to '1' indicates that that particular sample rate is supported, and setting the bit to '0' indicates that that sample rate is not supported. If no microphone is connected then each of the Mic Sample Rate Capability bits is set equal to zero.

The Keyboard Data Format field (here 1 byte) specifies whether or not a keyboard is connected to the client system and the type of keyboard that is connected. In one embodiment, the value established by Bits 6 through 0 is used to define the type of keyboard that is connected. If the value is zero (0) then the keyboard type is considered unknown. For a value of 1, the keyboard data format is considered to be a standard PS-2 style. Currently, values in the range of 2 through 125 are not in use, being reserved for use by system designers and interface incorporators or product developers to define specific keyboard or input devices for use with the MDDI and corresponding clients or hosts. A value of 126 is used to indicate that the keyboard data format is user-defined, while a value of 127 is used to indicate that a keyboard cannot be connected to this client. In addition, Bit 7 can be used to indicate whether or not the keyboard can communicate with the client. The intended use of this bit is to indicate when the keyboard can communicate with the client using a wireless link.
Bit 7 would be set to a zero level if Bits 6 through 0 indicate that a keyboard cannot be connected to the client. Therefore, for one embodiment, when the value of Bit 7 is 0, the keyboard and client cannot communicate, while if the value of Bit 7 is 1, the keyboard and client have acknowledged that they can communicate with each other.

The Pointing Device Data Format field (here 1 byte) specifies whether or not a pointing device is connected to the client system and the type of pointing device that is connected. In one embodiment, the value established by Bits 6 through 0 is used to define the type of pointing device that is connected. If the value is zero (0) then the pointing device type is considered unknown. For a value of 1, the pointing device data format is considered to be a standard PS-2 style. Currently, values in the range of 2 through 125 are not in use, being reserved for use by system designers and interface incorporators or product developers to define specific pointing devices or input devices for use with the MDDI and corresponding clients or hosts. A value of 126 is used to indicate that the pointing device data format is user-defined, while a value of 127 is used to indicate that a pointing device cannot be connected to this client. In addition, Bit 7 can be used to indicate whether or not the pointing device can communicate with the client. The intended use of this bit is to indicate when the pointing device can communicate with the client using a wireless link. Bit 7 would be set to a zero level if Bits 6 through 0 indicate that a pointing device cannot be connected to the client.
Therefore, for one embodiment, when the value of Bit 7 is 0, the pointing device and client cannot communicate, while if the value of Bit 7 is 1, the pointing device and client have acknowledged that they can communicate with each other.

The Content Protection Type field (2 bytes) contains a set of flags that indicate the type of digital content protection that is supported by the display. Currently, bit position 0 is used to indicate when DTCP is supported and bit position 1 is used to indicate when HDCP is supported, with bit positions 2 through 15 being reserved for use with other protection schemes as desired or available, so they are currently set to zero.

The Mfr Name field (here 2 bytes) contains the EISA 3-character ID of the manufacturer, packed into three 5-bit characters in the same manner as in the VESA EDID specification. The character 'A' is represented as 00001 binary, the character 'Z' is represented as 11010 binary, and all letters between 'A' and 'Z' are represented as sequential binary values that correspond to the alphabetic sequence between 'A' and 'Z'. The most significant bit of the Mfr Name field is unused and is generally set to logic-zero for now, until a use is made of it in future implementations. Example: a manufacturer represented by the string "XYZ" would have a Mfr Name value of 0x633A. If this field is not supported by the client it is generally set to zero.

The Product Code field uses 2 bytes to contain a product code assigned by the display manufacturer. If this field is not supported by the client it is generally set to zero.

The Reserved 1, Reserved 2, and Reserved 3 fields (here 2 bytes each) are reserved for future use in imparting information. All bits in these fields are generally set to a logic-zero level.
The purpose of such fields is currently to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address.

The Serial Number field uses 4 bytes in this embodiment to specify the serial number of the display in numeric form. If this field is not supported by the client it is generally set to zero. The Week of Manufacture field uses 1 byte to define the week of manufacture of the display. This value is typically in the range of 1 to 53 if it is supported by the client. If this field is not supported by the client it is set to zero. The Year of Manufacture field is 1 byte that defines the year of manufacture of the display. This value is an offset from the year 1990. Years in the range of 1991 to 2245 can be expressed by this field. Example: the year 2003 corresponds to a Year of Manufacture value of 13. If this field is not supported by the client it is set to zero.

The CRC field (here 2 bytes) contains a 16-bit CRC of all bytes in the packet, including the Packet Length.

G. For Client Request and Status Packets

The Reverse Link Request field (3 bytes) specifies the number of bytes the client needs in the reverse link in the next sub-frame to send information to the host.

The CRC Error Count field (1 byte) indicates how many CRC errors have occurred since the beginning of the media-frame. The CRC count is reset when a sub-frame header packet with a Sub-frame Count of zero is sent. If the actual number of CRC errors exceeds 255 then this value generally saturates at 255.

The Capability Change field uses 1 byte to indicate a change in the capability of the client. This could occur if a user connects a peripheral device such as a microphone, keyboard, or display, or for some other reason. When Bits[7:0] are equal to 0, the capability has not changed since the last Client Capability Packet was sent. However, when Bits[7:0] are equal to 1 to 255, the capability has changed.
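The EISA three-character packing used by the Mfr Name field, and the 1990-offset Year of Manufacture encoding, can both be sketched briefly. These helper names are hypothetical illustrations; the encodings themselves ('A' = 00001, three 5-bit codes, year minus 1990) follow the descriptions given above.

```python
def pack_mfr_name(name):
    """Pack a 3-character EISA manufacturer ID into a 2-byte value.

    'A' maps to 00001 binary through 'Z' at 11010 binary; the three
    5-bit codes are packed with the unused most significant bit at zero.
    """
    assert len(name) == 3 and name.isalpha() and name.isupper()
    value = 0
    for ch in name:
        value = (value << 5) | (ord(ch) - ord('A') + 1)
    return value

def year_of_manufacture(year):
    """Encode a calendar year as an offset from 1990 (so 2003 -> 13)."""
    assert 1991 <= year <= 2245
    return year - 1990
```

Packing "XYZ" reproduces the worked example above: 'X' = 24, 'Y' = 25, 'Z' = 26 gives (24 << 10) | (25 << 5) | 26 = 0x633A.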
The Client Capability Packet is examined to determine the new display characteristics. The Client Busy Flags field uses 2 bytes to indicate that the client is performing a specific function and is not yet ready to accept another packet related to that function. A bit set to a logic-one level or value indicates that the particular function is currently being performed by the client and that the related function in the client is busy. If the related function in the client is ready, the bit is set to a logic-zero level. The client should return a busy status (bit set to one) for all functions that are not supported in the client. In one embodiment these bytes are interpreted according to the relationship: if Bit 0 is a '1' then the bitmap block transfer function is busy, while if Bit 1 is a '1', then a bitmap area fill function is busy, and if Bit 2 is a '1', then a bitmap pattern fill function is busy. Currently, Bits 3 through 15 remain reserved for future use and are generally set to a logic-one level or state to indicate a busy status in case these bits are assigned in the future. H. For Bit Block Transfer Packets The Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y value of the coordinates of the upper left corner of the window to be moved. The Window Width and Height fields use 2 bytes each to specify the width and height of the window to be moved. The Window X Movement and Y Movement fields use 2 bytes each to specify the number of pixels that the window is to be moved horizontally and vertically, respectively. Typically, these coordinates are configured such that positive values for X cause the window to be moved to the right, and negative values cause movement to the left, while positive values for Y cause the window to be moved down, and negative values cause upward movement. I.
For Bitmap Area Fill Packets The Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y value of the coordinates of the upper left corner of the window to be filled. The Window Width and Height fields (2 bytes each) specify the width and height of the window to be filled. The Video Data Format Descriptor field (2 bytes) specifies the format of the Pixel Area Fill Value. The format is the same as the same field in the Video Stream Packet. The Pixel Area Fill Value field (4 bytes) contains the pixel value to be filled into the window specified by the fields discussed above. The format of this pixel is specified in the Video Data Format Descriptor field. J. For Bitmap Pattern Fill Packets The Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y value of the coordinates of the upper left corner of the window to be filled. The Window Width and Height fields (2 bytes each) specify the width and height of the window to be filled. The Pattern Width and Pattern Height fields (2 bytes each) specify the width and height, respectively, of the fill pattern. The Horizontal Pattern Offset field (2 bytes) specifies a horizontal offset of the pixel data pattern from the left edge of the specified window to be filled. The value being specified is to be less than the value in the Pattern Width field. The Vertical Pattern Offset field (2 bytes) specifies a vertical offset of the pixel data pattern from the top edge of the specified window to be filled. The value being specified is to be less than the value in the Pattern Height field. The 2-byte Video Data Format Descriptor field specifies the format of the Pixel Area Fill Value. FIG. 11 illustrates how the Video Data Format Descriptor is coded. The format is the same as the same field in the Video Stream Packet. The Parameter CRC field (2 bytes) contains a CRC of all bytes from the Packet Length to the Video Format Descriptor.
If this CRC fails to check then the entire packet is discarded. The Pattern Pixel Data field contains raw video information that specifies the fill pattern in the format specified by the Video Data Format Descriptor. Data is packed into bytes, and the first pixel of each row is to be byte-aligned. The fill pattern data is transmitted a row at a time. The Pattern Pixel Data CRC field (2 bytes) contains a CRC of only the Pattern Pixel Data. If this CRC fails to check then the Pattern Pixel Data can still be used but the CRC error count is incremented. K. Communication Link Data Channel Packets The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check then the entire packet is discarded. The Communication Link Data field contains the raw data from the communication channel. This data is simply passed on to the computing device in the display. The Communication Link Data CRC field (2 bytes) contains a 16-bit CRC of only the Communication Link Data. If this CRC fails to check then the Communication Link Data can still be used, but the CRC error count is incremented. L. For Interface Type Handoff Request Packets The Interface Type field (1 byte) specifies the new interface type to use. The value in this field specifies the interface type in the following manner. If the value in Bit 7 is equal to '0' the Type handoff request is for the forward link; if it is equal to '1', then the Type handoff request is for the reverse link. Bits 6 through 3 are reserved for future use, and are generally set to zero. Bits 2 through 0 are used to define the interface Type to be used, with a value of 1 meaning a handoff to Type 1 mode, a value of 2 a handoff to Type 2 mode, a value of 3 a handoff to Type 3 mode, and a value of 4 a handoff to Type 4 mode. The values of '0' and 5 through 7 are reserved for future designation of alternative modes or combinations of modes. M.
For Interface Type Acknowledge Packets The Interface Type field (1 byte) has a value that confirms the new interface type to use. The value in this field specifies the interface type in the following manner. If Bit 7 is equal to '0' the Type handoff request is for the forward link; alternatively, if it is equal to '1', the Type handoff request is for the reverse link. Bit positions 6 through 3 are currently reserved for use in designating other handoff types, as desired, and are generally set to zero. However, bit positions 2 through 0 are used to define the interface Type to be used, with a value of '0' indicating a negative acknowledge, or that the requested handoff cannot be performed, and values of '1', '2', '3', and '4' indicating handoff to Type 1, Type 2, Type 3, and Type 4 modes, respectively. Values of 5 through 7 are reserved for use with alternative designations of modes, as desired. N. For Perform Type Handoff Packets The 1-byte Interface Type field indicates the new interface type to use. The value present in this field specifies the interface type by first using the value of Bit 7 to determine whether the Type handoff is for the forward or reverse link. A value of '0' indicates the Type handoff request is for the forward link, and a value of '1' the reverse link. Bits 6 through 3 are reserved for future use, and as such are generally set to a value of zero. However, Bits 2 through 0 are used to define the interface Type to be used, with the values 1, 2, 3, and 4 specifying the use of handoff to Type 1, Type 2, Type 3, and Type 4 modes, respectively. The use of values 0 and 5 through 7 for these bits is reserved for future use. O. For Forward Audio Channel Enable Packets The Audio Channel Enable Mask field (1 byte) contains a group of flags that indicate which audio channels are to be enabled in a client.
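The Interface Type byte layout shared by the handoff packets described above (direction flag in bit 7, mode number in bits 2 through 0) can be decoded as in this sketch; the names are illustrative, and the treatment of reserved values is an assumption:

```python
def decode_interface_type(byte_value: int):
    """Decode the 1-byte Interface Type field of the handoff packets:
    bit 7 selects forward (0) or reverse (1) link; bits 2:0 carry the
    Type number (1..4). A mode of 0 is a negative acknowledge in the
    Acknowledge packet and reserved elsewhere; 5..7 are reserved."""
    direction = "reverse" if byte_value & 0x80 else "forward"
    mode = byte_value & 0x07
    if mode not in (1, 2, 3, 4):
        mode = None  # reserved value or negative acknowledge
    return direction, mode

assert decode_interface_type(0x83) == ("reverse", 3)
assert decode_interface_type(0x02) == ("forward", 2)
```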
A bit set to one enables the corresponding channel, and a bit set to zero disables the corresponding channel. Bits 0 through 5 designate channels 0 through 5, which address left front, right front, left rear, right rear, front center, and sub-woofer channels, respectively. Bits 6 and 7 are reserved for future use, and in the meantime are generally set equal to zero. P. For Reverse Audio Sample Rate Packets The Audio Sample Rate field (1 byte) specifies the digital audio sample rate. The values for this field are assigned to the different rates with values of 0, 1, 2, 3, 4, 5, 6, 7, and 8 being used to designate 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (SPS), respectively, with values of 9 through 254 being reserved for use with alternative rates, as desired, so they are currently set to '0'. A value of 255 is used to disable the reverse-link audio stream. The Sample Format field (1 byte) specifies the format of the digital audio samples. When Bits[1:0] are equal to '0', the digital audio samples are in linear format, when they are equal to 1, the digital audio samples are in µ-Law format, and when they are equal to 2, the digital audio samples are in A-Law format. Bits[7:2] are reserved for alternate use in designating audio formats, as desired, and are generally set equal to zero. Q. For The Digital Content Protection Overhead Packets The Content Protection Type field (1 byte) specifies the digital content protection method that is used. A value of '0' indicates Digital Transmission Content Protection (DTCP) while a value of 1 indicates High-bandwidth Digital Content Protection System (HDCP). The value range of 2 through 255 is not currently specified but is reserved for use with alternative protection schemes as desired. The Content Protection Overhead Messages field is a variable length field containing content protection messages sent between the host and client. R.
For The Transparent Color Enable Packets The Transparent Color Enable field (1 byte) specifies when transparent color mode is enabled or disabled. If Bit 0 is equal to 0 then transparent color mode is disabled; if it is equal to 1 then transparent color mode is enabled and the transparent color is specified by the following two parameters. Bits 1 through 7 of this byte are reserved for future use and are typically set equal to zero. The Video Data Format Descriptor field (2 bytes) specifies the format of the Pixel Area Fill Value. FIG. 11 illustrates how the Video Data Format Descriptor is coded. The format is generally the same as the same field in the Video Stream Packet. The Pixel Area Fill Value field uses 4 bytes allocated for the pixel value to be filled into the window specified above. The format of this pixel is specified in the Video Data Format Descriptor field. S. For The Round Trip Delay Measurement Packets The 2-byte Packet Length field specifies the total number of bytes in the packet not including the packet length field, and in one embodiment is selected to have a fixed length of 159. The 2-byte Packet Type field identifies the packet type, with a value of 82 identifying a packet as a Round Trip Delay Measurement Packet. The hClient ID field, as before, is reserved for future use as a Client ID, and is generally set to zero. In one embodiment, the Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check then the entire packet is discarded. The Guard Time 1 field (here 64 bytes) is used to allow the MDDI_Data line drivers in the client to enable before the line drivers in the host are disabled. The client enables its MDDI_Data line drivers during bit 0 of Guard Time 1 and the host disables its line drivers so as to be completely disabled prior to the last bit of Guard Time 1.
The host and client both drive a logic-zero level during Guard Time 1 when they are not disabled. Another purpose of this field is to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering a clock or clock signal using only MDDI_Stb prior to disabling the host's line drivers. The Measurement Period field is a 64-byte window used to allow the client to respond with two bytes of 0xff, and 30 bytes of 0x00 at half the data rate used on the forward link. This data rate corresponds to a Reverse Link Rate Divisor of 1. The client returns this response immediately at the time it perceives as being the beginning of the Measurement Period. This response from the client will be received at a host at precisely the round trip delay of the link plus logic delay in the client after the beginning of the first bit of the Measurement Period at the host. The All Zero 1 field (2 bytes) contains zeroes to allow the MDDI_Data line drivers in the host and client to overlap so that MDDI_Data is always driven. The host enables MDDI_Data line drivers during bit 0 of the All Zero 1 field, and the client also continues to drive the signal to a logic-zero level as it did at the end of the Measurement Period. The value in the Guard Time 2 field (64 bytes) allows overlap of the Measurement Period driven by the client when the round trip delay is at the maximum amount that can be measured in the Measurement Period. The client disables its line drivers during bit 0 of Guard Time 2 and the host enables its line drivers immediately after the last bit of Guard Time 2. The host and client both drive a logic-zero level during Guard Time 2 when they are not disabled. Another purpose of this field is to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering a clock signal using both MDDI_Data0 and MDDI_Stb after enabling the line drivers for a host. T.
For The Forward Link Skew Calibration Packets In one embodiment, the Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check then the entire packet is discarded. The All Zero 1 field uses 1 byte to ensure that there will be transitions on MDDI_Stb at the end of the Parameter CRC field. The Calibration Data Sequence field contains a data sequence that causes the MDDI_Data signals to toggle at every data period. The length of the Calibration Data Sequence field is determined by the interface being used on the forward link. During the processing of the Calibration Data Sequence, the MDDI host controller sets all MDDI_Data signals equal to the strobe signal. The client clock recovery circuit should use only MDDI_Stb rather than MDDI_Stb Xor MDDI_Data0 to recover the data clock while the Calibration Data Sequence field is being received by the client display. Depending on the exact phase of the MDDI_Stb signal at the beginning of the Calibration Data Sequence field, the Calibration Data Sequence will generally be one of the following based on the interface Type being used when this packet is sent: Type 1 (64-byte data sequence): 0xaa, 0xaa, ... or 0x55, 0x55, ...; Type 2 (128-byte data sequence): 0xcc, 0xcc, ... or 0x33, 0x33, ...; Type 3 (256-byte data sequence): 0xf0, 0xf0, ... or 0x0f, 0x0f, ...; Type 4 (512-byte data sequence): 0xff, 0x00, 0xff, 0x00, ... or 0x00, 0xff, 0x00, 0xff, .... Examples of the possible MDDI_Data and MDDI_Stb waveforms for the Type 1 and Type 2 interfaces are shown in FIGS. 62A and 62B, respectively. XX. CONCLUSION While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation.
Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. XXI. Further aspects: According to one aspect, a digital data interface for transferring digital presentation data at a high rate between a host device and a client device over a communication path comprises a plurality of packet structures linked together to form a communication protocol for communicating a pre-selected set of digital control and presentation data between a host and a client over said communication path; and at least one link controller residing in said host device coupled to said client through said communications path, being configured to generate, transmit, and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets. The interface may further comprise said packets grouped together within media frames that are communicated between said host and client, the media frames having a pre-defined fixed length, with a predetermined number of said packets having differing and variable lengths. The interface may further comprise a Sub-frame Header Packet positioned at the beginning of transfers of packets from said host. In the interface, said link controller may be a host link controller, and the interface may further comprise at least one client link controller residing in said client device coupled to said host through said communications path, being configured to generate, transmit, and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets.
The interface may further comprise a plurality of transfer modes, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period, with each mode selectable by negotiation between said host and client link drivers; and wherein said transfer modes are dynamically adjustable between said modes during transfer of data. The interface may further comprise a Link Shutdown type packet for transmission by said host to said client to terminate the transfer of data in either direction over said communication path. The interface may further comprise means for said client to wake up said host from a hibernation state. According to one aspect, a method of transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises generating one or more of a plurality of predefined packet structures and linking them together to form a pre-defined communication protocol; communicating a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; coupling at least one host link controller residing in said host device to said client device through said communications path, the host link controller being configured to generate, transmit, and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets; and transferring data in the form of packets over said communications path using said link controllers. The method may further comprise grouping said packets together within media frames for communication between said host and client, the media frames having a predefined fixed length, with a pre-determined number of said packets having differing and variable lengths. The method may further comprise commencing transfer of packets from said host with a Sub-frame Header type packet.
The method may further comprise generating, transmitting, and receiving packets forming said communications protocol through at least one client link controller residing in said client device coupled to said host device through said communications path. The method may further comprise negotiating between host and client link drivers the use of one of a plurality of transfer modes in each direction, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period; and dynamically adjusting between said transfer modes during transfer of data. The method may further comprise waking up a communication link by driving a data line to a high state for at least 10 clock cycles and starting to transmit a strobe signal as if the data line were zero, by said host. The method may further comprise driving the data line low for a predetermined number of clock cycles by said host while continuing to transmit a strobe signal after the host has driven the data line high for about 150 clock cycles. The method may further comprise beginning to transmit the first sub-frame header packet by said host. The method may further comprise counting at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low, by said client. The method may further comprise stopping driving the data line high by said client after the client counts 70 continuous clock cycles of the data line being high. The method may further comprise counting another 80 continuous clock cycles of the data line being high to reach the 150 clock cycles of the data line being high by said client, and looking for about 50 clock cycles of the data line being low, and looking for the unique word.
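The client-side counting in the wake-up sequence described above can be sketched as follows. This is a simplified illustration of the signature detection only (at least 150 continuous high cycles followed by at least 50 continuous low cycles), not the specified implementation:

```python
def client_detects_wakeup(line_samples):
    """Scan per-clock-cycle samples of the data line (0 or 1) for the
    wake-up signature described above: at least 150 continuous cycles
    of the line high, followed by at least 50 continuous cycles low."""
    high_run = low_run = 0
    for bit in line_samples:
        if bit:
            if low_run:
                # The low run ended before reaching 50 cycles; start over.
                high_run = 0
                low_run = 0
            high_run += 1
        else:
            if high_run >= 150:
                low_run += 1
                if low_run >= 50:
                    return True
            else:
                # High run was too short; reset both counters.
                high_run = low_run = 0
    return False

assert client_detects_wakeup([1] * 150 + [0] * 50)
assert not client_detects_wakeup([1] * 100 + [0] * 50)
```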
The method may further comprise counting the number of clock cycles occurring until a one is sampled by said host, by sampling the data line on both the rising and falling edges during the reverse timing packet. The method may further comprise terminating the transfer of data in either direction over said communication path using a Link Shutdown type packet for transmission by said host to said client. The method may further comprise waking up said host from a hibernation state by communication with said client. According to one aspect, an apparatus for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises at least one host link controller disposed in said host device for generating one or more of a plurality of pre-defined packet structures and linking them together to form a pre-defined communication protocol, and for communicating a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; at least one client controller disposed in said client device and coupled to said host link controller through said communications path; and each link controller being configured to generate, transmit, and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets. In the apparatus, said host controller comprises a state machine. In the apparatus, said host controller may comprise a general purpose signal processor. The apparatus may further comprise a Sub-frame Header type packet at the commencement of transfer of packets from said host.
In the apparatus, said host controller may comprise one or more differential line drivers; and said client receiver may comprise one or more differential line receivers coupled to said communication path. In the apparatus, said host and client link controllers may be configured to use one of a plurality of transfer modes in each direction, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period, and may be capable of dynamically adjusting between said transfer modes during transfer of data. In the apparatus, said host controller may be configured to transmit a Link Shutdown type packet to said client means for terminating the transfer of data in either direction over said communication path. According to one aspect, for use in an electronic system for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user, a computer program product comprises a computer usable medium having computer readable program code means embodied in said medium for causing an application program to execute on the computer system, said computer readable program code means comprising a computer readable first program code means for causing the computer system to generate one or more of a plurality of pre-defined packet structures and link them together to form a pre-defined communication protocol; a computer readable second program code means for causing the computer system to communicate a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; a computer readable third program code means for causing the computer system to couple at least one host link controller disposed in said host device to at least one client controller disposed in said client device through said communications path, the link controllers being configured to generate, transmit,
and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets; and a computer readable fourth program code means for causing the computer system to transfer data in the form of packets over said communications path using said link controllers. According to one aspect, an apparatus for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises means for generating one or more of a plurality of pre-defined packet structures and linking them together to form a pre-defined communication protocol; means for communicating a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; means for coupling at least two link controllers together through said communications path, one in each of said host and client, and each being configured to generate, transmit, and receive packets forming said communications protocol, and to form digital presentation data into one or more types of data packets; and means for transferring data in the form of packets over said communications path using said link controllers. The apparatus may further comprise means for commencing transfer of packets from said host with a Sub-frame Header type packet.
The apparatus may further comprise means for requesting display capabilities information from the client by a host link controller so as to determine what type of data and data rates said client is capable of accommodating through said interface. According to one aspect, a processor for use in an electronic system for transferring digital data at a high rate between a host device and a client device over a communication path is configured to generate one or more of a plurality of pre-defined packet structures and link them together to form a predefined communication protocol; to form digital presentation data into one or more types of data packets; to communicate a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; and to transfer data in the form of packets over said communications path. According to one aspect, a state machine for use in obtaining synchronization in an electronic system transferring digital data at a high rate between a host device and a client device over a communication path is configured to have at least one Async Frames State synchronization state, at least two Acquiring Sync State synchronization states, and at least three In-Sync State synchronization states. According to one aspect, a state machine for use in obtaining synchronization in an electronic system transferring digital data at a high rate between a host device and a client device over a communication path is configured to have at least one Acquiring Sync State synchronization state, and at least two In-Sync State synchronization states. In the state machine, one condition for shifting between an Acquiring Sync State and a first In-Sync State may be detecting the presence of a synchronization pattern in the communication link.
In the state machine, a second condition for shifting between an Acquiring Sync State and a first In-Sync State may be detecting the presence of a sub-frame header packet and a good CRC value at a frame boundary. In the state machine, one condition for shifting between a first In-Sync State and an Acquiring Sync State may be detecting the absence of a synchronization pattern or the presence of a bad CRC value at a sub-frame boundary. In the state machine, one condition for shifting between a first In-Sync State and a second In-Sync State may be detecting the absence of a synchronization pattern or the presence of a bad CRC value at a sub-frame boundary.